1. Explainable ML
- Author
- Zack Xuereb Conti and Sawako Kaijima
- Subjects
- Computer science, Probabilistic logic, Bayesian network, Machine learning, Simulation software, Metamodeling, Surrogate model, Artificial intelligence, Engineering design process, Interpretability
- Abstract
This chapter questions the interpretability of numerical simulation tools, such as finite element analysis, that assist human intelligence in architectural and engineering design. The intricacy of the numerical code underlying simulation software can make the analysis output difficult to interpret in an intuitive format. In response, this chapter presents a machine learning (ML) based approach that uses Bayesian networks (BNs) to build interpretable representations of the simulation code, also referred to as simulation metamodels or surrogate models. BNs are a probabilistic technique at the intersection of statistics and ML: input–output relationships are captured from data using ML techniques and then represented in an interpretable form that can be explored intuitively using statistical techniques. A metamodel or surrogate model in the form of a BN facilitates the intuitive exploration of cause–effect relationships in scenarios with multiple simulation inputs and multiple simulation outputs. As a result, BNs absorb the cognitive effort of keeping track of intricate input–output relationships while freeing up cognitive capacity to control design outcomes intuitively.
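To make the approach concrete, the sketch below fits a small discrete BN surrogate to discretized simulation samples and queries it in both directions, forwards (inputs to output) and backwards (output to inputs), which is the cause–effect exploration the abstract describes. It assumes a recent version of the pgmpy library; the variable names (span, depth, deflection), network structure, and data are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch: a Bayesian network surrogate fitted to hypothetical,
# discretized finite element simulation samples, using pgmpy.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical design-of-experiments samples: two design inputs
# (span, depth) and one simulation output (deflection), each
# discretized into labelled bins.
data = pd.DataFrame({
    "span":       ["short", "short", "long", "long", "long", "short"],
    "depth":      ["thin",  "deep",  "thin", "deep", "thin", "deep"],
    "deflection": ["low",   "low",   "high", "low",  "high", "low"],
})

# Structure: each input node points at the output node; the conditional
# probability tables are learned from the data by maximum likelihood.
model = BayesianNetwork([("span", "deflection"), ("depth", "deflection")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Query the surrogate forwards (what deflection does a long span imply?)
# and backwards (which depths are consistent with low deflection?).
infer = VariableElimination(model)
print(infer.query(["deflection"], evidence={"span": "long"}))
print(infer.query(["depth"], evidence={"deflection": "low"}))
```

The backwards query is what distinguishes a BN surrogate from a plain regression metamodel: the same fitted model answers both forward prediction and diagnostic questions about which inputs are probable given a desired outcome.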
- Published
- 2021