1. Explainable artificial intelligence in transport logistics: Risk analysis for road accidents.
- Author
- Abdulrashid, Ismail; Zanjirani Farahani, Reza; Mammadov, Shamkhal; Khalafalla, Mohamed; Chiang, Wen-Chyuan
- Subjects
- ARTIFICIAL neural networks; ARTIFICIAL intelligence; TRAUMA registries; RISK assessment; INJURY risk factors; TRAFFIC accidents; RANDOM forest algorithms
- Abstract
• A comprehensive, explainable accident-data-based AI artifact design is presented.
• Aggregate SHAP scores classify crash data into high-level causal factors.
• A systems-level taxonomy of driver behaviors linked to injury severity is discussed.
• Separating explanations from models yields managerial insights.

Automobile traffic accidents represent a significant threat to global public safety, resulting in numerous injuries and fatalities annually. This paper introduces a comprehensive, explainable artificial intelligence (XAI) artifact design that integrates accident data for use by diverse stakeholders and decision-makers. It proposes responsible, explanatory, and interpretable models together with a systems-level taxonomy categorizing driver-related behaviors associated with varying injury severity levels, thereby contributing theoretically to explainable analytics. In the initial phase, we employed several advanced techniques: Bayesian dynamic conditional imputation under a missing-at-random (MAR) assumption to address missing records, the synthetic minority oversampling technique (SMOTE) to correct class imbalance, and categorical boosting (CatBoost) combined with SHapley Additive exPlanations (SHAP) to determine and analyze the importance and dependence of risk factors on injury severity. Additionally, exploratory feature analysis was conducted to uncover hidden spatiotemporal elements influencing traffic accidents and injury severity levels. In the second phase, we developed several predictive models with fine-tuned hyperparameters, including eXtreme Gradient Boosting (XGBoost), random forest (RF), and deep neural networks (DNN). Using the SHAP approach, we applied model-agnostic interpretation techniques to separate explanations from the models. In the final phase, we analyzed and summarized the systems-level taxonomy across feature categories, classifying crash data into high-level causal factors using aggregate SHAP scores and illustrating how each risk factor contributes to different injury severity levels. [ABSTRACT FROM AUTHOR]
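The aggregate-SHAP grouping step described in the abstract can be sketched in a few lines: per-feature importance scores are rolled up into high-level causal factors via a feature-to-factor taxonomy. This is a minimal illustration only; the SHAP values, feature names, and taxonomy below are invented for the example, not taken from the paper.

```python
# Mean |SHAP| per feature (illustrative values, not from the paper).
mean_abs_shap = {
    "driver_age": 0.12,
    "alcohol_involved": 0.31,
    "speeding": 0.27,
    "road_surface": 0.08,
    "lighting": 0.05,
    "vehicle_type": 0.09,
}

# Hypothetical systems-level taxonomy mapping features to causal factors.
taxonomy = {
    "driver_behavior": ["driver_age", "alcohol_involved", "speeding"],
    "environment": ["road_surface", "lighting"],
    "vehicle": ["vehicle_type"],
}

def aggregate_shap(shap_scores, groups):
    """Sum per-feature importance into each high-level causal factor."""
    return {
        factor: round(sum(shap_scores[f] for f in feats), 4)
        for factor, feats in groups.items()
    }

factor_scores = aggregate_shap(mean_abs_shap, taxonomy)
print(factor_scores)
# {'driver_behavior': 0.7, 'environment': 0.13, 'vehicle': 0.09}
```

In practice the per-feature scores would come from a SHAP explainer run against the fitted model, and the taxonomy would encode the paper's driver/environment/vehicle categorization; the aggregation itself stays this simple.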
- Published
- 2024
- Full Text
- View/download PDF