A0282
Title: Inside the black-box models through explainable decision tree ensembles
Authors: Carmela Iorio - University of Naples Federico II (Italy) [presenting]
Agostino Gnasso - University of Naples Federico II (Italy)
Massimo Aria - University of Naples Federico II (Italy)
Abstract: Models that can both predict outcomes and explain how those predictions are produced are urgently needed in the social sciences. Explainable machine learning aims to make the inner workings of a model easy to understand, from input to output. Ensemble methods are popular because they combine multiple models to achieve accurate solutions. Thanks to its impressive predictive accuracy, the Random Forest (RF) is a common tool for regression and classification problems. Despite the apparent simplicity of its individual trees, an RF model is often perceived as a black box because of the complexity of the ensemble it generates. We have developed a solution to this problem: Explainable Ensemble Trees. This methodology provides explainable decision trees within the RF framework, offering both predictive performance and a visual representation that is intuitively understandable. The aim is to represent the relationships among variables in order to improve explainability. These models are intended to explain decision processes in domains where the results may have important consequences.
Acknowledgment: This research has been financed by the following research projects: PRIN-2022 "SCIK-HEALTH" (Project Code: 2022825Y5E; CUP: E53D2300611006); PRIN-2022 PNRR "The value of scientific production for patient care in Academic Health Science Centres" (Project Code: P2022RF38Y; CUP: E53D23016650001)
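To make the general idea concrete: the abstract does not specify the Explainable Ensemble Trees algorithm, so the following is only an illustrative sketch of a related, well-known technique, namely fitting a single shallow "surrogate" decision tree to a random forest's predictions so that the ensemble's behaviour can be read from one interpretable structure. All names and parameters here (scikit-learn, the iris data, `max_depth=3`) are assumptions for the sketch, not the authors' method.

```python
# Illustrative sketch (NOT the authors' Explainable Ensemble Trees method):
# approximate a black-box random forest with one interpretable surrogate tree.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Black-box ensemble: accurate, but hard to read as a whole.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the forest's own predictions,
# giving a single explainable structure that mimics the ensemble.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))

# Fidelity: how often the surrogate agrees with the forest it explains.
fidelity = (surrogate.predict(X) == forest.predict(X)).mean()
print(f"surrogate fidelity to forest: {fidelity:.2f}")

# The surrogate's splits can be printed as readable decision rules.
print(export_text(surrogate, feature_names=load_iris().feature_names))
```

The printed rule list is the kind of intuitive, tree-shaped summary of an ensemble's decision process that the abstract describes, although the authors' methodology works within the RF framework itself rather than via a post-hoc surrogate.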