CMStatistics 2023: CFE Submission
A1977
Title: Feature importance for deep neural networks: A comparison of predictive power, infidelity, and sensitivity
Authors: Lars Fluri - University of Basel (Switzerland) [presenting]
Abstract: Feature importance algorithms for deep neural networks are examined thoroughly on prediction tasks involving synthetic data of varying complexity. DeepLIFT, Shapley Value Sampling, Integrated Gradients, LIME, and GradientSHAP are used to estimate feature importance. Key insights concern the predictive strength of the relationships between features and targets, as well as algorithmic fidelity and sensitivity. DeepLIFT performs strongly in the majority of synthetic data scenarios, while Shapley Value Sampling gains an edge in more complex data settings. Strong correlation among features significantly worsens the accuracy of feature importance estimates, whereas spurious and irrelevant features are generally handled effectively. In empirical applications, DeepLIFT and Integrated Gradients exhibit lower sensitivity and infidelity than the other methods. Further applications of feature importance and explainable machine learning in econometrics, economics, and finance are proposed and highlighted.
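To illustrate one of the attribution methods compared above, the following is a minimal, model-agnostic sketch of Shapley Value Sampling: the permutation-sampling Monte-Carlo approximation of Shapley values. All function and parameter names here are hypothetical and not taken from the submission; the abstract does not specify an implementation.

```python
import numpy as np

def shapley_value_sampling(f, x, baseline, n_samples=200, rng=None):
    """Monte-Carlo Shapley attributions for a single input x.

    f        : callable mapping a 1-D feature vector to a scalar prediction
    x        : input to explain (1-D array)
    baseline : reference vector whose values stand in for "absent" features
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    attributions = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = baseline.copy()                 # start with every feature "absent"
        prev = f(z)
        for i in perm:                      # reveal features in random order
            z[i] = x[i]
            cur = f(z)
            attributions[i] += cur - prev   # marginal contribution of feature i
            prev = cur
    return attributions / n_samples         # average over sampled permutations
```

For a linear model f(x) = w·x with a zero baseline, each feature's marginal contribution is w_i * x_i in every permutation, so the estimator recovers the exact attributions; for nonlinear models the estimate converges as `n_samples` grows.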