B0764
Title: Functional decomposition through orthogonalization of neural additive models
Authors: David Koehler - Universitaetsklinikum Bonn (Germany) [presenting]
David Ruegamer - LMU Munich (Germany)
Matthias Schmid - University of Bonn (Germany)
Abstract: Machine-learning-based prediction models such as deep neural networks (DNNs) often lack interpretability due to their black-box nature. Functional decomposition is a well-explored tool that improves the interpretability of black-box models by splitting the prediction function into a sum of main and interaction effects, thereby facilitating the application of such models in fields involving critical decision processes (e.g. finance or healthcare). However, existing methods are often computationally infeasible, especially when analyzing higher-dimensional continuous data. A novel method is presented for deriving a functional decomposition of arbitrary continuous prediction functions. This is done by fitting a neural additive model (NAM) with DNN-based main-effect and interaction submodels, using the model predictions as outcome variables. The submodels are orthogonalized against higher-order terms to ensure interpretable, identifiable low-order feature effects. Because the method imposes minimal prerequisites on the DNN architecture and model fitting, it can be widely applied without constraining the learning algorithm or the model's predictive performance. The empirical results demonstrate the algorithm's ability to correctly identify the shape and size of single-feature contributions, yielding insights into how individual features contribute to model predictions.
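The sketch below illustrates the general idea described in the abstract: distilling a black-box prediction function into a NAM with per-feature main-effect networks and a pairwise interaction network, then applying a post-hoc orthogonalization so the low-order effects are identifiable. The black-box function, network sizes, and the simple projection-based orthogonalization used here are illustrative assumptions and not the authors' exact procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def black_box(x):
    # Stand-in for an arbitrary fitted DNN's continuous prediction function.
    return torch.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + x[:, 0] * x[:, 1]

def mlp(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 1))

n, p = 2000, 2
X = torch.rand(n, p) * 4 - 2              # features on [-2, 2]
y = black_box(X).unsqueeze(1)             # surrogate outcome: model predictions

main_nets = nn.ModuleList([mlp(1) for _ in range(p)])
inter_net = mlp(2)                        # single pairwise interaction submodel
opt = torch.optim.Adam(
    list(main_nets.parameters()) + list(inter_net.parameters()), lr=1e-2
)

for _ in range(500):                      # fit the NAM to the black-box predictions
    opt.zero_grad()
    f_main = sum(net(X[:, [j]]) for j, net in enumerate(main_nets))
    f_int = inter_net(X)
    loss = ((f_main + f_int - y) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    g_raw = [net(X[:, [j]]) for j, net in enumerate(main_nets)]
    h_raw = inter_net(X)
    # Centre the main effects; the removed means go into an intercept term.
    means = [gj.mean() for gj in g_raw]
    g = [gj - m for gj, m in zip(g_raw, means)]
    # Illustrative orthogonalization: residualize the interaction output
    # against the span of the centred main effects plus a constant, and fold
    # the removed part back into the low-order terms so the total prediction
    # is unchanged on the sample.
    B = torch.cat(g + [torch.ones(n, 1)], dim=1)
    coef = torch.linalg.lstsq(B, h_raw).solution      # shape (p + 1, 1)
    h = h_raw - B @ coef
    g = [gj * (1.0 + coef[j]) for j, gj in enumerate(g)]
    intercept = sum(means) + coef[p]
    # g, h, and intercept now form an identifiable decomposition whose sum
    # equals the NAM's approximation of the black-box predictions.
    print("main-effect std:", [float(gj.std()) for gj in g])
    print("interaction std:", float(h.std()))
```

Plotting each fitted `g[j]` against its feature (not shown) would recover the shape and size of the single-feature contributions referred to in the abstract, under the simplifying assumptions noted above.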