CMStatistics 2023
B0786
Title: Neural additive models: Bridging the gap between interpretability and deep learning for enhanced predictive power
Authors: Anton Thielmann - TU-Clausthal (Germany) [presenting]
Benjamin Saefken - Clausthal University of Technology (Germany)
Abstract: Neural additive models (NAMs) offer a powerful and interpretable framework for understanding neural network predictions. By combining the flexibility of neural networks with interpretability, NAMs bridge the gap between classical statistics and deep learning. Drawing inspiration from generalized additive models (GAMs), NAMs capture the individual effect of each feature, enabling a transparent understanding of the prediction mechanism. Unlike traditional neural networks, NAMs impose an additive structure that decomposes complex relationships into interpretable components, facilitating nuanced interpretations of feature-prediction relationships. NAMs go beyond GAMs by accommodating both structured and unstructured effects, empowering researchers to model various data types and capture intricate relationships often overlooked by conventional approaches. To further enhance interpretability, the NAM framework is extended in multiple ways, such as distributional regression for modelling beyond mean predictions. Intelligible image interpretations are obtained through interpolation in the semantic space, while the incorporation of transformer architectures enables the handling of categorical features, bolstering predictive power. In summary, by leveraging the additive structure of GAMs, NAMs offer flexibility, interpretability, and predictive power, making them indispensable tools for unravelling complexities within diverse datasets.
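As a rough illustration of the additive structure described in the abstract, the sketch below shows a minimal NAM-style regression model in PyTorch: each feature is routed through its own small network and the outputs are summed with an intercept, mirroring the GAM decomposition f(x) = beta_0 + sum_j f_j(x_j). The class names, layer sizes, and the choice of PyTorch are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a neural additive model (illustrative only).
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Small MLP modelling the shape function f_j of a single feature."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_j: torch.Tensor) -> torch.Tensor:
        return self.net(x_j)


class NeuralAdditiveModel(nn.Module):
    """Sum of per-feature networks plus a global intercept."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [FeatureNet(hidden) for _ in range(n_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, n_features); each column feeds its own network.
        contributions = torch.cat(
            [net(x[:, j : j + 1]) for j, net in enumerate(self.feature_nets)],
            dim=1,
        )  # shape (batch, n_features): one interpretable term per feature
        return self.bias + contributions.sum(dim=1)


# Usage on a toy batch.
model = NeuralAdditiveModel(n_features=5)
x = torch.randn(16, 5)
y_hat = model(x)  # shape (16,)
```

Because the per-feature contributions are computed separately before summation, each shape function f_j can be evaluated and plotted on its own, which is the source of the interpretability discussed in the abstract.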