B1619
Title: Explainable generalized additive neural networks with independent neural network training
Authors: Ines Ortega-Fernandez - Galician Research and Development Center in Advanced Telecommunications (GRADIANT) (Spain) [presenting]
Marta Sestelo - University of Vigo (Spain)
Abstract: Neural networks have become increasingly popular due to their remarkable performance across various domains, including computer vision, anomaly detection, and cybersecurity. However, the inherent black-box nature of neural networks poses challenges in understanding their decision-making processes. Recent trends in AI systems emphasise interpretability and explainability to increase trust in their decisions. A neural network topology inspired by generalized additive models (GAMs) is presented, which trains independent neural networks to estimate the effect of each covariate on the response variable, leading to a highly accurate and interpretable deep learning-based generalized additive neural network (GANN) model. The effectiveness of the method is demonstrated by detecting and explaining three different types of cyberattacks in an industrial network, achieving high detection rates while providing interpretable results.
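The additive structure described above can be sketched in code. The exact training procedure of the presented GANN is not given in the abstract, so the sketch below is an assumption: each covariate gets its own small one-hidden-layer network, and the networks are fitted one at a time against partial residuals in a backfitting loop, a classical way to train GAM components independently. All names (`FeatureNet`, `fit_gann`) and hyperparameters are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

class FeatureNet:
    """One-hidden-layer network modelling the effect of a single covariate."""
    def __init__(self, hidden=16, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, size=(hidden, 1))
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        # x has shape (n, 1); returns the estimated effect f_j(x_j), shape (n,)
        self.h = np.tanh(x @ self.w1 + self.b1)
        return (self.h @ self.w2).ravel() + self.b2

    def step(self, x, residual):
        # One gradient-descent step on squared error against the partial residual
        err = self.forward(x) - residual
        grad_out = err[:, None] / len(err)            # dL/dprediction
        gw2 = self.h.T @ grad_out
        gb2 = grad_out.sum()
        gh = (grad_out @ self.w2.T) * (1.0 - self.h ** 2)  # backprop through tanh
        gw1 = x.T @ gh
        gb1 = gh.sum(axis=0)
        self.w2 -= self.lr * gw2; self.b2 -= self.lr * gb2
        self.w1 -= self.lr * gw1; self.b1 -= self.lr * gb1

def fit_gann(X, y, rounds=150, inner_steps=10, hidden=16):
    """Backfitting: each feature network is trained independently on the
    residual left over by the intercept and all other feature networks."""
    n, p = X.shape
    nets = [FeatureNet(hidden=hidden, seed=j) for j in range(p)]
    beta0 = y.mean()                                  # intercept
    for _ in range(rounds):
        for j in range(p):
            others = sum(nets[k].forward(X[:, [k]]) for k in range(p) if k != j)
            residual = y - beta0 - others
            for _ in range(inner_steps):
                nets[j].step(X[:, [j]], residual)

    def predict(X_new):
        return beta0 + sum(nets[j].forward(X_new[:, [j]]) for j in range(p))
    return predict, nets
```

Because the final prediction is an intercept plus a sum of per-covariate effects, each `nets[j].forward` can be plotted against its covariate to show exactly how that feature drives the output, which is the source of the model's interpretability.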