EcoSta 2025 Submission A0606
Title: When do graph neural networks fail: Exact generalization error from a signal processing perspective
Authors: Nil Ayday - Technical University of Munich (Germany)
Mahalakshmi Sabanayagam - Technical University of Munich (Germany)
Debarghya Ghoshdastidar - Technical University of Munich (Germany) [presenting]
Abstract: Graph neural networks (GNNs) are widely used for learning on graph-structured data, yet a principled understanding of why they succeed or fail remains elusive. While prior work has examined architectural limitations, such as over-smoothing, these do not explain what enables GNNs to extract meaningful representations or why performance varies between architectures. This gap in understanding relates to the role of generalization: the ability of a model to make accurate predictions on unseen data. Although several works have derived generalization error bounds for GNNs, these are typically loose, architecture-specific, and offer limited insight into what governs generalization in practice. A fundamentally different approach is taken here by deriving the exact generalization error for GNNs in a semi-supervised learning setting through the lens of signal processing. From this viewpoint, GNNs can be interpreted as graph filter operators that act on node features via the graph structure. Consequently, the first exact generalization error is derived for a broad range of GNNs. It is shown that only the information aligned between node features and the graph structure contributes to generalization, and the effect of homophily on generalization is quantified in a principled way across GNNs. The resulting framework explains when and why GNNs can effectively leverage structural and feature information, offering practical guidance for model selection and data analysis.
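To make the graph-filter viewpoint concrete, the following is a minimal illustrative sketch, not the authors' code: it implements the standard GCN-style filter D^{-1/2}(A + I)D^{-1/2} applied to node features, plus a simple edge-homophily score. The random graph, label vector, and function names (gcn_filter, gcn_layer, edge_homophily) are hypothetical choices for illustration under these assumptions.

import numpy as np

def gcn_filter(A):
    # Symmetrically normalized adjacency with self-loops:
    # S = D^{-1/2} (A + I) D^{-1/2}, the fixed graph filter of a GCN layer.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A, X, W):
    # One GCN layer: filter each feature column (a graph signal) with S,
    # then mix channels with W and apply a ReLU nonlinearity.
    S = gcn_filter(A)
    return np.maximum(S @ X @ W, 0.0)

def edge_homophily(A, y):
    # Fraction of edges joining same-label nodes (edge homophily).
    src, dst = np.nonzero(np.triu(A, k=1))
    return float(np.mean(y[src] == y[dst]))

# Toy example: two planted communities with label-aligned features.
rng = np.random.default_rng(0)
n, n_feat = 40, 8
y = np.repeat([0, 1], n // 2)
P = np.where(y[:, None] == y[None, :], 0.3, 0.05)  # homophilous block model
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, k=1); A = A + A.T                   # symmetric, no self-loops
X = rng.normal(size=(n, n_feat)) + y[:, None]      # features aligned with labels
W = rng.normal(size=(n_feat, 4))

H = gcn_layer(A, X, W)
print("edge homophily:", edge_homophily(A, y))
print("filtered representation shape:", H.shape)

In this toy setup, high edge homophily means the filtering step S @ X averages features over mostly same-label neighborhoods, which is one concrete instance of the alignment between graph structure and node features that the abstract identifies as the driver of generalization.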