CMStatistics 2022
B0451
Title: A universal trade-off between the model size, test loss, and training loss of linear predictors
Authors: Nikhil Ghosh - University of California, Berkeley (United States) [presenting]
Mikhail Belkin - University of California San Diego (United States)
Abstract: The aim is to establish an algorithm- and distribution-independent, non-asymptotic trade-off between the model size, excess test loss, and training loss of linear predictors. Specifically, we show that models that perform well on the test data (have low excess loss) are either ``classical'', with training loss close to the noise level, or ``modern'', with a number of parameters much larger than the minimum needed to fit the training data exactly. We also provide a more precise asymptotic analysis when the limiting spectral distribution of the whitened features is Marchenko-Pastur. Remarkably, while the Marchenko-Pastur analysis is far more precise near the interpolation peak, where the number of parameters is just enough to fit the training data, in settings of most practical interest it differs from the distribution-independent bound by only a modest multiplicative constant.
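
The classical/modern dichotomy and the interpolation peak described above can be illustrated with a minimal numerical sketch (not taken from the paper): minimum-norm least-squares regression with isotropic Gaussian features, a planted linear signal, and label noise of level sigma. The sample size n, noise level, and parameter grid below are illustrative assumptions only; the sketch merely shows that small models can reach training loss near the noise level, that excess test loss spikes when the parameter count roughly equals n, and that heavily overparameterized interpolators can again have low excess loss.

    import numpy as np

    # Illustrative simulation (assumed setup, not the paper's bound):
    # n training samples, d parameters, isotropic Gaussian (whitened) features,
    # noisy labels from a planted linear signal w_star.
    rng = np.random.default_rng(0)
    n, sigma = 100, 0.5                      # assumed sample size and noise level
    d_grid = [20, 50, 90, 100, 110, 200, 400, 800]

    for d in d_grid:
        w_star = rng.normal(size=d) / np.sqrt(d)       # planted signal
        X = rng.normal(size=(n, d))                    # whitened Gaussian features
        y = X @ w_star + sigma * rng.normal(size=n)    # noisy labels
        w_hat = np.linalg.pinv(X) @ y                  # minimum-norm least-squares fit
        train_loss = np.mean((X @ w_hat - y) ** 2)
        # With isotropic features, the excess test loss of a linear predictor
        # equals the squared parameter error ||w_hat - w_star||^2.
        excess_test = np.sum((w_hat - w_star) ** 2)
        print(f"d={d:4d}  train_loss={train_loss:.3f}  excess_test_loss={excess_test:.3f}")

In this sketch, models with d well below n ("classical") have training loss near sigma^2, the excess test loss blows up near d = n (the interpolation peak), and interpolating models with d much larger than n ("modern") fit the training data exactly while keeping the excess loss moderate.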