CMStatistics 2023
B0289
Title: Towards inferential reproducibility of machine learning research
Authors: Stefan Riezler - Heidelberg University (Germany) [presenting]
Abstract: The reliability of machine learning evaluation - the consistency of observed evaluation scores across replicated model training runs - is affected by several sources of nondeterminism that can be regarded as measurement noise. Current tendencies to remove noise in order to enforce the reproducibility of research results neglect inherent nondeterminism at the implementation level and disregard crucial interaction effects between algorithmic noise factors and data properties. This limits the scope of conclusions that can be drawn from such experiments. Instead of removing noise, it is proposed to incorporate several sources of variance, including their interaction with data properties, into an analysis of the significance and reliability of machine learning evaluation, with the aim of drawing inferences beyond particular instances of trained models. It is shown how to use linear mixed-effects models (LMEMs) to analyze performance evaluation scores and to conduct statistical inference with a generalized likelihood ratio test (GLRT). This allows incorporating arbitrary sources of noise, such as meta-parameter variations, into statistical significance testing, and assessing performance differences conditional on data properties. Furthermore, a variance component analysis (VCA) enables analyzing the contribution of noise sources to overall variance and computing a reliability coefficient as the ratio of substantial to total variance.
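
As a rough illustration of the LMEM/GLRT recipe sketched in the abstract, the snippet below fits two nested mixed-effects models with statsmodels and compares them with a likelihood-ratio test. The data layout (a hypothetical eval_scores.csv with columns score, system, seed, and a per-item data property length) and the choice of training seed as the single random factor are assumptions made for this sketch, not details from the abstract.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical evaluation data: one row per (system, training seed, test item),
# with a per-item data property "length"; file and column names are assumptions.
df = pd.read_csv("eval_scores.csv")

# Full LMEM: fixed effects for the system and its interaction with the data
# property, random intercepts over training seeds (one source of algorithmic noise).
full = smf.mixedlm("score ~ system * length", df, groups=df["seed"]).fit(reml=False)

# Null LMEM: same random structure, but without any system effect.
null = smf.mixedlm("score ~ length", df, groups=df["seed"]).fit(reml=False)

# Generalized likelihood ratio test; models are fit by ML (reml=False)
# because the fixed effects differ between the two models.
lr = 2.0 * (full.llf - null.llf)
df_diff = len(full.fe_params) - len(null.fe_params)
p_value = stats.chi2.sf(lr, df_diff)
print(f"GLRT statistic = {lr:.3f}, df = {df_diff}, p = {p_value:.4g}")
```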
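Continuing the same sketch, a variance component analysis could read off the fitted variance components and form a reliability coefficient as the ratio of substantial to total variance. Treating the spread of the fixed-effects predictions as the substantial part is an illustrative assumption, not the abstract's definition.

```python
import numpy as np

# Variance components from the fitted full model (continuing the sketch above).
var_seed = float(np.asarray(full.cov_re)[0, 0])  # variance across training seeds
var_resid = full.scale                           # residual (unexplained) variance

# Substantial variance (assumed here): spread of the fixed-effects predictions
# (system and data-property effects), from the design matrix and coefficients.
var_fixed = float(np.var(full.model.exog @ np.asarray(full.fe_params)))

# Reliability coefficient: substantial variance over total variance.
reliability = var_fixed / (var_fixed + var_seed + var_resid)
print(f"seed variance = {var_seed:.4f}, residual variance = {var_resid:.4f}")
print(f"reliability = {reliability:.3f}")
```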