CMStatistics 2023
B0988
Title: Beyond Neyman-Pearson: Setting alpha after the fact
Authors: Peter Grünwald - CWI and Leiden University (Netherlands) [presenting]
Abstract: A standard practice in statistical hypothesis testing is to mention the p-value alongside the accept/reject decision. It is shown that there is a major advantage to mentioning an e-value instead. With p-values, an extreme observation (e.g. p << alpha) cannot be used to reach better frequentist decisions. With e-values it can, since they provide Type-I risk control in a generalized Neyman-Pearson setting in which the decision task (a general loss function) is determined post-hoc, after observation of the data. This provides a handle on the age-old "roving alpha" problem in statistics: robust "Type-I risk bounds" are obtained which hold independently of any preset alpha or loss function. The reasoning can be extended to confidence intervals. E-values were originally introduced because of their ability to deal with optional continuation, i.e. gathering additional data whenever one sees fit. Their ability to deal with post-hoc decision tasks provides a second, independent argument for embracing them.
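A minimal simulation sketch of the core property the abstract relies on (this example is not from the abstract itself): an e-value is a nonnegative statistic E with expectation at most 1 under the null, so by Markov's inequality P(E >= 1/alpha) <= alpha for any alpha, including an alpha chosen only after seeing the data. The Gaussian alternative mean `MU_ALT` and sample sizes below are illustrative choices, not values from the talk.

```python
# Sketch: Type-I control of an e-value-based test at a post-hoc alpha.
# Under H0: X ~ N(0,1), the likelihood ratio N(mu,1)/N(0,1) has
# expectation 1, so it is a valid e-value; rejecting when E >= 1/alpha
# keeps the Type-I error at most alpha for ANY alpha (Markov's inequality).
import math
import random

random.seed(0)

MU_ALT = 0.5    # assumed alternative mean (illustrative)
N_OBS = 10      # observations per experiment
N_SIM = 20000   # Monte Carlo repetitions under the null

def e_value(xs, mu=MU_ALT):
    """Likelihood ratio of N(mu,1) vs N(0,1): a valid e-value under H0."""
    loglr = sum(mu * x - mu * mu / 2 for x in xs)
    return math.exp(loglr)

# Simulate under the null, then check the rejection rate for several
# alphas -- as if each alpha were picked only after the data came in.
es = [e_value([random.gauss(0, 1) for _ in range(N_OBS)])
      for _ in range(N_SIM)]

for alpha in (0.1, 0.05, 0.01):
    rate = sum(e >= 1 / alpha for e in es) / N_SIM
    print(f"alpha={alpha}: rejection rate under H0 = {rate:.4f} (<= {alpha})")
```

A p-value-based test has no analogous guarantee: the threshold alpha must be fixed before the data are seen, whereas the Markov bound above holds simultaneously for every alpha.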