CMStatistics 2021
B0153
Title: Most powerful inference after model selection via confidence distributions (virtual)
Authors: Gerda Claeskens - KU Leuven (Belgium) [presenting]
Abstract: Sometimes the model used for a statistical analysis is not specified before the analysis but is the result of a model selection method applied to the same data. The uncertainty about the selected model then has consequences for hypothesis testing and for the construction of confidence intervals for the model parameters of interest. Ignoring this uncertainty leads to over-optimistic results: computed $p$-values are too small and confidence intervals are too narrow for their intended coverage. Confidence distributions and confidence curves need to be adjusted to account for the selection of the model in order to provide valid post-selection inference for the parameters of interest. Under some assumptions, uniformly most powerful post-selection confidence curves are obtained that are finite-sample exact.
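The undercoverage described above can be seen in a minimal simulation. This sketch is not the authors' confidence-distribution method; it illustrates a simple "winner's curse" version of the problem, assuming a single normal observation $X \sim N(\mu, 1)$ that is only reported when it is significant at the 5% level. The true effect `mu = 1.0` is a hypothetical choice for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' method): conditional on the
# selection event |X| > 1.96 ("only significant results are reported"),
# the naive 95% interval X +/- 1.96 undercovers the true effect mu.
rng = np.random.default_rng(0)
mu = 1.0  # hypothetical true effect size, chosen for illustration
x = rng.normal(mu, 1.0, size=200_000)

selected = np.abs(x) > 1.96      # selection: report only significant effects
covered = np.abs(x - mu) < 1.96  # naive 95% CI contains the truth

naive_coverage = covered[selected].mean()
print(f"conditional coverage of naive 95% CI: {naive_coverage:.3f}")
```

The conditional coverage comes out well below the nominal 0.95 (around 0.84 in this configuration), which is the over-optimism the abstract refers to; valid post-selection intervals must widen to restore the intended coverage.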