B0851
Title: Analyzing randomized experiments subject to outcome misclassification via integer programming
Authors: Siyu Heng - New York University (United States) [presenting]
Pamela Shaw - Kaiser Permanente Washington Health Research Institute (United States)
Abstract: Results from randomized experiments (trials) can be severely distorted by outcome misclassification, such as that arising from measurement error or reporting bias in binary outcomes. All existing approaches to outcome misclassification rely on some data-generating (super-population) model and, therefore, may not be applicable to randomized experiments without additional assumptions. We propose a model-free and finite-population-exact framework for randomized experiments subject to outcome misclassification. A central quantity in our framework is the "warning accuracy," defined as the threshold such that the causal conclusion drawn from the measured outcomes may differ from that drawn from the true outcomes if the outcome measurement accuracy does not surpass that threshold. We show how learning the warning accuracy and related concepts can benefit a randomized experiment subject to outcome misclassification. We show that the warning accuracy can be computed efficiently, even for large datasets, by adaptively reformulating an integer program with respect to the randomization design. Our framework covers both Fisher's sharp null and Neyman's weak null, works for a wide range of randomization designs, and can also be applied to observational studies that adopt randomization-based inference. We apply our framework to a large randomized clinical trial for the prevention of prostate cancer.
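
To make the warning-accuracy idea concrete, the following sketch illustrates it on a toy completely randomized design with a binary outcome. It uses Fisher's exact test as a stand-in decision rule and a brute-force search over misclassification patterns in place of the adaptively reformulated integer program described in the abstract; the function names and data are hypothetical, and the code is only a conceptual illustration under these assumptions, not the authors' implementation.

import itertools
import numpy as np
from scipy.stats import fisher_exact

def conclusion(y, z, alpha=0.05):
    # Significance of Fisher's exact test on the 2x2 treatment-by-outcome table
    # (a stand-in for whichever randomization test the analysis actually uses).
    table = [[int(np.sum((z == 1) & (y == 1))), int(np.sum((z == 1) & (y == 0)))],
             [int(np.sum((z == 0) & (y == 1))), int(np.sum((z == 0) & (y == 0)))]]
    _, p = fisher_exact(table)
    return p < alpha

def warning_accuracy_bruteforce(y_measured, z, alpha=0.05):
    # Find the smallest number k of outcome misclassifications that could
    # overturn the conclusion reached with the measured outcomes; the warning
    # accuracy is then 1 - k/n. Exponential-time search: toy sample sizes only.
    n = len(y_measured)
    base = conclusion(y_measured, z, alpha)
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            y_candidate = y_measured.copy()
            y_candidate[list(idx)] = 1 - y_candidate[list(idx)]  # flip k outcomes
            if conclusion(y_candidate, z, alpha) != base:
                return 1.0 - k / n
    return 0.0  # no misclassification pattern changes the conclusion

# Toy usage: 20 units, half treated, hypothetical measured binary outcomes.
rng = np.random.default_rng(0)
z = np.repeat([1, 0], 10)
y_measured = np.concatenate([rng.binomial(1, 0.9, 10), rng.binomial(1, 0.2, 10)])
print(warning_accuracy_bruteforce(y_measured, z))

The brute-force search is exponential and is only meant to convey the definition: if the outcome measurement accuracy is known to exceed the returned threshold, the conclusion from the measured outcomes agrees with the one the true outcomes would give. The integer-programming reformulation described in the abstract is what makes this computation tractable for large datasets.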