B1586
Title: Efficiency and robustness of Rosenbaum's regression (un)adjusted rank-based estimator in randomized experiments
Authors: Aditya Ghosh - Stanford University (United States) [presenting]
Nabarun Deb - University of British Columbia, Vancouver (Canada)
Bikram Karmakar - University of Florida (United States)
Bodhisattva Sen - Columbia University (United States)
Abstract: Mean-based estimators of the causal effect in a completely randomized experiment (e.g., the difference-in-means estimator) may behave poorly if the potential outcomes have a heavy tail or contain outliers. An alternative estimator due to Rosenbaum is studied, which estimates the constant additive treatment effect by inverting a randomization test based on ranks. By investigating its breakdown point and asymptotic relative efficiency, it is shown that this rank-based estimator is provably robust against heavy-tailed potential outcomes and has an asymptotic variance that is, in the worst case, at most about 1.16 times that of the difference-in-means estimator, and much smaller when the potential outcomes are not light-tailed. A consistent estimator of the asymptotic standard error of Rosenbaum's estimator is also derived, yielding a readily computable confidence interval for the treatment effect. Moreover, a regression-adjusted version of Rosenbaum's estimator is studied to incorporate additional covariate information in randomization inference. The gain in efficiency from this regression adjustment is proved under a linear regression model. It is illustrated through synthetic and real-world data that, unlike the mean-based estimators, these rank-based estimators (whether regression-adjusted or not) are efficient and robust against heavy-tailed distributions, contamination, and model misspecification.
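
Note (illustrative, not part of the submission): the test-inversion idea can be sketched in a few lines of Python. The functions rank_based_effect_estimate and rank_inversion_ci below are hypothetical names; the first computes a Hodges-Lehmann-type point estimate of a constant additive shift, and the second inverts a two-sided rank-sum test over a grid of candidate shifts to get a confidence interval. This is only a generic sketch under those assumptions: the paper derives a closed-form consistent standard error rather than a grid search, and Rosenbaum's randomization test need not coincide exactly with scipy's large-sample Mann-Whitney test.

    import numpy as np
    from scipy.stats import mannwhitneyu

    def rank_based_effect_estimate(y_treated, y_control):
        # Hodges-Lehmann-type point estimate of a constant additive effect:
        # the median of all pairwise treated-minus-control differences,
        # i.e. the shift that best reconciles the two samples under ranks.
        diffs = np.subtract.outer(y_treated, y_control).ravel()
        return np.median(diffs)

    def rank_inversion_ci(y_treated, y_control, alpha=0.05, n_grid=2001):
        # Confidence interval by test inversion: keep every candidate shift
        # tau for which a two-sided rank-sum test does not reject after tau
        # is subtracted from the treated outcomes.  (Grid-search sketch only;
        # the paper uses an asymptotic standard error instead.)
        lo = np.min(y_treated) - np.max(y_control)
        hi = np.max(y_treated) - np.min(y_control)
        kept = [tau for tau in np.linspace(lo, hi, n_grid)
                if mannwhitneyu(y_treated - tau, y_control,
                                alternative="two-sided").pvalue > alpha]
        return (min(kept), max(kept)) if kept else (np.nan, np.nan)

    # Example with heavy-tailed (t_2) outcomes and a true additive effect of 1.
    rng = np.random.default_rng(0)
    y_control = rng.standard_t(df=2, size=100)
    y_treated = rng.standard_t(df=2, size=100) + 1.0
    print(rank_based_effect_estimate(y_treated, y_control))
    print(rank_inversion_ci(y_treated, y_control))

Because both quantities depend on the data only through ranks, replacing a few outcomes by arbitrarily large outliers moves the point estimate and the interval only slightly, in contrast to the difference-in-means estimator.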