B1172
Title: Interpretability, regularization and uncertainty quantification in Bayesian causal inference
Authors: Alberto Caron - The Alan Turing Institute (United Kingdom) [presenting]
Ioanna Manolopoulou - University College London (United Kingdom)
Gianluca Baio - University College London (United Kingdom)
Abstract: The problem of interpretability, uncertainty quantification and regularization in individual treatment effect (ITE) estimation under observed confounding is addressed via non-parametric regression adjustment. High-dimensional observational data are abundant in many applied disciplines where real-world exploration of policies is costly, and such data can be leveraged to estimate ITEs for highly personalized decision-making. Black-box statistical learning models adjusted for the causal setting generally perform well in the task of ITE estimation. However, they often lack three components that are relevant when designing personalized policies: i) Interpretability: they do not produce any interpretable measure of importance indicating which covariates are the main moderators of heterogeneity in the response to treatment; ii) Targeted Regularization: they are unable to apply carefully tailored shrinkage directly to the quantity of interest (Conditional Average Treatment Effects) and often end up introducing unintended bias in the estimates; iii) Uncertainty Quantification: for reasons similar to point ii), they also fail to produce appropriate uncertainty intervals directly around the point estimates. A novel Bayesian non-parametric regression method, Shrinkage Bayesian Causal Forests (SH-BCF), is presented that tackles these three issues by exploiting an equivalent parametrization of the outcome surface. Performance is illustrated in simulation studies and in a real-world example.
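A minimal sketch of the kind of outcome-surface parametrization the abstract alludes to, following the standard Bayesian Causal Forest decomposition (the notation below is illustrative and not necessarily the authors' own):
\[
\mathbb{E}[Y_i \mid \mathbf{x}_i, Z_i = z_i] \;=\; \mu(\mathbf{x}_i) \;+\; \tau(\mathbf{x}_i)\, z_i,
\qquad
\tau(\mathbf{x}) \;=\; \mathbb{E}[Y \mid \mathbf{x}, Z = 1] - \mathbb{E}[Y \mid \mathbf{x}, Z = 0],
\]
where \(\mu(\cdot)\) is the prognostic surface and \(\tau(\cdot)\) is the CATE. Assigning each component its own (shrinkage) tree-ensemble prior lets regularization, variable-importance measures and posterior credible intervals act directly on \(\tau\) rather than on the composite outcome function.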