CFE 2015: Start Registration
A1215
Title: Block hyper-$g$ priors in Bayesian regression
Authors: Christopher Hans - The Ohio State University (United States) [presenting]
Abstract: Thick-tailed mixtures of $g$ priors have gained traction as a default choice of prior distribution in Bayesian regression. The motivation for these priors usually focuses on model comparison and variable selection properties as well as computational considerations. Standard mixtures of $g$ priors mix over a single, common scale parameter that shrinks all regression coefficients in the same manner, and the particular form of the mixture distribution determines the model comparison properties. We focus on the effect of the mono-shrinkage induced by the use of a single scale parameter and propose new mixtures of $g$ priors that allow for differential shrinkage across collections of coefficients. We introduce a new ``conditional information asymptotic'' that is motivated by the common data analysis setting where at least one regression coefficient is much larger than the others. We analyze existing mixtures of $g$ priors under this limit and reveal two new behaviors, ``Essentially Least Squares (ELS)'' estimation and a ``Conditional Lindley's Paradox (CLP)'', and argue that these behaviors are undesirable. As the driver behind both of these behaviors is the use of a single, latent scale parameter that is common to all coefficients, we propose a block hyper-$g$ prior that allows for differential shrinkage across collections of covariates and provide conditions under which ELS and the CLP are avoided by the new class of priors.
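For context, a minimal sketch of the setup the abstract refers to: the standard Zellner $g$ prior shrinks all coefficients through one common scale $g$, the hyper-$g$ prior mixes over that scale with a thick-tailed density, and a block version would assign separate scales to blocks of coefficients. The block form shown is an illustrative assumption of the general structure, not the paper's exact specification, and the partition $X = (X_1, \dots, X_K)$ and notation $g_k$, $p_k$ are introduced here for illustration.

```latex
% Standard Zellner g prior: a single scale g shrinks all p coefficients together
\beta \mid g, \sigma^2 \sim \mathrm{N}_p\!\left(0,\; g\,\sigma^2 (X^\top X)^{-1}\right)

% Hyper-g mixture: a thick-tailed prior on the common scale g
p(g) = \frac{a-2}{2}\,(1+g)^{-a/2}, \qquad g > 0,\; a > 2

% Block version (illustrative sketch): partition the coefficients into K blocks,
% each with its own latent scale g_k, so shrinkage can differ across blocks
\beta_k \mid g_k, \sigma^2 \sim \mathrm{N}_{p_k}\!\left(0,\; g_k\,\sigma^2 (X_k^\top X_k)^{-1}\right),
\qquad g_k \stackrel{\text{ind}}{\sim} p(g_k), \quad k = 1, \dots, K
```

Under the single-$g$ prior, one latent scale must accommodate both very large and modest coefficients, which is what drives the ELS and CLP behaviors described above; separate block scales let each collection of coefficients be shrunk on its own terms.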