A1446
Title: Fast, efficient, and automatic tuning parameter selection for LASSO
Authors: Sumanta Basu - Cornell University (United States) [presenting]
Abstract: Tuning parameter selection for penalized regression methods such as the LASSO is an important issue in practice, albeit less explored in the statistical methodology literature. The most common choices are cross-validation (CV), which is computationally expensive, and information criteria such as AIC or BIC, which are known to perform poorly in high-dimensional settings. Guided by the asymptotic theory of the LASSO, which connects the choice of tuning parameter to estimation of the error standard deviation, autotune is proposed: a procedure that alternately maximizes a (restricted) penalized log-likelihood over the regression coefficients and the nuisance parameter, resulting in an automatic tuning algorithm. The core insight behind autotune is that, under exact or approximate sparsity conditions, estimating the scalar nuisance parameter is often statistically and computationally easier than estimating the high-dimensional regression parameter, leading to a gain in efficiency. Using simulated and real data sets, it is shown that autotune is faster than existing tuning strategies for the LASSO, as well as alternatives such as the scaled LASSO, and delivers superior estimation, variable selection, and prediction performance. The algorithm extends naturally to high-dimensional time series problems, which is illustrated in the context of estimating large vector autoregression (VAR) models.
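The alternating scheme described in the abstract can be sketched as follows. This is a minimal illustration in the spirit of scaled-LASSO-type alternation, not the authors' autotune implementation: it assumes the theory-guided penalty lambda = sigma * sqrt(2 log p / n) and re-estimates sigma from the residuals; the function names and defaults are hypothetical.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """LASSO via cyclic coordinate descent on
    (1/(2n)) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n  # per-column scale X_j'X_j / n
    r = y.copy()                       # running residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            # partial correlation with coordinate j added back in
            rho = X[:, j] @ r / n + col_ss[j] * b[j]
            # soft-threshold update for coordinate j
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
            r += X[:, j] * (b[j] - b_new)
            b[j] = b_new
    return b

def autotune_sketch(X, y, n_iter=15):
    """Alternate between (i) fitting the LASSO at a penalty tied to the
    current noise-level estimate and (ii) re-estimating sigma from the
    residuals. A sketch of the alternating idea, not the paper's code."""
    n, p = X.shape
    sigma = np.std(y)  # crude initial noise-level estimate
    for _ in range(n_iter):
        lam = sigma * np.sqrt(2.0 * np.log(p) / n)  # theory-guided penalty
        b = lasso_cd(X, y, lam)
        resid = y - X @ b
        sigma = np.sqrt(resid @ resid / n)
    return b, sigma

# Toy sparse example: 3 active coefficients out of 50.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)
coef, sigma_hat = autotune_sketch(X, y)
```

Because the scalar sigma is cheap to update, each outer iteration costs little more than a single LASSO fit, which is the efficiency gain the abstract alludes to.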