CMStatistics 2021
A1692
Title: Generalized linear models with structured sparsity estimators
Authors: Mehmet Caner - North Carolina State University (United States) [presenting]
Abstract: Structured sparsity estimators in Generalized Linear Models are introduced. Structured sparsity estimators with the least-squares loss have been introduced previously, but those proofs relied exclusively on a fixed design and normal errors. We extend the results to debiased structured sparsity estimators with a Generalized Linear Model-based loss under a random design and non-sub-Gaussian data. Structured sparsity estimation means penalizing the loss function with a norm that encodes a possible sparsity structure; such norms include the lasso, the weighted group lasso, and norms generated from convex cones. The contributions are fivefold. 1. We generalize the existing oracle inequality results for penalized Generalized Linear Models by proving the underlying conditions rather than assuming them; a key step is the proof of a sample one-point margin condition and its use in an oracle inequality. 2. The results cover non-sub-Gaussian errors and random regressors. 3. We provide a feasible weighted nodewise regression proof that generalizes the results in the literature from the simple l1 norm to norms generated from convex cones. 4. We observe that the norms used in the feasible nodewise regression proofs should be weaker than or equal to the norms in the penalized Generalized Linear Model loss. 5. We debias the first-step estimator by obtaining an approximate inverse of the singular sample second-order partial derivative matrix of the Generalized Linear Model loss.
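
To fix ideas, a minimal sketch of the type of estimator the abstract describes is given below. The notation (GLM loss rho, tuning parameter lambda, norm Omega, groups G_g, weights w_g) is illustrative and not taken from the paper.

% Penalized GLM estimator with a structured sparsity norm (illustrative notation):
% rho is the GLM loss (e.g. the negative log-likelihood), Omega a norm encoding
% the sparsity structure, and lambda > 0 the penalty level.
\[
  \hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^{p}}
    \frac{1}{n}\sum_{i=1}^{n} \rho\!\left(y_i, x_i^{\top}\beta\right)
    + \lambda\, \Omega(\beta),
\]
% Examples of structured sparsity norms mentioned in the abstract:
\[
  \Omega(\beta) = \|\beta\|_{1} \ \text{(lasso)}, \qquad
  \Omega(\beta) = \sum_{g=1}^{G} w_g \,\|\beta_{G_g}\|_{2} \ \text{(weighted group lasso)}.
\]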
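
Contribution 5 can likewise be read against the usual one-step debiasing template, sketched below under assumed notation: hat-L denotes the sample GLM loss and hat-Theta an approximate inverse of its (possibly singular in high dimensions) sample Hessian, constructed by feasible weighted nodewise regression. This is a generic sketch, not the paper's exact statement.

% One-step debiased estimator (illustrative notation):
% \hat{\Theta} approximates the inverse of the sample Hessian
% \hat{\Sigma} = \nabla^{2}\hat{L}(\hat{\beta}), which may be singular when p > n.
\[
  \hat{b} = \hat{\beta} - \hat{\Theta}\,\nabla \hat{L}(\hat{\beta}),
  \qquad
  \hat{L}(\beta) = \frac{1}{n}\sum_{i=1}^{n}\rho\!\left(y_i, x_i^{\top}\beta\right).
\]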