A0954
Title: Learning sparse mixture-of-experts generalized linear models in ultrahigh dimensions
Authors: Pengqi Liu - McGill University (Canada) [presenting]
Abbas Khalili - McGill University (Canada)
Abstract: Mixture-of-experts generalized linear models (MoE-GLM) are used for analyzing data that arise from populations with unobserved heterogeneity. In recent applications of MoE-GLM, data are often collected on a large number of features. However, fitting an MoE-GLM to such high-dimensional data is numerically challenging. To cope with high dimensionality in estimation, it is often assumed that the model is sparse and only a handful of features are relevant to the analysis. Most of the existing development on sparse estimation is in the context of homogeneous regression or supervised learning problems. The focus is on some of the challenges, as well as recent computational and theoretical developments, for sparse estimation in MoE-GLM when the number of features can grow at an exponential rate in the sample size (the ultrahigh-dimensional setting). The asymptotic properties of the proposed methodology are presented, and its finite-sample performance is illustrated through simulations and a real data analysis.
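For concreteness, a minimal sketch of the model and a generic sparsity-penalized criterion is given below; the abstract does not state the specific penalty or formulation used by the authors, so the notation here is illustrative. An MoE-GLM models the conditional density of a response $y$ given covariates $x \in \mathbb{R}^p$ as a covariate-dependent mixture of GLM experts with softmax gating,
\[
f(y \mid x) \;=\; \sum_{k=1}^{K} \pi_k(x;\alpha)\, f_k\!\big(y;\, x^\top \beta_k\big),
\qquad
\pi_k(x;\alpha) \;=\; \frac{\exp(x^\top \alpha_k)}{\sum_{l=1}^{K} \exp(x^\top \alpha_l)},
\]
where each $f_k$ is a GLM density (e.g., Gaussian, binomial, or Poisson) with linear predictor $x^\top \beta_k$. A generic sparse estimator of the gating parameters $\alpha_k$ and expert parameters $\beta_k$ maximizes a penalized log-likelihood of the form
\[
\frac{1}{n}\sum_{i=1}^{n} \log f(y_i \mid x_i)
\;-\; \sum_{k=1}^{K} \Big\{ p_{\lambda_1}(\alpha_k) + p_{\lambda_2}(\beta_k) \Big\},
\]
where $p_\lambda(\cdot)$ denotes a sparsity-inducing penalty (for instance the lasso, applied componentwise; the actual penalty is an assumption here), and the number of features $p$ may be of exponential order in the sample size $n$.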