CMStatistics 2022
B1698
Title: Nonsmooth low-rank matrix recovery: Methodology, theory and algorithm
Authors: Peng Liu - University of Kent (United Kingdom) [presenting]
Abstract: Many interesting problems in statistics and machine learning can be written as $\min_x F(x) = f(x) + g(x)$, where $x$ is the model parameter, $f$ is the loss and $g$ is the regularizer. Examples include regularized regression for high-dimensional feature selection and low-rank matrix/tensor factorization. The loss function and/or the regularizer is sometimes nonsmooth by the nature of the problem; for example, $f(x)$ could be a quantile loss, chosen to induce robustness or to target parts of the distribution other than the mean. We propose a general framework for situations in which the loss or the regularizer is nonsmooth, using low-rank matrix recovery as a running example to demonstrate the main idea. The framework involves two main steps: an optimal smoothing of the loss function or regularizer, followed by a gradient-based algorithm applied to the smoothed objective. The proposed smoothing pipeline is highly flexible, computationally efficient, easy to implement and well-suited to problems with high-dimensional data. A strong theoretical convergence guarantee is also established. In the numerical studies, we use the $L_1$ loss to illustrate the practicality of the proposed pipeline; state-of-the-art algorithms such as Adam, NAG and YellowFin all show promising results on the smoothed loss.
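To make the two-step pipeline concrete, below is a minimal sketch, not the authors' implementation, of smoothing-then-optimizing for low-rank matrix completion under the $L_1$ loss. Since the abstract does not specify the smoother, the sketch assumes the Moreau envelope of the absolute value, $f_\mu(r) = \min_u \{|u| + (r-u)^2/(2\mu)\}$, which equals the Huber function ($r^2/(2\mu)$ for $|r| \le \mu$, and $|r| - \mu/2$ otherwise); it also assumes a factorized model $X = UV^\top$. The optimizer is Adam, one of the algorithms named above, and all names (huber, adam_update, recover, mu) are illustrative.

# Minimal sketch (assumptions flagged; not the authors' code) of the pipeline
# on low-rank matrix completion with an L1 loss:
#   1) smooth the nonsmooth loss -- here via the Moreau envelope of |r|,
#      i.e. the Huber function (an assumed, standard smoothing);
#   2) run a gradient-based algorithm -- here Adam, one of the optimizers
#      named in the abstract, on a factorized model X = U V^T.
import numpy as np

def huber(r, mu):
    """Moreau envelope of |r| with smoothing parameter mu (the smoothed
    objective; only its gradient is used below)."""
    a = np.abs(r)
    return np.where(a <= mu, r * r / (2 * mu), a - mu / 2)

def huber_grad(r, mu):
    """Derivative of the Huber function: r/mu inside [-mu, mu], sign(r) outside."""
    return np.clip(r / mu, -1.0, 1.0)

def adam_update(p, g, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step on parameter array p with gradient g."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    p = p - lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return p, (m, v, t)

def recover(Y, mask, rank, mu=0.1, iters=2000, seed=0):
    """Minimize the mean Huber-smoothed L1 loss over the observed entries of Y."""
    rng = np.random.default_rng(seed)
    (nr, nc), nobs = Y.shape, mask.sum()
    U = 0.1 * rng.standard_normal((nr, rank))
    V = 0.1 * rng.standard_normal((nc, rank))
    sU = (np.zeros_like(U), np.zeros_like(U), 0)
    sV = (np.zeros_like(V), np.zeros_like(V), 0)
    for _ in range(iters):
        R = (U @ V.T - Y) * mask              # residuals; zero off the mask
        G = huber_grad(R, mu) / nobs          # gradient of smoothed loss in X
        gU, gV = G @ V, G.T @ U               # chain rule through X = U V^T
        U, sU = adam_update(U, gU, sU)
        V, sV = adam_update(V, gV, sV)
    return U @ V.T

# Usage: rank-2 truth, half the entries observed, a few gross outliers
# (the setting where the L1 loss buys robustness over squared error).
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random(A.shape) < 0.5).astype(float)
Y = A * mask
Y[(rng.random(A.shape) < 0.02) & (mask > 0)] += 10.0
X_hat = recover(Y, mask, rank=2)
print("relative error:", np.linalg.norm(X_hat - A) / np.linalg.norm(A))

The smoothing parameter $\mu$ trades off fidelity and conditioning: a smaller $\mu$ keeps the smoothed loss closer to the original $L_1$ loss but makes its gradient change more abruptly near zero, so in practice $\mu$ is tuned or decreased gradually.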