CMStatistics 2022
Submission A0192
Title: Analytic natural gradient updates for Cholesky factor in Gaussian variational approximation
Authors: Siew Li Linda Tan - National University of Singapore (Singapore) [presenting]
Abstract: Stochastic gradient methods have enabled variational inference for high-dimensional models and large datasets. However, the steepest ascent direction in the parameter space of a statistical model is given by the natural gradient, which premultiplies the widely used Euclidean gradient by the inverse of the Fisher information matrix. Natural gradients can improve convergence, but inverting the Fisher information matrix is daunting in high dimensions. In Gaussian variational approximation, natural gradient updates of the mean and precision matrix of the Gaussian distribution can be derived analytically, but they do not ensure that the precision matrix remains positive definite. To tackle this issue, we consider the Cholesky decomposition of the covariance or precision matrix and derive analytic natural gradient updates of the Cholesky factor, which depend only on the first derivative of the log posterior density. Efficient natural gradient updates of the Cholesky factor are also derived under sparsity constraints representing different posterior correlation structures. As Adam's adaptive learning rate does not seem to pair well with natural gradients, we propose stochastic normalized natural gradient ascent with momentum instead. The efficiency of the proposed methods is demonstrated using generalized linear mixed models.
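In the notation of the abstract, the natural gradient premultiplies the Euclidean gradient of the variational objective $\mathcal{L}$ by the inverse Fisher information of the variational distribution $q_\lambda$:

$$\tilde{\nabla}_\lambda \mathcal{L} = \mathcal{F}(\lambda)^{-1} \nabla_\lambda \mathcal{L}, \qquad \mathcal{F}(\lambda) = \mathrm{E}_{q_\lambda}\!\left\{ \nabla_\lambda \log q_\lambda(\theta)\, \nabla_\lambda \log q_\lambda(\theta)^\top \right\}.$$

The sketch below is a minimal illustration, not the paper's derivation: it runs stochastic normalized gradient ascent with momentum for a Gaussian approximation N(mu, C C^T), parameterized by the lower-triangular Cholesky factor C of the covariance matrix, with reparameterization-trick gradients that depend only on the first derivative of the log posterior. The toy standard-Gaussian target, step size, and momentum constant are assumptions made for the example; the paper's analytic Cholesky-factor natural gradient formulas are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
d = 5

def grad_log_post(theta):
    # Assumed toy target: standard Gaussian posterior, so the first
    # derivative of the log posterior density is simply -theta.
    return -theta

# Variational parameters: mean mu and lower-triangular Cholesky factor C
# of the covariance matrix, so theta ~ N(mu, C C^T) under q.
mu = np.zeros(d)
C = np.eye(d)

rho, beta = 0.05, 0.9          # step size and momentum constant (assumed)
m_mu = np.zeros(d)             # momentum buffers
m_C = np.zeros((d, d))

for t in range(2000):
    z = rng.standard_normal(d)
    theta = mu + C @ z          # reparameterization trick
    g = grad_log_post(theta)    # only the first derivative is needed

    # Stochastic ELBO gradients: the entropy of N(mu, C C^T) contributes
    # 1/diag(C) to the diagonal of the gradient with respect to C.
    g_mu = g
    g_C = np.tril(np.outer(g, z)) + np.diag(1.0 / np.diag(C))

    # Momentum, then a normalized step: the update direction is rescaled
    # to length rho, echoing normalized ascent with momentum.
    m_mu = beta * m_mu + (1 - beta) * g_mu
    m_C = beta * m_C + (1 - beta) * g_C
    norm = np.sqrt(np.sum(m_mu**2) + np.sum(m_C**2))
    mu += rho * m_mu / norm
    C += rho * m_C / norm

print("mu:", np.round(mu, 2))          # should approach 0
print("diag(C):", np.round(np.diag(C), 2))  # should approach 1

At the optimum of this toy problem (mu = 0, C = I), the stochastic gradients have mean zero, so the momentum-averaged updates hover near the solution; the normalization fixes the step length to rho regardless of the raw gradient magnitude, which is the appeal of normalized ascent when (natural) gradient scales vary widely across iterations.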