A0372
Title: On convergence of AdaGrad for stochastic optimization under relaxed assumptions
Authors: Junhong Lin - Zhejiang University (China) [presenting]
Abstract: The convergence of AdaGrad with momentum (covering AdaGrad as a special case) is revisited on non-convex smooth optimization problems. A general noise model is considered, in which the noise magnitude is controlled by the function value gap together with the gradient magnitude. This model encompasses a broad range of noises, including bounded noise, sub-Gaussian noise, affine variance noise and the expected smoothness condition, and it has been shown to be more realistic in many practical applications. The analysis yields a probabilistic convergence rate which, under the general noise, could reach $\tilde{\mathcal{O}}(1/\sqrt{T})$, where $T$ denotes the total number of iterations. This rate does not rely on prior knowledge of problem parameters and could accelerate to $\tilde{\mathcal{O}}(1/T)$ when the noise parameters related to the function value gap and the noise level are sufficiently small. The convergence rate thus matches the lower bound for stochastic first-order methods over non-convex smooth landscapes up to logarithmic terms. A convergence bound is further derived for AdaGrad with momentum under generalized smoothness, where the local smoothness is controlled by a first-order function of the gradient norm.
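For illustration, one common way to write the AdaGrad-with-momentum recursion discussed above is sketched below; the normalization, momentum placement and parameter names are assumptions for exposition and may differ from the paper's exact formulation:
\[
m_t = \beta\, m_{t-1} + (1-\beta)\, g_t, \qquad
v_t = v_{t-1} + g_t \odot g_t, \qquad
x_{t+1} = x_t - \frac{\eta}{\sqrt{v_t} + \epsilon} \odot m_t,
\]
where $g_t$ is the stochastic gradient at $x_t$, $\beta \in [0,1)$ is the momentum parameter, $\eta > 0$ the step size, $\epsilon > 0$ a small stabilizing constant, and all operations are coordinate-wise; setting $\beta = 0$ recovers plain AdaGrad. Similarly, an illustrative instance of the noise model described in the abstract (not necessarily the paper's exact condition) bounds the stochastic gradient noise as $\mathbb{E}\,\|g_t - \nabla f(x_t)\|^2 \le A\,\bigl(f(x_t) - f^{\ast}\bigr) + B\,\|\nabla f(x_t)\|^2 + C$, which recovers bounded noise when $A = B = 0$ and affine variance noise when $A = 0$.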