EcoSta 2023, Submission A0404
Title: Rates of convergence in Bayesian meta-learning
Authors: Pierre Alquier - ESSEC Business School (Singapore) [presenting]
Badr-Eddine Chérief-Abdellatif - CNRS (France)
Charles Riou - University of Tokyo (Japan)
Abstract: The rate of convergence of Bayesian learning algorithms is determined by two conditions: the behaviour of the loss function around the optimal parameter (the Bernstein condition) and the probability mass assigned by the prior to neighbourhoods of the optimal parameter (the prior mass condition). In meta-learning, the learner faces multiple learning tasks that are independent but still expected to be related in some way; for example, the optimal parameters of all the tasks may be close to each other. It is then tempting to use past tasks to build a better prior, to be used to solve future tasks more efficiently. From a theoretical point of view, the hope is to improve the prior mass condition in future tasks and, thus, the rate of convergence. It is proved that this is indeed the case. Interestingly, it is also proved that the optimal prior can be learned at a fast rate of convergence, regardless of the rate of convergence within the tasks (in other words, the Bernstein condition is always satisfied for learning the prior, even when it is not satisfied within tasks).
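As a rough illustration of the two conditions, here is one standard formalisation in LaTeX (a sketch under assumed notation; the constants C and c, the exponent beta, the metric d and the rate epsilon_n are generic placeholders, not taken from the submission):

% Bernstein condition: the variance of the excess loss is controlled by
% its mean, with exponent beta in (0,1]; beta = 1 gives fast rates.
% (Assumed standard form, not quoted from the abstract.)
\[
  \mathbb{E}\big[(\ell_\theta - \ell_{\theta^*})^2\big]
    \;\le\; C \,\big(\mathbb{E}[\ell_\theta - \ell_{\theta^*}]\big)^{\beta},
  \qquad \beta \in (0,1].
\]
% Prior mass condition: the prior must put enough mass on a neighbourhood
% of the optimal parameter theta*; more mass permits a smaller rate epsilon_n.
\[
  \pi\big(\{\theta : d(\theta,\theta^*) \le \varepsilon_n\}\big)
    \;\ge\; e^{-c\, n\, \varepsilon_n^{2}}.
\]

Under this reading, meta-learning seeks a data-dependent prior that increases the left-hand side of the second display for related future tasks, allowing a smaller epsilon_n; the abstract's second result states that this prior-learning problem itself enjoys a fast rate, irrespective of the within-task beta.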