CFE-CMStatistics 2019
Title: A framework for online meta-learning
Authors: Massimiliano Pontil - Istituto Italiano di Tecnologia and University College London (Italy) [presenting]
Abstract: The focus is on the problem in which a series of learning tasks are observed sequentially and the goal is to incrementally adapt a learning algorithm in order to improve its performance on future tasks. We consider both stochastic and adversarial settings. The algorithm may be parametrized either by a representation matrix applied to the raw inputs or by a bias vector. We develop a computationally efficient meta-algorithm to incrementally adapt the learning algorithm after a task dataset is observed. The meta-algorithm performs online convex optimization on a proxy objective of the risk of the learning algorithm. We derive bounds on the performance of the meta-algorithm, measured either by the average risk of the learning algorithm on random tasks from the environment or by an average regret bound. Our analysis leverages ideas from multitask learning and learning-to-learn with tools from online learning and stochastic optimization. Lastly, we discuss extensions of the framework to nonlinear models such as deep neural networks and draw links between meta-learning, bilevel optimization and gradient-based hyperparameter optimization.
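To make the bias-vector instance of this setting concrete, below is a minimal illustrative sketch, not the authors' actual method: tasks arrive sequentially, the within-task learner is ridge regression regularized toward a meta-parameter h (the bias vector), and h is updated by online gradient descent on the proxy objective (the optimal within-task regularized risk, whose gradient in h is available in closed form by Danskin's theorem). The synthetic task environment, step sizes, and regularization strength are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tasks, n_points = 5, 200, 20
lam, eta = 1.0, 0.1  # assumed regularization and meta step size

# Assumed synthetic environment: tasks cluster around a common vector w_env,
# so a good bias vector h should converge toward w_env.
w_env = rng.normal(size=d)

def biased_ridge(X, y, h, lam):
    """Within-task learner: argmin_w ||Xw - y||^2 / n + lam * ||w - h||^2."""
    n = len(y)
    A = X.T @ X / n + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y / n + lam * h)

h = np.zeros(d)          # meta-parameter (bias vector), updated online
errs = []                # estimation error of the within-task solution
for t in range(n_tasks):
    # Sample a task near the environment mean and a small task dataset.
    w_task = w_env + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n_points, d))
    y = X @ w_task + 0.01 * rng.normal(size=n_points)

    # Solve the biased within-task problem after the dataset is observed.
    w_hat = biased_ridge(X, y, h, lam)
    errs.append(np.linalg.norm(w_hat - w_task))

    # Online gradient step on the proxy objective: by Danskin's theorem the
    # gradient in h of min_w [risk + lam*||w - h||^2] is 2*lam*(h - w_hat).
    h = h - eta * 2 * lam * (h - w_hat)
```

In this toy run the average estimation error over the last tasks drops well below the error on the first tasks, since h drifts toward the environment's common vector and the biased regularizer then transfers that information to each new task.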