EcoSta 2023 - Submission A1023
Title: Learning from similar linear representations: Adaptivity, minimaxity, and robustness
Authors: Yang Feng - NYU (United States) [presenting]
Abstract: Representation multi-task learning (MTL) and transfer learning (TL) have achieved tremendous success in practice, but the theoretical understanding of these methods is still lacking. Most existing theoretical work focuses on cases where all tasks share the same representation and claims that MTL and TL almost always improve performance. However, assuming that all tasks share exactly the same representation becomes unrealistic as the number of tasks grows. It also does not always match empirical findings, which suggest that a shared representation does not necessarily improve performance over single-task or target-only learning. The aim is to understand how to learn from tasks with similar but not identical linear representations while dealing with outlier tasks. Two algorithms are proposed that are adaptive to the similarity structure and robust to outlier tasks in both the MTL and TL settings. The algorithms outperform single-task or target-only learning when representations across tasks are sufficiently similar and the fraction of outlier tasks is small. Furthermore, they always perform no worse than single-task or target-only learning, even when the representations are dissimilar. Information-theoretic lower bounds are provided to show that the algorithms are nearly minimax optimal in a large regime.
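To make the setting concrete, below is a minimal simulation sketch of the problem the abstract describes, not the authors' proposed algorithms: each non-outlier task's coefficient vector lies in a low-dimensional subspace whose basis is a small perturbation of a shared one, while outlier tasks follow no such structure. Single-task least squares appears only as the baseline that an adaptive and robust procedure should never fall below. All dimensions, the noise level, the perturbation scale h, and the outlier count are illustrative assumptions.

# Illustrative sketch (assumed setup, not the authors' method): T regression tasks
# whose coefficients lie near a shared r-dimensional subspace, plus outlier tasks.
import numpy as np

rng = np.random.default_rng(0)
p, r, T, n = 50, 3, 10, 100    # ambient dimension, representation dimension, tasks, samples per task
h = 0.1                        # how far each task's representation may drift from the shared one
n_outliers = 2                 # tasks whose coefficients ignore the shared structure

# Shared representation: an orthonormal basis of an r-dimensional subspace of R^p.
A_shared, _ = np.linalg.qr(rng.standard_normal((p, r)))

tasks = []
for t in range(T):
    if t < T - n_outliers:
        # "Similar" task: perturb the shared basis slightly, then re-orthonormalize.
        A_t, _ = np.linalg.qr(A_shared + h * rng.standard_normal((p, r)))
        theta_t = rng.standard_normal(r)
        beta_t = A_t @ theta_t            # coefficients lie near the shared subspace
    else:
        beta_t = rng.standard_normal(p)   # outlier task: arbitrary coefficients
    X = rng.standard_normal((n, p))
    y = X @ beta_t + 0.5 * rng.standard_normal(n)
    tasks.append((X, y, beta_t))

# Baseline any adaptive method should not fall below: per-task least squares.
for X, y, beta_t in tasks:
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"single-task estimation error: {np.linalg.norm(beta_hat - beta_t):.3f}")

Under this generative model, pooling information across the similar tasks can improve on the per-task baseline, while the outlier tasks are exactly the contamination a robust procedure must guard against.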