A1037
Title: Transfer learning for high-dimensional reduced rank time series models
Authors: Abolfazl Safikhani - George Mason University (United States) [presenting]
Abstract: The objective of transfer learning is to enhance estimation and inference on target data by leveraging knowledge gained from additional sources. Recent studies have explored transfer learning for independent observations in complex, high-dimensional models under sparsity assumptions, yet research on time series models remains limited. The focus is on transfer learning for sequences of observations with temporal dependence and a more intricate model parameter structure. Specifically, the vector autoregressive (VAR) model, a widely used model for time series data, is investigated in the setting where the transition matrix can be decomposed into the sum of a sparse matrix and a low-rank one. A new transfer learning algorithm, tailored to estimating high-dimensional VAR models with such low-rank plus sparse structure, is proposed. Additionally, a novel approach is presented for selecting informative observations from auxiliary datasets. Theoretical guarantees are established, encompassing model parameter consistency, informative set selection, and the asymptotic distribution of the estimators under mild conditions; the latter facilitates the construction of entry-wise confidence intervals for the model parameters. Finally, the empirical efficacy of the methodologies is demonstrated on both simulated and real-world datasets.
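To make the model class concrete, the following is a minimal sketch (not the authors' algorithm) of a VAR(1) process whose transition matrix is the sum of a sparse component and a low-rank one, fitted here by plain least squares as a baseline; all dimensions, ranks, and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, r = 20, 500, 2  # dimension, sample size, rank (illustrative choices)

# Sparse component S: a handful of nonzero entries
S = np.zeros((p, p))
idx = rng.choice(p * p, size=15, replace=False)
S.flat[idx] = rng.uniform(-0.3, 0.3, size=15)

# Low-rank component L = U V^T of rank r
U = rng.normal(0.0, 0.1, size=(p, r))
V = rng.normal(0.0, 0.1, size=(p, r))
L = U @ V.T

# Transition matrix A = S + L, rescaled so the VAR(1) is stable
A = S + L
rho = max(abs(np.linalg.eigvals(A)))
if rho >= 0.9:
    A *= 0.9 / rho

# Simulate x_t = A x_{t-1} + eps_t
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.normal(0.0, 0.1, size=p)

# Unstructured OLS estimate of A; the proposed transfer learning method
# would instead exploit the sparse + low-rank structure and auxiliary series
Y, Z = X[1:], X[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
print("rank(L) =", np.linalg.matrix_rank(L),
      " estimation error =", round(np.linalg.norm(A - A_hat), 4))
```

The OLS fit serves only as a point of comparison: in high dimensions (large p relative to T) it degrades, which is precisely where structured estimators and informative auxiliary datasets help.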