A0829
Title: Factor augmented tensor-on-tensor neural networks
Authors: Xiufan Yu - University of Notre Dame (United States) [presenting]
Abstract: The prediction task of tensor-on-tensor regression, in which both covariates and responses are multi-dimensional arrays (a.k.a. tensors), is studied across time with arbitrary tensor order and data dimension. Existing methods either focus on linear models without accounting for possibly nonlinear relationships between covariates and responses, or directly employ black-box deep learning algorithms that fail to utilize the inherent tensor structure. A factor-augmented tensor-on-tensor neural network (FATTNN) is proposed to integrate tensor factor models into deep neural networks. It begins by summarizing and extracting useful predictive information (represented by the "factor tensor") from the complex structured tensor covariates. It then performs the prediction task using the estimated factor tensor as input to a temporal convolutional neural network. The proposed method effectively handles nonlinear relationships between complex data structures and improves over both traditional statistical models and conventional deep learning approaches in prediction accuracy and computational cost. By leveraging tensor factor models, the proposed method exploits the underlying latent factor structure to enhance prediction and drastically reduces the data dimensionality, which speeds up computation. Numerical results show that the proposed method achieves substantial gains in prediction accuracy and significant reductions in computational time compared to benchmark methods.
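The two-stage pipeline described above can be sketched in minimal numpy code. This is an illustrative assumption, not the authors' implementation: the factor tensor is estimated here by an HOSVD-style projection (SVD of mode unfoldings), and the temporal convolutional network is replaced by a single causal 1-D convolution with a hypothetical fixed kernel, just to show the data flow from tensor covariates to a reduced factor tensor to a temporal prediction input.

```python
import numpy as np

def estimate_factor_tensor(X, ranks):
    """Stage 1 (illustrative): extract a low-dimensional factor tensor.

    X     : array of shape (T, d1, d2), a time series of matrix covariates.
    ranks : (r1, r2), the assumed factor ranks for the two non-time modes.

    Loadings are estimated by SVD of each mode unfolding, a simple proxy
    for the tensor factor model assumed in the abstract.
    """
    loadings = []
    for mode in range(1, X.ndim):  # skip the time mode
        unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        loadings.append(U[:, :ranks[mode - 1]])
    # Project each time slice onto the estimated loading spaces:
    # F[t] = loadings[0]' X[t] loadings[1], giving a (T, r1, r2) tensor.
    F = np.einsum('tij,ir,js->trs', X, loadings[0], loadings[1])
    return F, loadings

def causal_temporal_conv(F, kernel):
    """Stage 2 (illustrative): causal 1-D convolution over the time axis
    of the flattened factor tensor, standing in for a temporal CNN layer."""
    Fv = F.reshape(F.shape[0], -1)  # (T, r1*r2)
    out = np.zeros_like(Fv)
    for t in range(Fv.shape[0]):
        for lag, w in enumerate(kernel):
            if t - lag >= 0:  # causal: only past and current time steps
                out[t] += w * Fv[t - lag]
    return out

# Toy usage: 20 time steps of 8x6 matrix covariates, reduced to 3x2 factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8, 6))
F, loadings = estimate_factor_tensor(X, ranks=(3, 2))
H = causal_temporal_conv(F, kernel=[0.5, 0.3, 0.2])
print(F.shape, H.shape)  # (20, 3, 2) (20, 6)
```

The dimensionality reduction claimed in the abstract is visible here: the downstream temporal model sees only r1*r2 = 6 features per time step instead of d1*d2 = 48, which is what cuts computational cost.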