B0947
Title: Deep and interpretable probabilistic forecasts
Authors: Philipp Baumann - ETH Zurich (Switzerland) [presenting]
Abstract: Quantifying uncertainty plays a crucial role in many high-stakes decision-making processes. The advent of deep learning has led to a proliferation of novel probabilistic forecasting tools. However, the increasing complexity of deep learning models has compromised their interpretability. Moreover, the question of whether deep learning is truly beneficial for probabilistic forecasting remains unanswered. In light of these developments, a deep-learning extension of autoregressive transformation models (a semi-parametric probabilistic forecasting method) is proposed. The approach aims to improve predictive performance while enhancing interpretability, as measured by a newly developed interpretability score. To achieve this, the new model class is embedded in a multi-objective optimization framework, and the optimization problem is tackled using a modified version of NSGA-II, an evolutionary algorithm. The effectiveness of the approach is demonstrated by applying it to widely used time series benchmark datasets.
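
To make the multi-objective setup concrete, the following is a minimal sketch (not the authors' implementation) of how a performance-versus-interpretability trade-off could be searched with an off-the-shelf NSGA-II, assuming the pymoo library is available. Both objective functions and the hyperparameter encoding are hypothetical placeholders standing in for the model's predictive loss and the abstract's (unspecified) interpretability score; the abstract itself uses a modified NSGA-II rather than the stock algorithm shown here.

    # Hypothetical sketch: Pareto search over a performance/interpretability
    # trade-off with stock NSGA-II (pymoo). Placeholder objectives only.
    import numpy as np
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    class ForecastTradeoff(ElementwiseProblem):
        """Toy two-objective problem over a vector of model hyperparameters."""

        def __init__(self):
            # Five bounded continuous "hyperparameters" (placeholder choice).
            super().__init__(n_var=5, n_obj=2, xl=0.0, xu=1.0)

        def _evaluate(self, x, out, *args, **kwargs):
            # Objective 1: stand-in for predictive loss (e.g. negative log-likelihood).
            predictive_loss = np.sum((x - 0.7) ** 2)
            # Objective 2: stand-in for an interpretability penalty
            # (here: larger deep components -> less interpretable).
            interpretability_penalty = np.sum(np.abs(x))
            out["F"] = [predictive_loss, interpretability_penalty]

    if __name__ == "__main__":
        result = minimize(
            ForecastTradeoff(),
            NSGA2(pop_size=40),   # unmodified NSGA-II from pymoo
            ("n_gen", 50),
            seed=1,
            verbose=False,
        )
        # result.F holds the Pareto front: each row is one non-dominated
        # (predictive loss, interpretability penalty) trade-off.
        print(result.F)

The design point this illustrates is that the two objectives are never collapsed into a single weighted loss; the evolutionary search returns a Pareto front from which a forecaster can pick an acceptable interpretability/accuracy compromise.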