Paper quickie -- N-BEATS: Neural Basis Expansion Analysis For Interpretable Time Series Forecasting
Get the paper here: https://arxiv.org/abs/1905.10437
Authors: Boris Oreshkin, Dmitri Carpov, Nicolas Chapados and Yoshua Bengio
Summary
The paper outlines an approach to univariate time series forecasting built on decomposing the series over a set of basis functions, and provides a method to compute the expansion coefficients using deep learning. The basis functions can either be fully learned themselves, or chosen by hand to make the decomposition interpretable. The novelty of the paper is its model architecture: a deep sequence of fully connected blocks, each of which outputs both a forecast and a backcast (a reconstruction of the block's input, i.e. the lookback window, which is the most recent suffix of the series). The residual between the input and the backcast is fed into the next block, whose responsibility becomes approximating that residual, and so on recursively. Putting a bunch of configurations in an ensemble predictor yields state-of-the-art results on one of the most interesting time series prediction problems, the M4 dataset.
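To make the backcast/forecast mechanics concrete, here is a minimal PyTorch sketch of the doubly residual idea as I read it. The class names, layer sizes, and the fully learned linear basis are my own illustrative choices, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One fully connected block: maps the lookback window to expansion
    coefficients theta, then projects them through a (here fully learned)
    basis to produce both a backcast and a forecast."""
    def __init__(self, backcast_len, forecast_len, hidden=256, n_coeffs=32):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(backcast_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.theta_b = nn.Linear(hidden, n_coeffs)  # backcast coefficients
        self.theta_f = nn.Linear(hidden, n_coeffs)  # forecast coefficients
        # Fully learned basis: a linear map from coefficients to time domain.
        self.basis_b = nn.Linear(n_coeffs, backcast_len, bias=False)
        self.basis_f = nn.Linear(n_coeffs, forecast_len, bias=False)

    def forward(self, x):
        h = self.fc(x)
        return self.basis_b(self.theta_b(h)), self.basis_f(self.theta_f(h))

class NBeats(nn.Module):
    """Doubly residual stack: each block backcasts its input, the residual
    is passed on to the next block, and all forecasts are summed."""
    def __init__(self, backcast_len, forecast_len, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            Block(backcast_len, forecast_len) for _ in range(n_blocks))

    def forward(self, x):
        forecast = 0.0
        for block in self.blocks:
            backcast, f = block(x)
            x = x - backcast          # next block approximates this residual
            forecast = forecast + f
        return forecast

model = NBeats(backcast_len=20, forecast_len=5)
y_hat = model(torch.randn(8, 20))  # batch of 8 windows -> (8, 5) forecasts
```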
My thoughts
Given that I deal in anomaly detection on multivariate time series, I am super interested in this approach of basis decomposition. The paper shows an ensemble variant that mixes learned-basis and hand-chosen-basis models, thereby reaching SOTA on two of the three datasets the authors validated on. I wonder what performance you would get if you provided some N_1 basis functions by hand and had the model learn N_2 more (see the sketch below). Also, if one were to generalize this work to multivariate time series, I wonder how one could embed the correlation structure between the variates at each time step into the model. One idea would be to choose "vertically" correlated basis functions, and learn basis functions for the phenomena that differ between the series.
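To sketch the hybrid idea above: a basis layer whose first N_1 rows are fixed by hand (here a polynomial trend basis, in the spirit of the paper's interpretable trend block) and whose remaining N_2 rows are learned. This is my speculation, not something evaluated in the paper; all names are hypothetical:

```python
import torch
import torch.nn as nn

class HybridBasis(nn.Module):
    """Basis matrix whose first n_fixed rows are hand-chosen (a polynomial
    trend basis) and whose remaining n_learned rows are free parameters
    learned from data."""
    def __init__(self, length, n_fixed=4, n_learned=8):
        super().__init__()
        t = torch.linspace(0, 1, length)
        fixed = torch.stack([t ** p for p in range(n_fixed)])  # (n_fixed, length)
        self.register_buffer("fixed", fixed)                   # frozen, not trained
        self.learned = nn.Parameter(0.01 * torch.randn(n_learned, length))

    def forward(self, theta):
        # theta: (batch, n_fixed + n_learned) expansion coefficients
        basis = torch.cat([self.fixed, self.learned], dim=0)
        return theta @ basis  # (batch, length)

basis = HybridBasis(length=20, n_fixed=4, n_learned=8)
out = basis(torch.randn(8, 12))  # 8 coefficient vectors -> (8, 20) signals
```

Such a layer could replace the purely learned basis in a block, letting the fixed rows capture known structure (trend, seasonality) while the learned rows mop up whatever remains.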