Unveiling the Truth: The Controversy Surrounding Deep Learning in Time Series Forecasting

Introduction:


Time series forecasting has long been of interest to businesses and researchers alike. With the rise of deep learning across many domains, it is natural to ask whether it can improve time series forecasting as well. However, recent discussions among practitioners have cast doubt on the efficacy of specialized "time series" deep learning models. In this article, we examine the criticisms raised by experts and take stock of the current state of time series forecasting.

Limited Advantages of Deep Learning Models: The author, who has extensive experience in time series forecasting, argues that dedicated "time series" architectures offer no significant advantage over generic deep learning models. State-of-the-art time series models such as N-BEATS and N-HiTS have, in their testing, been outperformed by plain Multilayer Perceptron (MLP) models that use lagged values as features. This challenges the claim that specialized architectures have a unique ability to capture time-oriented patterns.
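As a concrete illustration of that baseline, the sketch below trains an MLP on lagged values with scikit-learn. The synthetic series, the lag count of 48, and the network size are illustrative assumptions, not details from the discussion.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_matrix(series, n_lags):
    """Turn a 1-D series into (X, y) pairs: X holds the n_lags most
    recent values and y is the observation that follows them."""
    X = np.stack([series[t - n_lags:t] for t in range(n_lags, len(series))])
    y = series[n_lags:]
    return X, y

# Illustrative data: a noisy series with a period-24 seasonal pattern.
rng = np.random.default_rng(0)
series = np.sin(np.arange(500) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(500)

X, y = make_lagged_matrix(series, n_lags=48)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(X[:-48], y[:-48])        # hold out the last 48 points for evaluation
preds = mlp.predict(X[-48:])     # one-step-ahead predictions on the holdout
```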

The Role of LightGBM and Other Traditional Models: The author points out that on mid-dimensional data, gradient-boosted tree models such as LightGBM and XGBoost outperform deep learning models while requiring far less fine-tuning and computation time. On low-dimensional data, (V)ARIMA, ETS, and factor models continue to dominate because their structure encodes human intuition about how such series behave. These observations underline the importance of matching the forecasting model to the nature and dimensionality of the data.
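For comparison, the same lagged design matrix can be fed to LightGBM with almost no tuning, which is the low-effort strength the author attributes to gradient-boosted trees. This is a minimal sketch; the hyperparameters are illustrative defaults, not recommendations from the source.

```python
import numpy as np
import lightgbm as lgb

# Same lagged-feature setup as in the MLP sketch above.
rng = np.random.default_rng(0)
series = np.sin(np.arange(500) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(500)

n_lags = 48
X = np.stack([series[t - n_lags:t] for t in range(n_lags, len(series))])
y = series[n_lags:]

gbm = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05, random_state=0)
gbm.fit(X[:-48], y[:-48])        # same holdout split as the MLP example
preds = gbm.predict(X[-48:])
```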

Challenges with Deep Learning Approaches: A key weakness of deep learning models for time series forecasting is their limited grasp of the underlying structure of the data. Unlike language models, which generalize remarkably well, models trained solely on time-related data lack the broad understanding needed for accurate forecasting. The author asserts that training a separate model for each step ahead can compensate for this limitation, since each model targets its own horizon directly and long-term predictions do not accumulate one-step errors.
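The strategy the author describes is commonly called direct multi-step forecasting: one model per horizon step, each predicting from the same final lag window. A minimal sketch, with the lag count, horizon, and model choice as illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_direct_models(series, n_lags, horizon):
    """Fit one regressor per step ahead (the direct strategy):
    model h predicts the value h steps after the lag window."""
    models = []
    for h in range(1, horizon + 1):
        X = np.stack([series[t - n_lags:t]
                      for t in range(n_lags, len(series) - h + 1)])
        y = series[n_lags + h - 1:]          # target h steps ahead
        m = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=h)
        m.fit(X, y)
        models.append(m)
    return models

def forecast(models, series, n_lags):
    """Each model predicts its own horizon from the same final window,
    so errors do not compound as in recursive one-step forecasting."""
    window = np.asarray(series[-n_lags:]).reshape(1, -1)
    return np.array([m.predict(window)[0] for m in models])

rng = np.random.default_rng(0)
series = np.sin(np.arange(500) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(500)
models = fit_direct_models(series, n_lags=48, horizon=12)
path = forecast(models, series, n_lags=48)   # 12-step-ahead forecast
```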

Comparing Lagged Features with Sequence Lengths in Transformers: A natural question is whether lagged features in an MLP can really compete with the long sequence lengths available to attention-based Transformer models. The author suggests that while Transformers perform remarkably well on language and vision tasks, they have yet to extract genuinely novel intermediate representations from time series data. Forecasting requires reasoning over several time scales simultaneously and capturing repeating patterns, which remains a challenge for existing models.
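One crude way to give a lag-based model some visibility across time scales is to include lags at several seasonal offsets, e.g. recent hours plus the same hour one day and one week earlier. This is an illustrative workaround, not a technique proposed in the discussion:

```python
import numpy as np

def multiscale_lag_matrix(series, lags):
    """Build features from lags at several time scales: for hourly data,
    lags (1, 2, 3) are recent history, while 24 and 168 capture daily
    and weekly repetition."""
    max_lag = max(lags)
    X = np.stack([[series[t - l] for l in lags]
                  for t in range(max_lag, len(series))])
    y = series[max_lag:]
    return X, y

rng = np.random.default_rng(0)
hourly = np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(2000)
X, y = multiscale_lag_matrix(hourly, lags=(1, 2, 3, 24, 25, 168))
```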

The Limitations of Published Techniques: The author is also skeptical of published time series forecasting techniques more broadly. They argue that genuinely groundbreaking approaches are more likely to be kept secret than published openly, given the substantial monetary rewards attached to accurate forecasting. Papers claiming superior forecasting performance therefore deserve careful examination and scrutiny.

Conclusion: The field of time series forecasting continues to evolve, with deep learning models struggling to demonstrate significant advantages over traditional methods. Challenges in understanding the fundamental structure of time series data and the limitations of generalization have hindered the success of specialized time series deep learning models. Practitioners are encouraged to consider the dimensionality and nature of their data, explore alternative models like LightGBM, and approach published techniques with a critical eye.
