This is an official implementation of PatchTST-Fusion: Additive Attention-Based Cross-Channel Fusion for High-Frequency Rolling Volatility Forecasting.
🌟 Patching: segmentation of time series into subseries-level patches, which are used as input tokens for Transformer forecasting.
🌟 Cross-channel fusion: dynamic interaction across multiple feature channels to overcome the limitation of channel independence.
🌟 Additive attention: adaptive weighting of channel-wise predictive signals for informative feature fusion.
🌟 Residual refinement: residual correction of the target-channel prediction for stable and accurate forecasting.
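The four components above can be illustrated with a minimal NumPy sketch. This is not the repo's actual PyTorch implementation; the function names, shapes, and parameters (`make_patches`, `additive_attention_fusion`, `W`, `v`) are illustrative assumptions showing how patching, additive-attention channel weighting, and residual refinement fit together:

```python
import numpy as np

def make_patches(x, patch_len=16, stride=8):
    """Segment a univariate series of length L into overlapping patches.
    Each patch becomes one input token; returns (num_patches, patch_len)."""
    num_patches = (len(x) - patch_len) // stride + 1
    return np.stack([x[i * stride : i * stride + patch_len]
                     for i in range(num_patches)])

def additive_attention_fusion(channel_preds, W, v, target_idx=0):
    """Fuse per-channel forecasts with additive (Bahdanau-style) attention,
    then residually refine the target channel's prediction.
    channel_preds: (C, H) forecasts, W: (d, H) and v: (d,) are learned."""
    scores = v @ np.tanh(W @ channel_preds.T)    # (C,) one score per channel
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over channels
    fused = weights @ channel_preds              # (H,) attention-weighted sum
    return channel_preds[target_idx] + fused     # residual refinement
```

In the real model the patches feed a Transformer backbone and `W`, `v` are trained end to end; the sketch only shows the data flow.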
This paper selects two representative baseline models for comparative experiments: (1) GARCH(1,1) and (2) LSTM.
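For reference, the GARCH(1,1) baseline's conditional variance follows the recursion sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}. A minimal NumPy sketch (not the repo's baseline code; in practice omega, alpha, beta are fitted by maximum likelihood, e.g. with the `arch` package):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model.
    Initialized at the sample variance, a common convention."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```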
Compared with representative baselines, PatchTST-Fusion achieves the best overall statistical performance, demonstrating stronger predictive accuracy and robustness in high-frequency realized volatility forecasting.
PatchTST-Fusion also shows superior economic performance, indicating that its forecasts are effective in practical finance-oriented evaluation settings.
To verify the effectiveness of PatchTST-Fusion, we perform ablation studies on both model design and feature selection, along with an attribution analysis of the learned attention weights.
We remove key modules to assess their contributions.
We retrain the model using only the top ten features selected by additive attention scores. The strong performance of this reduced feature set validates the effectiveness of the learned feature selection mechanism.
We analyze the attention weights to investigate the model’s feature preferences and assess whether they are aligned with economic theory.
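The feature-selection and attribution steps above amount to ranking channels by their average attention weight. A hedged NumPy sketch of that analysis (the function name and input layout are assumptions, not the repo's API):

```python
import numpy as np

def top_k_features(attn_weights, feature_names, k=10):
    """Rank features by mean additive-attention weight over the
    evaluation set and return the top-k names.
    attn_weights: (num_samples, C) softmax weights per sample."""
    mean_w = attn_weights.mean(axis=0)        # (C,) average over samples
    order = np.argsort(mean_w)[::-1][:k]      # indices of largest weights
    return [feature_names[i] for i in order]
```

The returned subset is what the reduced-feature retraining would use, and the ranked weights themselves are the object of the economic-interpretation analysis.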
- Install requirements: `pip install -r requirements.txt`
- Download data. You can download the dataset from the original source and place it under the `./data` directory before training or evaluation.
- All training-related scripts are provided under `./scripts/PatchTST-Fusion/`. Before running a script, please make sure that:
  - the task setting (`M`, `S`, `MS`, or `MD`) is correctly specified;
  - the input feature dimension is equal to the `ENC_IN` param;
  - the first 4 columns are non-numeric features;
  - the `DATA_PATH` param is correctly configured.

You can launch training with `sh ./scripts/PatchTST-Fusion/run_PatchTST-Fusion.sh`. Once training is done, open `./result.txt` to see the results.
The data that support the findings of this study were obtained from the RESSET Database (www.resset.com). The authors gratefully acknowledge the RESSET Database for data support.
We gratefully acknowledge the following GitHub repositories for their valuable code bases:
https://github.com/yuqinie98/PatchTST
https://github.com/ts-kim/RevIN
https://github.com/thuml/Time-Series-Library
https://github.com/Thinklab-SJTU/Crossformer
https://github.com/vivva/DLinear
https://github.com/thuml/iTransformer
If you have any questions or concerns, feel free to contact us at isyaozong.zhang@gmail.com or submit an issue.