Found 1 paper-sharing post.
1.
张浩彬
(2022-08-11 12:06):
#paper 10.1137/1.9781611976700.60
Attention-Based Autoregression for Accurate and Efficient Multivariate Time Series Forecasting
AttnAR:提出一个新模型,结合了注意力机制将变量的相关转化为时不变注意力图。并且由于其当中使用了共线参数,比一般的深度神经网络时序模型的参数量降低到了1%左右,并且对模型有较好的解释性。总结来看,在1块1080ti跑完了所有模型,确实很有亲切感。
The structure in detail (a code sketch follows this list):
(1) A deep convolutional layer and a shallow fully connected layer extract a pattern from each series (the weights appear to be shared across series).
(2) An attention mechanism generates attention maps from those series patterns (the patterns can be fed in directly, or first passed through an embedding).
Finally, each series pattern u_i is concatenated with the attention-aggregated v_i and passed through a fully connected layer to produce the final output.
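Below is a minimal PyTorch sketch of the structure described above. Module names, layer counts, and dimensions (MixedExtractor, pattern_dim, embed_dim, the one-step output head) are illustrative assumptions, not the authors' implementation; the paper's attention maps are learned to be time-invariant, while this sketch simply computes them from the extracted patterns as in step (2).

```python
# Illustrative sketch of an AttnAR-style model, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedExtractor(nn.Module):
    """Deep 1-D convolutions plus a shallow dense layer, shared across all variables."""

    def __init__(self, window: int, pattern_dim: int, channels: int = 16):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.dense = nn.Linear(channels * window, pattern_dim)  # shallow dense head

    def forward(self, x):  # x: (batch, n_vars, window)
        b, n, w = x.shape
        h = self.convs(x.reshape(b * n, 1, w))  # same weights applied to every variable
        return self.dense(h.reshape(b * n, -1)).reshape(b, n, -1)  # (batch, n_vars, pattern_dim)


class AttnAR(nn.Module):
    """Patterns u_i -> attention map over variables -> concat(u_i, v_i) -> dense output."""

    def __init__(self, window: int, n_vars: int, pattern_dim: int = 32, embed_dim: int = 32):
        super().__init__()
        self.extractor = MixedExtractor(window, pattern_dim)
        self.query = nn.Linear(pattern_dim, embed_dim)  # optional embedding before attention
        self.key = nn.Linear(pattern_dim, embed_dim)
        self.out = nn.Linear(2 * pattern_dim, 1)  # one-step forecast per variable (assumption)

    def forward(self, x):  # x: (batch, n_vars, window)
        u = self.extractor(x)  # variable-wise patterns u_i
        q, k = self.query(u), self.key(u)
        attn = F.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)  # (batch, n_vars, n_vars)
        v = attn @ u  # attention-aggregated patterns v_i
        return self.out(torch.cat([u, v], dim=-1)).squeeze(-1)  # (batch, n_vars)


if __name__ == "__main__":
    model = AttnAR(window=24, n_vars=8)
    y = model(torch.randn(4, 8, 24))  # 4 samples, 8 variables, 24 time steps
    print(y.shape)  # torch.Size([4, 8])
```

In this sketch the parameter savings come from the extractor being shared across variables, so its size does not grow with the number of series, which matches the "shared parameters" point above.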
Abstract:
Given a multivariate time series, how can we forecast all of its variables efficiently and accurately? The multivariate forecasting, which is to predict the future observations of a multivariate time series, is a fundamental problem closely related to many real-world applications. However, previous multivariate models suffer from large model sizes due to the inefficiency of capturing complex intra-variable patterns and inter-variable correlations, resulting in poor accuracy. In this work, we propose AttnAR (attention-based autoregression), a novel approach for general multivariate forecasting which maximizes its model efficiency via separable structure. AttnAR first extracts variable-wise patterns by a mixed convolution extractor that efficiently combines deep convolution layers and shallow dense layers. Then, AttnAR aggregates the patterns by learning time-invariant attention maps between the target variables. AttnAR accomplishes the state-of-the-art forecasting accuracy in four datasets with up to 117.3 times fewer parameters than the best competitors.