Effect of Sparse Representation of Time Series Data on Learning Rate of Time-Delay Neural Networks
Published in: Circuits, Systems, and Signal Processing, 2021-06, Vol. 40 (6), p. 3007-3032
Main Authors:
Format: Article
Language: English
Summary: In this paper, we examine how sparsifying the input to a time-delay neural network (TDNN) can significantly improve the learning time and accuracy of the TDNN on time series data. The input is sparsified through a sparse-transform input layer. Many applications that involve prediction or forecasting of the state of a dynamic system can be formulated as time series forecasting problems: the task is to forecast some state variable, represented as a time series, in applications such as weather forecasting, energy consumption prediction, or predicting the future state of a moving object. While there are many tools for time series forecasting, TDNNs have recently received more attention. We show that applying a sparsifying input transform layer to the TDNN considerably improves learning time and accuracy, and by analyzing the learning process we demonstrate the mathematical reasons for this improvement. Experiments with several datasets, drawn from national weather forecast records, vehicle speed time series, and synthetic data, illustrate the improvement and the reason behind it. Several sparse representations are evaluated, including principal component analysis (PCA), discrete cosine transform (DCT), and a mixture of DCT and Haar transforms; we observe that higher sparsity leads to better performance. The relative simplicity of TDNNs compared with deep networks, together with the use of sparse transforms for quicker learning, opens up possibilities for online learning on small embedded devices that do not have powerful computing capabilities.
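The core mechanism the abstract describes, a fixed sparsifying transform such as the DCT applied to each tapped-delay-line window ahead of the trainable layers, can be sketched roughly as follows. This is an illustration and not the authors' implementation: the synthetic series, the window length of 16, and the linear one-step readout standing in for the TDNN are all assumptions made for demonstration.

```python
# Minimal sketch (assumptions noted above): a DCT "sparse-transform input
# layer" compacts each delay-line window into a few large coefficients,
# so the predictor behind it sees far fewer effective inputs.
import numpy as np
from scipy.fft import dct

def make_windows(series, delay):
    """Tapped-delay-line windows (the TDNN input) and next-step targets."""
    X = np.stack([series[i:i + delay] for i in range(len(series) - delay)])
    return X, series[delay:]

# Hypothetical smooth test series; real inputs would come from datasets
# like those named in the abstract (weather, vehicle speed).
t = np.linspace(0, 8 * np.pi, 2000)
series = np.sin(t) + 0.5 * np.sin(3 * t)

X, y = make_windows(series, delay=16)
C = dct(X, axis=1, norm="ortho")  # the sparse-transform input layer

# Energy compaction: fraction of total signal energy captured by the
# k largest-magnitude DCT coefficients of each window.
k = 4
topk = np.sort(np.abs(C), axis=1)[:, -k:]
print("energy in top-4 DCT coefficients:", (topk ** 2).sum() / (C ** 2).sum())

# Keep only the k coefficient positions with the highest average energy;
# a linear one-step predictor on these 4 inputs (vs. 16 raw lags) shows
# how sparsity shrinks the effective input dimension.
keep = np.argsort(np.mean(C ** 2, axis=0))[-k:]
w, *_ = np.linalg.lstsq(C[:, keep], y, rcond=None)
print("one-step MSE with 4 DCT inputs:", np.mean((C[:, keep] @ w - y) ** 2))
```

Because the DCT is orthonormal, discarding near-zero coefficients loses little information for smooth signals; this energy compaction is the sparsity property the abstract credits for faster and more accurate learning.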
ISSN: 0278-081X, 1531-5878
DOI: 10.1007/s00034-020-01610-8