Effect of Sparse Representation of Time Series Data on Learning Rate of Time-Delay Neural Networks
Published in: Circuits, Systems, and Signal Processing, 2021-06, Vol. 40 (6), p. 3007-3032
Main Authors: Kalantari Khandani, Masoumeh; Mikhael, Wasfy B.
Format: Article
Language: English
container_end_page | 3032 |
container_issue | 6 |
container_start_page | 3007 |
container_title | Circuits, systems, and signal processing |
container_volume | 40 |
creator | Kalantari Khandani, Masoumeh; Mikhael, Wasfy B.
description | In this paper, we examine how sparsifying the input to a time-delay neural network (TDNN) can significantly improve the network's learning time and accuracy on time series data. The input is sparsified through a sparse transform input layer. Many applications that involve predicting the state of a dynamic system can be formulated as time series forecasting problems: the task is to forecast some state variable, represented as a time series, as in weather forecasting, energy consumption prediction, or predicting the future state of a moving object. While there are many tools for time series forecasting, TDNNs have recently received more attention. We show that applying a sparsifying input transform layer to the TDNN considerably improves learning time and accuracy, and by analyzing the learning process we demonstrate the mathematical reasons for this improvement. Experiments with several datasets, drawn from national weather forecast archives, vehicle speed time series, and synthetic data, illustrate the improvement and the reason behind it. Several sparse representations are evaluated, including principal component analysis (PCA), the discrete cosine transform (DCT), and a mixture of DCT and Haar transforms; higher sparsity is observed to lead to better performance. The relative simplicity of TDNNs compared with deep networks, together with the quicker learning afforded by sparse transforms, opens up possibilities for online learning on small embedded devices that lack powerful computing capability. |
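The description is concrete enough to sketch. Below is a minimal illustration of the described architecture, not the authors' implementation: a fixed DCT (via scipy.fft.dct) serves as the sparsifying input transform, feeding a one-hidden-layer sliding-window network. The window length, layer sizes, and number of retained coefficients (WINDOW, HIDDEN, KEEP) are assumed values chosen for the example.

```python
# Minimal sketch of the idea in the abstract: a fixed DCT "sparsifying
# input layer" feeding a small time-delay (sliding-window) network.
# Illustration only; WINDOW, KEEP, and HIDDEN are assumed values.
import numpy as np
from scipy.fft import dct

WINDOW = 32   # delay-line (input window) length -- assumed
KEEP = 8      # DCT coefficients kept; smaller KEEP means a sparser input
HIDDEN = 16   # hidden-layer width -- assumed

rng = np.random.default_rng(0)

def sparse_transform(window):
    # DCT-II of the window, truncated to its first KEEP coefficients,
    # where most of the energy of a smooth time series concentrates.
    return dct(window, norm="ortho")[:KEEP]

# One-hidden-layer network acting as the TDNN head on the transformed window.
W1 = rng.normal(0.0, 0.1, (HIDDEN, KEEP))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, HIDDEN)
b2 = 0.0

def forward(window):
    h = np.tanh(W1 @ sparse_transform(window) + b1)
    return float(W2 @ h + b2)   # one-step-ahead forecast

# Usage: forecast the next sample of a noisy sine wave.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
print(forward(series[-WINDOW:]))
```

Note that truncating to KEEP coefficients shrinks the first layer from WINDOW x HIDDEN to KEEP x HIDDEN weights, which is one plausible reading of why sparser inputs learn faster; the paper's own mathematical analysis of the learning process should be consulted for the actual argument.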
doi_str_mv | 10.1007/s00034-020-01610-8 |
format | article |
fulltext | fulltext |
identifier | ISSN: 0278-081X |
ispartof | Circuits, systems, and signal processing, 2021-06, Vol.40 (6), p.3007-3032 |
issn | 0278-081X; 1531-5878
language | eng |
recordid | cdi_proquest_journals_2528641914 |
source | Springer Nature |
subjects | Accuracy; Circuits and Systems; Datasets; Delay; Discrete cosine transform; Distance learning; Electrical Engineering; Electronic devices; Electronics and Microelectronics; Energy consumption; Engineering; Haar transformations; Instrumentation; Machine learning; Neural networks; Predictions; Principal components analysis; Representations; Signal, Image and Speech Processing; Time series; Traffic speed; Weather forecasting
title | Effect of Sparse Representation of Time Series Data on Learning Rate of Time-Delay Neural Networks |