Hierarchical linear and nonlinear adaptive learning model for system identification and prediction
In this paper, we propose a method that increases model accuracy by combining linear and nonlinear sub-models. The linear sub-model is estimated with the least-squares error (LSE) algorithm and the nonlinear sub-model uses a neural network (NN). The two sub-models are updated hierarchically using a Lyapunov function...
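The abstract sketches a two-stage identification scheme: a linear ARX-type sub-model fitted by least squares, a neural-network sub-model for the remaining nonlinearity, and a learning rate adjusted so the training error keeps converging. The snippet below is only an illustrative reconstruction of that idea, not the authors' implementation: every function name, the network size, and the halve-the-rate rule standing in for the Lyapunov-based update are assumptions.

```python
import numpy as np

def build_regressors(y, u, na=2, nb=2):
    """Stack lagged outputs and inputs into an ARX regressor matrix."""
    start = max(na, nb)
    phi = np.array([
        np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
        for t in range(start, len(y))
    ])
    return phi, y[start:]

def fit_linear_lse(phi, target):
    """Linear sub-model: ARX parameters via ordinary least squares (LSE)."""
    theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
    return theta

def train_nonlinear_nn(phi, residual, hidden=8, epochs=300, eta0=0.05):
    """Nonlinear sub-model: one-hidden-layer tanh network trained on the
    linear model's residual. The learning rate is halved whenever the epoch
    error grows -- a crude stand-in for the Lyapunov-based rate update."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(phi.shape[1], hidden))
    w2 = rng.normal(scale=0.1, size=hidden)
    eta, prev_mse = eta0, np.inf
    for _ in range(epochs):
        h = np.tanh(phi @ W1)                  # hidden activations
        err = residual - h @ w2                # prediction error
        mse = float(np.mean(err ** 2))
        if mse > prev_mse:                     # error grew: damp the step size
            eta *= 0.5
        prev_mse = mse
        # plain gradient-descent updates on the squared error
        w2 += eta * h.T @ err / len(err)
        W1 += eta * phi.T @ ((err[:, None] * w2) * (1.0 - h ** 2)) / len(err)
    return W1, w2

# Synthetic usage: output with a linear core plus a mild nonlinearity.
rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 400)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.1 * np.sin(y[t - 1])

phi, target = build_regressors(y, u)
theta = fit_linear_lse(phi, target)            # linear part first
residual = target - phi @ theta                # what the linear part misses
W1, w2 = train_nonlinear_nn(phi, residual)     # nonlinear correction on top
combined = phi @ theta + np.tanh(phi @ W1) @ w2
print("linear-only MSE:", np.mean(residual ** 2))
print("hierarchical MSE:", np.mean((target - combined) ** 2))
```

The paper's actual contributions, the Lyapunov-derived learning-rate law and the mapping of NN weights onto ARX/ARMA coefficients, are only approximated in this sketch; it shows the hierarchical linear-then-nonlinear structure rather than the authors' exact update equations.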
Published in: | Applied intelligence (Dordrecht, Netherlands), 2020-06, Vol.50 (6), p.1699-1710 |
---|---|
Main Authors: | Jami’in, Mohammad Abu; Anam, Khairul; Rulaningtyas, Riries; Mudjiono, Urip; Adianto, Adianto; Wee, Hui-Ming |
Format: | Article |
Language: | English |
Subjects: | Adaptive learning; Adaptive systems; Algorithms; Artificial Intelligence; Autoregressive moving average; Autoregressive moving-average models; Computer Science; Control theory; Learning; Liapunov functions; Machines; Manufacturing; Mechanical Engineering; Model accuracy; Neural networks; Processes; System identification |
container_end_page | 1710 |
container_issue | 6 |
container_start_page | 1699 |
container_title | Applied intelligence (Dordrecht, Netherlands) |
container_volume | 50 |
creator | Jami’in, Mohammad Abu; Anam, Khairul; Rulaningtyas, Riries; Mudjiono, Urip; Adianto, Adianto; Wee, Hui-Ming |
description | In this paper, we propose a method that increases model accuracy by combining linear and nonlinear sub-models. The linear sub-model is estimated with the least-squares error (LSE) algorithm and the nonlinear sub-model uses a neural network (NN). The two sub-models are updated hierarchically using a Lyapunov function. The proposed method has two advantages: 1) the neural network is a multi-parametric model, and under the proposed scheme its weights can be summarized into the coefficients of an auto-regressive eXogenous/auto-regressive moving-average (ARX/ARMA) model structure, making it easier to establish control laws; 2) the learning rate is updated to ensure the convergence of the error at each training epoch, which improves the accuracy of the model and of the whole control system. Experimental studies demonstrate that the proposed technique gives better results than existing approaches. |
doi_str_mv | 10.1007/s10489-019-01615-0 |
format | article |
identifier | ISSN: 0924-669X |
ispartof | Applied intelligence (Dordrecht, Netherlands), 2020-06, Vol.50 (6), p.1699-1710 |
issn | 0924-669X (print); 1573-7497 (electronic)
language | eng |
recordid | cdi_proquest_journals_2399630703 |
source | ABI/INFORM Collection; Springer Nature |
subjects | Adaptive learning; Adaptive systems; Algorithms; Artificial Intelligence; Autoregressive moving average; Autoregressive moving-average models; Computer Science; Control theory; Learning; Liapunov functions; Machines; Manufacturing; Mechanical Engineering; Model accuracy; Neural networks; Processes; System identification
title | Hierarchical linear and nonlinear adaptive learning model for system identification and prediction |