
Adaptive Neural-PID Visual Servoing Tracking Control via Extreme Learning Machine

Vision-guided robots are widely deployed in modern industry, but accurately tracking moving objects in real time remains a challenge. In this paper, a hybrid adaptive control scheme combining an Extreme Learning Machine (ELM) with proportional–integral–derivative (PID) control is proposed for dynamic visual tracking by a manipulator. The scheme extracts line features on the image plane using a laser-camera system and determines an optimal control input that guides the robot so the image features are aligned with their desired positions. The observation and state-space equations are first determined by analyzing the motion features of the camera and the object. The system is then represented as an autoregressive moving average with exogenous input (ARMAX) model together with a valid estimation model. An adaptive predictor estimates online the relevant 3D parameters between the camera and the object, which are subsequently used to calculate the system sensitivity of the neural network. The ELM–PID controller adaptively adjusts the control parameters, and the scheme was validated on a physical robot platform. The experimental results show that the proposed method's vision-tracking control outperforms pure P and PID controllers.
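The ELM–PID idea in the abstract can be sketched as follows. This is an illustrative minimal example, not the authors' implementation: the network size, activation, gain mapping, and initialization are all assumptions. In a real ELM the output weights would be solved in closed form by least squares against training data; here they are only initialized randomly to show the structure of the loop.

```python
import numpy as np

class ELMPID:
    """Sketch of an ELM-tuned PID loop: a single-hidden-layer network
    with random, fixed input weights maps the error state (e, ∫e, de/dt)
    to the three PID gains, which then produce the control input."""

    def __init__(self, n_hidden=20, seed=0):
        rng = np.random.default_rng(seed)
        # ELM property: input weights and biases are random and never trained.
        self.W = rng.normal(size=(n_hidden, 3))
        self.b = rng.normal(size=n_hidden)
        # Hypothetical output-weight init; an ELM would fit these by
        # least squares (beta = H^+ T) on training targets.
        self.beta = rng.normal(scale=0.1, size=(3, n_hidden))
        self.e_int = 0.0
        self.e_prev = 0.0

    def step(self, error, dt):
        # Accumulate the integral and approximate the derivative of the error.
        self.e_int += error * dt
        de = (error - self.e_prev) / dt
        self.e_prev = error
        x = np.array([error, self.e_int, de])
        h = np.tanh(self.W @ x + self.b)       # hidden-layer activations
        kp, ki, kd = np.abs(self.beta @ h)     # gains kept non-negative
        return kp * error + ki * self.e_int + kd * de

# One control step on a unit tracking error at a 10 ms sample time.
ctrl = ELMPID()
u = ctrl.step(1.0, dt=0.01)
```

The design choice this illustrates is the paper's division of labor: the fixed PID structure guarantees a familiar control law, while the network absorbs the nonlinear, time-varying part of the plant by re-mapping errors to gains at every sample.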


Bibliographic Details
Published in: Machines (Basel), 2022-09, Vol.10 (9), p.782
Main Authors: Luo, Junqi, Zhu, Liucun, Wu, Ning, Chen, Mingyou, Liu, Daopeng, Zhang, Zhenyu, Liu, Jiyuan
Format: Article
Language: English
DOI: 10.3390/machines10090782
ISSN: 2075-1702; EISSN: 2075-1702
Source: Publicly Available Content (ProQuest)
Subjects:
Accuracy
Adaptive control
adaptive visual tracking
Algorithms
Artificial neural networks
Autoregressive moving average
Cameras
Control systems design
Controllers
ELM–PID control
Feature extraction
Kinematics
laser-camera system
Lasers
Machine learning
Machine vision
Management science
Masonry
Moving object recognition
Neural networks
Optical tracking
Optimal control
Parameters
Proportional integral derivative
Robotics
Robots
Servocontrol
Tracking control
Velocity
Vision
Visual control
visual servoing