
A deep reinforcement learning method to control chaos synchronization between two identical chaotic systems

We propose a model-free deep reinforcement learning method for controlling the synchronization between two identical chaotic systems, one target and one reference. By interacting with the target and the reference, the agent continuously optimizes its strategy of applying perturbations to the target to synchronize the trajectory of the target with the reference. The method requires no prior knowledge of the chaotic systems.


Bibliographic Details
Published in: Chaos, solitons and fractals, 2023-09, Vol. 174, p. 113809, Article 113809
Main Authors: Cheng, Haoxin, Li, Haihong, Dai, Qionglin, Yang, Junzhong
Format: Article
Language: English
Description: We propose a model-free deep reinforcement learning method for controlling the synchronization between two identical chaotic systems, one target and one reference. By interacting with the target and the reference, the agent continuously optimizes its strategy of applying perturbations to the target to synchronize the trajectory of the target with the reference. This method differs from previous chaos synchronization methods: it requires no prior knowledge of the chaotic systems. We apply the deep reinforcement learning method to several typical chaotic systems (Lorenz system, Rössler system, Chua circuit and Logistic map) and demonstrate its efficiency in controlling synchronization between the target and the reference. In particular, we find that a single learned agent can be used to control chaos synchronization for different chaotic systems. We also find that the method works well even when only incomplete information about the state variables of the target and the reference can be obtained.
Highlights:
• A model-free deep reinforcement learning method for controlling chaos synchronization is proposed.
• The efficiency of controlling synchronization is demonstrated.
• A single learned agent can be used to control chaos synchronization for different chaotic systems.
• The method works well even when only incomplete information of the state variables of the chaotic systems can be obtained.
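The abstract describes a control loop in which an agent applies perturbations to a target system so that its trajectory tracks an identical reference system, with no model knowledge assumed. As a minimal sketch of that setting for the logistic-map case (an illustration only, not the authors' code — the class name, the map parameter r = 3.9, the perturbation bound, and the oracle policy below are all assumptions), the problem can be framed as an environment that a reinforcement-learning agent would interact with:

```python
class LogisticSyncEnv:
    """Toy synchronization environment: an agent perturbs a target
    logistic map so that it tracks an identical, freely evolving
    reference map. Illustrative sketch, not the paper's implementation."""

    def __init__(self, r=3.9, u_max=1.0, x0_target=0.2, x0_ref=0.7):
        self.r, self.u_max = r, u_max
        self.x0_target, self.x0_ref = x0_target, x0_ref
        self.reset()

    def _f(self, x):
        # Logistic map x -> r * x * (1 - x); chaotic for r = 3.9.
        return self.r * x * (1.0 - x)

    def reset(self):
        self.x_target, self.x_ref = self.x0_target, self.x0_ref
        return (self.x_target, self.x_ref)

    def step(self, u):
        # Bounded perturbation is applied to the target only;
        # the reference evolves without control.
        u = max(-self.u_max, min(self.u_max, u))
        self.x_ref = self._f(self.x_ref)
        self.x_target = self._f(self.x_target) + u
        # Reward is the negative synchronization error, so the agent is
        # rewarded for driving the target onto the reference trajectory.
        reward = -abs(self.x_target - self.x_ref)
        return (self.x_target, self.x_ref), reward


# Oracle policy that cancels the map difference exactly. It uses the map
# f itself, which a model-free DRL agent would not have access to -- the
# paper's point is that a learned agent approximates such a policy from
# interaction (observations and rewards) alone.
env = LogisticSyncEnv()
x_t, x_r = env.reset()
for _ in range(20):
    u = env._f(x_r) - env._f(x_t)
    (x_t, x_r), reward = env.step(u)
```

With this oracle the target locks onto the reference after one step; a deep reinforcement learning agent for continuous control would instead learn the perturbation from the (target, reference) observations and the reward signal alone, which is what makes the approach model-free.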
DOI: 10.1016/j.chaos.2023.113809
ISSN: 0960-0779
EISSN: 1873-2887
Publisher: Elsevier Ltd
Source: Elsevier
Subjects: Chaos synchronization; Continuous control; Deep reinforcement learning; Model-free method