Deceptive Path Planning via Count-Based Reinforcement Learning under Specific Time Constraint
Deceptive path planning (DPP) aims to find a path that minimizes the probability of an observer identifying the observed agent's real goal before the agent reaches it. It is important for addressing issues such as public safety, strategic path planning, and logistics route privacy protection. Existing traditional methods often rely on "dissimulation" (hiding the truth) to obscure paths while ignoring time constraints. Building upon the theory of probabilistic goal recognition based on cost difference, we propose DPP_Q, a DPP method based on count-based Q-learning, for solving DPP problems in discrete path-planning domains under specific time constraints. To extend this method to continuous domains, we propose a new model of probabilistic goal recognition, the Approximate Goal Recognition Model (AGRM), and verify its feasibility in discrete path-planning domains. Finally, we propose DPP_PPO, a DPP method based on proximal policy optimization for continuous path-planning domains under specific time constraints. DPP methods of this kind have not previously been explored in the path-planning field. Experimental results show that, in discrete domains, DPP_Q improves the average deceptiveness of paths by 12.53% over traditional methods. In continuous domains, DPP_PPO shows significant advantages over random-walk baselines. Both DPP_Q and DPP_PPO demonstrate good applicability in path-planning domains with uncomplicated obstacles.
Published in: | Mathematics (Basel) 2024-07, Vol.12 (13), p.1979 |
---|---|
Main Authors: | Chen, Dejun; Zeng, Yunxiu; Zhang, Yi; Li, Shuilin; Xu, Kai; Yin, Quanjun |
Format: | Article |
Language: | English |
Subjects: | count-based reinforcement learning; Deception; deceptiveness; Experiments; goal recognition; Methods; Path planning; Planning; Probability; Public safety; Random walk; Rationality; Recognition; Statistical analysis |
DOI: | 10.3390/math12131979 |
ISSN: | 2227-7390 |
Publisher: | MDPI AG, Basel |
Online Access: | https://doi.org/10.3390/math12131979 |
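The abstract's starting point, "probabilistic goal recognition based on cost difference", has a standard form in the path-planning literature. As a hedged sketch (following the Boltzmann formulation of Ramírez and Geffner as adapted to path planning by Masters and Sardina; the paper's exact definitions, and its AGRM variant, may differ), an observer who knows the start s and sees the agent at node n scores each candidate goal g by how much passing through n inflates the optimal cost:

```latex
\operatorname{costdif}(g, n) = \big(\operatorname{optc}(s, n) + \operatorname{optc}(n, g)\big) - \operatorname{optc}(s, g),
\qquad
P(g \mid n) \propto \frac{e^{-\beta \operatorname{costdif}(g, n)}}{1 + e^{-\beta \operatorname{costdif}(g, n)}}\, P(g)
```

Here optc denotes optimal path cost and β > 0 models the observer's assumed rationality. A path is deceptive to the extent that it keeps the posterior on the real goal low at the observed nodes for as long as possible, which is the quantity the paper's methods trade off against a time (path-length) budget.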
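DPP_Q couples Q-learning with a count-based signal. The following is a minimal, illustrative sketch of tabular count-based Q-learning; the environment interface (`reset`, `step`, `actions`), the bonus form, and the parameter `bonus_scale` are assumptions for this example, not the paper's implementation, and the paper additionally shapes rewards for deceptiveness under a step budget.

```python
import random
from collections import defaultdict

def count_based_q_learning(env, episodes=500, alpha=0.1, gamma=0.95,
                           epsilon=0.1, bonus_scale=0.5):
    """Tabular Q-learning with a count-based exploration bonus.

    The bonus kappa / sqrt(N(s, a)) rewards rarely tried state-action
    pairs, spreading visits over many paths rather than the single
    revealing shortest path. `env` is assumed to expose reset(),
    step(action) -> (next_state, reward, done), and a list `actions`.
    """
    Q = defaultdict(float)   # Q[(state, action)] -> estimated value
    N = defaultdict(int)     # visit counts per (state, action)

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)
            N[(state, action)] += 1

            # count-based intrinsic bonus added to the extrinsic reward
            shaped = reward + bonus_scale / (N[(state, action)] ** 0.5)

            # standard one-step Q-learning backup
            best_next = max(Q[(next_state, a)] for a in env.actions)
            td_target = shaped + gamma * (0.0 if done else best_next)
            Q[(state, action)] += alpha * (td_target - Q[(state, action)])
            state = next_state
    return Q
```

Under a specific time constraint like the paper's, the episode would additionally be truncated or penalized once the step budget is exhausted, so deceptiveness is purchased only within the allowed path length.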