Traffic Signal Control with Cell Transmission Model Using Reinforcement Learning for Total Delay Minimisation
Published in: | International journal of computers, communications & control, 2015-10, Vol.10 (5) |
---|---|
Main Authors: | Chanloha, Pitipong; Chinrungrueng, Jatuporn; Usaha, Wipawee; Aswakul, Chaodit |
Format: | Article |
Language: | English |
Online Access: | Get full text |
container_issue | 5 |
---|---|
container_title | International journal of computers, communications & control |
container_volume | 10 |
creator | Chanloha, Pitipong; Chinrungrueng, Jatuporn; Usaha, Wipawee; Aswakul, Chaodit |
description | This paper proposes a new framework to control traffic signal lights by applying an automated goal-directed learning and decision-making scheme, namely the reinforcement learning (RL) method, to seek the best possible traffic signal actions upon changes of the network state modelled by the signalised cell transmission model (CTM). This paper employs Q-learning, one of the RL tools, to find the traffic signal solution because of its adaptability in finding the real-time solution as the state changes. The goal is for RL to minimise the total network delay. Surprisingly, using the total network delay as the reward function did not give results as good as initially expected. Rather, both simulation and mathematical derivation results confirm that using the newly proposed red light delay as the RL reward function gives better performance than using the total network delay as the reward function. The investigated scenarios include situations where the summation of overall traffic demands exceeds the maximum flow capacity. Reported results show that the proposed framework using RL and CTM at the macroscopic level can computationally efficiently find a control solution close to the brute-force searched best periodic signal solution (BPSS). For the practical case study conducted with the AIMSUN microscopic traffic simulator, the proposed CTM-based RL shows that the average delay can be significantly decreased, by 40% with a bus lane and 38% without a bus lane, in comparison with the currently used traffic signal strategy. Therefore, the CTM-based RL algorithm could be a useful tool to adjust traffic signal lights in practice. |
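The abstract above describes the core idea: a tabular Q-learning agent chooses red/green actions over a traffic state evolved by the cell transmission model, with the "red light delay" (vehicles held at the stop line during red) as the reward signal. The following is a minimal illustrative sketch only; the cell counts, capacities, flow limits, demand process, and state encoding are all hypothetical and far simpler than the paper's actual signalised CTM network.

```python
import random
from collections import defaultdict

# Hypothetical toy parameters -- not the paper's calibration.
N_CELLS = 4          # cells on one signalised approach (assumed)
CAPACITY = 10        # max vehicles per cell (assumed)
FLOW = 3             # max vehicles moved between cells per step (assumed)
ACTIONS = (0, 1)     # 0 = red, 1 = green for this approach

def ctm_step(cells, green, demand):
    """Advance the toy CTM one step; return new cell occupancies
    and the red-light-delay reward proxy for that step."""
    cells = list(cells)
    # Demand enters the first cell, limited by its capacity.
    cells[0] = min(CAPACITY, cells[0] + demand)
    # Forward flows, each limited by sending flow and receiving space.
    for i in range(N_CELLS - 1, 0, -1):
        move = min(cells[i - 1], FLOW, CAPACITY - cells[i])
        cells[i - 1] -= move
        cells[i] += move
    # The stop-line cell discharges only on green.
    out = min(cells[-1], FLOW) if green else 0
    cells[-1] -= out
    # Red light delay proxy: vehicles held at the stop line on red.
    red_delay = 0 if green else cells[-1]
    return tuple(cells), red_delay

def q_learning(episodes=200, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration,
    minimising red-light delay (reward = -red_delay)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = (0,) * N_CELLS
        for _ in range(50):
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(state, x)]))
            nxt, red_delay = ctm_step(state, green=(a == 1),
                                      demand=random.randint(0, FLOW))
            r = -red_delay
            best = max(Q[(nxt, b)] for b in ACTIONS)
            Q[(state, a)] += alpha * (r + gamma * best - Q[(state, a)])
            state = nxt
    return Q
```

The sketch only illustrates the coupling the abstract describes: the CTM supplies the state transition, and the red-light delay (rather than total network delay) drives the Q-update.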
doi_str_mv | 10.15837/ijccc.2015.5.2025 |
format | article |
publisher | Agora University of Oradea, Oradea |
identifier | ISSN: 1841-9836 |
ispartof | International journal of computers, communications & control, 2015-10, Vol.10 (5) |
issn | 1841-9836; EISSN: 1841-9844 |
language | eng |
source | Publicly Available Content (ProQuest) |
subjects | Adaptability; Algorithms; Automatic control; Decision making; Delay; Machine learning; Maximum flow; Reinforcement; Traffic; Traffic capacity; Traffic control; Traffic engineering; Traffic signals |
title | Traffic Signal Control with Cell Transmission Model Using Reinforcement Learning for Total Delay Minimisation |