Context-Aware Multiagent Broad Reinforcement Learning for Mixed Pedestrian-Vehicle Adaptive Traffic Light Control

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2022-10, Vol. 9 (20), p. 19694-19705
Main Authors: Zhu, Ruijie; Wu, Shuning; Li, Lulu; Lv, Ping; Xu, Mingliang
Format: Article
Language: English
Description
Summary: Efficient traffic light control is a critical part of realizing smart transportation. In particular, deep reinforcement learning (DRL) algorithms that use deep neural networks (DNNs) have superior autonomous decision-making ability, and most existing work has applied DRL to control traffic lights intelligently. In this article, we propose a novel context-aware multiagent broad reinforcement learning (CAMABRL) approach based on broad reinforcement learning (BRL) for mixed pedestrian-vehicle adaptive traffic light control (ATLC). CAMABRL exploits the broad learning system (BLS), which makes decisions with a flat network structure instead of a deep one. Unlike previous works that consider only the attributes of vehicles, CAMABRL also takes the states of pedestrians waiting at the intersection into account. Combined with a context-aware mechanism that utilizes the states of adjacent agents and potential state information captured by a long short-term memory (LSTM) network, agents can make farsighted decisions to alleviate traffic congestion. Experimental results show that CAMABRL is superior to several state-of-the-art multiagent reinforcement learning (MARL) methods.
ISSN: 2327-4662
DOI: 10.1109/JIOT.2022.3167029
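
As a rough illustration of the broad-network idea described in the summary above, the following is a minimal sketch of a broad learning system (BLS) used as a Q-value approximator for a single traffic-light agent: random mapped-feature and enhancement nodes in one flat layer, with output weights solved in closed form by ridge regression against standard Bellman targets. This is not the authors' implementation; the class and function names, state dimensions, and training rule are assumptions, and the paper's context-aware LSTM encoding of neighbouring agents and the multiagent coordination are omitted.

```python
# Minimal sketch (not the authors' code): BLS-style Q-value approximation for
# one traffic-light agent, assuming a fixed-length state vector (per-lane
# vehicle queues, waiting pedestrians, neighbour summaries) and a small
# discrete set of signal phases. All names and dimensions are illustrative.
import numpy as np

class BroadQNetwork:
    def __init__(self, state_dim, n_actions, n_feature_nodes=40,
                 n_enhance_nodes=60, ridge_lambda=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        # Random mapped-feature layer and enhancement layer: the "flat"
        # structure used instead of stacked deep layers.
        self.Wf = rng.normal(size=(state_dim, n_feature_nodes))
        self.bf = rng.normal(size=n_feature_nodes)
        self.We = rng.normal(size=(n_feature_nodes, n_enhance_nodes))
        self.be = rng.normal(size=n_enhance_nodes)
        self.ridge_lambda = ridge_lambda
        self.n_actions = n_actions
        self.Wout = np.zeros((n_feature_nodes + n_enhance_nodes, n_actions))

    def _broad_features(self, states):
        z = np.tanh(states @ self.Wf + self.bf)   # mapped feature nodes
        h = np.tanh(z @ self.We + self.be)        # enhancement nodes
        return np.concatenate([z, h], axis=1)     # flat concatenation

    def q_values(self, states):
        return self._broad_features(np.atleast_2d(states)) @ self.Wout

    def fit(self, states, targets):
        # Output weights solved in closed form by ridge regression (the usual
        # BLS training rule), with Bellman targets as the regression labels.
        A = self._broad_features(states)
        reg = self.ridge_lambda * np.eye(A.shape[1])
        self.Wout = np.linalg.solve(A.T @ A + reg, A.T @ targets)

def bellman_targets(net, states, actions, rewards, next_states, gamma=0.95):
    """Standard Q-learning targets; only the chosen action's entry is updated."""
    targets = net.q_values(states).copy()
    max_next = net.q_values(next_states).max(axis=1)
    targets[np.arange(len(actions)), actions] = rewards + gamma * max_next
    return targets

if __name__ == "__main__":
    # Toy usage with random transitions: 12-dim state, 4 signal phases.
    rng = np.random.default_rng(1)
    net = BroadQNetwork(state_dim=12, n_actions=4)
    s = rng.random((256, 12)); a = rng.integers(0, 4, 256)
    r = rng.random(256); s2 = rng.random((256, 12))
    net.fit(s, bellman_targets(net, s, a, r, s2))
    print(net.q_values(s[:3]))
```

The appeal of the flat structure, as the summary suggests, is that training reduces to a linear solve over the concatenated feature and enhancement nodes rather than backpropagation through a deep network; in the paper's setting the agent's state would additionally carry the LSTM-encoded context of adjacent intersections.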