Cooperative MARL-PPO Approach for Automated Highway Platoon Merging
Published in: Electronics (Basel), 2024-08, Vol. 13 (15), p. 3102
Main Authors: ,
Format: Article
Language: English
Summary: This paper presents a cooperative highway platooning strategy that integrates Multi-Agent Reinforcement Learning (MARL) with Proximal Policy Optimization (PPO) to manage the complex task of merging. In modern transportation systems, platooning, where multiple vehicles travel closely together under coordinated control, promises significant improvements in traffic flow and fuel efficiency. However, merging, which requires dynamically adjusting the formation to incorporate new vehicles, remains difficult. Our approach leverages MARL to enable individual vehicles within a platoon to learn optimal behaviors through interaction, while PPO provides stable and efficient learning by optimizing policies that balance exploration and exploitation. Simulation results show that the method achieves safe and operationally efficient merging.
ISSN: 2079-9292
DOI: 10.3390/electronics13153102
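
The record does not include the paper's implementation; the following is a minimal, hypothetical sketch of the PPO clipped-surrogate update the summary refers to, written in PyTorch with a single policy shared across platoon vehicles (parameter sharing, one common MARL design). The state/action dimensions, network sizes, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: PPO clipped-surrogate update with a policy shared
# across platoon agents. All dimensions and hyperparameters are assumed.
import torch
import torch.nn as nn

STATE_DIM = 8    # e.g., gaps and relative speeds of neighbors (assumed)
N_ACTIONS = 3    # e.g., decelerate / hold / accelerate (assumed)
CLIP_EPS = 0.2   # standard PPO clipping range

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.Tanh(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def ppo_update(states, actions, old_log_probs, advantages):
    """One PPO step: maximize the clipped surrogate objective.

    The ratio pi_new(a|s) / pi_old(a|s) is clipped to [1-eps, 1+eps],
    which bounds each policy update and stabilizes learning.
    """
    dist = torch.distributions.Categorical(logits=policy(states))
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)

    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantages
    loss = -torch.min(unclipped, clipped).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch: experiences pooled from all platoon vehicles (shared policy).
batch = 32
states = torch.randn(batch, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (batch,))
with torch.no_grad():
    old_log_probs = torch.distributions.Categorical(
        logits=policy(states)).log_prob(actions)
advantages = torch.randn(batch)  # would come from GAE in practice
print("loss:", ppo_update(states, actions, old_log_probs, advantages))
```

Pooling all vehicles' experiences into one shared policy, as sketched here, is one way the MARL setting can exploit the symmetry between platoon members; the paper itself should be consulted for the authors' actual architecture and reward design.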