GECRAN: Graph embedding based convolutional recurrent attention network for traffic flow prediction

Bibliographic Details
Published in: Expert Systems with Applications, 2024-12, Vol. 256, p. 125001, Article 125001
Main Authors: Yan, JianQiang; Zhang, Lin; Gao, Yuan; Qu, BoTing
Format: Article
Language: English
Description
Summary: Traffic flow prediction has become increasingly important with the rapid development of Intelligent Transportation Systems (ITS) in recent years. Because graph structures represent road networks accurately, more and more approaches now use graph models to solve the traffic flow prediction problem. Existing studies often use adjacency graphs directly to represent the spatial correlations in road networks. To accurately reflect the hidden spatial correlations and temporal dependencies in real road networks, this paper proposes a traffic flow prediction method based on a graph embedding convolutional recurrent attention network (GECRAN). Specifically, we first design a predefined graph embedding module (PGEM) to represent the spatial correlations of the real road network structure. Then a graph convolutional recurrent network (GCRN) is constructed to capture the temporal dependencies in the road network structure. Finally, an attention module (ATTM) is introduced to capture the long-period dependency patterns in the traffic sequences, enabling accurate prediction of traffic flow. Experiments on four real-world datasets show that the proposed GECRAN model is more effective than the baseline models; its overall predictive performance improves by an average of 2.35%, 3.55%, and 4.22% over the three time-step results.
ISSN:0957-4174
DOI:10.1016/j.eswa.2024.125001
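
The three components named in the summary (spatial graph convolution, a recurrent network over time, and attention across hidden states) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the simplified tanh recurrent cell standing in for the GCRN, and the mean-activation attention scores are all illustrative assumptions, shown only to make the pipeline concrete.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2},
    # a common way to prepare a road-network adjacency matrix.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt).T * d_inv_sqrt

def gcn_step(A_norm, X, W):
    # Spatial aggregation over neighbouring road segments
    # followed by a learned linear map (graph convolution).
    return np.tanh(A_norm @ X @ W)

def recurrent_update(h, x, Wh, Wx):
    # Simplified tanh recurrent cell; the paper's GCRN uses a
    # gated recurrent structure, which this only approximates.
    return np.tanh(h @ Wh + x @ Wx)

def temporal_attention(H):
    # H: (T, N, F) hidden states. Score each time step (here by
    # mean activation, an illustrative choice), softmax over time,
    # and return the attention-weighted context.
    scores = H.mean(axis=(1, 2))
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return np.tensordot(w, H, axes=(0, 0))

rng = np.random.default_rng(0)
N, F, T = 4, 3, 5                       # nodes, features, time steps
A = (rng.random((N, N)) > 0.5).astype(float)
A = np.maximum(A, A.T)                  # undirected road graph
A_norm = normalize_adj(A)
W, Wh, Wx = (rng.standard_normal((F, F)) * 0.1 for _ in range(3))

h = np.zeros((N, F))
hidden = []
for _ in range(T):
    X_t = rng.standard_normal((N, F))          # traffic features at step t
    spatial = gcn_step(A_norm, X_t, W)         # spatial correlations
    h = recurrent_update(h, spatial, Wh, Wx)   # temporal dependencies
    hidden.append(h)

context = temporal_attention(np.stack(hidden))  # long-period patterns
print(context.shape)  # one feature vector per road-network node
```

A real model would feed `context` through an output layer to predict flow at future time steps and train all weights end-to-end; the sketch stops at the attention-pooled representation.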