
Spatiotemporal Data Fusion in Graph Convolutional Networks for Traffic Prediction

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 76632-76641
Main Authors: Zhao, Baoxin, Gao, Xitong, Liu, Jianqi, Zhao, Juanjuan, Xu, Chengzhong
Format: Article
Language: English
Description
Summary: A plethora of information is now readily available for traffic prediction, and making effective use of it enables better traffic planning. With data coming from multiple sources, and their features spanning spatial and temporal dimensions, there is an increasing demand to exploit them for accurate traffic prediction. Existing methods, however, do not provide a solution for this, as they tend to require expert feature engineering. In this paper, we propose a general, parameter-efficient architecture for SpatioTemporal Data Fusion (STDF). To make heterogeneous multi-source data fusion effective, we separate all data into data directly related to traffic and data indirectly related to traffic. The indirectly related data is fed into Spatial Embedding by Temporal convolutiON (SETON), which simultaneously encodes each feature in both the space and time dimensions, while the directly related data is fed into a graph convolutional network (GCN); a fine-grained feature transformer matches the SETON features to those generated by the GCN, and a fusion module then combines all features to produce the final prediction. Compared to a GCN trained with only traffic directly related data, experimental results show that our model achieves a 6.1% improvement in prediction accuracy measured by Root Mean Squared Error.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2989443
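The abstract only describes the STDF architecture at a high level, so the following is a minimal PyTorch sketch of that idea rather than the authors' implementation: directly related traffic data goes through a simple graph convolution, indirectly related data is embedded with a temporal convolution standing in for SETON, and a fusion layer combines both for a per-node prediction. All class names, layer sizes, and the pooling choice are illustrative assumptions.

    # Hypothetical sketch of the STDF pipeline described in the abstract.
    import torch
    import torch.nn as nn

    class SimpleGCNLayer(nn.Module):
        """One graph convolution: H' = ReLU(A_hat X W), with A_hat a normalized adjacency."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, a_hat, x):            # a_hat: (N, N), x: (N, in_dim)
            return torch.relu(self.linear(a_hat @ x))

    class TemporalEmbedding(nn.Module):
        """Stand-in for SETON: a 1-D convolution over each node's auxiliary time series."""
        def __init__(self, in_channels, out_dim, kernel_size=3):
            super().__init__()
            self.conv = nn.Conv1d(in_channels, out_dim, kernel_size, padding=kernel_size // 2)

        def forward(self, x):                    # x: (N, in_channels, T)
            h = torch.relu(self.conv(x))         # (N, out_dim, T)
            return h.mean(dim=-1)                # pool over time -> (N, out_dim)

    class STDFSketch(nn.Module):
        """Fuse GCN features (direct data) with temporal embeddings (indirect data)."""
        def __init__(self, direct_dim, aux_channels, hidden=32):
            super().__init__()
            self.gcn = SimpleGCNLayer(direct_dim, hidden)
            self.seton = TemporalEmbedding(aux_channels, hidden)
            self.fusion = nn.Linear(2 * hidden, 1)   # one predicted value per node

        def forward(self, a_hat, direct_x, aux_x):
            h = torch.cat([self.gcn(a_hat, direct_x), self.seton(aux_x)], dim=-1)
            return self.fusion(h).squeeze(-1)

    # Toy usage: 10 road-network nodes, 4 direct features, 2 auxiliary channels over 12 steps.
    if __name__ == "__main__":
        n = 10
        a_hat = torch.eye(n)   # identity in place of a real normalized adjacency
        model = STDFSketch(direct_dim=4, aux_channels=2)
        pred = model(a_hat, torch.randn(n, 4), torch.randn(n, 2, 12))
        print(pred.shape)      # torch.Size([10])

This sketch omits the fine-grained feature transformer and parameter-efficiency details discussed in the paper; it only illustrates the two-branch fuse-then-predict structure that the abstract outlines.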