
Single Block Encoder-Decoder Transformer Model for Multi-Step Traffic Flow Forecasting

Bibliographic Details
Main Authors: Omar, Mas, Yakub, Fitri, Abu Talip, Mohamad Sofian, Nazmin Maslan, Mohd, Sinha, Vijay Kumar, Muljono, Muljono
Format: Conference Proceeding
Language: English
Description
Summary: Accurate traffic flow forecasting is crucial for managing and planning urban transportation systems. Despite the widespread use of sequence models such as Long Short-Term Memory (LSTM) for this purpose, the potential of Transformer models remains underexplored. This is particularly true for the simplest form, a single block encoder-decoder Transformer model, which can be finely tuned through optimised hyperparameters. This paper examines the performance of a single horizon-step forecasting method for multi-step traffic flow forecasting using a proposed Single Block Encoder-Decoder Transformer model whose hyperparameters are optimised with a Grid Search algorithm. Results demonstrate that this model improves forecasting accuracy over the state-of-the-art LSTM model typically used for multi-step forecasting, effectively capturing long-range temporal dependencies within a single road traffic flow dataset. The model was tested on hourly traffic flow data for the I5-North freeway in California, sourced from the Caltrans Performance Measurement System, to forecast the next 24 hours. The optimal configuration comprised an embedding dimension of 32, a feed-forward dimension of 128, and 8 attention heads, yielding a 4.7% reduction in Root Mean Squared Error compared to an LSTM model with two hidden layers of 100 neurons each and showcasing the potential of Single Block Encoder-Decoder Transformer models for real-world traffic prediction applications.
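
As a rough illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a single block encoder-decoder Transformer using the reported optimal hyperparameters (embedding dimension 32, feed-forward dimension 128, 8 attention heads) and the 24-hour forecast horizon. The input window length, positional encoding, input/output projections, and training procedure are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SingleBlockTransformer(nn.Module):
    """One encoder layer plus one decoder layer, configured with the
    abstract's optimal hyperparameters: d_model=32, dim_feedforward=128,
    nhead=8. All other design choices here are assumptions."""

    def __init__(self, d_model=32, nhead=8, dim_feedforward=128, max_len=512):
        super().__init__()
        # Project the scalar hourly flow value into the embedding space.
        self.input_proj = nn.Linear(1, d_model)
        # Learned positional embeddings (an assumption; the paper does not
        # state which positional encoding it uses).
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(
                d_model, nhead, dim_feedforward, batch_first=True),
            num_layers=1)  # single encoder block
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(
                d_model, nhead, dim_feedforward, batch_first=True),
            num_layers=1)  # single decoder block
        self.output_proj = nn.Linear(d_model, 1)

    def forward(self, src, tgt):
        # src: (batch, past_steps, 1) observed flows
        # tgt: (batch, horizon, 1) decoder inputs for the forecast steps
        src = self.input_proj(src) + self.pos[: src.size(1)]
        tgt = self.input_proj(tgt) + self.pos[: tgt.size(1)]
        memory = self.encoder(src)
        # Causal mask so each forecast step attends only to earlier steps.
        t = tgt.size(1)
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.output_proj(out)


model = SingleBlockTransformer()
past = torch.randn(8, 168, 1)     # one week of hourly flows (assumed window)
future = torch.randn(8, 24, 1)    # 24-hour horizon, as in the abstract
print(model(past, future).shape)  # torch.Size([8, 24, 1])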
ISSN: 2996-6752
DOI: 10.1109/ISCI62787.2024.10667997