Modelling monthly rainfall of India through transformer-based deep learning architecture
Published in: Modeling Earth Systems and Environment, 2024-06, Vol. 10 (3), pp. 3119-3136
Main Authors: , , , , ,
Format: Article
Language: English
Summary: In Earth systems modelling, rainfall forecasting is of crucial significance, and accurate prediction of monthly rainfall in India is paramount given its pivotal role in the country's agricultural productivity. Because the phenomenon is highly nonlinear and dynamic, linear models are inadequate, and parametric nonlinear models are constrained by their stringent assumptions. Machine learning approaches have consequently seen a notable surge in adoption owing to their data-driven nature, yet they lack automatic feature extraction, a limitation that has propelled the popularity of deep learning models for rainfall forecasting. Conventional deep learning architectures, however, process input data sequentially, which can be slow and challenging for long sequences. To address this concern, the present article proposes a rainfall modelling algorithm founded on a transformer-based deep learning architecture, whose primary distinguishing feature is its capacity to process sequential input in parallel through an attention mechanism, enabling faster processing and training on larger datasets. The predictive performance of the transformer-based architecture was assessed on 41 years of monthly rainfall data for India, from 1980 to 2021, and compared with conventional recurrent neural network, long short-term memory, and gated recurrent unit architectures. Experimental findings show that the transformer architecture outperforms the other conventional deep learning architectures on root mean square error and mean absolute percentage error, and the predictive accuracy of each architecture was further tested using the Diebold–Mariano test. The conclusive findings highlight discernible and noteworthy advantages of the transformer-based architecture over the sequence-based architectures.
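The abstract's central architectural claim is that attention lets the model attend to every month of an input window simultaneously, rather than stepping through the sequence one month at a time as an RNN, LSTM, or GRU must. The following is a minimal sketch of such a transformer encoder for one-step-ahead monthly forecasting; the window length, model width, head count, layer depth, and last-position readout are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the authors' code) of a transformer encoder for
# one-step-ahead monthly rainfall forecasting. All hyperparameters here
# are illustrative assumptions.
import torch
import torch.nn as nn

class RainfallTransformer(nn.Module):
    def __init__(self, window=12, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)            # lift scalar rainfall to d_model
        self.pos = nn.Parameter(torch.zeros(window, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)             # predict next month's rainfall

    def forward(self, x):                             # x: (batch, window, 1)
        h = self.embed(x) + self.pos                  # add positional information
        h = self.encoder(h)                           # attention sees all months at once
        return self.head(h[:, -1, :])                 # read out from the last position

model = RainfallTransformer()
windows = torch.randn(8, 12, 1)                       # 8 windows of 12 monthly values
print(model(windows).shape)                           # torch.Size([8, 1])
```

Unlike a recurrent network, the encoder processes all twelve months of a window in one parallel step, which is the property the abstract credits for faster training on larger datasets.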
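The evaluation tools the abstract names are standard and straightforward to reproduce. The sketch below computes RMSE, MAPE, and a Diebold–Mariano statistic on squared-error loss for two competing forecast series; the synthetic data are hypothetical, and the statistic omits the long-run variance correction needed for multi-step horizons (it is appropriate only for one-step-ahead forecasts), so this is illustrative rather than the authors' exact procedure.

```python
# A sketch of the evaluation stage: RMSE, MAPE, and a one-step
# Diebold–Mariano test. Not the authors' code; data are synthetic.
import numpy as np
from scipy.stats import norm

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100      # assumes y has no zero months

def diebold_mariano(y, yhat_a, yhat_b):
    # Loss differential between the two forecasts under squared-error loss.
    d = (y - yhat_a) ** 2 - (y - yhat_b) ** 2
    stat = d.mean() / np.sqrt(d.var(ddof=1) / len(d))  # asymptotically N(0, 1)
    p = 2 * (1 - norm.cdf(abs(stat)))                  # two-sided p-value
    return stat, p

rng = np.random.default_rng(0)
y = rng.gamma(2.0, 50.0, size=120)                     # 10 years of synthetic monthly rain
a = y + rng.normal(0, 5, 120)                          # hypothetical "transformer" forecasts
b = y + rng.normal(0, 12, 120)                         # hypothetical "LSTM" forecasts
print(rmse(y, a), rmse(y, b))
print(diebold_mariano(y, a, b))                        # negative statistic favours forecast a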
ISSN: 2363-6203, 2363-6211
DOI: 10.1007/s40808-023-01944-7