Transforming Language Translation: A Deep Learning Approach to Urdu–English Translation


Bibliographic Details
Published in: Journal of Ambient Intelligence and Humanized Computing, 2024-10, Vol. 15 (10), pp. 3651-3662
Main Authors: Safder, Iqra; Abu Bakar, Muhammad; Zaman, Farooq; Waheed, Hajra; Aljohani, Naif Radi; Nawaz, Raheel; Hassan, Saeed Ul
Format: Article
Language: English
Description
Summary: Machine translation has revolutionized the field of language translation in the last decade. Initially dominated by statistical models, the field has since seen deep learning techniques bring neural networks, particularly Transformer models, to the fore. These models have demonstrated exceptional performance on natural language processing tasks, surpassing traditional sequence-to-sequence models such as RNNs, GRUs, and LSTMs. With advantages such as better handling of long-range dependencies and shorter training times, the NLP community has shifted towards Transformers for sequence-to-sequence tasks. In this work, we leverage a sequence-to-sequence Transformer model to translate Urdu (a low-resource language) into English. Our model is based on a Transformer variant with modifications such as activation dropout, attention dropout, and final layer normalization. We used four datasets (UMC005, Tanzil, The Wire, and PIB) from two domains (religious and news) to train the model. The results show that the model's performance and translation quality varied depending on the dataset used for fine-tuning. Our model outperformed the baseline models with scores of 23.9 BLEU, 0.46 chrF, 0.44 METEOR, and 60.75 TER. The enhanced performance is attributable to meticulous parameter tuning, encompassing modifications to the architecture and optimization techniques. Comprehensive parametric details of the model configurations and optimizations are provided to elucidate the distinctiveness of our approach and how it surpasses prior work. We provide the source code via GitHub for future studies.
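
As an illustration of the kind of setup the abstract describes, the following minimal Python sketch (not the authors' released code) configures a sequence-to-sequence Transformer that exposes separate attention and activation dropout and applies final layer normalization, using Hugging Face's MBartConfig as one concrete stand-in, and scores output with sacrebleu's corpus-level metrics. All hyperparameter values and example sentences below are placeholders, not the settings or data reported in the paper.

    # Hypothetical sketch of the described setup; hyperparameters are
    # placeholders, not the values reported in the paper.
    from transformers import MBartConfig, MBartForConditionalGeneration
    import sacrebleu

    # mBART-style seq2seq Transformer: exposes attention_dropout and
    # activation_dropout, and applies a final LayerNorm in the encoder
    # and decoder stacks.
    config = MBartConfig(
        vocab_size=32000,        # assumed shared Urdu-English subword vocabulary
        d_model=512,
        encoder_layers=6,
        decoder_layers=6,
        dropout=0.3,
        attention_dropout=0.1,   # dropout on the attention weights
        activation_dropout=0.1,  # dropout after the feed-forward activation
    )
    model = MBartForConditionalGeneration(config)

    # Corpus-level evaluation with three of the metrics named in the abstract
    # (METEOR is omitted here; it lives in nltk rather than sacrebleu).
    hyps = ["the cabinet approved the new education policy"]          # model output
    refs = [["the cabinet approved the new education policy today"]]  # references
    print(sacrebleu.corpus_bleu(hyps, refs).score)  # BLEU: higher is better
    print(sacrebleu.corpus_chrf(hyps, refs).score)  # chrF: higher is better
    print(sacrebleu.corpus_ter(hyps, refs).score)   # TER: lower is better

Note that TER counts edit operations, so a lower TER indicates better translations, whereas the other three metrics reward overlap with the references.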
ISSN: 1868-5137, 1868-5145
DOI: 10.1007/s12652-024-04839-2