XAI-SALPAD: Explainable deep learning techniques for Saudi Arabia license plate automatic detection
Published in: Alexandria Engineering Journal, 2024-12, Vol. 109, pp. 578–590
Main Authors: , , , , ,
Format: Article
Language: English
Subjects:
Summary: In recent decades, automatic license plate recognition (ALPR) has emerged as a critical application of artificial intelligence in intelligent transportation systems (ITS), addressing a multitude of existing issues. Several algorithms have been developed to improve ALPR accuracy under a variety of conditions, each with its own strengths and limitations. The Kingdom of Saudi Arabia (KSA) intends to digitize traditional services as part of its Vision 2030 initiative. In this methodology, the ALPR system for accurately recognizing KSA license plates (LPs) leverages deep neural networks (DNNs), a powerful technique capable of performing effectively in unconstrained environments with images taken under various conditions, even when training data is limited. The approach employs YOLOv8 to detect LPs and alphanumeric characters in real time, followed by a convolutional neural network (CNN) for character recognition. According to our findings, the YOLOv8 model outperforms other models in LP identification, achieving remarkable accuracy and F1 scores (mAP@0.5 = 0.96 and mAP@0.95 = 0.97). As a result, YOLOv8 was chosen for Saudi license plate (SLP) character recognition. This strategy is well suited to ITS, since it addresses a number of issues, such as reducing vehicle theft and improving public safety. Furthermore, Local Interpretable Model-agnostic Explanations (LIME) provides justifications that help users better understand the model. By providing clear explanations for its decisions, the system not only satisfies performance standards but also meets the growing need for explainable AI, ensuring an efficient and responsible deployment.
ISSN: 1110-0168
DOI: 10.1016/j.aej.2024.09.057
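
The summary above describes a two-stage pipeline (YOLOv8 detection of plates and characters, followed by a CNN for character recognition) with LIME explanations. The sketch below is a rough illustration of such a pipeline, not the authors' released code. It assumes the Ultralytics YOLOv8 API, Keras, and the lime package; the weight files ("plate_yolov8.pt", "char_yolov8.pt", "char_cnn.weights.h5"), the 28-class character set, and the 32×32 crop size are hypothetical choices made for illustration.

```python
# Illustrative sketch of a two-stage ALPR pipeline with LIME explanations.
# All file names, class counts, and sizes are assumptions, not the paper's.
import cv2
import numpy as np
from ultralytics import YOLO
from tensorflow import keras
from lime import lime_image

plate_detector = YOLO("plate_yolov8.pt")  # hypothetical fine-tuned plate detector
char_detector = YOLO("char_yolov8.pt")    # hypothetical character detector

def build_char_cnn(num_classes=28, input_shape=(32, 32, 1)):
    """Small CNN that classifies a single cropped character."""
    return keras.Sequential([
        keras.layers.Input(shape=input_shape),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])

char_cnn = build_char_cnn()
# char_cnn.load_weights("char_cnn.weights.h5")  # hypothetical trained weights

def preprocess(char_bgr):
    """Grayscale, resize, and scale one character crop for the CNN."""
    gray = cv2.cvtColor(char_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (32, 32)).astype("float32") / 255.0
    return gray.reshape(1, 32, 32, 1)

def read_plates(image_path):
    """Detect plates, detect characters inside each plate, classify them."""
    image = cv2.imread(image_path)
    plates = []
    for pbox in plate_detector(image)[0].boxes.xyxy.tolist():
        px1, py1, px2, py2 = map(int, pbox)
        plate_crop = image[py1:py2, px1:px2]
        # Sort character boxes left to right so the sequence reads in order.
        cboxes = sorted(char_detector(plate_crop)[0].boxes.xyxy.tolist(),
                        key=lambda b: b[0])
        chars = []
        for x1, y1, x2, y2 in cboxes:
            crop = plate_crop[int(y1):int(y2), int(x1):int(x2)]
            probs = char_cnn.predict(preprocess(crop), verbose=0)
            chars.append(int(np.argmax(probs)))
        plates.append(chars)
    return plates  # one list of class indices per detected plate

def explain_character(char_rgb):
    """LIME explanation for one 32x32 RGB character crop (illustrative)."""
    def predict_rgb(batch):
        # LIME perturbs RGB images; convert them to the CNN's grayscale input.
        gray = batch.mean(axis=-1, keepdims=True).astype("float32") / 255.0
        return char_cnn.predict(gray, verbose=0)
    explainer = lime_image.LimeImageExplainer()
    return explainer.explain_instance(char_rgb, predict_rgb,
                                      top_labels=1, num_samples=500)
```

Sorting character boxes left to right is one simple way to recover the plate string; the paper's actual ordering scheme and character classes (including Arabic script) may differ from this sketch.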