Deep Learning-Based Intelligent Post-Bushfire Detection Using UAVs
Published in: IEEE Geoscience and Remote Sensing Letters, 2024, Vol. 21, pp. 1-5
Main Authors:
Format: Article
Language: English
Summary: Bushfires across remote areas can easily spread regionally due to extreme weather conditions. Unmanned aerial vehicles (UAVs) can play a significant role in postdisaster assessment as they have low-cost and flexible deployment characteristics. Detecting bushfires in their initial phase is important to save the inhabitants, infrastructure, and ecosystem. In the context of smart cities, we have applied state-of-the-art deep learning (DL) algorithms to detect and differentiate between fire and no-fire regions. However, certain factors like reachability to forest fires and timely detection of the region of interest (ROI) are quite challenging. Therefore, we incorporate UAVs for capturing the real-time images and process these images into our proposed YOLOv5-s (you only look once-small) model that helps achieve fast and accurate detection of the affected region. We have proposed a lightweight single-stage network with an improved bottleneck cross-stage partial (CSP) module and pyramid attention network (PAN) layers to enhance precise feature extraction and reduce computation time in fire detection. Notably, the HardSwish activation function outperformed ReLU in a specific fire detection scenario. Based on the provided dataset, simulation results demonstrate that the optimized model can effectively detect and differentiate between fire and nonfire regions, which may be challenging to discern with the naked eye. The results indicate that the proposed model surpasses existing models, achieving an accuracy of 97.4%, a low false-positive rate of 1.258%, and nonmaximum suppression (NMS) of 3 ms. Our model can provide real-time applications for fire and rescue relief teams.
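The summary notes that the HardSwish activation outperformed ReLU in the authors' fire-detection scenario. As a minimal sketch (not the authors' implementation), HardSwish is commonly defined as x · ReLU6(x + 3) / 6, a piecewise-linear approximation of Swish/SiLU that is cheap to compute on embedded UAV hardware:

```python
def relu(x: float) -> float:
    # Standard rectified linear unit: zero for negative inputs.
    return max(0.0, x)

def hardswish(x: float) -> float:
    # HardSwish(x) = x * ReLU6(x + 3) / 6, where ReLU6 clamps to [0, 6].
    # Unlike ReLU, it passes small negative values through with a
    # nonzero slope, which can help gradient flow in lightweight nets.
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

# Compare the two activations on a few sample inputs.
for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(f"x={x:+.1f}  relu={relu(x):.4f}  hardswish={hardswish(x):.4f}")
```

For x ≥ 3 HardSwish coincides with the identity, and for x ≤ -3 it is exactly zero, matching ReLU at the extremes while differing in the transition region.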
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2023.3329509