Segment-anything embedding for pixel-level road damage extraction using high-resolution satellite images
Published in: International Journal of Applied Earth Observation and Geoinformation, 2024-07, Vol. 131, p. 103985, Article 103985
Format: Article
Language: English
Summary:
• A road damage satellite image dataset was released for the first time.
• A new segmentation model, RDSeg, was proposed for road damage extraction.
• This is the first study to apply a deep learning model for road damage extraction.
• RDSeg achieved the highest accuracy on the proposed dataset.
When a strong earthquake occurs, roads are the lifelines of rescue. The rapid development of high-resolution satellite imaging platforms has made it possible to apply remote sensing technology to road damage identification. For years, road damage identification has required substantial manual involvement, making it difficult to meet the needs of rapid post-disaster response. Automatic recognition of road damage from satellite images remains difficult: damaged areas appear with blurry boundaries, varied sizes, and uneven spatial distributions. Aiming at automatic pixel-level road damage identification, we introduce the first road damage dataset, CAU-RoadDamage, which includes high-resolution satellite images and pixel-level human annotations. Moreover, we propose, for the first time, applying a pre-trained vision foundation model to automatically identify road damage. Low-rank adaptation (LoRA) is used to fine-tune the foundation model on the satellite images, and two-way attention is used to integrate the foundation model with domain-specialist model components. The proposed segmentation model is compared with multiple state-of-the-art methods on the CAU-RoadDamage dataset. Our approach achieves the highest F1 score, 76.09%, notably higher than that of the other models. The experimental results demonstrate the feasibility of pixel-level road damage recognition and the applicability of vision foundation models to downstream remote sensing tasks. The CAU-RoadDamage dataset will be made publicly available at https://github.com/CAU-HE/RoadDamageExtraction.
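The low-rank adaptation mentioned in the abstract fine-tunes a frozen pretrained weight matrix by learning only a small low-rank update. A minimal NumPy sketch of the idea is shown below; the function name, shapes, and scaling factor are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a low-rank (LoRA) update.

    W (d_out x d_in) is the frozen pretrained weight; only the low-rank
    factors A (r x d_in) and B (d_out x r) are trained, so the effective
    weight is W + alpha * B @ A. All names/shapes here are hypothetical.
    """
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2              # rank r << min(d_in, d_out)
W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))              # zero init: update starts as a no-op
x = rng.normal(size=(3, d_in))

# With B = 0, the adapted layer reproduces the frozen layer exactly,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only A and B are updated, the number of trainable parameters is r*(d_in + d_out) instead of d_in*d_out, which is what makes fine-tuning a large vision foundation model on a modest satellite-image dataset practical.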
ISSN: 1569-8432, 1872-826X
DOI: 10.1016/j.jag.2024.103985