Evaluating the Efficacy of Segment Anything Model for Delineating Agriculture and Urban Green Spaces in Multiresolution Aerial and Spaceborne Remote Sensing Images
Published in: Remote Sensing (Basel, Switzerland), 2024-01, Vol. 16 (2), p. 414
Main Authors:
Format: Article
Language: English
Summary: Segmentation of Agricultural Remote Sensing Images (ARSIs) stands as a pivotal component within the intelligent development path of agricultural information technology. Similarly, quick and effective delineation of urban green spaces (UGSs) in high-resolution images is increasingly needed as input to various urban simulation models. Numerous segmentation algorithms exist for ARSIs and UGSs; however, a model with exceptional generalization capabilities and accuracy remains elusive. Notably, the newly released Segment Anything Model (SAM) by Meta AI is gaining significant recognition in various domains for segmenting conventional images, yielding commendable results. Nevertheless, SAM's application to ARSI and UGS segmentation has been relatively limited. ARSIs and UGSs exhibit distinct image characteristics, such as prominent boundaries, larger frame sizes, and extensive data types and volumes. Presently, there is a dearth of research on how SAM can effectively handle various ARSI and UGS image types and deliver superior segmentation outcomes. Thus, as a novel attempt in this paper, we aim to evaluate SAM's compatibility with a wide array of ARSI and UGS image types. The data acquisition platforms comprise both aerial and spaceborne sensors, and the study sites encompass most regions of the United States, with images of varying resolutions and frame sizes. Notably, SAM's segmentation quality is significantly influenced by image content, and its stability and accuracy vary across images of different resolutions and sizes. In general, however, our findings indicate that resolution has a minimal impact on the effectiveness of conditional (prompted) SAM-based segmentation, which maintains an overall segmentation accuracy above 90%. In contrast, the unsupervised SAM-based segmentation approach exhibits performance issues: around 55% of low-resolution images (3 m and coarser) yield lower accuracy. Frame size exerts a more substantial influence: as image size increases, the accuracy of unsupervised segmentation decreases very rapidly, and conditional segmentation also shows some degree of degradation. Additionally, SAM's segmentation efficacy diminishes considerably for images with unclear edges and minimal color distinctions. Consequently, we propose enhancing SAM's capabilities by augmenting the training dataset and fine-tuning hyperparameters to align with the demands …
ISSN: 2072-4292
DOI: 10.3390/rs16020414
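
The abstract contrasts two ways of running SAM: "conditional" (prompt-driven) segmentation and "unsupervised" (automatic) mask generation. As a minimal sketch of that distinction, the snippet below uses Meta AI's publicly released segment-anything package; the checkpoint path, tile size, and prompt coordinates are placeholder assumptions, not details taken from the article.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor, SamAutomaticMaskGenerator

# Load a SAM backbone from a local checkpoint (filename is an assumed placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# An RGB tile as an HxWx3 uint8 array, e.g. cropped from an aerial or satellite scene.
image = np.zeros((1024, 1024, 3), dtype=np.uint8)  # placeholder tile

# Conditional (prompted) mode: supply a foreground point on the target field or green space.
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 512]]),  # (x, y) pixel prompt; location is assumed
    point_labels=np.array([1]),           # 1 marks the point as foreground
    multimask_output=True,
)

# Unsupervised (automatic) mode: generate masks for the whole tile with no prompts.
mask_generator = SamAutomaticMaskGenerator(sam)
auto_masks = mask_generator.generate(image)
```

In the prompted mode the user (or an upstream detector) steers SAM toward each target object, which is consistent with the abstract's finding that conditional segmentation stays accurate across resolutions; the automatic mode segments everything it can find in the tile, which is where the reported sensitivity to coarse resolution and large frame sizes appears.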