
Iterative, Deep Synthetic Aperture Sonar Image Segmentation

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, pp. 1-15
Main Authors: Sun, Yung-Chen; Gerg, Isaac D.; Monga, Vishal
Format: Article
Language: English
Description
Summary: Synthetic aperture sonar (SAS) systems produce high-resolution images of the seabed environment. Moreover, deep learning has demonstrated a superior ability to find robust features for automating imagery analysis. However, the success of deep learning is conditioned on having large amounts of labeled training data, and obtaining generous pixel-level annotations of SAS imagery is often practically infeasible. This challenge has thus far limited the adoption of deep learning methods for SAS segmentation. Algorithms exist to segment SAS imagery in an unsupervised manner, but they lack the benefit of state-of-the-art learning methods and their results leave significant room for improvement. In view of the above, we propose a new iterative algorithm for unsupervised SAS image segmentation that combines superpixel formation, deep learning, and traditional clustering methods. We call our method iterative deep unsupervised segmentation (IDUS). IDUS is an unsupervised learning framework that can be divided into four main steps: 1) a deep network estimates class assignments; 2) low-level image features from the deep network are clustered into superpixels; 3) superpixels are clustered into class assignments (which we call pseudo-labels) using k-means; and 4) the resulting pseudo-labels are used to backpropagate the loss of the deep network prediction. These four steps are performed iteratively until convergence. A comparison of IDUS with current state-of-the-art methods on a realistic benchmark dataset for SAS image segmentation demonstrates the benefits of our proposal, even as IDUS incurs a much lower computational burden during inference (the actual labeling of a test image). Because our design combines the merits of classical superpixel methods with deep learning, we demonstrate in practice a very significant benefit in terms of reduced selection bias, i.e., IDUS shows markedly improved robustness to the choice of training images. Finally, we also develop a semi-supervised (SS) extension of IDUS, called iterative deep SS segmentation (IDSS), and demonstrate experimentally that it can further enhance performance while outperforming supervised alternatives that exploit the same labeled training imagery.
ISSN: 0196-2892; 1558-0644
DOI: 10.1109/TGRS.2022.3162420
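
For illustration, below is a minimal Python sketch of the four-step loop described in the summary. It is not the authors' implementation: the toy network FeatureNet, the use of SLIC superpixels as a stand-in for the paper's feature-driven superpixel formation, the choice of four classes, and all hyperparameters are assumptions introduced here only to show how class estimation, superpixel pooling, k-means pseudo-labeling, and loss backpropagation could fit together.

# Hypothetical IDUS-style training loop (illustrative sketch only, not the authors' code).
# Assumes PyTorch, scikit-learn, and scikit-image >= 0.19 (for the channel_axis argument).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans
from skimage.segmentation import slic


class FeatureNet(nn.Module):
    """Toy fully convolutional network: per-pixel features plus class logits."""
    def __init__(self, n_classes: int, n_feat: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, n_feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_feat, n_feat, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(n_feat, n_classes, 1)

    def forward(self, x):
        feats = self.backbone(x)          # low-level per-pixel features
        logits = self.classifier(feats)   # step 1: class-assignment estimates
        return feats, logits


def idus_iteration(model, optimizer, image, n_classes, n_superpixels=200):
    """One unsupervised iteration on a single (1, 1, H, W) image tile."""
    feats, logits = model(image)

    # Step 2 (proxy): form superpixels; SLIC on the raw image stands in for the
    # paper's superpixel formation driven by the deep features.
    sp = slic(image[0, 0].detach().numpy(), n_segments=n_superpixels,
              channel_axis=None)

    # Step 3: average the deep features over each superpixel, then k-means the
    # superpixel descriptors into n_classes pseudo-labels.
    f = feats[0].detach().numpy()                       # (C, H, W)
    sp_ids = np.unique(sp)
    sp_desc = np.stack([f[:, sp == s].mean(axis=1) for s in sp_ids])
    cluster_of_sp = KMeans(n_clusters=n_classes, n_init=10).fit_predict(sp_desc)

    # Broadcast each superpixel's cluster label back to its pixels.
    pseudo = np.zeros(sp.shape, dtype=np.int64)
    for s, c in zip(sp_ids, cluster_of_sp):
        pseudo[sp == s] = c

    # Step 4: backpropagate the cross-entropy between the network prediction
    # and the pseudo-label map.
    loss = F.cross_entropy(logits, torch.from_numpy(pseudo)[None])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = FeatureNet(n_classes=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    img = torch.rand(1, 1, 128, 128)      # stand-in for a SAS image tile
    for it in range(3):                   # in practice, iterate until convergence
        print("iteration", it, "loss", idus_iteration(model, opt, img, n_classes=4))

In a full run, this iteration would presumably be repeated over the entire unlabeled training set until the pseudo-labels stabilize; the semi-supervised IDSS variant mentioned in the summary would additionally fold any available ground-truth labels into the loss.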