Reconstruction Bias U-Net for Road Extraction From Optical Remote Sensing Images
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, Vol. 14, pp. 2284-2294
Format: Article
Language: English
Summary: Automatic road extraction from remote sensing images plays an important role in navigation, intelligent transportation, and road network updating. Convolutional neural network (CNN)-based methods have achieved strong results for road extraction from remote sensing images, but they require large datasets with high-quality labels for model training. However, there are still few standard, large datasets designed specifically for road extraction from optical remote sensing images. In addition, existing end-to-end CNN models for road extraction usually have a symmetric structure; asymmetric structures between encoding and decoding have rarely been studied. To address these problems, this article first provides LRSNY, a publicly available dataset for road extraction from optical remote sensing images with manually annotated labels. Second, we propose a reconstruction bias U-Net for road extraction from remote sensing images. In our model, we increase the number of decoding branches to obtain multiple kinds of semantic information from different upsampling paths. Experimental results show that our method achieves better performance than six other state-of-the-art segmentation models when tested on our LRSNY dataset. We also test on the Massachusetts and Shaoshan datasets; the good performance on these two datasets further proves the effectiveness of our method.
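
The abstract describes a U-Net whose decoding side is enlarged with extra branches so that several upsampling paths contribute semantic information. The sketch below is only an illustration of that general idea in PyTorch: it assumes the extra decoding branches share one encoder and are fused by averaging at the output, and all layer sizes, the fusion rule, and the class names (`MultiDecoderUNet`, `Decoder`) are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of a U-Net with several decoding branches (illustrative only;
# channel sizes, the averaging fusion, and all names are assumptions, not the
# authors' reconstruction bias U-Net).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Decoder(nn.Module):
    """One decoding branch: upsample, concatenate the skip feature, convolve."""

    def __init__(self, channels=(256, 128, 64)):
        super().__init__()
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(c, c // 2, 2, stride=2) for c in channels]
        )
        self.convs = nn.ModuleList([conv_block(c, c // 2) for c in channels])

    def forward(self, x, skips):
        for up, conv, skip in zip(self.ups, self.convs, reversed(skips)):
            x = up(x)
            x = conv(torch.cat([x, skip], dim=1))
        return x


class MultiDecoderUNet(nn.Module):
    """A shared encoder feeding several decoding branches; outputs are averaged."""

    def __init__(self, in_ch=3, n_branches=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.bottleneck = conv_block(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.decoders = nn.ModuleList([Decoder() for _ in range(n_branches)])
        self.head = nn.Conv2d(32, 1, 1)  # binary road / non-road map

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        s3 = self.enc3(self.pool(s2))
        b = self.bottleneck(self.pool(s3))
        skips = [s1, s2, s3]
        # Each branch decodes independently; averaging fuses the semantic
        # information recovered by the different upsampling paths.
        feats = torch.stack([d(b, skips) for d in self.decoders]).mean(dim=0)
        return torch.sigmoid(self.head(feats))


if __name__ == "__main__":
    model = MultiDecoderUNet()
    mask = model(torch.randn(1, 3, 256, 256))
    print(mask.shape)  # torch.Size([1, 1, 256, 256])
```

How the extra branches are supervised and fused (e.g., deep supervision on each branch versus a single loss on the fused map) is a design choice the abstract does not specify; the averaging used above is only one plausible option.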
ISSN: 1939-1404; 2151-1535
DOI: 10.1109/JSTARS.2021.3053603