Deep Semantic Segmentation for Identifying Traversable Terrain in Off-Road Autonomous Driving

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 162977-162989
Main Authors: Rahi, Adibuzzaman; Elgeoushy, Omar; Syed, Shazeb H.; El-Mounayri, Hazim; Wasfy, Hatem; Wasfy, Tamer; Anwar, Sohel
Format: Article
Language: English
Summary: Autonomous navigation in off-road environments presents significant challenges due to the diverse and unpredictable characteristics of natural terrains. Because of the class imbalances prevalent in existing datasets, current models struggle to adapt to different environmental conditions. In this paper, we present an approach that addresses these challenges through the development of a deep semantic segmentation model tailored specifically to aid navigation in diverse off-road scenarios. Our methodology consists of two primary components: dataset development and segmentation model construction. The dataset comprises 1,414 images derived from the Yamaha-CMU Off-Road (YCOR) dataset through accuracy enhancements and augmentation techniques. A segmentation model is then built on an encoder-decoder architecture, with ResNet34 as the feature extractor and U-Net as the decoder. The proposed model demonstrates notably high segmentation accuracy, attaining a micro-F1 score of 90% or higher on benchmark datasets such as YCOR, RELLIS-3D, and RUGD with minimal or no transfer learning, which highlights its versatility and adaptability across varied environmental settings. The model also exhibits a per-frame inference time of 40 ms, rendering it feasible for real-time application.
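
As a concrete illustration of the encoder-decoder design described in the abstract, the sketch below assembles a U-Net decoder with a ResNet34 encoder using the third-party segmentation_models_pytorch package. The abstract does not name an implementation library, class count, input resolution, or pretraining scheme, so those choices here are illustrative assumptions rather than the authors' method.

    import torch
    import segmentation_models_pytorch as smp

    NUM_CLASSES = 8  # assumption: the real terrain-class count depends on the dataset labels

    # U-Net decoder paired with a ResNet34 encoder, matching the architecture
    # named in the abstract (the library choice is an assumption, not the authors').
    model = smp.Unet(
        encoder_name="resnet34",     # ResNet34 backbone as the feature extractor
        encoder_weights="imagenet",  # ImageNet pretraining is assumed, not stated in the abstract
        in_channels=3,               # RGB input images
        classes=NUM_CLASSES,         # one output channel per terrain class
    )

    # Forward pass on a dummy batch; U-Net input sides should be divisible by 32.
    x = torch.randn(1, 3, 512, 512)
    logits = model(x)                # shape: (1, NUM_CLASSES, 512, 512)
    pred = logits.argmax(dim=1)      # per-pixel class predictions

On the reported metric: micro-F1, as conventionally defined, pools per-pixel true positives, false positives, and false negatives across all classes before computing F1 = 2TP / (2TP + FP + FN), so frequent classes dominate the score; given ground-truth labels y, it can be computed with sklearn's f1_score(y.flatten(), pred.flatten(), average="micro").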
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3491135