
Reinforcement Learning for Mobile Robotics Exploration: A Survey

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-08, Vol. 34 (8), p. 3796-3810
Main Authors: Garaffa, Luiza Caetano, Basso, Maik, Konzen, Andrea Aparecida, de Freitas, Edison Pignaton
Format: Article
Language:English
Description
Summary: Efficient exploration of unknown environments is a fundamental precondition for modern autonomous mobile robot applications. Aiming to design robust and effective robotic exploration strategies suitable for complex real-world scenarios, the academic community has increasingly investigated the integration of robotics with reinforcement learning (RL) techniques. This survey provides a comprehensive review of recent research that uses RL to design exploration strategies for unknown environments, covering both single-robot and multirobot systems. Its primary purpose is to facilitate future research by compiling and analyzing the current state of work linking these two knowledge domains. The survey summarizes: which RL algorithms are employed and how they compose the mobile robot exploration strategies proposed so far; how robotic exploration solutions address typical RL problems such as the exploration-exploitation dilemma, the curse of dimensionality, reward shaping, and slow learning convergence; and which experiments were performed and which software tools were used for learning and testing. The progress achieved is described, and remaining limitations and future perspectives are discussed.
ISSN: 2162-237X
2162-2388
DOI: 10.1109/TNNLS.2021.3124466
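
The abstract names the exploration-exploitation dilemma and reward shaping among the RL problems addressed by the surveyed works. As a brief, purely illustrative sketch (not taken from the survey), the Python snippet below shows epsilon-greedy tabular Q-learning on a toy gridworld with a small novelty bonus for reaching unvisited cells; the environment, parameter values, and bonus term are assumptions made here for illustration only.

    # Hypothetical sketch (not from the survey): epsilon-greedy tabular Q-learning
    # on a toy gridworld, with a small novelty bonus (reward shaping) for reaching
    # previously unvisited cells. All parameters and the environment are illustrative.
    import random
    from collections import defaultdict

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    GRID = 5                                      # 5x5 grid to explore
    ALPHA, GAMMA, EPSILON, BONUS = 0.1, 0.95, 0.2, 0.5

    def step(state, action, visited):
        """Move within the grid; reward is a novelty bonus when a new cell is reached."""
        x, y = state
        dx, dy = action
        nx = min(max(x + dx, 0), GRID - 1)
        ny = min(max(y + dy, 0), GRID - 1)
        reward = BONUS if (nx, ny) not in visited else 0.0  # shaped exploration reward
        return (nx, ny), reward

    Q = defaultdict(float)  # Q[(state, action)] -> value estimate

    for episode in range(200):
        state, visited = (0, 0), {(0, 0)}
        for _ in range(50):
            # epsilon-greedy: explore with probability EPSILON, otherwise exploit Q
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward = step(state, action, visited)
            visited.add(next_state)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

Here, EPSILON controls the exploration-exploitation trade-off, while the novelty bonus is a simple example of reward shaping intended to speed up learning convergence; the surveyed works apply far more elaborate variants of both ideas.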