Efficient visual memory based navigation of indoor robot with a wide-field of view camera
Main Authors:
Format: Conference Proceeding
Language: English
Summary: In this paper, we present a complete framework for autonomous indoor robot navigation. We show that autonomous navigation is possible indoors using a single camera and natural landmarks. When navigating in an unknown environment for the first time, a natural behavior consists in memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the control guides the robot along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law adapted to the nonholonomic constraint. The proposed framework has been designed for a generic class of cameras (including conventional, catadioptric and fisheye cameras). Experiments with an AT3 Pioneer robot navigating in an indoor environment have been carried out with a fisheye camera. The results validate our approach.
DOI: 10.1109/ICARCV.2008.4795530
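The summary describes a visual memory built as a topologically organized set of key images, with a navigation mission expressed as a visual route that concatenates subsets of learned visual paths. The following is a minimal sketch of that idea only, not the paper's implementation: the names (VisualMemory, add_visual_path, visual_route) are hypothetical, the memory is modeled as an undirected graph of key-image identifiers, and the route is recovered as the shortest sequence of key images linking the current key image to the target.

```python
# Hypothetical sketch of a topological visual memory and visual-route extraction.
# Names and structure are illustrative assumptions, not the paper's API.
from collections import deque

class VisualMemory:
    """Key images organized as an undirected graph: nodes are key-image ids,
    edges link consecutive key images of a learned visual path; paths that
    share a key image are connected through that shared node."""

    def __init__(self):
        self.adjacency = {}  # key-image id -> set of neighbouring key-image ids

    def add_visual_path(self, key_images):
        """Store one human-guided path as an ordered list of key-image ids."""
        for a, b in zip(key_images, key_images[1:]):
            self.adjacency.setdefault(a, set()).add(b)
            self.adjacency.setdefault(b, set()).add(a)

    def visual_route(self, current, target):
        """Return a visual route: the shortest sequence of key images from the
        current key image to the target (BFS over the topological memory)."""
        parents = {current: None}
        queue = deque([current])
        while queue:
            node = queue.popleft()
            if node == target:
                route = []
                while node is not None:
                    route.append(node)
                    node = parents[node]
                return route[::-1]
            for neighbour in self.adjacency.get(node, ()):
                if neighbour not in parents:
                    parents[neighbour] = node
                    queue.append(neighbour)
        return None  # target key image not reachable in the memory

# Example: two learned visual paths sharing key image "I3"; the route from
# "I1" to "I6" concatenates subsets of both paths.
memory = VisualMemory()
memory.add_visual_path(["I1", "I2", "I3", "I4"])
memory.add_visual_path(["I3", "I5", "I6"])
print(memory.visual_route("I1", "I6"))  # ['I1', 'I2', 'I3', 'I5', 'I6']
```

An unweighted breadth-first search is used here only because adjacent key images are treated as equivalent hops; how the paper actually selects among candidate routes, and how the vision-based control law then regulates the nonholonomic robot along each key image pair, is not shown in this sketch.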