Audio-visual integration for human-robot interaction in multi-person scenarios

Bibliographic Details
Main Authors: Quang Nguyen, Sang-Seok Yun, JongSuk Choi
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: This paper presents the integration of audio-visual perception components for human-robot interaction in the Robot Operating System (ROS). The vision-based nodes perform skeleton tracking and gesture recognition using a depth camera, and face recognition using an RGB camera. Auditory perception is based on sound source localization using a microphone array. We present an integration framework for these nodes using a top-down hierarchical messaging protocol. At the top of the hierarchy, a message carries the number of persons and their corresponding states (who, what, where), which are updated from the low-level perception nodes. This top message is passed to a planning node, which decides the robot's reaction according to its perception of the surrounding people. The paper demonstrates human-robot interaction in a multi-person scenario in which the robot directs its attention to persons who are speaking or waving their hands. Moreover, this modular architecture enables modules to be reused in other applications. To validate the approach, two sound source localization algorithms are evaluated in real time, with ground-truth localization provided by the face recognition module.
ISSN: 1946-0740, 1946-0759
DOI: 10.1109/ETFA.2014.7005303