A service assistant combining autonomous robotics, flexible goal formulation, and deep-learning-based brain–computer interfacing
Published in: Robotics and Autonomous Systems, 2019-06, Vol. 116, pp. 98–113
Format: Article
Language: English
Summary: As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically become more complicated as robotic tasks and environments grow in complexity. Traditional control modalities such as touch, speech, or gesture are not necessarily suited for all users. While some users can make the effort to familiarize themselves with a robotic system, users with motor disabilities may not be capable of controlling such systems even though they need robotic assistance most. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The system is composed of several interacting components: a brain–computer interface (BCI) that uses non-invasive neuronal signal recording and co-adaptive deep learning, high-level task planning based on referring expressions, navigation and manipulation planning, and environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal-formulation user interface, and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real-world scenarios, considering fetch-and-carry tasks, close human–robot interactions, and the presence of unexpected changes. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. Combined with high-level task planning based on referring expressions and an autonomous robotic system, interesting new perspectives open up for non-invasive BCI-based human–robot interaction.
Highlights:
- BCI-controlled autonomous robotic service assistant.
- First online brain–computer interface using deep learning.
- Menu-driven language generation based on referring expressions.
- Modular ROS-based mobile robot interaction.
- Experimental evaluation with a real robot.
ISSN: 0921-8890 (print), 1872-793X (electronic)
DOI: 10.1016/j.robot.2019.02.015
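
A rough, purely illustrative sketch of the closed-loop pipeline described in the summary: a deep-learning EEG decoder selects among displayed menu options, the selections are composed into a referring-expression goal, and the autonomous robot executes it and reports back. All names below (`decode_bci_selection`, `execute_fetch_and_carry`, the menu contents) are hypothetical placeholders, not the authors' implementation, which combines a deep-network decoder with a modular ROS-based robot stack.

```python
# Hypothetical sketch of the closed-loop interaction described in the abstract:
# EEG decoding -> menu-driven goal formulation via referring expressions
# -> autonomous fetch-and-carry execution. Names here are illustrative only.

import random
from dataclasses import dataclass

MENU = ["cup", "bottle", "bowl"]        # candidate objects (assumed examples)
LOCATIONS = ["kitchen table", "shelf"]  # candidate places (assumed examples)

@dataclass
class Goal:
    obj: str
    location: str

    def referring_expression(self) -> str:
        # Verbalize the goal as a natural-language referring expression.
        return f"the {self.obj} on the {self.location}"

def decode_bci_selection(options):
    """Stand-in for the deep-learning EEG decoder: in the real system a
    deep network classifies non-invasive neuronal signals into one of the
    displayed menu options. Here we simulate that choice."""
    return random.choice(options)

def execute_fetch_and_carry(goal: Goal) -> bool:
    """Stand-in for perception plus navigation and manipulation planning.
    Returns True on success; the real system adapts to environment changes."""
    print(f"Fetching {goal.referring_expression()} ...")
    return True

def closed_loop_episode():
    # Step 1: the user picks an object and a location, one menu at a time,
    # via decoded brain signals (co-adaptive: decoder and user both adapt).
    goal = Goal(decode_bci_selection(MENU), decode_bci_selection(LOCATIONS))
    # Step 2: hand the verbalized goal to the high-level task planner.
    ok = execute_fetch_and_carry(goal)
    # Step 3: report the outcome back to the user, closing the loop.
    print("Task completed." if ok else "Task failed, replanning.")

if __name__ == "__main__":
    closed_loop_episode()
```

The structural point this sketch tries to capture is the loop itself: decoder output drives goal formulation, execution feedback returns to the user, matching the closed-loop interaction the paper evaluates.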