Distilling Location Proposals of Unknown Objects through Gaze Information for Human-Robot Interaction
Main Authors: , , ,
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Successful and meaningful human-robot interaction requires robots to have knowledge about the interaction context, e.g., which objects should be interacted with. Unfortunately, the set of interactive objects is, for all practical purposes, infinite. This fact restricts the deployment of robots with pre-trained object-detection neural networks to pre-defined scenarios. A more flexible alternative to pre-training is to let a human teach the robot about new objects after deployment. However, doing so manually raises significant usability issues, as the user must manipulate the object and communicate its boundaries to the robot. In this work, we propose streamlining this process by combining automatic object location proposal methods with human gaze to distill pertinent object location proposals. Experiments show that the proposed method 1) increases precision by a factor of approximately 21 compared to location proposals alone, 2) locates objects comparably to a state-of-the-art pre-trained deep-learning detector (FCOS) without any training, and 3) detects objects that FCOS misses entirely. Furthermore, the method can locate objects on which FCOS was not trained, which are by definition undetectable for FCOS.
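The core idea in the summary, filtering generic object location proposals by where a human actually looks, can be sketched as follows. This is a minimal illustration, not the authors' exact method: the box format, the fixation-containment score, and the `min_score` threshold are all assumptions made for the example.

```python
def gaze_score(box, fixations):
    """Fraction of gaze fixation points falling inside a box (x1, y1, x2, y2)."""
    if not fixations:
        return 0.0
    x1, y1, x2, y2 = box
    inside = sum(1 for (gx, gy) in fixations
                 if x1 <= gx <= x2 and y1 <= gy <= y2)
    return inside / len(fixations)

def distill_proposals(proposals, fixations, min_score=0.5):
    """Keep only proposals the user looked at, ranked best first.

    `proposals` would come from any automatic location proposal method;
    `fixations` are 2-D gaze points in image coordinates.
    """
    scored = [(gaze_score(box, fixations), box) for box in proposals]
    return [box for score, box in sorted(scored, reverse=True)
            if score >= min_score]

# Example: gaze is concentrated on the first proposal, so the second is dropped.
proposals = [(10, 10, 50, 50), (100, 100, 160, 160)]
fixations = [(20, 20), (30, 25), (40, 45), (120, 130)]
print(distill_proposals(proposals, fixations))  # [(10, 10, 50, 50)]
```

In this toy setup, three of the four fixations land inside the first box (score 0.75) and only one inside the second (score 0.25), so thresholding at 0.5 distills the proposal set down to the attended object.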
ISSN: 2153-0866
DOI: 10.1109/IROS45743.2020.9340893