Making a robotic scene representation accessible to feature and label queries
Main Authors: , ,
Format: Conference Proceeding
Language: English
Summary: We present a neural architecture for scene representation that stores semantic information about objects in the robot's workspace. We show how this representation can be queried through low-level features such as color and size, through feature conjunctions, and through symbolic labels. This is possible by binding different feature dimensions through space and integrating these space-feature representations with an object recognition system. Queries lead to the activation of a neural representation of previously seen objects, which can then be used to drive object-oriented action. The representation is continuously linked to sensory information and autonomously updates when objects are moved or removed.
ISSN: 2161-9476
DOI: 10.1109/DEVLRN.2011.6037360
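The binding mechanism described in the summary — feature dimensions tied together through a shared spatial dimension — can be illustrated with a minimal sketch. This is a hypothetical toy model, not the authors' neural-field implementation: each feature dimension is stored as a discrete space-feature map, a feature cue projects its map onto space, and a conjunction of cues is an elementwise product, so only locations matching all cues remain active.

```python
import numpy as np

# Hypothetical toy scene: a 1-D workspace with 10 positions.
# Feature values are discretized: 3 colors (red/green/blue), 2 sizes (small/large).
N_SPACE, N_COLOR, N_SIZE = 10, 3, 2

# Space-feature maps: rows index space, columns index feature value.
color_map = np.zeros((N_SPACE, N_COLOR))
size_map = np.zeros((N_SPACE, N_SIZE))

# Two remembered objects: a small red object at position 2,
# and a large red object at position 7.
color_map[2, 0] = 1.0; size_map[2, 0] = 1.0
color_map[7, 0] = 1.0; size_map[7, 1] = 1.0

def query(color=None, size=None):
    """Return spatial activation for a feature (conjunction) query.

    Each cue selects one column of its space-feature map; cues are
    combined multiplicatively over space, so the shared spatial
    dimension is what binds the features together.
    """
    activation = np.ones(N_SPACE)
    if color is not None:
        activation *= color_map[:, color]
    if size is not None:
        activation *= size_map[:, size]
    return activation

# "red" alone activates both object locations (2 and 7):
red = query(color=0)
# the conjunction "red AND large" singles out position 7:
red_large = query(color=0, size=1)
```

In the full architecture this spatial activation would in turn drive the object recognition system and object-oriented action; the sketch only shows why a shared spatial dimension suffices to resolve feature conjunctions.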