Interpretation of multimodal designation with imprecise gesture
Format: Conference Proceeding
Language: English
Summary: We are interested in multimodal systems that use the following modes and modalities: speech (and natural language) as both input and output, gesture as input, and visual output on screen displays. The user interacts with the system through gestures and/or spoken statements in natural language. This exchange, encoded in the different modalities, carries the user's goal as well as the designation of the objects (referents) needed to achieve it. The system must identify the objects designated by the user precisely and unambiguously. In this paper, our main concern is multimodal designations, possibly with imprecise gestures, of objects in the visual context. To identify such a designation, we propose a solution that uses probabilities, knowledge about the manipulated objects, and perceptive aspects (degree of salience) associated with these objects.
DOI: 10.1049/cp:20070374
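The abstract describes resolving an imprecise gesture by combining probabilities over candidate objects with a salience measure. A minimal sketch of that kind of scoring, under assumed details (a Gaussian pointing likelihood and a multiplicative salience prior; all names, positions, and weights are illustrative, not the paper's actual model):

```python
import math

def gesture_likelihood(obj_pos, gesture_pos, sigma=50.0):
    """Gaussian likelihood (in screen pixels) that a gesture at
    gesture_pos was aimed at an object located at obj_pos."""
    dx = obj_pos[0] - gesture_pos[0]
    dy = obj_pos[1] - gesture_pos[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def resolve_designation(objects, gesture_pos):
    """Score each candidate by pointing likelihood weighted by its
    degree of salience, and return the best candidate plus all scores.
    `objects` maps name -> ((x, y), salience in [0, 1])."""
    scores = {
        name: gesture_likelihood(pos, gesture_pos) * salience
        for name, (pos, salience) in objects.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical visual context: a highly salient object and a less
# salient one that is slightly closer to the (imprecise) gesture.
objects = {
    "red_triangle": ((100, 100), 0.9),
    "gray_square": ((120, 110), 0.3),
}
best, scores = resolve_designation(objects, gesture_pos=(115, 108))
```

Here salience outweighs the small difference in pointing distance, so the more salient object wins the designation even though the gesture lands nearer the other one.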