
Interpretation of multimodal designation with imprecise gesture

We are interested in multimodal systems that use the following modes and modalities: speech (and natural language) as both input and output, gesture as input, and visual output via screen displays. The user interacts with the system through gestures and/or spoken statements in natural language. This exchange, encoded in the different modalities, carries the user's goal as well as the designation of the objects (referents) needed to achieve that goal. The system must identify the objects designated by the user precisely and unambiguously. In this paper, our main concern is multimodal designation, possibly with imprecise gesture, of objects in the visual context. To identify such a designation, we propose a solution that uses probabilities, knowledge about the manipulated objects, and perceptual properties (degree of salience) associated with these objects.
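The paper's actual method is not reproduced in this record, but a minimal sketch of the general idea it describes, combining a probabilistic model of an imprecise pointing gesture with object knowledge (the category named in speech) and perceptual salience to rank candidate referents, might look like the following. All names, the Gaussian gesture model, and the weighting scheme are illustrative assumptions, not the authors' algorithm:

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: the Gaussian gesture model, the hard category
# filter, and the salience weighting are assumptions, not the paper's method.

@dataclass
class DisplayObject:
    name: str
    x: float
    y: float
    category: str      # object knowledge, e.g. the type named in speech
    salience: float    # perceptual salience in [0, 1]

def gesture_likelihood(obj: DisplayObject, gx: float, gy: float,
                       sigma: float = 40.0) -> float:
    """Probability-like weight for an imprecise pointing gesture,
    modeled here as an isotropic Gaussian around the touch point."""
    d2 = (obj.x - gx) ** 2 + (obj.y - gy) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def resolve_designation(objects, gx, gy, spoken_category=None):
    """Rank candidate referents by gesture fit, category compatibility
    with the spoken utterance, and degree of salience."""
    scored = []
    for obj in objects:
        if spoken_category is not None and obj.category != spoken_category:
            continue  # a soft compatibility weight would also be plausible
        score = gesture_likelihood(obj, gx, gy)
        score *= 0.5 + 0.5 * obj.salience  # more salient objects win ties
        scored.append((score, obj))
    return [obj for score, obj in sorted(scored, key=lambda p: p[0], reverse=True)]

scene = [
    DisplayObject("t1", 100, 100, "triangle", salience=0.9),
    DisplayObject("c1", 110, 105, "circle",   salience=0.4),
    DisplayObject("t2", 300, 250, "triangle", salience=0.2),
]
# "this triangle" accompanied by an imprecise click near (105, 102)
print([o.name for o in resolve_designation(scene, 105, 102, "triangle")])
```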


Bibliographic Details
Main Authors: Choumane, A.; Siroux, J.
Format: Conference Proceeding
Language: English
Subjects: Natural language processing
DOI: 10.1049/cp:20070374
ISBN: 9780863418532; 0863418538
Published in: 3rd IET International Conference on Intelligent Environments (IE 07), 2007, p. 232-238
Publisher: Stevenage: IET
Source: IEEE Electronic Library (IEL) Conference Proceedings