
Synthetic and hybrid imaging in the HUMANOID and VIDAS projects


Bibliographic Details
Main Authors: Lavagetto, F., Pandzic, I.S., Kalra, F., Magnenat-Thalmann, N.
Format: Conference Proceeding
Language: English
Subjects: Data mining; Deformable models; Image analysis; Image processing; Layout; Robustness; Speech analysis; Speech synthesis; Transmitters; Virtual reality
Online Access: Request full text
container_end_page 666
container_start_page 663
container_title Proceedings of 3rd IEEE International Conference on Image Processing
container_volume 3
creator Lavagetto, F.
Pandzic, I.S.
Kalra, F.
Magnenat-Thalmann, N.
description The research activity in natural/synthetic image processing and representation reported in this paper, initiated under the Esprit project HUMANOID and currently continued under the ACTS project VIDAS, concerns the application of virtual reality methodologies to interpersonal audio/video communication. The 3D videophone scene is modeled in video (the talker's face) and in audio (the talker's speech) so that natural data can be efficiently mixed with synthetic data and adapted onto deformable parameterized structures. Robust image analysis/synthesis tools are necessary to extract the visual primitives associated with the talker's face and to adapt them onto suitable modeling structures (wire-frames). Image/speech analysis performed at the transmitter provides suitable audio/video parameters, which are encoded and used at the receiver to synthesize the corresponding facial expressions together with synchronized lip movements.
doi_str_mv 10.1109/ICIP.1996.560582
format conference_proceeding
fulltext fulltext_linktorsrc
identifier ISBN: 9780780332591; ISBN: 0780332598
ispartof Proceedings of 3rd IEEE International Conference on Image Processing, 1996, Vol.3, p.663-666 vol.3
language eng
recordid cdi_ieee_primary_560582
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Data mining
Deformable models
Image analysis
Image processing
Layout
Robustness
Speech analysis
Speech synthesis
Transmitters
Virtual reality
title Synthetic and hybrid imaging in the HUMANOID and VIDAS projects
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-03-06T03%3A39%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Synthetic%20and%20hybrid%20imaging%20in%20the%20HUMANOID%20and%20VIDAS%20projects&rft.btitle=Proceedings%20of%203rd%20IEEE%20International%20Conference%20on%20Image%20Processing&rft.au=Lavagetto,%20F.&rft.date=1996&rft.volume=3&rft.spage=663&rft.epage=666%20vol.3&rft.pages=663-666%20vol.3&rft.isbn=9780780332591&rft.isbn_list=0780332598&rft_id=info:doi/10.1109/ICIP.1996.560582&rft_dat=%3Cieee_6IE%3E560582%3C/ieee_6IE%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i104t-143abb6df72b78c82345de6db95da8a6964efed4f86fc0dac6beedd2bb8752793%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=560582&rfr_iscdi=true