Audiovisual Speech Synthesis
| Published in: | International Journal of Speech Technology, 2003-10, Vol. 6 (4), p. 331–346 |
| ---- | ---- |
| Main Authors: | , , , |
| Format: | Article |
| Language: | English |
| Summary: | This paper presents the main approaches used to synthesize talking faces and provides greater detail on a handful of these approaches. An attempt is made to distinguish between facial synthesis itself (i.e., the manner in which facial movements are rendered on a computer screen) and the way these movements may be controlled and predicted from phonetic input. The two main synthesis techniques (model-based vs. image-based) are contrasted and illustrated by a brief description of the most representative existing systems. The challenging issues that may drive future models (evaluation, data acquisition, and modeling) are also discussed and illustrated by our current work at ICP. 16 figures, 64 references. Adapted from the source document. |
| ISSN: | 1381-2416 |
| DOI: | 10.1023/A:1025700715107 |