
Multi-Modal Fusion of Speech-Gesture Using Integrated Probability Density Distribution



Bibliographic Details
Main Authors: Chi-geun Lee, Mun-sung Han
Format: Conference Proceeding
Language: English
Description
Summary: Although speech recognition has been explored extensively and developed successfully, it still suffers serious errors in noisy environments. In such cases, gestures, a by-product of speech, can be used to help interpret the speech. In this paper, we propose a method of multi-modal fusion recognition of speech and gesture using an integrated discrete probability density function estimated by a histogram. The method is tested with a microphone and a 3-axis accelerometer in a real-time experiment. The test has two parts: a method that adds and accumulates the speech and gesture probability density functions separately, and a more complicated method that creates a new probability density function by integrating the two PDFs of speech and gesture.
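The abstract describes two fusion strategies over discrete, histogram-estimated PDFs. The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes both recognizers output non-negative scores over the same command classes (the class names and scores are hypothetical), fuses them either by weighted add-and-accumulate or by forming a new joint PDF (here taken as an element-wise product under a conditional-independence assumption), and picks the most probable class.

```python
import numpy as np

# Hypothetical command classes and scores (illustrative only, not from the paper).
CLASSES = ["left", "right", "up", "down"]

def to_pdf(scores):
    """Normalize non-negative class scores into a discrete PDF.
    Stands in for the paper's histogram-based PDF estimation."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum()
    return scores / total if total > 0 else np.full(len(scores), 1.0 / len(scores))

def fuse_add_accumulate(speech_pdf, gesture_pdf, w_speech=0.5, w_gesture=0.5):
    """Method 1: weighted add-and-accumulate of the two modality PDFs."""
    fused = w_speech * speech_pdf + w_gesture * gesture_pdf
    return fused / fused.sum()

def fuse_integrated(speech_pdf, gesture_pdf):
    """Method 2: build a new PDF from both modalities jointly.
    Element-wise product assumes the modalities are conditionally
    independent given the class -- an assumption of this sketch."""
    joint = speech_pdf * gesture_pdf
    return joint / joint.sum() if joint.sum() > 0 else speech_pdf

if __name__ == "__main__":
    # Noisy speech only weakly favours "left"; the gesture favours it strongly.
    speech = to_pdf([0.30, 0.28, 0.22, 0.20])
    gesture = to_pdf([0.55, 0.15, 0.15, 0.15])

    for name, fused in [("add-accumulate", fuse_add_accumulate(speech, gesture)),
                        ("integrated", fuse_integrated(speech, gesture))]:
        best = CLASSES[int(np.argmax(fused))]
        print(f"{name}: {np.round(fused, 3)} -> {best}")
```

In both variants the gesture evidence compensates for the degraded speech scores, which is the motivation the abstract gives for fusing the two modalities.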
DOI:10.1109/IITA.2008.278