Towards a multimodal emotion recognition framework to be integrated in a Computer Based Speech Therapy System
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Emotion recognition has become a "must have" for all systems that aim to inspire users' confidence and to interact in a friendly and familiar way. In this paper we propose an improved CBST (Computer Based Speech Therapy System) architecture that uses multimodal (i.e. paralanguage, visual, and physiological parameters) emotion recognition techniques. Most research on emotion recognition using speech analysis has so far focused on adult subjects with good pronunciation. However, little research has been conducted on adapting classical affect recognition techniques to "narrow areas" such as children's speech therapy, where emotions play a key role. Our paper therefore addresses the assessment of the affective state of children with speech disorders. A brief literature review is presented, exploring recent work in the area. New hypotheses are formulated in order to identify the limits of using classical emotion recognition techniques under these special conditions. An original framework to be integrated into the CBST architecture is also outlined. The proposed framework can be seen as an extension of a CBST, but it will remain flexible enough to be adapted to other learning systems as well.
DOI: 10.1109/SPED.2011.5940727