Emotion interactive robot focus on speaker independently emotion recognition

This paper describes the realization of emotional interaction for a thinking robot (T-ROT), focusing on speech emotion recognition. In the field of speech emotion recognition, most researchers work in a speaker-dependent mode; however, a speaker-independent system is needed for commercial use. Hence, this paper proposes a new feature for constructing a speaker-independent system: the ratio of a spectral flatness measure to a spectral center (RSS), which shows small variation across speakers. Using Mel-frequency cepstral coefficients and RSS, an average recognition rate of 59.0 (±6.6)% at a 90% confidence interval is achieved in a speaker-independent, gender-dependent mode.
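
As context for the RSS feature named in the abstract, here is a minimal sketch of how such a ratio could be computed for one speech frame. This is an illustration only: the paper's exact definitions are not reproduced, the reading of "spectral center" as the spectral centroid is an assumption, and the function name and parameters are hypothetical.

```python
import numpy as np

def rss_feature(frame, sample_rate, eps=1e-10):
    """Hypothetical sketch: ratio of a spectral flatness measure to a
    spectral center (RSS). 'Spectral center' is assumed here to be the
    spectral centroid; the paper's exact formulation may differ."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2                 # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)   # bin frequencies (Hz)

    # Spectral flatness: geometric mean over arithmetic mean of the spectrum.
    flatness = np.exp(np.mean(np.log(spectrum + eps))) / (np.mean(spectrum) + eps)

    # Spectral center (assumed centroid): energy-weighted mean frequency.
    center = np.sum(freqs * spectrum) / (np.sum(spectrum) + eps)

    return flatness / (center + eps)
```

Such a frame-level value would typically be aggregated over an utterance (e.g. mean and variance) and concatenated with the MFCCs before classification.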

Bibliographic Details
Main Authors: Eun Ho Kim, Kyung Hak Hyun, Soo Hyun Kim, Yoon Keun Kwak
Format: Conference Proceeding
Language: English
Subjects: Acoustic sensors; Emotion recognition; Feedback; Humans; Manipulators; Mechanical engineering; Noise robustness; Robot sensing systems; Spatial databases; Speech recognition
Online Access: Request full text
DOI: 10.1109/AIM.2007.4412451
Publisher: IEEE
ISBN: 1424412633; 9781424412631
EISBN: 9781424412648; 1424412641
Published in: 2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2007, pp. 1-6
ISSN: 2159-6247
EISSN: 2159-6255
Source: IEEE Electronic Library (IEL) Conference Proceedings