Recognizing emotions in dialogues with acoustic and lexical features
Automatic emotion recognition has long been a focus of Affective Computing. We aim at improving the performance of state-of-the-art emotion recognition in dialogues using novel knowledge-inspired features and modality fusion strategies. We propose features based on disfluencies and nonverbal vocalisations (DIS-NVs), and show that they are highly predictive for recognizing emotions in spontaneous dialogues.
Main Authors: | Leimin Tian; Moore, Johanna D.; Lai, Catherine |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Acoustics; Computation; Conferences; Context modeling; dialogue system; disfluency; Emotion recognition; Emotions; Feature extraction; Feature recognition; Predictive models; Recognition; State of the art; Strategy; Visualization |
cited_by | |
---|---|
cites | |
container_end_page | 742 |
container_issue | |
container_start_page | 737 |
container_title | 2015 International Conference on Affective Computing and Intelligent Interaction (ACII) |
container_volume | |
creator | Leimin Tian; Moore, Johanna D.; Lai, Catherine |
description | Automatic emotion recognition has long been a focus of Affective Computing. We aim at improving the performance of state-of-the-art emotion recognition in dialogues using novel knowledge-inspired features and modality fusion strategies. We propose features based on disfluencies and nonverbal vocalisations (DIS-NVs), and show that they are highly predictive for recognizing emotions in spontaneous dialogues. We also propose the hierarchical fusion strategy as an alternative to current feature-level and decision-level fusion. This fusion strategy combines features from different modalities at different layers in a hierarchical structure. It is expected to overcome limitations of feature-level and decision-level fusion by including knowledge on modality differences, while preserving information of each modality. |
doi_str_mv | 10.1109/ACII.2015.7344651 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2156-8111 |
ispartof | 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015, p.737-742 |
issn | 2156-8111 |
language | eng |
recordid | cdi_proquest_miscellaneous_1778046885 |
source | IEEE Xplore All Conference Series |
subjects | Acoustics; Computation; Conferences; Context modeling; dialogue system; disfluency; Emotion recognition; Emotions; Feature extraction; Feature recognition; Predictive models; Recognition; State of the art; Strategy; Visualization |
title | Recognizing emotions in dialogues with acoustic and lexical features |
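The hierarchical fusion strategy described in the abstract combines modalities at different layers of a model, rather than concatenating all features at the input (feature-level fusion) or combining per-modality predictions at the output (decision-level fusion). The following is only a minimal illustrative sketch of that idea, not the authors' implementation: the use of PyTorch, the layer sizes, the feature dimensions, and the choice to encode lexical (DIS-NV) features before merging in acoustic features are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    """Sketch of hierarchical fusion: one modality is encoded in a lower
    layer, and the second modality is merged at a higher layer, instead of
    fusing everything at the input or only at the decision level."""

    def __init__(self, lex_dim, ac_dim, hidden=64, n_classes=4):
        super().__init__()
        # Lower layer: encode lexical (e.g. DIS-NV) features on their own.
        self.lex_encoder = nn.Sequential(nn.Linear(lex_dim, hidden), nn.ReLU())
        # Higher layer: fuse the lexical encoding with the acoustic features.
        self.fusion = nn.Sequential(nn.Linear(hidden + ac_dim, hidden), nn.ReLU())
        # Output layer: emotion class logits.
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, lex_feats, ac_feats):
        h_lex = self.lex_encoder(lex_feats)                            # modality-specific layer
        h_fused = self.fusion(torch.cat([h_lex, ac_feats], dim=-1))    # fusion layer
        return self.classifier(h_fused)

# Illustrative usage with made-up feature dimensions and batch size.
model = HierarchicalFusion(lex_dim=10, ac_dim=88)
logits = model(torch.randn(8, 10), torch.randn(8, 88))  # -> shape (8, 4)
```

The point of the structure is that each modality keeps its own representation before fusion, so knowledge about modality differences (e.g. which features are sparse or noisy) can inform where in the hierarchy each modality enters.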