
Emotion classification via utterance-level dynamics: A pattern-based approach to characterizing affective expressions

Human emotion changes continuously and sequentially. This results in dynamics intrinsic to affective communication. One of the goals of automatic emotion recognition research is to computationally represent and analyze these dynamic patterns. In this work, we focus on the global utterance-level dynamics.

Bibliographic Details
Main Authors: Kim, Yelin; Provost, Emily Mower
Format: Conference Proceeding
Language: English
Pages: 3677-3681
Description: Human emotion changes continuously and sequentially. This results in dynamics intrinsic to affective communication. One of the goals of automatic emotion recognition research is to computationally represent and analyze these dynamic patterns. In this work, we focus on the global utterance-level dynamics. We are motivated by the hypothesis that global dynamics have emotion-specific variations that can be used to differentiate between emotion classes. Consequently, classification systems that focus on these patterns will be able to make accurate emotional assessments. We quantitatively represent emotion flow within an utterance by estimating short-time affective characteristics. We compare time-series estimates of these characteristics using Dynamic Time Warping, a time-series similarity measure. We demonstrate that this similarity can effectively recognize the affective label of the utterance. The similarity-based pattern modeling outperforms both a feature-based baseline and static modeling. It also provides insight into typical high-level patterns of emotion. We visualize these dynamic patterns and the similarities between the patterns to gain insight into the nature of emotion expression.
DOI: 10.1109/ICASSP.2013.6638344
Publisher: IEEE
Published: May 2013
EISBN: 1479903566, 9781479903566
Published in: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 3677-3681
ISSN: 1520-6149
EISSN: 2379-190X
Record ID: cdi_ieee_primary_6638344
Source: IEEE Xplore All Conference Series
Subjects: Accuracy; dynamic pattern; dynamic time warping; emotion classification; emotion dynamics; Emotion recognition; emotion structure; Hidden Markov models; Mathematical model; multimodal; Speech; Speech recognition; Trajectory
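The description centers on comparing short-time affective contours with Dynamic Time Warping (DTW) and labeling an utterance by its similarity to known emotion patterns. The paper's own code is not part of this record; the sketch below is a minimal, hypothetical illustration of the standard DTW distance plus a nearest-reference classifier. The 1-D contour representation, the absolute-difference cost, and the per-class reference contours are assumptions for illustration, not the authors' exact setup.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    Classic O(len(a) * len(b)) dynamic program: dist[i][j] holds the
    cost of the best alignment of a[:i] with b[:j], where matched
    samples pay their absolute difference and the warping path may
    stretch either sequence in time.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    dist = [[inf] * (m + 1) for _ in range(n + 1)]
    dist[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: match (diagonal), insertion, deletion
            dist[i][j] = cost + min(dist[i - 1][j - 1],
                                    dist[i - 1][j],
                                    dist[i][j - 1])
    return dist[n][m]


def classify(contour, references):
    """Pick the label whose reference contour is closest under DTW.

    `references` maps an emotion label to a hypothetical reference
    contour; a real system would compare against many training
    utterances per class rather than a single prototype.
    """
    return min(references, key=lambda label: dtw_distance(contour, references[label]))
```

Because DTW allows stretching in time, a slow and a fast rendition of the same rising contour score as identical, which is exactly the property that makes it suitable for comparing utterances of different lengths.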