
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition

My PhD work aims at developing computational methodologies for automatic emotion recognition from audio-visual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, owing to the multiple sources of variation that modulate behavior. My goal is to provide computational frameworks for understanding and controlling for the sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6].

In particular, my research provides representation, modeling, and analysis methods for complex, time-varying behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and by deepening the understanding of affective cues embedded within complex audio-visual data.

Bibliographic Details
Main Author: Kim, Yelin
Format: Conference Proceeding
Language: English
Online Access: Request full text
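The temporal segmentation and time-series analysis techniques mentioned in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical (the window length, feature values, and function names are not taken from the paper); it only shows the general idea of splitting a frame-level feature stream into segments and summarizing each segment before recognition.

```python
# Illustrative sketch only: fixed-length temporal segmentation of a
# frame-level feature stream (e.g., per-frame vocal energy), followed by
# per-segment summary statistics. Window size and values are hypothetical.

def segment(frames, win):
    """Split frame-level features into non-overlapping windows of length win."""
    return [frames[i:i + win] for i in range(0, len(frames), win)]

def summarize(window):
    """Mean and range of one segment -- simple time-series summaries."""
    return {"mean": sum(window) / len(window),
            "range": max(window) - min(window)}

# Six frames of a made-up feature stream, segmented into two windows.
energy = [0.1, 0.4, 0.35, 0.9, 0.8, 0.2]
stats = [summarize(w) for w in segment(energy, win=3)]
```

In practice such per-segment statistics would feed a classifier; real systems typically use learned or emotion-driven segment boundaries rather than a fixed window, which is closer to what the abstract describes.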
DOI: 10.1109/ACII.2015.7344653
Publisher: IEEE
Published: 2015-09-01
EISBN: 1479999539, 9781479999538
EISSN: 2156-8111
Published in: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015, p.748-753
Source: IEEE Xplore All Conference Series
Subjects: affective computing; Analytical models; Audiovisual; Automation; Communities; Computation; Conferences; emotion estimation; Emotion recognition; Emotions; Human behavior; human perception; Motion segmentation; multimodal; Production; Recognition; Speech; Speech recognition; temporal; variation; Visualization