
A Joint Model with Contextual and Speaker Information for Conversational Causal Emotion Entailment

Conversational Causal Emotion Entailment (C2E2) aims to identify the causes of a target emotion in a non-neutral conversation. Most models treat C2E2 as an independent utterance-pair classification problem that ignores contextual information. Furthermore, most recent works focus only on the contribution of utterance information while ignoring the impact of speaker and emotional information. To solve these problems, we propose a joint model with contextual and speaker information for conversational causal emotion entailment. We introduce a temporal convolutional network structure to effectively capture contextual information and help the model better understand and analyze emotional changes in conversations. At the same time, a multi-feature interaction network is proposed to use multiple features of an utterance to analyze the causes of the emotion; the network extracts location information from a position-aware graph and uses speaker and emotion information to help the model understand the causes behind emotion generation. The experimental results demonstrate that our method outperforms the baseline method and can infer the causes of different emotions in more complex contexts.
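
This record contains only the abstract, so the sketch below is not the authors' implementation. It is a minimal, hypothetical illustration (in PyTorch, with invented class names, dimensions, and hyperparameters) of the kind of dilated, causal temporal convolutional network (TCN) the abstract describes for capturing conversational context over a sequence of utterance embeddings.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConvBlock(nn.Module):
        # One TCN block: dilated causal 1-D convolution with a residual connection.
        def __init__(self, dim, kernel_size=3, dilation=1):
            super().__init__()
            # Left-pad so each conversation turn only sees earlier turns (causal).
            self.pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)

        def forward(self, x):  # x: (batch, dim, turns)
            out = self.conv(F.pad(x, (self.pad, 0)))
            return torch.relu(out) + x  # residual keeps the original utterance signal

    class ConversationTCN(nn.Module):
        # Stacked blocks with exponentially growing dilation -> wide receptive field,
        # letting later utterances aggregate context from many earlier turns.
        def __init__(self, dim=768, num_layers=3):
            super().__init__()
            self.blocks = nn.Sequential(
                *[CausalConvBlock(dim, dilation=2 ** i) for i in range(num_layers)]
            )

        def forward(self, utt_embs):  # utt_embs: (batch, turns, dim)
            x = utt_embs.transpose(1, 2)           # Conv1d expects (batch, dim, turns)
            return self.blocks(x).transpose(1, 2)  # contextualized utterance states

    # Toy usage: 2 conversations, 10 turns each, 768-dim utterance embeddings.
    ctx = ConversationTCN()(torch.randn(2, 10, 768))
    print(ctx.shape)  # torch.Size([2, 10, 768])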

Bibliographic Details
Main Authors: Yang, Shanliang; Rao, Guozheng; Zhang, Li; Cong, Qing
Format: Conference Proceeding
Language: English
Subjects: Analytical models; Conversational Causal Emotion Entailment; Current measurement; Emotion recognition; Feature extraction; Neural networks; Noise; Noise measurement; Speaker; Temporal convolutional network
Online Access: https://ieeexplore.ieee.org/document/10651308
DOI: 10.1109/IJCNN60899.2024.10651308
EISSN: 2161-4407
EISBN: 9798350359312
Publisher: IEEE
Published in: 2024 International Joint Conference on Neural Networks (IJCNN), 2024, pp. 1-8
Source: IEEE Xplore All Conference Series