
Multi‐level feature optimization and multimodal contextual fusion for sentiment analysis and emotion classification

Bibliographic Details
Published in: Computational intelligence 2020-05, Vol.36 (2), p.861-881
Main Authors: Huddar, Mahesh G., Sannakki, Sanjeev S., Rajpurohit, Vijay S.
Format: Article
Language:English
Abstract: With the availability of a humongous amount of multimodal content on the internet, multimodal sentiment classification and emotion detection have become among the most researched topics. Feature selection, context extraction, and multimodal fusion are the most important challenges in multimodal sentiment classification and affective computing. To address these challenges, this paper presents a multilevel feature optimization and multimodal contextual fusion technique. Evolutionary-computing-based feature selection models extract a subset of features from multiple modalities. The contextual information between neighboring utterances is extracted using bidirectional long short-term memory at multiple levels. Initially, bimodal fusion is performed by fusing a combination of two unimodal modalities at a time; finally, trimodal fusion is performed by fusing all three modalities. The proposed method is demonstrated on two publicly available datasets: CMU‐MOSI for sentiment classification and IEMOCAP for affective computing. By incorporating a subset of features and contextual information, the proposed model obtains better classification accuracy than two standard baselines by over 3% and 6% in sentiment and emotion classification, respectively.
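The abstract describes a hierarchical pipeline: select a feature subset per modality, fuse modalities pairwise (bimodal), then fuse everything (trimodal). The sketch below illustrates only that shape, not the authors' implementation: the evolutionary feature selection is replaced by a simple variance filter, fusion is plain concatenation, and all feature dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-utterance feature matrices for three modalities
# (5 utterances; feature dimensions chosen arbitrarily).
text  = rng.standard_normal((5, 100))
audio = rng.standard_normal((5, 73))
video = rng.standard_normal((5, 512))

def select_features(X, k):
    """Stand-in for the paper's evolutionary feature selection:
    keep the k highest-variance columns (a simple filter method)."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, np.sort(idx)]

def fuse(*modalities):
    """Fusion here is just concatenation along the feature axis."""
    return np.concatenate(modalities, axis=1)

# Reduce each modality to a 32-feature subset.
t, a, v = (select_features(m, 32) for m in (text, audio, video))

# Bimodal fusion: two modalities at a time.
ta, tv, av = fuse(t, a), fuse(t, v), fuse(a, v)

# Trimodal fusion: combine all bimodal representations.
tav = fuse(ta, tv, av)
print(tav.shape)  # → (5, 192)
```

In the paper the contextual modeling between neighboring utterances is done with bidirectional LSTMs at each fusion level; the concatenation step above is where those learned fusion layers would sit.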
DOI: 10.1111/coin.12274
ISSN: 0824-7935
EISSN: 1467-8640
Subjects: Affective computing; bidirectional LSTM; Classification; contextual information; Data mining; Emotions; Evolutionary algorithms; evolutionary computing; Feature extraction; Feature selection; Model accuracy; multimodal fusion; Optimization; Sentiment analysis