A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions
Multimodal emotion recognition has gained considerable traction in affective computing, human-computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is growing demand to automate the analysis of user emotion for HCI, AI, and UX evaluation applications that provide affective services. Emotional cues are increasingly obtained from video, audio, text, or physiological signals, and emotions are therefore processed from multiple modalities, usually combined through ensemble-based systems with static weights. Because of limitations such as missing modality data, inter-class variation, and intra-class similarity, an effective weighting scheme is required to improve discrimination between modalities. This article accounts for the differing importance of the modalities and assigns them dynamic weights by adopting a more efficient combination process based on generalized mixture (GM) functions. We present a hybrid multimodal emotion recognition (H-MMER) framework that uses a multi-view learning approach for unimodal emotion recognition and introduces both feature-level and decision-level multimodal fusion with GM functions. In an experimental study, we evaluated the ability of the proposed framework to model four emotional states (…, …, …, and …) and found that most can be modeled with significantly high accuracy using GM functions. The experiments show that the framework models emotional states with an average accuracy of 98.19%, a significant performance gain over traditional approaches. Overall, the results indicate that emotional states can be identified with high accuracy, increasing the robustness of the emotion classification system required for UX measurement.
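The record contains no code, but the fusion scheme described in the abstract is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of decision-level fusion with a generalized mixture (GM) function: each modality's weight is computed from the inputs themselves (here, from agreement with the cross-modality mean) rather than being a static ensemble weight. The deviation-based weighting, the function name, and the example scores are assumptions for illustration, not the authors' H-MMER implementation.

```python
import numpy as np

def gm_fuse(scores: np.ndarray) -> np.ndarray:
    """Fuse per-modality class-probability vectors with a generalized
    mixture (GM) function.

    `scores` has shape (n_modalities, n_classes): one probability
    vector per modality. Weights are derived dynamically from the
    inputs themselves -- here, from each modality's deviation from the
    cross-modality mean -- instead of being fixed in advance.
    """
    mean = scores.mean(axis=0)                   # consensus class profile
    dev = np.linalg.norm(scores - mean, axis=1)  # per-modality deviation
    # Assumed GM weighting: modalities that agree with the consensus
    # count more; the small epsilon guards against division by zero.
    w = 1.0 - dev / (dev.sum() + 1e-12)
    w = w / w.sum()                              # normalize weights
    return w @ scores                            # fused class scores

# Hypothetical scores for three modalities over four emotional states:
video = np.array([0.70, 0.10, 0.10, 0.10])
audio = np.array([0.55, 0.20, 0.15, 0.10])
text  = np.array([0.25, 0.30, 0.25, 0.20])  # noisier modality
fused = gm_fuse(np.stack([video, audio, text]))
print(fused, fused.argmax())
```

On these made-up inputs the noisy text modality receives the smallest weight, which is the behavior the abstract credits for robustness to missing or unreliable modality data.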
Published in: | Sensors (Basel, Switzerland), 2023-04, Vol.23 (9), p.4373 |
---|---|
Main Authors: | Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung |
Format: | Article |
Language: | English |
Subjects: | Affective computing; Emotion recognition; Multimodal fusion; Generalized mixture functions; User experience |
container_end_page | |
container_issue | 9 |
container_start_page | 4373 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 23 |
creator | Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung |
description | Multimodal emotion recognition has gained considerable traction in affective computing, human-computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is growing demand to automate the analysis of user emotion for HCI, AI, and UX evaluation applications that provide affective services. Emotional cues are increasingly obtained from video, audio, text, or physiological signals, and emotions are therefore processed from multiple modalities, usually combined through ensemble-based systems with static weights. Because of limitations such as missing modality data, inter-class variation, and intra-class similarity, an effective weighting scheme is required to improve discrimination between modalities. This article accounts for the differing importance of the modalities and assigns them dynamic weights by adopting a more efficient combination process based on generalized mixture (GM) functions. We present a hybrid multimodal emotion recognition (H-MMER) framework that uses a multi-view learning approach for unimodal emotion recognition and introduces both feature-level and decision-level multimodal fusion with GM functions. In an experimental study, we evaluated the ability of the proposed framework to model four emotional states (…, …, …, and …) and found that most can be modeled with significantly high accuracy using GM functions. The experiments show that the framework models emotional states with an average accuracy of 98.19%, a significant performance gain over traditional approaches. Overall, the results indicate that emotional states can be identified with high accuracy, increasing the robustness of the emotion classification system required for UX measurement. |
doi_str_mv | 10.3390/s23094373 |
format | article |
eissn | 1424-8220 |
pmid | 37177574 |
publisher | MDPI AG |
pubdate | 2023-04-28 |
fulltext | fulltext |
identifier | ISSN: 1424-8220 |
ispartof | Sensors (Basel, Switzerland), 2023-04, Vol.23 (9), p.4373 |
issn | 1424-8220 |
language | eng |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3); PubMed Central |
subjects | Accuracy; Affective computing; Algorithms; Artificial Intelligence; audio-based emotion recognition; Classification; decision fusioning; Electroencephalography - methods; Emotion recognition; Emotional factors; Emotions; Emotions - physiology; feature fusioning; generalized mixture function; Human-computer interface; Humans; Learning; Mixtures; Neural networks; Recognition, Psychology; User experience |
title | A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |