
Exploring the Social Influence of Virtual Humans Unintentionally Conveying Conflicting Emotions

Bibliographic Details
Main Authors: Choudhary, Zubin, Norouzi, Nahal, Erickson, Austin, Schubert, Ryan, Bruder, Gerd, Welch, Gregory F.
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 580
container_start_page 571
creator Choudhary, Zubin
Norouzi, Nahal
Erickson, Austin
Schubert, Ryan
Bruder, Gerd
Welch, Gregory F.
description The expression of human emotion is integral to social interaction, and in virtual reality it is increasingly common to develop virtual avatars that attempt to convey emotions by mimicking these visual and aural cues, i.e., the facial and vocal expressions. However, errors in (or the absence of) facial tracking can result in the rendering of incorrect facial expressions on these virtual avatars. For example, a virtual avatar may speak with a happy or unhappy vocal inflection while their facial expression remains otherwise neutral. In circumstances where there is conflict between the avatar's facial and vocal expressions, it is possible that users will incorrectly interpret the avatar's emotion, which may have unintended consequences in terms of social influence or in terms of the outcome of the interaction. In this paper, we present a human-subjects study (N = 22) aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically, we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigated three levels of head scale (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects user interpretation of the conveyed emotion. We found significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scale. Evidence from our results suggests that facial expressions have a stronger impact than vocal expressions. Additionally, as the difference between the two expressions increases, the multimodal expression becomes less predictable. For example, for the happy-looking and happy-sounding multimodal expression, we expected and observed high happiness ratings and high trust; however, if one of the two expressions changes, the mismatch makes the expression less predictable.
We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.
doi_str_mv 10.1109/VR55154.2023.00072
format conference_proceeding
identifier EISSN: 2642-5254
ispartof 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 2023, p.571-580
issn 2642-5254
language eng
recordid cdi_ieee_primary_10108086
source IEEE Xplore All Conference Series
subjects Avatars
Design methodology
Dynamic range
Human-centered computing-Human computer interaction (HCI)-HCI design and evaluation methods-User studies
Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality
Rendering (computer graphics)
Three-dimensional displays
User interfaces
Visualization
title Exploring the Social Influence of Virtual Humans Unintentionally Conveying Conflicting Emotions
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T09%3A54%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Exploring%20the%20Social%20Influence%20of%20Virtual%20Humans%20Unintentionally%20Conveying%20Conflicting%20Emotions&rft.btitle=2023%20IEEE%20Conference%20Virtual%20Reality%20and%203D%20User%20Interfaces%20(VR)&rft.au=Choudhary,%20Zubin&rft.date=2023-03&rft.spage=571&rft.epage=580&rft.pages=571-580&rft.eissn=2642-5254&rft.coden=IEEPAD&rft_id=info:doi/10.1109/VR55154.2023.00072&rft.eisbn=9798350348156&rft_dat=%3Cieee_CHZPO%3E10108086%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i204t-416fe520bd6236add9c4f1b483a902d0bbe727f3e3b75b54db59d766208651c33%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10108086&rfr_iscdi=true