
Do you read me? (E)motion Legibility of Virtual Reality Character Representations


Bibliographic Details
Main Authors: Brandstätter, Klara, Congdon, Ben J., Steed, Anthony
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: We compared the body movements of five virtual reality (VR) avatar representations in a user study (N = 53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants' emotion-detection accuracy was similar for the IK and AGRoL representations, highest for the full-body motion-capture representation, and lowest for the head-and-hands representation. Our findings suggest that, from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data made specifically for social VR applications.
ISSN: 2473-0726
DOI: 10.1109/ISMAR62088.2024.00044