Speaker Personality Recognition With Multimodal Explicit Many2many Interactions
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: Recently, speaker personality analysis has become an increasingly popular research task in human-computer interaction. Previous studies of user personality trait recognition normally focus on leveraging static information, i.e., tweets, images, and social relationships on social platforms and websites. In this paper, however, we utilize three kinds of dynamic speaking information, i.e., textual, visual, and acoustic temporal sequences, to let a computer interpret human personality traits from a face-to-face monologue. Specifically, we propose an explicit many2many (many-to-many) interactive approach to help AI efficiently recognize speaker personality traits. On the one hand, we encode each modality's long feature sequence of human speaking with a bidirectional LSTM network. On the other hand, we design a many2many attention mechanism that explicitly captures the interactions across multiple modalities for multiple interactive pairs. Empirical evaluation on 12 personality traits demonstrates the effectiveness of our proposed approach to multimodal speaker personality recognition.
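The abstract describes per-modality BiLSTM encoding followed by a many-to-many attention mechanism over multiple cross-modal pairs. As the paper's exact formulation is not reproduced in this record, the sketch below is only a minimal NumPy illustration of the general idea: assuming each modality's sequence has already been encoded (the BiLSTM step is omitted), scaled dot-product cross-attention is applied to every ordered modality pair, and the pooled results are concatenated into one fusion vector. All function names, dimensions, and the mean-pooling choice are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query_seq, key_seq):
    """Scaled dot-product attention from one modality's sequence to another's.

    query_seq: (Tq, d) encoded frames of the querying modality
    key_seq:   (Tk, d) encoded frames of the attended modality
    Returns a (Tq, d) sequence of attended key-modality features.
    """
    d = query_seq.shape[-1]
    scores = softmax(query_seq @ key_seq.T / np.sqrt(d), axis=-1)  # (Tq, Tk)
    return scores @ key_seq

def many2many_fusion(modalities):
    """Cross-attend over every ordered modality pair ("many-to-many"),
    mean-pool each attended sequence, and concatenate the results."""
    names = list(modalities)
    pooled = []
    for q in names:
        for k in names:
            if q != k:  # one attended vector per ordered cross-modal pair
                pooled.append(cross_attend(modalities[q], modalities[k]).mean(axis=0))
    return np.concatenate(pooled)

# Toy inputs standing in for BiLSTM outputs: 8 time steps, 16-dim features.
rng = np.random.default_rng(0)
T, d = 8, 16
feats = {m: rng.standard_normal((T, d)) for m in ("text", "visual", "acoustic")}
fused = many2many_fusion(feats)
print(fused.shape)  # 6 ordered pairs x 16 dims -> (96,)
```

The fused vector would then feed a classifier head (one output per personality trait); that head, like the pooling scheme, is a design choice not specified in this record.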
ISSN: 1945-788X
DOI: 10.1109/ICME46284.2020.9102820