Sentiment-aware multimodal pre-training for multimodal sentiment analysis
Published in: Knowledge-Based Systems, 2022-12, Vol. 258, Article 110021
Main Authors:
Format: Article
Language: English
Summary: Pre-trained models, together with fine-tuning on downstream labeled datasets, have demonstrated great success in various tasks, including multimodal sentiment analysis. However, most multimodal pre-trained models focus on learning general lexical and/or visual information while ignoring sentiment signals. To address this problem, we propose a sentiment-aware multimodal pre-training (SMP) framework for multimodal sentiment analysis. In particular, we design a cross-modal contrastive learning module based on the interactions between visual and textual information, and introduce additional sentiment-aware pre-training objectives (e.g., fine-grained sentiment labeling) to capture fine-grained sentiment information from sentiment-rich datasets. We adopt two further objectives (i.e., masked language modeling and masked auto-encoders) to capture semantic information from text and images. We conduct a series of experiments on sentence-level and target-oriented multimodal sentiment classification tasks, in which the results of our SMP model exceed the state of the art. Additionally, ablation studies and case studies verify the effectiveness of our SMP model.
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2022.110021
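
The cross-modal contrastive learning module mentioned in the summary above can be pictured with a brief sketch. The snippet below is a minimal, hypothetical InfoNCE-style image-text contrastive loss in PyTorch; the function name, tensor shapes, and temperature default are illustrative assumptions and are not taken from the paper's implementation.

```python
# Hypothetical sketch of a cross-modal (image-text) contrastive objective,
# in the spirit of the contrastive module described in the abstract.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_emb: torch.Tensor,
                                 image_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss over a batch of paired text/image embeddings.

    text_emb, image_emb: (batch, dim) projections of the two modalities.
    Matched pairs share the same batch index; all other in-batch pairs
    serve as negatives.
    """
    # L2-normalize so the dot product is a cosine similarity.
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = t @ v.T / temperature              # (batch, batch) similarities
    targets = torch.arange(t.size(0), device=t.device)
    # Symmetric loss: text-to-image and image-to-text retrieval directions.
    loss_t2v = F.cross_entropy(logits, targets)
    loss_v2t = F.cross_entropy(logits.T, targets)
    return (loss_t2v + loss_v2t) / 2
```

In a pre-training loop, `text_emb` and `image_emb` would come from projection heads on the textual and visual encoders, and this loss would be combined with the other objectives the abstract lists (masked language modeling, masked auto-encoding, and sentiment-aware labeling).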