Textual sentiment analysis via three different attention convolutional neural networks and cross-modality consistent regression

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2018-01, Vol. 275, pp. 1407-1415
Main Authors: Zhang, Zufan; Zou, Yang; Gan, Chenquan
Format: Article
Language:English
Summary: Word embeddings and the CNN (convolutional neural network) architecture are crucial ingredients of sentiment analysis. However, sentiment and lexicon embeddings are rarely used, and CNNs struggle to capture the global features of a sentence. To this end, semantic embeddings, sentiment embeddings, and lexicon embeddings are applied to encode texts, and three different attention mechanisms, namely an attention vector, LSTM (long short-term memory) attention, and attentive pooling, are integrated with the CNN model in this paper. Additionally, a word and its context are explored jointly to disambiguate the meaning of the word and enrich the input representation. To further improve the performance of the three attention CNN models, CCR (cross-modality consistent regression) and transfer learning are presented. It is worth noting that this is the first time CCR and transfer learning have been used in textual sentiment analysis. Finally, experiments on two different datasets demonstrate that the proposed attention CNN models achieve the best or next-best results compared with existing state-of-the-art models.
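
As a rough illustration of one idea in the abstract, the sketch below shows a CNN with an attentive-pooling layer on top of three concatenated embedding tables (semantic, sentiment, lexicon). It is a minimal sketch only: the choice of PyTorch, all layer sizes, the concatenation-based fusion, and the single convolution width are assumptions and do not reproduce the authors' exact models, which also include attention-vector and LSTM-attention variants plus CCR and transfer learning.

# Hedged sketch: a minimal attentive-pooling CNN for sentence sentiment.
# Hyperparameters and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentivePoolingCNN(nn.Module):
    def __init__(self, vocab_size, sem_dim=300, senti_dim=50, lex_dim=6,
                 n_filters=100, kernel_size=3, n_classes=2):
        super().__init__()
        # Three embedding tables: semantic, sentiment, and lexicon features.
        self.sem = nn.Embedding(vocab_size, sem_dim)
        self.senti = nn.Embedding(vocab_size, senti_dim)
        self.lex = nn.Embedding(vocab_size, lex_dim)
        in_dim = sem_dim + senti_dim + lex_dim
        # 1D convolution over the concatenated word representations.
        self.conv = nn.Conv1d(in_dim, n_filters, kernel_size,
                              padding=kernel_size // 2)
        # Attention scorer: one scalar score per convolved position.
        self.att = nn.Linear(n_filters, 1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = torch.cat([self.sem(token_ids),
                       self.senti(token_ids),
                       self.lex(token_ids)], dim=-1)   # (batch, seq_len, in_dim)
        h = F.relu(self.conv(x.transpose(1, 2)))       # (batch, n_filters, seq_len)
        h = h.transpose(1, 2)                          # (batch, seq_len, n_filters)
        # Attentive pooling: weight each position instead of max-pooling,
        # so every part of the sentence can contribute to the sentence vector.
        weights = F.softmax(self.att(h), dim=1)        # (batch, seq_len, 1)
        pooled = (weights * h).sum(dim=1)              # (batch, n_filters)
        return self.fc(pooled)                         # class logits

# Example usage with random token ids (hypothetical vocabulary of 20,000 words):
# logits = AttentivePoolingCNN(vocab_size=20000)(torch.randint(0, 20000, (8, 40)))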
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2017.09.080