
Recurrent Graph Neural Networks for Text Classification

Bibliographic Details
Main Authors: Wei, Xinde, Huang, Hai, Ma, Longxuan, Yang, Ze, Xu, Liutong
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Description
Summary: Text classification is an essential and classical problem in natural language processing. Traditional text classifiers often rely on many human-designed features. With the rise of deep learning, Recurrent Neural Networks and Convolutional Neural Networks have been widely applied to text classification. Meanwhile, the success of Graph Neural Networks (GNN) on structured data has attracted many researchers to apply GNN to traditional NLP tasks. However, when these methods use a GNN, they commonly ignore the word-order information of the sentence. In this work, we propose a model that uses a recurrent structure to capture as much contextual information as possible when learning word representations, thereby preserving word-order information that GNN-based networks discard. We then use the idea of GNN message passing to aggregate contextual information and update the words' hidden representations. Analogous to a GNN readout operation, we employ a max-pooling layer that automatically judges which words play key roles in classification, capturing the critical components of a text. We conduct experiments on four widely used datasets, and the experimental results show that our model achieves significant improvements over both RNN-based and GNN-based models.
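The summary describes three stages: a recurrent pass over the word sequence, a message-passing step that aggregates each word's neighbourhood, and a max-pooling readout. The record gives no architectural details, so the following is only a minimal NumPy sketch of that pipeline; the dimensions, the simple tanh recurrence, and the choice of adjacent words as graph neighbours are all illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 5 words, 8-dimensional hidden states.
seq_len, d = 5, 8
x = rng.normal(size=(seq_len, d))           # word embeddings (toy data)
W = rng.normal(size=(d, d)) * 0.1           # input-to-hidden weights
U = rng.normal(size=(d, d)) * 0.1           # hidden-to-hidden weights

# 1) Recurrent pass: processing words left to right keeps order information.
h = np.zeros((seq_len, d))
prev = np.zeros(d)
for t in range(seq_len):
    prev = np.tanh(x[t] @ W + prev @ U)
    h[t] = prev

# 2) Message passing: each word aggregates its neighbours' states and
#    updates its own representation (neighbours = adjacent words here).
agg = np.zeros_like(h)
for t in range(seq_len):
    neigh = []
    if t > 0:
        neigh.append(h[t - 1])
    if t < seq_len - 1:
        neigh.append(h[t + 1])
    agg[t] = h[t] + np.mean(neigh, axis=0)

# 3) Readout: max-pooling over positions keeps, per dimension, the word
#    feature with the strongest activation.
doc = agg.max(axis=0)
print(doc.shape)  # (8,)
```

The max over the position axis is what lets the model pick out the most salient words for classification; in a full model `doc` would feed a linear classifier and softmax.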
ISSN: 2327-0594
DOI: 10.1109/ICSESS49938.2020.9237709