Optimizing Global Representation Using Convolutional Bidirectional Recurrent Network Model for Text Categorization

Bibliographic Details
Main Authors: Tang, Xianlun, Yang, Jingming, Lin, Wenxing, Zou, Mi, Zhou, Fanding, Peng, Deguang
Format: Conference Proceeding
Language: English
Subjects:
Description
Summary: Deep learning is capable of achieving remarkable performance in sentence and document modelling. The Recurrent Neural Network (RNN) is the mainstream architecture for text categorization, but the RNN is a biased model in which later inputs are more dominant than earlier inputs. To optimize the global representation in RNNs for document modelling, the convolutional bidirectional recurrent network (CBI-RNN) is introduced for text categorization. One convolutional layer and one max-pooling layer are utilized to extract phrase-level local information from word embeddings. A BI-LSTM with global pooling is adopted to extract global information, selecting the features most favourable for classification. Depending on the global pooling scheme utilized in the model, the variants are named CBI-RNN-Max and CBI-RNN-Att. Advanced sub-sequence representation learning is also introduced in the proposed model, and its performance is reported in comparison with different model variations. The proposed model is applied to the text classification datasets Reuters21578-R8 and WebKB. Experimental results indicate that the proposed model captures more contextual information and achieves state-of-the-art performance on both datasets.
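The two global pooling schemes that distinguish the CBI-RNN-Max and CBI-RNN-Att variants can be sketched as follows. This is a minimal NumPy illustration of max pooling versus attention pooling over BiLSTM hidden states, not the authors' implementation; the hidden-state matrix `H`, its dimensions, and the attention parameter vector `w` are assumptions made for the example.

```python
import numpy as np

def global_max_pool(H):
    """CBI-RNN-Max-style pooling: keep the strongest activation of each
    feature across all time steps of the BiLSTM output H, shape (T, d)."""
    return H.max(axis=0)

def global_attention_pool(H, w):
    """CBI-RNN-Att-style pooling: score each time step with a learned
    vector w of shape (d,), softmax the scores into attention weights,
    and return the weighted sum of hidden states."""
    scores = H @ w                                  # one score per time step, (T,)
    scores = scores - scores.max()                  # subtract max for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
    return alpha @ H                                # weighted combination, (d,)

# Toy BiLSTM output: 5 time steps, 4 hidden features (assumed shapes)
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 4))
w = rng.standard_normal(4)

v_max = global_max_pool(H)           # fixed-length document vector, (4,)
v_att = global_attention_pool(H, w)  # fixed-length document vector, (4,)
```

Both schemes collapse a variable-length sequence of hidden states into one fixed-length vector for the classifier; max pooling keeps the single strongest feature per dimension, while attention pooling lets a learned scoring vector weight informative time steps more heavily.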
ISSN:2688-0938
DOI:10.1109/CAC53003.2021.9728674