
A Hierarchical Long Short-Term Memory Encoder-Decoder Model for Abstractive Summarization

Bibliographic Details
Main Authors: Nguyen-Ngoc, Khuong; Le, Anh-Cuong; Nguyen, Viet-Ha
Format: Conference Proceeding
Language: English
Description
Summary: Abstractive summarization is the task of generating a concise summary of a source text, which is a challenging problem in Natural Language Processing (NLP). Many recent studies have relied on encoder-decoder sequence-to-sequence deep neural networks to solve this problem. However, most of these models treat the input as a flat sequence of words during encoding, which makes the encoding inefficient, especially for long input texts. To address this issue, in this paper we propose a model that encodes text hierarchically, consistent with the nature of text: a text is composed of sentences, and each sentence is composed of words. Our proposed model, based on the Long Short-Term Memory model and called HLSTM (Hierarchical Long Short-Term Memory), is applied to the problem of abstractive summarization. We conducted extensive experiments on two of the most popular corpora (Gigaword and Amazon Review) and obtained significant improvements over the baseline models.
ISSN: 2694-4804
DOI: 10.1109/KSE53942.2021.9648836
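
The abstract describes a two-level encoding: a word-level LSTM reads the words of each sentence to produce a sentence vector, and a sentence-level LSTM reads those sentence vectors to produce a document representation. The Python (PyTorch) sketch below illustrates that structure only; all module names, dimensions, and the pooling choice (using the word-level LSTM's final hidden state as the sentence vector) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Illustrative two-level (word -> sentence -> document) LSTM encoder."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Word-level LSTM: encodes the word sequence of each sentence.
        self.word_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Sentence-level LSTM: encodes the sequence of sentence vectors.
        self.sent_lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, doc):
        # doc: (num_sentences, words_per_sentence) token ids for one document.
        emb = self.embed(doc)                 # (S, W, emb_dim)
        _, (h_word, _) = self.word_lstm(emb)  # h_word: (1, S, hid_dim)
        # h_word doubles as a batch of one document with S sentence-vector
        # time steps, so it can feed the sentence-level LSTM directly.
        sent_out, (h_doc, _) = self.sent_lstm(h_word)
        # sent_out: per-sentence encoder states; h_doc: document vector.
        return sent_out.squeeze(0), h_doc.view(-1)

# Hypothetical usage: one document of 4 sentences, 12 tokens each.
enc = HierarchicalEncoder(vocab_size=10_000)
doc = torch.randint(0, 10_000, (4, 12))
sent_states, doc_vec = enc(doc)
print(sent_states.shape, doc_vec.shape)  # torch.Size([4, 256]) torch.Size([256])

In a full summarization model, sent_states would typically feed the decoder (e.g., through attention) and doc_vec could initialize its hidden state; the record gives no implementation details, so this is a reading aid rather than the authors' method.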