Neural Attention Model for Abstractive Text Summarization using Linguistic Feature Space
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: , , , ,
Format: Article
Language: English
Summary: Summarization reduces the size of a text, extracts its main ideas, and eliminates unnecessary information. Extractive summarization selects important sentences verbatim, whereas abstractive summarization paraphrases them into more fluent, nearer-to-human explanations. A feature-rich model for text summarization is proposed that takes advantage of both extractive and abstractive summarization. A feature-rich extractor selects highly informative sentences, and the extracted summary is fed to an abstracter supplied with linguistic information. In addition to these linguistic features, sentence-level attention guides word-level attention to generate the extractive summary, which is then refined into an abstractive summary. Furthermore, a loss function is introduced to penalize inconsistency between the two attentions. By adopting a two-stage training process, the model achieves state-of-the-art results while remaining comprehensive. The proposed two-stage network achieved a ROUGE score of 37.76% on the benchmark CNN/DailyMail dataset, outperforming earlier work. A human evaluation was also conducted to measure the comprehensiveness, conciseness, and informativeness of the generated summaries.
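The abstract's central mechanism is sentence-level attention guiding word-level attention, with an extra loss that penalizes disagreement between the two. Below is a minimal NumPy sketch of one common formulation of this idea (in the spirit of inconsistency losses from the hybrid extractive/abstractive literature): word attention is rescaled by the attention of the containing sentence, and the loss is small when strongly attended words lie in strongly attended sentences. The function names, the top-K choice, and the exact form of the loss are illustrative assumptions, not the paper's verified implementation.

```python
import numpy as np

def modulated_word_attention(word_attn, sent_attn, sent_of_word):
    """Scale each word's attention by its sentence's attention, then
    renormalize. Hypothetical helper; the paper's exact combination of
    the two attention levels may differ."""
    scaled = word_attn * sent_attn[sent_of_word]
    return scaled / scaled.sum()

def inconsistency_loss(word_attns, sent_attn, sent_of_word, top_k=3):
    """Average over decoder steps of -log(mean of top-K word attentions,
    each weighted by the attention of the word's sentence). Low when the
    two attention distributions agree, high when they conflict."""
    losses = []
    for alpha in word_attns:              # alpha: (num_words,) per decoder step
        top = np.argsort(alpha)[-top_k:]  # indices of the top-K attended words
        score = np.mean(alpha[top] * sent_attn[sent_of_word[top]])
        losses.append(-np.log(score + 1e-12))
    return float(np.mean(losses))

# Toy example: 6 words spread over 2 sentences, 2 decoder steps.
sent_of_word = np.array([0, 0, 0, 1, 1, 1])
sent_attn = np.array([0.8, 0.2])          # sentence-level attention
word_attns = np.array([
    [0.4, 0.3, 0.1, 0.1, 0.05, 0.05],     # agrees with sent_attn -> low loss
    [0.05, 0.05, 0.1, 0.1, 0.3, 0.4],     # disagrees -> higher loss
])
print(modulated_word_attention(word_attns[0], sent_attn, sent_of_word))
print(inconsistency_loss(word_attns, sent_attn, sent_of_word))
```

In a combined objective, such a term would typically be added to the decoder's negative log-likelihood loss with a weighting hyperparameter; the paper's actual weighting and two-stage training schedule are not specified in the abstract.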
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3249783