A Comparative Study of Transformer Based Pretrained AI Models for Content Summarization
Main Authors: , ,
Format: Conference Proceeding
Language: English
Summary: In this study, we examine different transformer-based pretrained Artificial Intelligence (AI) models on their ability to summarize text content from different sources. AI has emerged as a powerful tool in this context, offering the potential to automate and improve the process of content summarization. We focus mainly on pretrained transformer models such as Pegasus, T5, BART, and ProphetNet for key-point summarization of textual content. We aim to assess the effectiveness of these models in summarizing different content types, such as articles, instructions, and conversational dialogues, and to compare and analyze their performance across different datasets. We use the ROUGE metric to evaluate the quality of the generated summaries. Facebook's BART model performed better across the different textual datasets. We believe that our findings will offer valuable insights into the capabilities and limitations of transformer-based AI models in the context of extracting essential points from large articles, making them useful as assistive tools for summarizing course content in educational environments.
ISSN: 2473-2052
DOI: 10.1109/IIT59782.2023.10366411
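The ROUGE evaluation mentioned in the summary can be illustrated with a minimal sketch. The function below computes a simplified ROUGE-1 F1 score (unigram overlap between a candidate summary and a reference); this is an illustrative assumption, not the paper's exact implementation, which would typically use a standard ROUGE package and also report ROUGE-2 and ROUGE-L.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each candidate unigram counts at most as often
    # as it appears in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Toy example: 5 of 6 unigrams overlap, so precision = recall = 5/6.
print(round(rouge1_f1("the cat sat on the mat",
                      "the cat lay on the mat"), 3))  # → 0.833
```

In practice, comparing models such as BART, T5, Pegasus, and ProphetNet would mean generating summaries for each dataset and averaging such scores over all reference summaries.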