
A comparative analysis of transformer based models for figurative language classification

Bibliographic Details
Published in: Computers & Electrical Engineering, 2022-07, Vol. 101, p. 108051, Article 108051
Main Authors: Junaid, Taha, Sumathi, D., Sasikumar, A.N., Suthir, S., Manikandan, J., Khilar, Rashmita, Kuppusamy, P.G., Janardhana Raju, M.
Format: Article
Language:English
Description
Summary:
• This research aims to determine whether transformers perform well for figurative language classification, not just literal language classification, and how well they generalize across other subclasses of a figurative language class.
• The models fine-tuned on the dataset are LSTM and Bi-LSTM models, along with transformer-architecture-based models: BERT (Base & Talking-Heads), RoBERTa, and XLNet.
• RoBERTa obtained an accuracy of 81% and generalizes better in most cases.
Efficient and effective methods are required to construct a model that rapidly extracts different sentiments from large volumes of text. To improve model performance, researchers have drawn on contemporary developments in Natural Language Processing (NLP), working on several model architectures and pretraining tasks. This work explores several models based on the transformer architecture and analyses their performance. The researchers use a dataset to answer the question of whether transformers work well for figurative language classification and not just literal language classification, and the results of the various models, developed over time, are compared. The study explains why it is necessary for computers to understand figurative language, why it remains a challenge that is still being worked on intensively, and how it differs from literal language classification. This research also covers how well these models train on a specific type of figurative language and generalize to a few other similar types.
ISSN:0045-7906
1879-0755
DOI:10.1016/j.compeleceng.2022.108051