Large Language Models in Targeted Sentiment Analysis for Russian
Published in: Lobachevskii Journal of Mathematics, 2024-07, Vol. 45 (7), pp. 3148-3158
Main Authors:
Format: Article
Language: English
Summary: In this paper, we investigate the use of decoder-based generative transformers for extracting sentiment towards named entities in Russian news articles. We study the sentiment analysis capabilities of instruction-tuned large language models (LLMs), using the RuSentNE-2023 dataset. The first group of experiments evaluates the zero-shot capabilities of LLMs of closed and open transparency. The second covers the fine-tuning of Flan-T5 using the "chain-of-thought" (CoT) three-hop reasoning framework (THoR). We found that the results of the zero-shot approaches are similar to those achieved by the baseline fine-tuned encoder-based transformers (BERT). The reasoning capabilities of the fine-tuned Flan-T5 models with THoR achieve at least increment with the base-size model compared to the results of the zero-shot experiment. The best results of sentiment analysis on RuSentNE-2023 were achieved by fine-tuned Flan-T5, which surpassed the results of previous state-of-the-art transformer-based classifiers. Our CoT application framework is publicly available: https://github.com/nicolay-r/Reasoning-for-Sentiment-Analysis-Framework
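The zero-shot setting described in the summary can be illustrated with a small sketch. The prompt wording, label set, and helper names below are illustrative assumptions, not the authors' exact templates (those are in their linked repository): the idea is simply to ask an LLM for the sentiment expressed towards a given entity and to map the free-form reply onto the three RuSentNE-2023 classes.

```python
# Hypothetical sketch of a zero-shot prompt for targeted (entity-level)
# sentiment classification, in the spirit of the paper's first experiment
# group. Prompt text and parsing are assumptions, not the authors' setup.

LABELS = ("positive", "negative", "neutral")

def build_prompt(sentence: str, entity: str) -> str:
    """Compose a zero-shot instruction asking an LLM for the sentiment
    expressed towards `entity` in `sentence`."""
    return (
        f'What is the sentiment expressed towards the entity "{entity}" '
        "in the following sentence? "
        "Answer with one word: positive, negative, or neutral.\n\n"
        f"Sentence: {sentence}"
    )

def parse_label(model_output: str) -> str:
    """Map a raw model reply onto the three-class label set,
    defaulting to 'neutral' when no label is recognised."""
    reply = model_output.strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return "neutral"
```

In practice `build_prompt` would be sent to the model under study and `parse_label` applied to its reply; the fine-tuned THoR variant instead decomposes the question into three reasoning hops before the final label.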
ISSN: 1995-0802, 1818-9962
DOI: 10.1134/S1995080224603758