Evaluating Named Entity Recognition: A comparative analysis of mono- and multilingual transformer models on a novel Brazilian corporate earnings call transcripts dataset

Bibliographic Details
Published in: Applied Soft Computing, 2024-11, Vol. 166, Article 112158
Main Authors: Abilio, Ramon; Coelho, Guilherme Palermo; da Silva, Ana Estela Antunes
Format: Article
Language: English
Description
Summary: Since 2018, when the Transformer architecture was introduced, Natural Language Processing has gained significant momentum with pre-trained Transformer-based models that can be fine-tuned for various tasks. Most models are pre-trained on large English corpora, making them less applicable to other languages, such as Brazilian Portuguese. In our research, we identified two models pre-trained on Brazilian Portuguese (BERTimbau and PTT5) and two multilingual models (mBERT and mT5). BERTimbau and mBERT use only the Encoder module, while PTT5 and mT5 use both the Encoder and the Decoder. Our study aimed to evaluate their performance on a financial Named Entity Recognition (NER) task and to determine the computational requirements for fine-tuning and inference. To this end, we developed the Brazilian Financial NER (BraFiNER) dataset, comprising sentences from Brazilian banks' earnings call transcripts annotated using a weakly supervised approach. Additionally, we introduced a novel approach that reframes the token classification task as a text generation problem (see the sketch following the highlights below). After fine-tuning the models, we evaluated them using performance and error metrics. Our findings reveal that BERT-based models consistently outperform T5-based models. While the multilingual models exhibit comparable macro F1-scores, BERTimbau demonstrates superior performance over PTT5. In terms of error metrics, BERTimbau outperforms the other models. We also observed that PTT5 and mT5 generated sentences with changes in monetary and percentage values, highlighting the importance of accuracy and consistency in the financial domain. Our findings provide insights into the differing performance of BERT- and T5-based models for the NER task.

Highlights:
• Compilation of 384 conference call transcripts from Brazilian banks.
• Development of BraFiNER, a Portuguese dataset for Named Entity Recognition in finance.
• Framing NER as text generation with T5, surpassing prior methods.
• Comparison of BERT- and T5-based models yields insights into their NER efficacy.
• The macro F1-scores achieved by the models ranged from 98.33% to 98.99%.
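The summary's most distinctive methodological point is reframing NER, normally a token classification task, as text generation for the T5-based models (PTT5 and mT5). The record does not reproduce the paper's exact input/target format, so the minimal Python sketch below assumes an inline-markup target and a hypothetical task prefix ("Reconhecer entidades:"); it illustrates the general idea, not the authors' implementation.

```python
# A minimal sketch of reframing NER as text generation for a T5-style model.
# The inline-markup target format and the task prefix are assumptions; the
# paper's actual formatting scheme is not given in this record.

from typing import List, Tuple

def to_seq2seq_example(sentence: str,
                       entities: List[Tuple[int, int, str]]) -> Tuple[str, str]:
    """Convert a sentence plus character-span annotations (start, end, label)
    into an (input, target) pair. The target repeats the sentence with each
    entity wrapped in inline markup, e.g. "[R$ 8,1 bilhões | MONEY]"."""
    parts, prev = [], 0
    for start, end, label in sorted(entities):
        parts.append(sentence[prev:start])                    # text before entity
        parts.append(f"[{sentence[start:end]} | {label}]")    # tagged entity span
        prev = end
    parts.append(sentence[prev:])                             # trailing text
    return "Reconhecer entidades: " + sentence, "".join(parts)

sentence = "O lucro líquido do banco atingiu R$ 8,1 bilhões, alta de 14,1%."
entities = [(33, 47, "MONEY"), (57, 62, "PERCENT")]
src, tgt = to_seq2seq_example(sentence, entities)
print(src)  # Reconhecer entidades: O lucro líquido do banco atingiu R$ 8,1 bilhões, ...
print(tgt)  # ... atingiu [R$ 8,1 bilhões | MONEY], alta de [14,1% | PERCENT].
```

Under this framing, fine-tuning reduces to standard sequence-to-sequence training on (input, target) string pairs, and predicted entities are recovered by parsing the markup back out of the generated sentence.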
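The summary also notes that PTT5 and mT5 sometimes altered monetary and percentage values when generating output, which motivates the paper's error metrics. Below is a hedged sketch of one way such numeric drift could be detected; the regex pattern and the `values_preserved` helper are illustrative assumptions, not the paper's actual error metric.

```python
# A sketch of a consistency check motivated by the abstract: verify that every
# monetary/percentage token in the source sentence survives, digit for digit,
# in the generated output. The pattern below is an illustrative assumption
# tuned to Brazilian number formatting ("R$ 8,1", "14,1%").

import re

NUMERIC = re.compile(r"R\$\s?[\d.,]+|[\d.,]+%")

def values_preserved(source: str, generated: str) -> bool:
    """True iff every monetary/percentage token found in the source
    also occurs verbatim in the generated sentence."""
    return all(tok in generated for tok in NUMERIC.findall(source))

src = "O lucro líquido atingiu R$ 8,1 bilhões, alta de 14,1%."
ok  = "O lucro líquido atingiu [R$ 8,1 bilhões | MONEY], alta de [14,1% | PERCENT]."
bad = "O lucro líquido atingiu [R$ 8,7 bilhões | MONEY], alta de [14,1% | PERCENT]."
print(values_preserved(src, ok))   # True
print(values_preserved(src, bad))  # False: the monetary value was altered
```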
ISSN: 1568-4946
DOI: 10.1016/j.asoc.2024.112158