HausaNLP at SemEval-2023 Task 12: Leveraging African Low Resource TweetData for Sentiment Analysis

Bibliographic Details
Published in: arXiv.org, 2023-04
Main Authors: Saheed Abdullahi Salahudeen, Falalu Ibrahim Lawan, Ahmad Mustapha Wali, Amina Abubakar Imam, Aliyu Rabiu Shuaibu, Aliyu Yusuf, Nur Bala Rabiu, Musa Bello, Shamsuddeen Umaru Adamu, Saminu Mohammad Aliyu, Murja Sani Gadanya, Sanah Abdullahi Muaz, Mahmoud Said Ahmad, Abdulkadir Abdullahi, Abdulmalik Yusuf Jamoh
Format: Article
Language: English
Description
Summary: We present the findings of SemEval-2023 Task 12, a shared task on sentiment analysis for low-resource African languages using Twitter data. The task featured three subtasks: Subtask A is monolingual sentiment classification with 12 tracks, one per language; Subtask B is multilingual sentiment classification over the languages in Subtask A; and Subtask C is zero-shot sentiment classification. We present our results and findings for Subtasks A, B, and C, and we release our code on GitHub. Our goal is to leverage low-resource tweet data using the pre-trained Afro-xlmr-large, AfriBERTa-Large, Bert-base-arabic-camelbert-da-sentiment (Arabic-camelbert), Multilingual-BERT (mBERT), and BERT models for sentiment analysis of 14 African languages. The datasets for these subtasks consist of gold-standard, multi-class labeled Twitter data in these languages. Our results show that the Afro-xlmr-large model outperformed the other models on most of the language datasets. Likewise, the Nigerian languages Hausa, Igbo, and Yoruba achieved better performance than the other languages, which can be attributed to the larger volume of data available for them.
ISSN: 2331-8422
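
The summary above describes using pre-trained transformer encoders such as Afro-xlmr-large for three-way tweet sentiment classification. The Python sketch below, which is not the authors' released code, shows how such a checkpoint could be loaded and queried for inference with the Hugging Face transformers library; the Hub checkpoint id, the label order, and the example tweet are assumptions for illustration.

# Minimal sketch: tweet sentiment classification with a pre-trained
# multilingual encoder (Afro-xlmr-large is one of the models named in the
# paper). The checkpoint id and label order below are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "Davlan/afro-xlmr-large"          # assumed Hub id for Afro-xlmr-large
LABELS = ["negative", "neutral", "positive"]   # assumed 3-class label order

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)
model.eval()

def predict_sentiment(tweet: str) -> str:
    """Return the predicted sentiment label for a single tweet."""
    inputs = tokenizer(tweet, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Hypothetical usage with a Hausa tweet:
# print(predict_sentiment("Ina son wannan waka sosai"))

In practice the classification head of a freshly loaded checkpoint is randomly initialized, so it would be fine-tuned on the task's labeled tweets before the predictions are meaningful; the inference call itself is unchanged after fine-tuning.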