CamelEval: Advancing Culturally Aligned Arabic Language Models and Benchmarks
Large Language Models (LLMs) are the cornerstones of modern artificial intelligence systems. This paper introduces Juhaina, an Arabic-English bilingual LLM specifically designed to align with the values and preferences of Arabic speakers. Juhaina inherently supports advanced functionalities such as instruction following, open-ended question answering, information provisioning, and text processing. Our model contains 9.24 billion parameters and is trained on a context window of up to 8,192 tokens. This paper details the creation process of Juhaina and provides an extensive empirical evaluation. Furthermore, we identify the limitations of the widely adopted Open Arabic LLM Leaderboard (OALL) and propose a new evaluation benchmark, CamelEval. Our findings demonstrate that Juhaina surpasses existing LLMs of comparable sizes, such as the Llama and Gemma families, in generating helpful responses in Arabic, providing factually accurate information about the region, and understanding nuanced cultural aspects. We aspire for Juhaina to democratize cutting-edge AI technologies, serving over 400 million Arabic speakers by offering LLMs that not only communicate in their language but also comprehend their culture. We publicly release all models on Hugging Face: https://huggingface.co/elmrc.
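The abstract reports a 9.24-billion-parameter bilingual model with an 8,192-token context window, released on Hugging Face under the elmrc organization. The sketch below shows how such a checkpoint could be loaded and queried in Arabic with the `transformers` library; the repository id `elmrc/juhaina`, the presence of a chat template, and the example prompt are illustrative assumptions, not details confirmed by this record.

```python
# Minimal sketch of loading a released Juhaina checkpoint with Hugging Face
# `transformers`. The record only gives the org page https://huggingface.co/elmrc;
# the exact repository id below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elmrc/juhaina"  # hypothetical repo id; check the org page for the actual name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~9.24B parameters; bf16 keeps memory manageable
    device_map="auto",
)

# Ask an open-ended question in Arabic, staying well inside the
# 8,192-token context window reported in the abstract.
messages = [{"role": "user", "content": "ما هي عاصمة المملكة العربية السعودية؟"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # assumes the checkpoint ships a chat template

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```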
Published in: | arXiv.org, 2024-09
---|---
Main Authors: | Qian, Zhaozhi; Altam, Faroq; Alqurishi, Muhammad; Souissi, Riad
Format: | Article
Language: | English
Subjects: | Arabic language; Artificial intelligence; Benchmarks; Cultural factors; Large language models; Parameter identification; Provisioning
ISSN: | 2331-8422
Online Access: | Get full text