Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models

Recent advancements in large language models (LLMs) have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on the MATH dataset), indicating their inadequacy for truly challenging these models. To bridge this gap, we propose a comprehensive and challenging benchmark specifically designed to assess LLMs' mathematical reasoning at the Olympiad level. Unlike existing Olympiad-related benchmarks, our dataset focuses exclusively on mathematics and comprises a vast collection of 4428 competition-level problems with rigorous human annotation. These problems are meticulously categorized into over 33 sub-domains and span more than 10 distinct difficulty levels, enabling a holistic assessment of model performance in Olympiad-level mathematical reasoning. Furthermore, we conducted an in-depth analysis based on this benchmark. Our experimental results show that even the most advanced models, OpenAI o1-mini and OpenAI o1-preview, struggle with highly challenging Olympiad-level problems, reaching only 60.54% and 52.55% accuracy, respectively, highlighting significant challenges in Olympiad-level mathematical reasoning.
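As a rough illustration of the kind of evaluation the abstract describes, the sketch below aggregates exact-match accuracy over a problem set by sub-domain and difficulty level. The field names (domain, difficulty, answer), the toy entries, and the model-query and grading functions are assumptions made for illustration only; they are not the paper's actual annotation schema or grading pipeline.

```python
# Minimal sketch: per-sub-domain / per-difficulty accuracy aggregation for an
# Olympiad-style benchmark. Field names and grading logic are illustrative
# assumptions, not the Omni-MATH pipeline.
from collections import defaultdict

# Toy stand-in for a human-annotated problem set (the real benchmark has 4428 entries).
problems = [
    {"id": 1, "domain": "Number Theory", "difficulty": 7, "answer": "42"},
    {"id": 2, "domain": "Geometry", "difficulty": 9, "answer": "\\sqrt{3}"},
]

def model_answer(problem):
    """Placeholder for querying an LLM; a real harness would call the model
    and parse a final answer from its reasoning trace."""
    return "42"

def grade(candidate, reference):
    """Naive exact-match grading; checking symbolic equivalence of Olympiad
    answers is substantially harder in practice."""
    return candidate.strip() == reference.strip()

totals, correct = defaultdict(int), defaultdict(int)
for p in problems:
    key = (p["domain"], p["difficulty"])
    totals[key] += 1
    correct[key] += grade(model_answer(p), p["answer"])

for key in sorted(totals):
    print(f"{key}: {correct[key] / totals[key]:.2%} ({correct[key]}/{totals[key]})")
```

A breakdown of this form is what makes the reported headline numbers (e.g., 60.54% for o1-mini) decomposable into per-domain and per-difficulty performance.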

Bibliographic Details
Published in: arXiv.org, 2024-10
Main Authors: Gao, Bofei; Song, Feifan; Yang, Zhe; Cai, Zefan; Miao, Yibo; Dong, Qingxiu; Li, Lei; Ma, Chenghao; Chen, Liang; Xu, Runxin; Tang, Zhengyang; Wang, Benyou; Zan, Daoguang; Quan, Shanghaoran; Zhang, Ge; Sha, Lei; Zhang, Yichang; Ren, Xuancheng; Liu, Tianyu; Chang, Baobao
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Accuracy; Annotations; Benchmarks; Datasets; Large language models; Reasoning
Online Access: Get full text