CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery
Computer Science (CS) stands as a testament to the intricacies of human intelligence, profoundly advancing the development of artificial intelligence and modern society. However, the current community of large language models (LLMs) overly focuses on benchmarks for analyzing specific foundational skills (e.g. mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first bilingual (Chinese-English) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 5K meticulously curated test samples, covering 26 subfields across 4 key areas of computer science, encompassing various task forms and divisions of knowledge and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvements, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performances in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at https://github.com/csbench/csbench.
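The abstract describes CS-Bench as roughly 5K test samples organized into 26 subfields across 4 areas, with a knowledge/reasoning split. As a rough illustration only, the sketch below shows how a benchmark with that shape might be scored per subfield. The file name `cs_bench_en.jsonl` and every field name (`id`, `subfield`, `answer`) are assumptions made for illustration, not the actual CS-Bench schema; the official data and evaluation code are at https://github.com/csbench/csbench.

```python
# Hypothetical sketch: per-subfield accuracy over a CS-Bench-style JSONL file.
# All field names and the file name are assumptions, NOT the real CS-Bench schema;
# see https://github.com/csbench/csbench for the actual data and evaluation code.
import json
from collections import defaultdict

def score_by_subfield(path: str, predictions: dict[str, str]) -> dict[str, float]:
    """Compute accuracy per subfield, given model predictions keyed by sample id."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)        # one test sample per line (assumed layout)
            sub = sample["subfield"]         # e.g. one of the 26 subfields (assumed field)
            total[sub] += 1
            if predictions.get(sample["id"]) == sample["answer"]:  # assumed fields
                correct[sub] += 1
    return {sub: correct[sub] / total[sub] for sub in total}

# Usage (with hypothetical data): scores = score_by_subfield("cs_bench_en.jsonl", preds)
```

Aggregating the per-subfield scores up to the 4 key areas would then be a simple weighted average over each area's subfields.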
| Published in: | arXiv.org, 2024-06 |
|---|---|
| Main Authors: | Song, Xiaoshuai; Diao, Muxi; Dong, Guanting; Wang, Zhengyang; Fu, Yujia; Qiao, Runqi; Wang, Zhexu; Fu, Dayuan; Wu, Huangxuan; Liang, Bin; Zeng, Weihao; Wang, Yejie; GongQue, Zhuoma; Yu, Jianing; Tan, Qiuna; Xu, Weiran |
| Format: | Article |
| Language: | English |
| Subjects: | Artificial intelligence; Benchmarks; Coding; Computer science; Large language models; Mathematics; Performance evaluation; Reasoning |
| Online Access: | Get full text |
| Field | Value |
|---|---|
| container_title | arXiv.org |
| creator | Song, Xiaoshuai; Diao, Muxi; Dong, Guanting; Wang, Zhengyang; Fu, Yujia; Qiao, Runqi; Wang, Zhexu; Fu, Dayuan; Wu, Huangxuan; Liang, Bin; Zeng, Weihao; Wang, Yejie; GongQue, Zhuoma; Yu, Jianing; Tan, Qiuna; Xu, Weiran |
| format | article |
| fulltext | fulltext |
| identifier | EISSN: 2331-8422 |
| ispartof | arXiv.org, 2024-06 |
| issn | 2331-8422 |
| language | eng |
| recordid | cdi_proquest_journals_3068241478 |
| source | Publicly Available Content (ProQuest) |
| subjects | Artificial intelligence; Benchmarks; Coding; Computer science; Large language models; Mathematics; Performance evaluation; Reasoning |
| title | CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery |
| url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T13%3A08%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=CS-Bench:%20A%20Comprehensive%20Benchmark%20for%20Large%20Language%20Models%20towards%20Computer%20Science%20Mastery&rft.jtitle=arXiv.org&rft.au=Song,%20Xiaoshuai&rft.date=2024-06-12&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3068241478%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_30682414783%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=3068241478&rft_id=info:pmid/&rfr_iscdi=true |