
Code Llama: Open Foundation Models for Code

We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
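
The infilling capability described in the abstract means that the 7B, 13B and 70B base and Instruct variants can generate code conditioned on both the text before and after a gap, not just on a prefix. As a minimal illustration (not part of this record), the sketch below uses the Hugging Face transformers integration of the released checkpoints; the checkpoint name "codellama/CodeLlama-7b-hf" and the <FILL_ME> sentinel follow the public release documentation and should be treated as assumptions here.

# A minimal sketch (assumed, not from this record) of the infilling use
# described in the abstract, via the Hugging Face transformers library.
from transformers import CodeLlamaTokenizer, LlamaForCausalLM

tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# <FILL_ME> marks the gap: the model generates the middle conditioned on
# both the code before it (prefix) and the code after it (suffix).
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''

input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
generated = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens and splice them into the gap.
middle = tokenizer.batch_decode(generated[:, input_ids.shape[1]:],
                                skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", middle))

Per the abstract, only the 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support this mode; the Python specializations do not.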


Bibliographic Details
Published in: arXiv.org, 2024-01
Main Authors: Rozière, Baptiste, Gehring, Jonas, Gloeckle, Fabian, Sootla, Sten, Gat, Itai, Tan, Xiaoqing Ellen, Adi, Yossi, Liu, Jingyu, Sauvestre, Romain, Remez, Tal, Rapin, Jérémy, Kozhevnikov, Artyom, Evtimov, Ivan, Bitton, Joanna, Bhatt, Manish, Canton Ferrer, Cristian, Grattafiori, Aaron, Xiong, Wenhan, Défossez, Alexandre, Copet, Jade, Azhar, Faisal, Touvron, Hugo, Martin, Louis, Usunier, Nicolas, Scialom, Thomas, Synnaeve, Gabriel
Format: Article
Language: English
Subjects: Large language models
EISSN: 2331-8422
Online Access: Get full text