
Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox

Large language models (LLMs) have exhibited exciting progress in multiple scenarios, but their huge computational demands hinder deployment in many real-world applications. While quantization is an effective means of reducing memory footprint and inference cost, it suffers performance degradation at low bit-widths. Understanding the impact of quantization on LLM capabilities, especially the generalization ability, is therefore crucial. However, the community's main focus remains on the algorithms and models of quantization, with insufficient attention given to whether quantized models retain the strong generalization abilities of LLMs. In this work, we fill this gap by providing a comprehensive benchmark suite for this research topic, including an evaluation system, detailed analyses, and a general toolbox. Specifically, based on the dominant pipeline in LLM quantization, we primarily explore the impact of calibration data distribution on the generalization of quantized LLMs and conduct the benchmark using more than 40 datasets within two main scenarios. Based on this benchmark, we conduct extensive experiments with two well-known LLMs (English and Chinese) and four quantization algorithms, yielding several counter-intuitive and valuable findings, e.g., models quantized using a calibration set with the same distribution as the test data are not necessarily optimal.

To facilitate future research, we also release a modular-designed toolbox that decouples the overall pipeline into several separate components, e.g., a base LLM module, a dataset module, and a quantizer module, and allows subsequent researchers to easily assemble their methods through a simple configuration. Our benchmark suite is publicly available at https://github.com/TsingmaoAI/MI-optimize
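The dominant pipeline the abstract refers to is calibration-based post-training quantization: a small calibration set is run through the model to set the quantization parameters, so the calibration data's distribution directly shapes the quantized model. The following is a minimal toy sketch of that idea only; it assumes nothing about the paper's actual algorithms or the MI-optimize API, and all names in it are illustrative. It quantizes a weight matrix to 4 bits with per-channel scales influenced by calibration activations:

    import numpy as np

    def quantize_weights_int4(W, X_calib):
        """Toy activation-aware 4-bit weight quantization (illustrative only).

        W:       (out_features, in_features) float weights.
        X_calib: (n_samples, in_features) calibration activations; their
                 distribution drives the scaling, which is the design choice
                 whose effect the benchmark measures.
        """
        # Per-input-channel activation magnitude from the calibration set.
        act_scale = np.abs(X_calib).mean(axis=0) + 1e-8

        # Fold a mild activation-aware scale into the weights so channels
        # that see large activations are rounded more finely.
        s = np.sqrt(act_scale)
        W_scaled = W * s

        # Symmetric 4-bit rounding with one scale per output channel.
        qmax = 7
        w_scale = np.abs(W_scaled).max(axis=1, keepdims=True) / qmax + 1e-12
        W_q = np.clip(np.round(W_scaled / w_scale), -qmax - 1, qmax)

        # Dequantize and fold the activation scale back out.
        return (W_q * w_scale) / s

    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 64))
    X_calib = rng.normal(size=(128, 64))  # stand-in for a real calibration set
    W_dq = quantize_weights_int4(W, X_calib)
    print("mean absolute quantization error:", np.abs(W - W_dq).mean())

Swapping X_calib for samples drawn from different domains and re-measuring downstream accuracy is, in spirit, the experiment grid the benchmark runs at scale across its 40+ datasets.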


Bibliographic Details
Published in: arXiv.org, 2024-06
Main Authors: Liu, Yijun; Meng, Yuan; Wu, Fang; Peng, Shenhao; Yao, Hang; Guan, Chaoyu; Tang, Chen; Ma, Xinzhu; Wang, Zhi; Zhu, Wenwu
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Algorithms; Benchmarks; Calibration; Datasets; Large language models; Modules; Optimization; Performance degradation
Online Access: https://www.proquest.com/docview/3070857758
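The abstract also describes a modular-designed toolbox that decouples the pipeline into a base LLM module, a dataset module, a quantizer module, etc., assembled through a simple configuration. The sketch below shows a common registry-plus-config pattern that such a design might follow; it is a hypothetical illustration under that assumption, not MI-optimize's actual interface (see the repository linked in the abstract for the real one), and every name in it is made up:

    from dataclasses import dataclass

    # Hypothetical registry; module and algorithm names are illustrative,
    # not the actual MI-optimize API.
    QUANTIZERS = {}

    def register_quantizer(name):
        """Decorator that adds a quantization routine to the registry."""
        def decorator(fn):
            QUANTIZERS[name] = fn
            return fn
        return decorator

    @register_quantizer("rtn")
    def round_to_nearest(weights, bits):
        # Placeholder body; the paper benchmarks four real algorithms.
        return weights

    @dataclass
    class PipelineConfig:
        model: str          # base LLM module, e.g. an English or Chinese LLM
        calib_dataset: str  # dataset module: where calibration data comes from
        quantizer: str      # quantizer module: which algorithm to apply
        bits: int = 4

    def run(cfg: PipelineConfig):
        """Assemble the pipeline from a plain configuration object."""
        quantize = QUANTIZERS[cfg.quantizer]
        print(f"quantizing {cfg.model} to {cfg.bits} bits with "
              f"{cfg.quantizer!r}, calibrated on {cfg.calib_dataset!r}")
        return quantize

    run(PipelineConfig(model="llama-2-7b", calib_dataset="c4", quantizer="rtn"))

Registering each quantizer under a string key is what lets a configuration file, rather than code changes, select which algorithm, base model, and calibration dataset are combined in a run, which is the kind of decoupling the abstract credits the toolbox with.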