AFPQ: Asymmetric Floating Point Quantization for LLMs
Large language models (LLMs) show great performance in various tasks, but face deployment challenges from limited memory capacity and bandwidth. Low-bit weight quantization can save memory and accelerate inference. Although floating-point (FP) formats show good performance in LLM quantization, they tend to perform poorly with small group sizes or sub-4 bits. We find the reason is that the absence of asymmetry in previous FP quantization makes it unsuitable for handling the asymmetric value distribution of LLM weight tensors. In this work, we propose asymmetric FP quantization (AFPQ), which sets separate scales for positive and negative values. Our method leads to large accuracy improvements and can be easily plugged into other quantization methods, including GPTQ and AWQ, for better performance. Besides, no additional storage is needed compared with asymmetric integer (INT) quantization. The code is available at https://github.com/zhangsichengsjtu/AFPQ.
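The abstract's core idea, quantizing the positive and negative weights of each group with their own scales, can be illustrated with a minimal NumPy sketch. The FP4 (E2M1) value grid and all helper names below are illustrative assumptions rather than the authors' released implementation; they only show how an asymmetric pair of scales differs from a single shared scale.

```python
# Minimal sketch of asymmetric FP quantization (AFPQ-style) vs. standard
# symmetric FP quantization.  The FP4 E2M1 grid and helper names are
# assumptions for illustration, not taken from the authors' code.
import numpy as np

# Representable non-negative magnitudes of an assumed FP4 (E2M1) format.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def snap_to_grid(x, grid):
    """Round each value to the nearest entry of a 1-D grid."""
    idx = np.abs(x[..., None] - grid).argmin(axis=-1)
    return grid[idx]

def quant_fp_symmetric(w, grid=FP4_GRID):
    """Standard FP quantization: one scale per group, shared by both signs."""
    scale = np.abs(w).max() / grid.max()
    return snap_to_grid(np.abs(w) / scale, grid) * np.sign(w) * scale

def quant_fp_asymmetric(w, grid=FP4_GRID):
    """AFPQ-style quantization: separate scales for positive and negative values."""
    pos, neg = w > 0, w < 0
    scale_pos = w[pos].max() / grid.max() if pos.any() else 1.0
    scale_neg = np.abs(w[neg]).max() / grid.max() if neg.any() else 1.0
    q = np.zeros_like(w)
    q[pos] = snap_to_grid(w[pos] / scale_pos, grid) * scale_pos
    q[neg] = -snap_to_grid(np.abs(w[neg]) / scale_neg, grid) * scale_neg
    return q

# Toy weight group with a skewed distribution: the positive tail is much
# larger than the negative one, so a single shared scale wastes precision
# on the negative side.
rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(0.0, 0.02, 64), np.array([0.30, -0.08])])
print(f"symmetric FP MSE:  {np.mean((w - quant_fp_symmetric(w)) ** 2):.2e}")
print(f"asymmetric FP MSE: {np.mean((w - quant_fp_asymmetric(w)) ** 2):.2e}")
```

On such skewed groups the separate negative-side scale yields a finer quantization grid for negative weights, which is the mechanism behind the accuracy gains the abstract reports; the two-scale pair replaces the scale/zero-point pair of asymmetric INT quantization, so no extra storage is needed.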
Published in: | arXiv.org 2023-11 |
---|---|
Main Authors: | Zhang, Yijia; Zhang, Sicheng; Cao, Shijie; Du, Dayou; Wei, Jianyu; Cao, Ting; Xu, Ningyi |
Format: | Article |
Language: | English |
Subjects: | Floating point arithmetic; Large language models; Skewed distributions; Tensors |
Online Access: | Get full text |
container_title | arXiv.org |
creator | Zhang, Yijia; Zhang, Sicheng; Cao, Shijie; Du, Dayou; Wei, Jianyu; Cao, Ting; Xu, Ningyi |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-11 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2886462806 |
source | Publicly Available Content Database |
subjects | Floating point arithmetic; Large language models; Skewed distributions; Tensors |
title | AFPQ: Asymmetric Floating Point Quantization for LLMs |