
TEQ: Trainable Equivalent Transformation for Quantization of LLMs

As large language models (LLMs) become more prevalent, there is a growing need for new and improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy. In this paper, we present TEQ, a trainable equivalent transformation...
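The following PyTorch sketch is a hedged illustration of the idea summarized above (and spelled out in full in the description field of the record below): a small per-input-channel scale is trained so that, in full precision, dividing the activations and multiplying the weights by the same scale is a mathematical no-op, while the rescaled weights incur less 3- or 4-bit quantization error. Every name here (TrainableEquivalentLinear, round_ste, fake_quant_per_channel) and the toy distillation loop are assumptions for illustration only; they are not taken from the paper or from the linked neural-compressor code.

import torch
import torch.nn as nn
import torch.nn.functional as F


def round_ste(x: torch.Tensor) -> torch.Tensor:
    """Round to nearest with a straight-through gradient estimator."""
    return (x.round() - x).detach() + x


def fake_quant_per_channel(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Symmetric per-output-channel fake quantization of a [out, in] weight."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / q_max
    return round_ste(w / scale).clamp(-q_max - 1, q_max) * scale


class TrainableEquivalentLinear(nn.Module):
    """Linear layer wrapped with a trainable per-input-channel scale `s`.

    In full precision, y = (x / s) @ (W * s)^T + b equals the original layer,
    so only the quantization error changes. At deployment, `s` can be folded
    into the quantized weight and into the producer of `x`, adding no
    inference-time compute. This is an illustrative reading of the method,
    not the authors' implementation.
    """

    def __init__(self, linear: nn.Linear, n_bits: int = 4):
        super().__init__()
        self.linear = linear
        self.n_bits = n_bits
        # Log-parameterized so the scale stays positive; this small vector is
        # the only tensor that gets trained.
        self.log_s = nn.Parameter(torch.zeros(linear.in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.log_s.exp()
        # Absorb the scale into the weight before fake-quantizing it, and
        # divide it out of the activations so the FP32 function is unchanged.
        w_q = fake_quant_per_channel(self.linear.weight * s, self.n_bits)
        return F.linear(x / s, w_q, self.linear.bias)


if __name__ == "__main__":
    torch.manual_seed(0)
    fp_layer = nn.Linear(64, 64)
    teq_layer = TrainableEquivalentLinear(fp_layer, n_bits=4)
    opt = torch.optim.Adam([teq_layer.log_s], lr=1e-2)

    # Train only the scales so the 4-bit output tracks the FP32 output.
    for _ in range(200):
        x = torch.randn(32, 64)
        loss = F.mse_loss(teq_layer(x), fp_layer(x))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"output MSE after training scales: {loss.item():.6f}")

Because only the scale vector is optimized, the trainable parameter count stays tiny, and because the learned scale can be folded into the quantized weight and into whatever layer produces the input, no extra work is needed at inference time, which is consistent with the claims in the abstract.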

Bibliographic Details
Published in: arXiv.org 2023-10
Main Authors: Cheng, Wenhua, Cai, Yiyang, Lv, Kaokao, Shen, Haihao
Format: Article
Language: English
Subjects: Equivalence; Large language models; Transformations
creator Cheng, Wenhua
Cai, Yiyang
Lv, Kaokao
Shen, Haihao
description As large language models (LLMs) become more prevalent, there is a growing need for new and improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy. In this paper, we present TEQ, a trainable equivalent transformation that preserves the FP32 precision of the model output while taking advantage of low-precision quantization, especially 3 and 4 bits weight-only quantization. The training process is lightweight, requiring only 1K steps and fewer than 0.1 percent of the original model's trainable parameters. Furthermore, the transformation does not add any computational overhead during inference. Our results are on-par with the state-of-the-art (SOTA) methods on typical LLMs. Our approach can be combined with other methods to achieve even better performance. The code is available at https://github.com/intel/neural-compressor.
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-10
issn 2331-8422
language eng
recordid cdi_proquest_journals_2878534270
source Publicly Available Content Database (Proquest) (PQ_SDU_P3)
subjects Equivalence
Large language models
Transformations
title TEQ: Trainable Equivalent Transformation for Quantization of LLMs