Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation

Increased training parameters have enabled large pre-trained models to excel in various downstream tasks. Nevertheless, the extensive computational requirements associated with these models hinder their widespread adoption within the community. We focus on Knowledge Distillation (KD), where a compact student model is trained to mimic a larger teacher model, facilitating the transfer of knowledge of large models. In contrast to much of the previous work, we scale up the parameters of the student model during training, to benefit from overparameterization without increasing the inference latency. In particular, we propose a tensor decomposition strategy that effectively over-parameterizes the relatively small student model through an efficient and nearly lossless decomposition of its parameter matrices into higher-dimensional tensors. To ensure efficiency, we further introduce a tensor constraint loss to align the high-dimensional tensors between the student and teacher models. Comprehensive experiments validate the significant performance enhancement by our approach in various KD tasks, covering computer vision and natural language processing areas. Our code is available at https://github.com/intell-sci-comput/OPDF.
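
The abstract describes two ingredients: factorizing the student's parameter matrices into higher-dimensional tensors, and a tensor constraint loss that aligns the resulting student and teacher tensors. The sketch below illustrates the general idea only, using a tensor-train (MPO-style) factorization built from truncated SVDs and a simple MSE alignment term. The function names (`matrix_to_tensor_train`, `tensor_alignment_loss`), the chosen shapes, and the `max_rank` parameter are illustrative assumptions and are not taken from the paper or its released code.

```python
# Illustrative sketch only: reshape a weight matrix into a higher-order tensor,
# factorize it into a chain of small cores (tensor-train / MPO style) via
# truncated SVDs, and compute a simple alignment loss between corresponding
# student and teacher cores. All names and shapes here are hypothetical.
import torch


def matrix_to_tensor_train(weight, in_shape, out_shape, max_rank=16):
    """Decompose a (prod(out_shape) x prod(in_shape)) matrix into 4-D cores
    of shape [r_{k-1}, out_k, in_k, r_k] using successive truncated SVDs."""
    n = len(in_shape)
    assert len(out_shape) == n
    # Rearrange so each (out_k, in_k) index pair sits next to each other.
    t = weight.reshape(*out_shape, *in_shape)
    perm = [i for k in range(n) for i in (k, n + k)]
    t = t.permute(*perm).contiguous()

    cores, rank = [], 1
    for k in range(n - 1):
        # Split off the current (out_k, in_k) pair and truncate the SVD rank.
        t = t.reshape(rank * out_shape[k] * in_shape[k], -1)
        u, s, vh = torch.linalg.svd(t, full_matrices=False)
        r = min(max_rank, s.numel())
        cores.append(u[:, :r].reshape(rank, out_shape[k], in_shape[k], r))
        t = torch.diag(s[:r]) @ vh[:r]
        rank = r
    cores.append(t.reshape(rank, out_shape[-1], in_shape[-1], 1))
    return cores


def tensor_alignment_loss(student_cores, teacher_cores):
    """A hypothetical constraint term: MSE between cores of matching size."""
    return sum(
        torch.nn.functional.mse_loss(s.flatten(), t.flatten())
        for s, t in zip(student_cores, teacher_cores)
        if s.numel() == t.numel()
    )


# Example: factor a 768 x 768 linear layer into three cores.
W = torch.randn(768, 768)
cores = matrix_to_tensor_train(W, in_shape=(8, 12, 8), out_shape=(8, 12, 8))
print([tuple(c.shape) for c in cores])  # [(1, 8, 8, 16), (16, 12, 12, 16), (16, 8, 8, 1)]
```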

Bibliographic Details
Published in: arXiv.org, 2024-11
Main Authors: Yu-Liang Zhan, Zhong-Yi Lu, Hao Sun, Ze-Feng Gao
Format: Article
Language: English
Publisher: Cornell University Library, arXiv.org (Ithaca)
Identifier: EISSN 2331-8422
Subjects: Computer vision; Decomposition; Knowledge management; Natural language processing; Parameters; Teachers; Tensors
Online Access: Get full text