Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model
The Mixture-of-Experts (MoE) has gained increasing attention in studying Large Vision-Language Models (LVLMs). It uses a sparse model to replace the dense model, achieving comparable performance while activating fewer parameters during inference, thus significantly reducing the inference cost. Existing MoE methods in LVLMs encourage different experts to handle different tokens, and they usually employ a router to predict the routing of each token. However, the predictions are based solely on sample features and do not truly reveal the optimization directions of tokens. This may lead to severe optimization interference between different tokens assigned to an expert. To address this problem, this paper proposes a novel method based on token-level gradient analysis, i.e., Solving Token Gradient Conflict (STGC). Specifically, we first use token-level gradients to identify conflicting tokens in experts. After that, we add a specialized loss tailored to eliminate conflicts among tokens within each expert. Our method can serve as a plug-in for diverse Large Vision-Language Models, and extensive experimental results demonstrate its effectiveness. The code will be publicly available at https://github.com/longrongyang/STGC.
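The sparse routing the abstract refers to can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the `route_tokens` helper, the array shapes, and the top-k softmax gating are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route_tokens(tokens, router_w, top_k=1):
    """Assign each token to its top-k experts by router score.

    tokens:   (n_tokens, d) token features
    router_w: (d, n_experts) router projection (hypothetical)
    Returns (indices, weights): top-k expert ids per token and the
    gate weights renormalized over the selected experts.
    """
    logits = tokens @ router_w                  # (n_tokens, n_experts)
    probs = softmax(logits, axis=-1)
    idx = np.argsort(-probs, axis=-1)[:, :top_k]
    w = np.take_along_axis(probs, idx, axis=-1)
    w = w / w.sum(axis=-1, keepdims=True)       # renormalize over top-k
    return idx, w
```

Only the selected experts run on a given token, which is why a sparse MoE layer activates far fewer parameters per token than a dense layer of the same total size. As the abstract notes, the router scores depend only on token features, so two tokens with similar features can land on the same expert even when their gradients pull that expert in opposite directions.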
Published in: arXiv.org, 2024-08
Main Authors: Yang, Longrong; Shen, Dong; Cai, Chaoxiang; Yang, Fan; Size Li; Zhang, Di; Li, Xi
Format: Article
Language: English
Subjects: Inference; Mixtures; Optimization
creator | Yang, Longrong; Shen, Dong; Cai, Chaoxiang; Yang, Fan; Size Li; Zhang, Di; Li, Xi |
description | The Mixture-of-Experts (MoE) has gained increasing attention in studying Large Vision-Language Models (LVLMs). It uses a sparse model to replace the dense model, achieving comparable performance while activating fewer parameters during inference, thus significantly reducing the inference cost. Existing MoE methods in LVLMs encourage different experts to handle different tokens, and they usually employ a router to predict the routing of each token. However, the predictions are based solely on sample features and do not truly reveal the optimization directions of tokens. This may lead to severe optimization interference between different tokens assigned to an expert. To address this problem, this paper proposes a novel method based on token-level gradient analysis, i.e., Solving Token Gradient Conflict (STGC). Specifically, we first use token-level gradients to identify conflicting tokens in experts. After that, we add a specialized loss tailored to eliminate conflicts among tokens within each expert. Our method can serve as a plug-in for diverse Large Vision-Language Models, and extensive experimental results demonstrate its effectiveness. The code will be publicly available at https://github.com/longrongyang/STGC. |
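The conflict-identification step the description mentions can be sketched as follows. This is a hedged sketch, not STGC's actual criterion or loss: it assumes per-token gradients with respect to one expert's (flattened) parameters are available, and it treats a token as "conflicting" when its gradient has negative cosine similarity with the average gradient of the other tokens routed to that expert. The `conflicting_tokens` helper and the threshold are illustrative assumptions.

```python
import numpy as np

def conflicting_tokens(token_grads, threshold=0.0):
    """Flag tokens whose gradient opposes the expert's consensus direction.

    token_grads: (n_tokens, p) per-token gradients w.r.t. one expert's
    flattened parameters (hypothetical input).
    Returns a boolean mask: True where the cosine similarity between a
    token's gradient and the mean gradient of the *other* tokens in the
    expert falls below `threshold`.
    """
    n = token_grads.shape[0]
    flags = np.zeros(n, dtype=bool)
    total = token_grads.sum(axis=0)
    for i in range(n):
        gi = token_grads[i]
        others = (total - gi) / max(n - 1, 1)   # leave-one-out mean
        denom = np.linalg.norm(gi) * np.linalg.norm(others)
        cos = gi @ others / denom if denom > 0 else 0.0
        flags[i] = cos < threshold
    return flags
```

Once conflicting tokens are flagged, an auxiliary loss like the paper's could, for instance, encourage the router to move them to other experts; the exact form of that loss is specified in the paper and repository, not reproduced here.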
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-08 |
language | eng |
recordid | cdi_proquest_journals_3074215315 |
source | Publicly Available Content Database |
subjects | Inference; Mixtures; Optimization |
title | Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model |