Constraint embedding for prompt tuning in vision-language pre-trained model
Published in: Multimedia Systems, 2025, Vol. 31 (1)
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Prompt tuning, which fine-tunes the feature distributions of pre-trained vision-language (VL) models by adding learnable tokens or contexts to the image and text branches, has emerged as a popular method for enhancing task-specific performance. However, it can overfit the target data distribution, thereby undermining the original generalization capability of frozen models such as CLIP. To tackle this issue, a novel framework named constraint embedding for prompt tuning (CEPT) is proposed for optimizing the learnable prompt tokens. To maintain the feature extraction capability of the pre-trained CLIP model while still extracting the features relevant to a downstream task, a block consistency constraint (BCC) is introduced: it keeps the block-wise embeddings of the tuned and frozen encoders aligned, preserving the original generalization performance of the pre-trained VL model. Additionally, a distribution constraint (DC) strategy is introduced to achieve a more harmonious distribution of image-text features in the latent space; it enhances multimodal feature alignment by evenly dispersing the features of different classes while concentrating image features of the same class. On base-to-novel generalization, CEPT surpasses the state of the art with a harmonic-mean improvement of over 1.04%, and on few-shot learning it achieves an average improvement of 1.63% across five few-shot scenarios.
ISSN: 0942-4962; 1432-1882
DOI: 10.1007/s00530-024-01600-9
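
The abstract describes the two constraints only at a high level. The sketch below illustrates, in PyTorch, what losses of this shape could look like; the distance metrics (L1 for block alignment, cosine for the distribution terms), the temperature `tau`, all tensor shapes, and the function names are illustrative assumptions, not the formulation published in the paper.

```python
# A minimal sketch of the BCC and DC constraints described in the
# abstract. All metric choices and shapes are assumptions for
# illustration; the paper's exact losses are not given in the record.
import torch
import torch.nn.functional as F


def block_consistency_loss(prompted_blocks, frozen_blocks):
    """Block consistency constraint (BCC), sketched.

    Aligns the block-wise embeddings of the prompted encoder with those
    of the frozen CLIP encoder, so prompt tuning does not drift away
    from the pre-trained feature extractor. Each list holds one tensor
    per encoder block, shaped (batch, dim).
    """
    loss = 0.0
    for tuned, frozen in zip(prompted_blocks, frozen_blocks):
        # L1 distance between per-block embeddings (assumed metric);
        # the frozen branch is detached so only the prompts receive
        # gradients.
        loss = loss + F.l1_loss(tuned, frozen.detach())
    return loss / len(prompted_blocks)


def distribution_constraint_loss(image_feats, labels, class_protos, tau=0.07):
    """Distribution constraint (DC), sketched.

    Concentrates same-class image features around their class prototype
    (e.g. the per-class text embedding) and disperses different class
    prototypes in the latent space.
    """
    image_feats = F.normalize(image_feats, dim=-1)
    class_protos = F.normalize(class_protos, dim=-1)

    # Intra-class concentration: pull each image embedding toward the
    # prototype of its own class (1 - cosine similarity).
    pull = 1.0 - (image_feats * class_protos[labels]).sum(dim=-1).mean()

    # Inter-class dispersion: penalize similarity between distinct
    # prototypes via a log-sum-exp over off-diagonal cosine scores.
    sim = class_protos @ class_protos.t() / tau
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    push = sim.masked_fill(mask, float("-inf")).logsumexp(dim=-1).mean()

    return pull + push


# Example with hypothetical shapes: 12 encoder blocks, batch of 32,
# 512-dim features, 10 classes.
blocks_tuned = [torch.randn(32, 512) for _ in range(12)]
blocks_frozen = [torch.randn(32, 512) for _ in range(12)]
img = torch.randn(32, 512)
y = torch.randint(0, 10, (32,))
protos = torch.randn(10, 512)

bcc = block_consistency_loss(blocks_tuned, blocks_frozen)
dc = distribution_constraint_loss(img, y, protos)
```

In a setup like this, the total objective would typically combine the task loss with the two constraints, e.g. `loss = ce_loss + lambda_bcc * bcc + lambda_dc * dc`, with the weights tuned per task; this weighting is again an assumption, as the abstract does not specify how the terms are combined.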