TeachText: CrossModal text-video retrieval through generalized distillation
Published in: Artificial Intelligence, 2025-01, Vol. 338, Article 104235
Main Authors:
Format: Article
Language: English
Summary:

In recent years, considerable progress on the task of text-video retrieval has been achieved by leveraging large-scale pretraining on visual and audio datasets to construct powerful video encoders. By contrast, despite the natural symmetry, the design of effective algorithms for exploiting large-scale language pretraining remains under-explored. In this work, we investigate the design of such algorithms and propose a novel generalized distillation method, TeachText, which leverages complementary cues from multiple text encoders to provide an enhanced supervisory signal to the retrieval model. TeachText yields significant gains on a number of video retrieval benchmarks without incurring additional computational overhead during inference, and was used to produce the winning entry in the Condensed Movie Challenge at ICCV 2021. We show how TeachText can be extended to include multiple video modalities, reducing computational cost at inference without compromising performance. Finally, we demonstrate the application of our method to the task of removing noisy descriptions from the training partitions of retrieval datasets to improve performance. Code and data can be found at https://www.robots.ox.ac.uk/~vgg/research/teachtext/.

Highlights:

- TeachText leverages the additional information brought by the use of multiple text embeddings.
- We propose learning the retrieval similarity matrix between joint query-video embeddings.
- We achieve significant gains across six text-video retrieval benchmarks.
- We improve the CE+ architecture with GPT-J embeddings, boosting performance.
- A thorough error analysis highlights the benefits of multiple text embeddings in text-video retrieval.
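The summary and highlights describe the core mechanism only at a high level: similarity matrices derived from several pretrained text encoders are combined into a teacher signal that supervises the student retrieval model during training, at no extra inference cost. The sketch below illustrates one plausible form of such a generalized-distillation objective; the function name, the max-margin ranking term, the simple averaging of teachers, and the MSE regression onto the teacher matrix are all assumptions for illustration, not the authors' exact formulation (see the project URL above for the real implementation).

```python
import torch
import torch.nn.functional as F

def teachtext_style_loss(student_sim, teacher_sims, margin=0.2, distill_weight=1.0):
    """Hypothetical sketch of a TeachText-style training objective.

    student_sim:  (B, B) text-video similarity matrix from the student
                  retrieval model for a batch of B matched pairs.
    teacher_sims: list of (B, B) similarity matrices, one per pretrained
                  text encoder (e.g. different language-model embeddings).
    """
    # Standard bidirectional max-margin ranking loss on the student matrix:
    # matched pairs (the diagonal) should score higher than mismatched ones.
    diag = student_sim.diag().view(-1, 1)
    cost_t2v = (margin + student_sim - diag).clamp(min=0)      # text -> video
    cost_v2t = (margin + student_sim - diag.t()).clamp(min=0)  # video -> text
    off_diag = ~torch.eye(student_sim.size(0), dtype=torch.bool,
                          device=student_sim.device)
    ranking = cost_t2v[off_diag].mean() + cost_v2t[off_diag].mean()

    # Teacher signal: pool the similarity matrices produced with the
    # different text embeddings, then regress the student matrix onto it.
    teacher = torch.stack(teacher_sims).mean(dim=0)
    distill = F.mse_loss(student_sim, teacher.detach())

    return ranking + distill_weight * distill
```

Because the teacher matrices are only consumed inside the loss, the extra text encoders are discarded after training, which is consistent with the summary's claim of no additional computational overhead at inference time.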
ISSN: 0004-3702
DOI: 10.1016/j.artint.2024.104235