Video–text retrieval via multi-modal masked transformer and adaptive attribute-aware graph convolutional network

Bibliographic Details
Published in: Multimedia Systems, 2024-02, Vol. 30 (1), Article 35
Main Authors: Lv, Gang; Sun, Yining; Nian, Fudong
Format: Article
Language:English
Summary: Despite significant advancements in deep learning-based video–text retrieval methods, three challenges persist: aligning fine-grained semantic information across text and video, ensuring that the learned textual and video feature representations capture the primary semantic information while remaining discriminative, and measuring the semantic similarity between different instances. To tackle these issues, we introduce an end-to-end video–text retrieval framework that exploits a Multi-Modal Masked Transformer and an Adaptive Attribute-Aware Graph Convolutional Network (M³Trans-A³GCN). Specifically, the features extracted from videos and texts are fed into M³Trans to jointly integrate the multi-modal content and mask irrelevant multi-modal context. Subsequently, a novel GCN with an adaptive correlation matrix (i.e., A³GCN) is constructed to obtain discriminative video representations for video–text retrieval. To better measure the semantic similarity between video–text pairs during training, we propose a novel Text-semantic-guided Multi-Modal Cross-Entropy (TMCE) loss function, in which the similarity between different video–text pairs within a batch is computed from the features of the corresponding texts rather than their instance labels. Comprehensive experimental results on three benchmark datasets, MSR-VTT, MSVD, and LSMDC, demonstrate the superiority of M³Trans-A³GCN over state-of-the-art methods in video–text retrieval.
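The abstract's core idea for the TMCE loss is to replace one-hot instance labels with soft targets derived from text–text similarity, so that semantically close captions in a batch share target mass. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the cosine-similarity scoring, softmax target construction, and the temperature value are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize row vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def log_softmax(x):
    # Numerically stable row-wise log-softmax.
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def tmce_loss_sketch(video_feats, text_feats, temperature=0.05):
    """Text-semantic-guided cross-entropy over a batch (illustrative).

    Instead of a one-hot target (instance labels), each video's target
    distribution is a softmax over text-text similarities, so captions
    that are semantically close to the paired caption receive non-zero
    target probability.
    """
    v = l2_normalize(video_feats)                 # (B, D)
    t = l2_normalize(text_feats)                  # (B, D)
    logits = v @ t.T / temperature                # video-to-text scores
    targets = np.exp(log_softmax(t @ t.T / temperature))  # soft text-derived targets
    # Cross-entropy between predicted and text-derived distributions.
    return float(-(targets * log_softmax(logits)).sum(axis=-1).mean())
```

For example, with a batch of four video/text feature pairs, `tmce_loss_sketch(v, t)` returns a single non-negative scalar; a symmetric text-to-video term could be added in the same way.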
ISSN: 0942-4962, 1432-1882
DOI: 10.1007/s00530-023-01205-8