
VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification

Bibliographic Details
Published in: Pattern Recognition, 2023-07, Vol. 139, p. 109419, Article 109419
Main Authors: Bakkali, Souhail, Ming, Zuheng, Coustaty, Mickael, Rusiñol, Marçal, Terrades, Oriol Ramos
Format: Article
Language: English
Description
Summary:
• Design of a vision-language multimodal attention-based model, VLCDoC, for document analysis.
• InterMCA and IntraMSA attention modules effectively align the cross-modal features.
• Multimodal contrastive pretraining is proposed to learn vision-language features.
• Good generality of the learned multimodal domain-agnostic features is demonstrated.
Multimodal learning from document data has recently achieved great success, as it allows semantically meaningful features to be pre-trained as a prior for learnable downstream tasks. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering both intra- and inter-modality relationships. Instead of merging features from different modalities into a joint representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised between intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the joint representation space. Extensive experiments on public benchmark datasets demonstrate the effectiveness and generality of our model on both low-scale and large-scale datasets.
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.109419
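
The abstract above describes alignment objectives that contract positive vision-language sample pairs while contrasting negative ones in a joint representation space. As a rough illustration of that general idea only, and not the authors' implementation, the PyTorch sketch below shows a symmetric InfoNCE-style cross-modal contrastive loss; the function name, temperature value, and feature shapes are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): a symmetric InfoNCE-style
# cross-modal contrastive loss. Matching vision/language pairs in a batch
# are pulled together; all other pairings are pushed apart.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(vision_feats, language_feats, temperature=0.07):
    """vision_feats, language_feats: (batch, dim) projections of the two modalities."""
    # L2-normalize so dot products become cosine similarities.
    v = F.normalize(vision_feats, dim=-1)
    t = F.normalize(language_feats, dim=-1)

    # Pairwise similarity matrix; diagonal entries are the positive pairs.
    logits = v @ t.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)

    # Symmetric loss over vision-to-language and language-to-vision directions.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2t + loss_t2v)

# Example usage with random features standing in for encoder outputs.
if __name__ == "__main__":
    vision = torch.randn(8, 256)
    language = torch.randn(8, 256)
    print(cross_modal_contrastive_loss(vision, language).item())
```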