
Improving fine-grained understanding in image-text pre-training

We introduce SPARse Fine-grained Contrastive Alignment (SPARC), a simple method for pretraining more fine-grained multimodal representations from image-text pairs. Given that multiple image patches often correspond to single words, we propose to learn a grouping of image patches for every token in the caption. To achieve this, we use a sparse similarity metric between image patches and language tokens and compute for each token a language-grouped vision embedding as the weighted average of patches. The token and language-grouped vision embeddings are then contrasted through a fine-grained sequence-wise loss that only depends on individual samples and does not require other batch samples as negatives. This enables more detailed information to be learned in a computationally inexpensive manner. SPARC combines this fine-grained loss with a contrastive loss between global image and text embeddings to learn representations that simultaneously encode global and local information. We thoroughly evaluate our proposed method and show improved performance over competing approaches both on image-level tasks relying on coarse-grained information, e.g. classification, as well as region-level tasks relying on fine-grained information, e.g. retrieval, object detection, and segmentation. Moreover, SPARC improves model faithfulness and captioning in foundational vision-language models.
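To make the mechanism in the abstract concrete, here is a minimal NumPy sketch of the fine-grained branch for a single image-text pair. The function name, the sparsification rule (per-token min-max normalisation with a 1/num_patches threshold), and the temperature are illustrative assumptions, not details given in the abstract; the abstract only fixes the overall structure: a sparse patch-token similarity, a per-token weighted average of patches, and a sequence-wise contrastive loss computed within the sample.

```python
import numpy as np

def sparc_fine_grained_loss(patch_embeds, token_embeds, temperature=0.07):
    """Hypothetical sketch of SPARC's fine-grained, sequence-wise loss
    for a single image-text pair (no cross-batch negatives needed).

    patch_embeds: (P, D) array of image patch embeddings.
    token_embeds: (T, D) array of caption token embeddings.
    """
    num_patches = patch_embeds.shape[0]

    # Patch-token similarity: one row per token, one column per patch.
    sim = token_embeds @ patch_embeds.T                      # (T, P)

    # Sparsify: min-max normalise each token's row, then zero out patches
    # below an assumed 1/P threshold so each token attends to few patches.
    row_min = sim.min(axis=1, keepdims=True)
    row_max = sim.max(axis=1, keepdims=True)
    norm = (sim - row_min) / (row_max - row_min + 1e-8)
    weights = np.where(norm >= 1.0 / num_patches, norm, 0.0)
    weights = weights / (weights.sum(axis=1, keepdims=True) + 1e-8)

    # Language-grouped vision embedding: per-token weighted patch average.
    grouped = weights @ patch_embeds                         # (T, D)

    # Sequence-wise contrastive loss inside this sample: each token should
    # match its own grouped embedding against the other tokens' embeddings.
    # (A symmetric grouped-to-token direction is omitted for brevity.)
    def l2norm(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    logits = l2norm(token_embeds) @ l2norm(grouped).T / temperature  # (T, T)
    logits = logits - logits.max(axis=1, keepdims=True)      # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    diag = np.arange(token_embeds.shape[0])
    return -log_probs[diag, diag].mean()

# Toy usage with random features: a 7x7 patch grid and a 12-token caption.
rng = np.random.default_rng(0)
loss = sparc_fine_grained_loss(rng.normal(size=(49, 64)),
                               rng.normal(size=(12, 64)))
print(loss)
```

In the full method this term is added to a standard global contrastive loss between pooled image and text embeddings, computed over the batch, so that the learned representations encode both global and local information.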

Bibliographic Details
Published in: arXiv.org 2024-01
Main Authors: Bica, Ioana; Ilić, Anastasija; Bauer, Matthias; Erdogan, Goker; Bošnjak, Matko; Kaplanis, Christos; Gritsenko, Alexey A; Minderer, Matthias; Blundell, Charles; Pascanu, Razvan; Mitrović, Jovana
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Information retrieval; Object recognition; Representations
Online Access: Get full text