
Mitigating Biases with Diverse Ensembles and Diffusion Models

Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to a phenomenon known as shortcut learning, where a model relies on erroneous, easy-to-learn cues while ignoring reliable ones. In this work, we propose an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs) to mitigate this form of bias. We show that at particular training intervals, DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features. We leverage this crucial property to generate synthetic counterfactuals to increase model diversity via ensemble disagreement. We show that DPM-guided diversification is sufficient to remove dependence on primary shortcut cues, without a need for additional supervised signals. We further empirically quantify its efficacy on several diversification objectives, and finally show improved generalization and diversification performance on par with prior work that relies on auxiliary data collection.
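The abstract's "diversity via ensemble disagreement" can be made concrete with a loose sketch. The paper's actual objectives are not reproduced in this record; the function below is a hypothetical illustration, assuming one scores disagreement as the negative mean pairwise KL divergence between ensemble members' predictive distributions on the synthetic counterfactual inputs (lower loss = more disagreement).

```python
import numpy as np

def disagreement_loss(probs):
    """Illustrative diversification objective (an assumption, not the
    paper's exact loss): negative mean pairwise KL divergence between
    ensemble members' predicted class distributions.

    probs: array of shape (n_models, n_samples, n_classes);
           each row along the last axis sums to 1.
    """
    eps = 1e-12  # numerical floor to keep log() finite
    n_models = probs.shape[0]
    kl_sum, pairs = 0.0, 0
    for i in range(n_models):
        for j in range(n_models):
            if i == j:
                continue
            # KL(p_i || p_j) per sample, then averaged over samples
            kl = np.sum(
                probs[i] * (np.log(probs[i] + eps) - np.log(probs[j] + eps)),
                axis=-1,
            )
            kl_sum += kl.mean()
            pairs += 1
    return -kl_sum / pairs  # minimizing this encourages disagreement

# Two members that agree vs. two that disagree on one counterfactual sample:
agree = np.array([[[0.9, 0.1]], [[0.9, 0.1]]])
differ = np.array([[[0.9, 0.1]], [[0.1, 0.9]]])
assert disagreement_loss(differ) < disagreement_loss(agree)
```

In the setup the abstract describes, such a term would be minimized on DPM-generated counterfactuals alongside a standard supervised loss on the original (shortcut-correlated) data, pushing members to rely on different cues.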

Bibliographic Details
Published in: arXiv.org, 2024-03-06
Main Authors: Scimeca, Luca; Rubinstein, Alexander; Teney, Damien; Oh, Seong Joon; Nicolicioiu, Armand Mihai; Bengio, Yoshua
Format: Article
Language: English
Subjects: Bias; Data collection; Probabilistic models
EISSN: 2331-8422