
Bootstrap Generalization Ability from Loss Landscape Perspective

Domain generalization aims to learn a model that generalizes well to an unseen test dataset, i.e., out-of-distribution data whose distribution differs from that of the training dataset. To address domain generalization in computer vision, we introduce loss landscape theory into this field. Specifically, we bootstrap the generalization ability of the deep learning model from the loss landscape perspective in four aspects: backbone, regularization, training paradigm, and learning rate. We verify the proposed theory on the NICO++, PACS, and VLCS datasets through extensive ablation studies and visualizations. In addition, we apply this theory in the ECCV 2022 NICO Challenge and achieve 3rd place without using any domain-invariant methods.


Bibliographic Details
Published in: arXiv.org, 2023-04
Main Authors: Chen, Huanran, Shao, Shitong, Wang, Ziyi, Shang, Zirui, Chen, Jin, Ji, Xiaofeng, Wu, Xinxiao
Format: Article
Language: English
Subjects:
Online Access: Get full text
description Domain generalization aims to learn a model that generalizes well to an unseen test dataset, i.e., out-of-distribution data whose distribution differs from that of the training dataset. To address domain generalization in computer vision, we introduce loss landscape theory into this field. Specifically, we bootstrap the generalization ability of the deep learning model from the loss landscape perspective in four aspects: backbone, regularization, training paradigm, and learning rate. We verify the proposed theory on the NICO++, PACS, and VLCS datasets through extensive ablation studies and visualizations. In addition, we apply this theory in the ECCV 2022 NICO Challenge and achieve 3rd place without using any domain-invariant methods.
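The description mentions loss-landscape visualizations. As a rough illustration of what such a visualization computes (not the paper's actual method), the sketch below evaluates a 1-D slice of the loss surface of a toy linear least-squares model along a random normalized direction; the model, data, and function names are all hypothetical.

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error of a linear model y ~ X @ w.
    r = X @ w - y
    return float(np.mean(r * r))

def landscape_1d(w_star, direction, X, y, alphas):
    # Evaluate the loss along w_star + alpha * direction for each alpha:
    # the standard 1-D slice used in loss-landscape visualizations.
    d = direction / np.linalg.norm(direction)  # normalize the probe direction
    return [loss(w_star + a * d, X, y) for a in alphas]

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                      # noiseless targets, so loss(w_true) == 0
d = rng.normal(size=3)              # random probe direction
alphas = np.linspace(-1.0, 1.0, 11)
curve = landscape_1d(w_true, d, X, y, alphas)
# The minimum of the slice sits at alpha = 0 (the trained weights);
# plotting curve against alphas would give the usual 1-D landscape figure.
```

A flatter curve around alpha = 0 is the kind of signal loss-landscape analyses associate with better generalization.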
identifier EISSN: 2331-8422
source Publicly Available Content (ProQuest)
subjects Ablation
Computer vision
Datasets
Deep learning
Domains
Regularization
Training