DRÆM - A discriminatively trained reconstruction embedding for surface anomaly detection
Visual surface anomaly detection aims to detect local image regions that significantly deviate from normal appearance. Recent surface anomaly detection methods rely on generative models to accurately reconstruct the normal areas and to fail on anomalies. These methods are trained only on anomaly-free images, and often require hand-crafted post-processing steps to localize the anomalies, which prohibits optimizing the feature extraction for maximal detection capability. In addition to the reconstructive approach, we cast surface anomaly detection primarily as a discriminative problem and propose a discriminatively trained reconstruction anomaly embedding model (DRÆM). The proposed method learns a joint representation of an anomalous image and its anomaly-free reconstruction, while simultaneously learning a decision boundary between normal and anomalous examples. The method enables direct anomaly localization without the need for additional complicated post-processing of the network output and can be trained using simple and general anomaly simulations. On the challenging MVTec anomaly detection dataset, DRÆM outperforms the current state-of-the-art unsupervised methods by a large margin and even delivers detection performance close to the fully-supervised methods on the widely used DAGM surface-defect detection dataset, while substantially outperforming them in localization accuracy. Code at github.com/VitjanZ/DRAEM.
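The abstract describes a reconstructive subnetwork paired with a discriminative subnetwork that is trained end-to-end on simulated anomalies. Below is a minimal PyTorch sketch of that two-subnetwork idea; the layer configuration, module names, and loss choices (plain MSE and cross-entropy here) are illustrative assumptions rather than the authors' exact design, which is available at github.com/VitjanZ/DRAEM.

```python
# Minimal sketch of the two-subnetwork idea from the abstract.
# Shapes, layer counts, and losses are illustrative assumptions,
# not the reference architecture.
import torch
import torch.nn as nn

class ReconstructiveNet(nn.Module):
    """Encoder-decoder that maps an (augmented) image back to its normal appearance."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class DiscriminativeNet(nn.Module):
    """Predicts a per-pixel normal/anomalous map from the image and its reconstruction."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2, 3, padding=1),  # 2 classes: normal / anomalous
        )

    def forward(self, image, reconstruction):
        # Joint representation: the image concatenated with its reconstruction.
        return self.net(torch.cat([image, reconstruction], dim=1))

def training_step(recon_net, disc_net, augmented, clean, mask):
    """One step on a simulated anomaly.

    augmented: normal image with a synthetic defect pasted in, (N, 3, H, W)
    clean:     the original anomaly-free image, (N, 3, H, W)
    mask:      binary defect mask, (N, 1, H, W)
    """
    reconstruction = recon_net(augmented)
    recon_loss = nn.functional.mse_loss(reconstruction, clean)
    seg_logits = disc_net(augmented, reconstruction)
    seg_loss = nn.functional.cross_entropy(seg_logits, mask.long().squeeze(1))
    return recon_loss + seg_loss
```

At test time the softmax of the discriminative output can be read directly as a per-pixel anomaly map, and an image-level score can be derived from it, e.g. as its maximum, without further post-processing.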
Main Authors: | Zavrtanik, Vitjan; Kristan, Matej; Skocaj, Danijel |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Computational modeling; Computer vision; Feature extraction; Image reconstruction; Location awareness; Recognition and classification; Surface reconstruction; Task analysis; Transfer/Low-shot/Semi/Unsupervised Learning; Vision applications and systems |
cited_by | cdi_FETCH-LOGICAL-c320t-eef6d8e0514db03041f2e927c80b0e618da57c2454ee11d87be3994f8efd66603 |
---|---|
cites | |
container_end_page | 8319 |
container_issue | |
container_start_page | 8310 |
container_title | |
container_volume | |
creator | Zavrtanik, Vitjan Kristan, Matej Skocaj, Danijel |
description | Visual surface anomaly detection aims to detect local image regions that significantly deviate from normal appearance. Recent surface anomaly detection methods rely on generative models to accurately reconstruct the normal areas and to fail on anomalies. These methods are trained only on anomaly-free images, and often require hand-crafted post-processing steps to localize the anomalies, which prohibits optimizing the feature extraction for maximal detection capability. In addition to the reconstructive approach, we cast surface anomaly detection primarily as a discriminative problem and propose a discriminatively trained reconstruction anomaly embedding model (DRÆM). The proposed method learns a joint representation of an anomalous image and its anomaly-free reconstruction, while simultaneously learning a decision boundary between normal and anomalous examples. The method enables direct anomaly localization without the need for additional complicated post-processing of the network output and can be trained using simple and general anomaly simulations. On the challenging MVTec anomaly detection dataset, DRÆM outperforms the current state-of-the-art unsupervised methods by a large margin and even delivers detection performance close to the fully-supervised methods on the widely used DAGM surface-defect detection dataset, while substantially outperforming them in localization accuracy. Code at github.com/VitjanZ/DRAEM. |
doi_str_mv | 10.1109/ICCV48922.2021.00822 |
format | conference_proceeding |
fullrecord | <record><control><sourceid>ieee_CHZPO</sourceid><recordid>TN_cdi_ieee_primary_9710329</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9710329</ieee_id><sourcerecordid>9710329</sourcerecordid><originalsourceid>FETCH-LOGICAL-c320t-eef6d8e0514db03041f2e927c80b0e618da57c2454ee11d87be3994f8efd66603</originalsourceid><addsrcrecordid>eNotj8FKAzEURaMgWGu_QBf5gRlfXjKZZFlGrYWKICq4KpnkRSLtjGSmQn_AH_PHLOrqbs45cBm7FFAKAfZq2TQvyljEEgFFCWAQj9jM1kZoXSk0AqtjNkFpoKgrUKfsbBjeAaRFoyfs9frx--ueF3zOQxp8TtvUuTF90mbPx-xSR4Fn8n03jHnnx9R3nLYthZC6Nx77zIddjs4Td12_dQcp0Ei_3Dk7iW4z0Ox_p-z59uapuStWD4tlM18VXiKMBVHUwRBUQoUWJCgRkSzW3kALpIUJrqo9qkoRCRFM3ZK0VkVDMWitQU7ZxV83EdH64_DA5f3a1gIkWvkDAAJUIw</addsrcrecordid><sourcetype>Publisher</sourcetype><iscdi>true</iscdi><recordtype>conference_proceeding</recordtype></control><display><type>conference_proceeding</type><title>DRÆM - A discriminatively trained reconstruction embedding for surface anomaly detection</title><source>IEEE Xplore All Conference Series</source><creator>Zavrtanik, Vitjan ; Kristan, Matej ; Skocaj, Danijel</creator><creatorcontrib>Zavrtanik, Vitjan ; Kristan, Matej ; Skocaj, Danijel</creatorcontrib><description>Visual surface anomaly detection aims to detect local image regions that significantly deviate from normal appearance. Recent surface anomaly detection methods rely on generative models to accurately reconstruct the normal areas and to fail on anomalies. These methods are trained only on anomaly-free images, and often require hand-crafted post-processing steps to localize the anomalies, which prohibits optimizing the feature extraction for maximal detection capability. In addition to reconstructive approach, we cast surface anomaly detection primarily as a discriminative problem and propose a discriminatively trained reconstruction anomaly embedding model (DRÆM). The proposed method learns a joint representation of an anomalous image and its anomaly-free reconstruction, while simultaneously learning a decision boundary between normal and anomalous examples. The method enables direct anomaly localization without the need for additional complicated post-processing of the network output and can be trained using simple and general anomaly simulations. On the challenging MVTec anomaly detection dataset, DRÆM outperforms the current state-of-the-art unsupervised methods by a large margin and even de-livers detection performance close to the fully-supervised methods on the widely used DAGM surface-defect detection dataset, while substantially outperforming them in localization accuracy. 
Code at github.com/VitjanZ/DRAEM.</description><identifier>EISSN: 2380-7504</identifier><identifier>EISBN: 9781665428125</identifier><identifier>EISBN: 1665428120</identifier><identifier>DOI: 10.1109/ICCV48922.2021.00822</identifier><identifier>CODEN: IEEPAD</identifier><language>eng</language><publisher>IEEE</publisher><subject>Computational modeling ; Computer vision ; Feature extraction ; Image reconstruction ; Location awareness ; Recognition and classification ; Surface reconstruction ; Task analysis ; Transfer/Low-shot/Semi/Unsupervised Learning ; Vision applications and systems</subject><ispartof>2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.8310-8319</ispartof><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c320t-eef6d8e0514db03041f2e927c80b0e618da57c2454ee11d87be3994f8efd66603</citedby></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9710329$$EHTML$$P50$$Gieee$$H</linktohtml><link.rule.ids>309,310,780,784,789,790,27924,54554,54931</link.rule.ids><linktorsrc>$$Uhttps://ieeexplore.ieee.org/document/9710329$$EView_record_in_IEEE$$FView_record_in_$$GIEEE</linktorsrc></links><search><creatorcontrib>Zavrtanik, Vitjan</creatorcontrib><creatorcontrib>Kristan, Matej</creatorcontrib><creatorcontrib>Skocaj, Danijel</creatorcontrib><title>DRÆM - A discriminatively trained reconstruction embedding for surface anomaly detection</title><title>2021 IEEE/CVF International Conference on Computer Vision (ICCV)</title><addtitle>ICCV</addtitle><description>Visual surface anomaly detection aims to detect local image regions that significantly deviate from normal appearance. Recent surface anomaly detection methods rely on generative models to accurately reconstruct the normal areas and to fail on anomalies. These methods are trained only on anomaly-free images, and often require hand-crafted post-processing steps to localize the anomalies, which prohibits optimizing the feature extraction for maximal detection capability. In addition to reconstructive approach, we cast surface anomaly detection primarily as a discriminative problem and propose a discriminatively trained reconstruction anomaly embedding model (DRÆM). The proposed method learns a joint representation of an anomalous image and its anomaly-free reconstruction, while simultaneously learning a decision boundary between normal and anomalous examples. The method enables direct anomaly localization without the need for additional complicated post-processing of the network output and can be trained using simple and general anomaly simulations. On the challenging MVTec anomaly detection dataset, DRÆM outperforms the current state-of-the-art unsupervised methods by a large margin and even de-livers detection performance close to the fully-supervised methods on the widely used DAGM surface-defect detection dataset, while substantially outperforming them in localization accuracy. 
Code at github.com/VitjanZ/DRAEM.</description><subject>Computational modeling</subject><subject>Computer vision</subject><subject>Feature extraction</subject><subject>Image reconstruction</subject><subject>Location awareness</subject><subject>Recognition and classification</subject><subject>Surface reconstruction</subject><subject>Task analysis</subject><subject>Transfer/Low-shot/Semi/Unsupervised Learning</subject><subject>Vision applications and systems</subject><issn>2380-7504</issn><isbn>9781665428125</isbn><isbn>1665428120</isbn><fulltext>true</fulltext><rsrctype>conference_proceeding</rsrctype><creationdate>2021</creationdate><recordtype>conference_proceeding</recordtype><sourceid>6IE</sourceid><recordid>eNotj8FKAzEURaMgWGu_QBf5gRlfXjKZZFlGrYWKICq4KpnkRSLtjGSmQn_AH_PHLOrqbs45cBm7FFAKAfZq2TQvyljEEgFFCWAQj9jM1kZoXSk0AqtjNkFpoKgrUKfsbBjeAaRFoyfs9frx--ueF3zOQxp8TtvUuTF90mbPx-xSR4Fn8n03jHnnx9R3nLYthZC6Nx77zIddjs4Td12_dQcp0Ei_3Dk7iW4z0Ox_p-z59uapuStWD4tlM18VXiKMBVHUwRBUQoUWJCgRkSzW3kALpIUJrqo9qkoRCRFM3ZK0VkVDMWitQU7ZxV83EdH64_DA5f3a1gIkWvkDAAJUIw</recordid><startdate>20210101</startdate><enddate>20210101</enddate><creator>Zavrtanik, Vitjan</creator><creator>Kristan, Matej</creator><creator>Skocaj, Danijel</creator><general>IEEE</general><scope>6IE</scope><scope>6IH</scope><scope>CBEJK</scope><scope>RIE</scope><scope>RIO</scope></search><sort><creationdate>20210101</creationdate><title>DRÆM - A discriminatively trained reconstruction embedding for surface anomaly detection</title><author>Zavrtanik, Vitjan ; Kristan, Matej ; Skocaj, Danijel</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c320t-eef6d8e0514db03041f2e927c80b0e618da57c2454ee11d87be3994f8efd66603</frbrgroupid><rsrctype>conference_proceedings</rsrctype><prefilter>conference_proceedings</prefilter><language>eng</language><creationdate>2021</creationdate><topic>Computational modeling</topic><topic>Computer vision</topic><topic>Feature extraction</topic><topic>Image reconstruction</topic><topic>Location awareness</topic><topic>Recognition and classification</topic><topic>Surface reconstruction</topic><topic>Task analysis</topic><topic>Transfer/Low-shot/Semi/Unsupervised Learning</topic><topic>Vision applications and systems</topic><toplevel>online_resources</toplevel><creatorcontrib>Zavrtanik, Vitjan</creatorcontrib><creatorcontrib>Kristan, Matej</creatorcontrib><creatorcontrib>Skocaj, Danijel</creatorcontrib><collection>IEEE Electronic Library (IEL) Conference Proceedings</collection><collection>IEEE Proceedings Order Plan (POP) 1998-present by volume</collection><collection>IEEE Xplore All Conference Proceedings</collection><collection>IEEE/IET Electronic Library</collection><collection>IEEE Proceedings Order Plans (POP) 1998-present</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Zavrtanik, Vitjan</au><au>Kristan, Matej</au><au>Skocaj, Danijel</au><format>book</format><genre>proceeding</genre><ristype>CONF</ristype><atitle>DRÆM - A discriminatively trained reconstruction embedding for surface anomaly detection</atitle><btitle>2021 IEEE/CVF International Conference on Computer Vision (ICCV)</btitle><stitle>ICCV</stitle><date>2021-01-01</date><risdate>2021</risdate><spage>8310</spage><epage>8319</epage><pages>8310-8319</pages><eissn>2380-7504</eissn><eisbn>9781665428125</eisbn><eisbn>1665428120</eisbn><coden>IEEPAD</coden><abstract>Visual surface anomaly detection aims to detect local image regions that 
significantly deviate from normal appearance. Recent surface anomaly detection methods rely on generative models to accurately reconstruct the normal areas and to fail on anomalies. These methods are trained only on anomaly-free images, and often require hand-crafted post-processing steps to localize the anomalies, which prohibits optimizing the feature extraction for maximal detection capability. In addition to reconstructive approach, we cast surface anomaly detection primarily as a discriminative problem and propose a discriminatively trained reconstruction anomaly embedding model (DRÆM). The proposed method learns a joint representation of an anomalous image and its anomaly-free reconstruction, while simultaneously learning a decision boundary between normal and anomalous examples. The method enables direct anomaly localization without the need for additional complicated post-processing of the network output and can be trained using simple and general anomaly simulations. On the challenging MVTec anomaly detection dataset, DRÆM outperforms the current state-of-the-art unsupervised methods by a large margin and even de-livers detection performance close to the fully-supervised methods on the widely used DAGM surface-defect detection dataset, while substantially outperforming them in localization accuracy. Code at github.com/VitjanZ/DRAEM.</abstract><pub>IEEE</pub><doi>10.1109/ICCV48922.2021.00822</doi><tpages>10</tpages></addata></record> |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2380-7504 |
ispartof | 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.8310-8319 |
issn | 2380-7504 |
language | eng |
recordid | cdi_ieee_primary_9710329 |
source | IEEE Xplore All Conference Series |
subjects | Computational modeling; Computer vision; Feature extraction; Image reconstruction; Location awareness; Recognition and classification; Surface reconstruction; Task analysis; Transfer/Low-shot/Semi/Unsupervised Learning; Vision applications and systems |
title | DRÆM - A discriminatively trained reconstruction embedding for surface anomaly detection |