Anomaly Detection via Learnable Pretext Task
Format: Conference Proceeding
Language: English
Summary: Deep anomaly detection has become an appealing solution in many fields over the years and has seen many recent developments. One of the most promising avenues is the use of pretext tasks, which have greatly improved one-class anomaly detection. However, this approach is limited by the lack of anomalous samples and carries an important inductive bias. Indeed, the discrimination power of pretext tasks could be further improved by incorporating a small set of anomalies, which in practice is often available. To this end, we introduce the concept of learnable pretext tasks, where the pretext task itself is learned to succeed on normal samples while failing on anomalies. To our knowledge, this is the first work to explore this direction. By applying the learnable task to a thin-plate transform recognition task, our method helps discriminate harder edge-case anomalies and greatly improves anomaly detection. It outperforms the state of the art with up to 49% relative error reduction, measured by AUROC, on various anomaly detection problems, including one-vs-all and face presentation attack detection.
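The core idea in the summary — learn a pretext task so that a model succeeds at it on normal samples but fails on the few known anomalies, then use the pretext-task loss as the anomaly score — can be sketched on toy data. This is a minimal illustrative sketch, not the paper's method: it substitutes a simple sign-flip transform for the thin-plate transform, and all data, names, and hyperparameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: "normal" samples cluster away from the origin;
# a small set of known anomalies sits near it.
normals = rng.normal(2.0, 0.3, size=(200, 4))
anomalies = rng.normal(0.0, 0.3, size=(20, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretext_batch(x, rng):
    # Pretext transform (stand-in for the thin-plate transform):
    # randomly flip the sign of each sample; the task is to recover the sign.
    s = rng.choice([-1.0, 1.0], size=len(x))
    return x * s[:, None], (s > 0).astype(float)

# Linear pretext classifier, trained so the task succeeds on normals
# while failing on the known anomalies (the "learnable" part).
w = np.zeros(4)
lam, lr = 0.1, 0.5
for _ in range(200):
    xn, yn = pretext_batch(normals, rng)
    xa, ya = pretext_batch(anomalies, rng)
    pn, pa = sigmoid(xn @ w), sigmoid(xa @ w)
    # Gradient of: CE(normals) - lam * CE(anomalies)
    gn = xn.T @ (pn - yn) / len(xn)
    ga = xa.T @ (pa - ya) / len(xa)
    w -= lr * (gn - lam * ga)

def score(x):
    # Anomaly score = pretext-task loss; by symmetry of the sign flip,
    # the loss is the same for both transforms of a sample.
    return -np.log(sigmoid(x @ w) + 1e-9)
```

On this toy setup the classifier recovers the flip almost perfectly for normal samples (low pretext loss) but stays near chance on samples around the origin (high loss), so thresholding `score` separates the two groups.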
ISSN: 2831-7475
DOI: 10.1109/ICPR56361.2022.9956420