Appearance-Motion United Auto-Encoder Framework for Video Anomaly Detection
| Published in: | IEEE Transactions on Circuits and Systems II: Express Briefs, 2022-05, Vol. 69 (5), p. 2498-2502 |
|---|---|
| Format: | Article |
| Language: | English |
| Summary: | The key to video anomaly detection is understanding the appearance and motion differences between normal and abnormal events. However, previous works either considered the characteristics of appearance or motion in isolation or treated them without distinction, making the model fail to exploit the unique characteristics of both. In this brief, we propose an appearance-motion united auto-encoder (AMAE) framework to jointly learn the prototypical spatial and temporal patterns of normal events. The AMAE framework includes a spatial auto-encoder to learn appearance normality, a temporal auto-encoder to learn motion normality, and a channel attention-based spatial-temporal decoder to fuse the spatial-temporal features. The experimental results on standard benchmarks demonstrate the validity of the united appearance-motion normality learning. The proposed AMAE framework outperforms the state-of-the-art methods with AUCs of 97.4%, 88.2%, and 73.6% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets, respectively. |
| ISSN: | 1549-7747, 1558-3791 |
| DOI: | 10.1109/TCSII.2022.3161049 |
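
The abstract describes a three-part architecture: a spatial auto-encoder for appearance normality, a temporal auto-encoder for motion normality, and a channel attention-based decoder that fuses the two feature streams. Below is a minimal PyTorch sketch of that layout; the module names, channel sizes, squeeze-and-excitation style attention, and clip handling are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of an appearance-motion united auto-encoder (AMAE-style).
# All layer choices here are assumptions for illustration only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def deconv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # reweight channels of the fused features

class AMAE(nn.Module):
    """Spatial AE (appearance) + temporal AE (motion) + attention-fused decoder."""
    def __init__(self, frame_ch=3, clip_len=4, base=32):
        super().__init__()
        # Appearance branch: encodes the last observed frame.
        self.spatial_enc = nn.Sequential(conv_block(frame_ch, base),
                                         conv_block(base, 2 * base))
        # Motion branch: encodes the stacked clip as a simple proxy for motion.
        self.temporal_enc = nn.Sequential(conv_block(frame_ch * clip_len, base),
                                          conv_block(base, 2 * base))
        # Channel attention-based spatial-temporal decoder.
        self.attention = ChannelAttention(4 * base)
        self.decoder = nn.Sequential(deconv_block(4 * base, base),
                                     nn.ConvTranspose2d(base, frame_ch, 4, 2, 1),
                                     nn.Tanh())

    def forward(self, clip):
        # clip: (B, T, C, H, W) snippet of normal events; target is its last frame.
        b, t, c, h, w = clip.shape
        app = self.spatial_enc(clip[:, -1])                     # appearance features
        mot = self.temporal_enc(clip.reshape(b, t * c, h, w))   # motion features
        fused = self.attention(torch.cat([app, mot], dim=1))    # attention-based fusion
        return self.decoder(fused)                              # reconstructed frame

# Usage sketch: score a batch of clips by per-sample reconstruction error.
model = AMAE()
clip = torch.randn(2, 4, 3, 64, 64)
recon = model(clip)
score = torch.mean((recon - clip[:, -1]) ** 2, dim=(1, 2, 3))
```

Since the auto-encoders are trained only on normal events, frames of abnormal events are expected to reconstruct poorly at test time, so the per-sample reconstruction error can serve as the anomaly score.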