A Novel Magnification-Robust Network with Sparse Self-Attention for Micro-expression Recognition
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Existing works on spontaneous Micro-Expression Recognition (MER) tend to encode Micro-Expression (ME) movements to obtain more discriminative features. However, the low intensity of MEs makes motion capture extremely difficult, and the widely adopted unified-magnification strategy is prone to noise and lacks flexibility. To this end, this paper provides a new insight into encoding ME motion and tackling magnification noise. Specifically, we reconstruct a new sequence via magnification techniques to make subtle ME movements more distinguishable. Afterward, Sparse Self-Attention (SSA) rectifies self-attention with Locality Sensitive Hashing (LSH), partitioning the feature space into several hash buckets of related features. For every query feature, only keys in the same bucket enter the attention computation. The resulting sparsity in the attention matrix prevents the network from attending to features stemming from less-informative magnification degrees, which can be regarded as noise, while retaining the sequence-modelling capability of standard self-attention. Extensive experiments on three public MER databases demonstrate the superiority of our method over state-of-the-art approaches.
ISSN: 2831-7475
DOI: 10.1109/ICPR56361.2022.9956629
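The bucketed attention described in the summary can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: bucket assignment uses a generic random-projection (angular) LSH, the function name `sparse_self_attention` and the parameter `n_buckets` are hypothetical, and for clarity the full score matrix is computed and then masked, rather than gathering per-bucket chunks as an efficient implementation would.

```python
import numpy as np

def sparse_self_attention(q, k, v, n_buckets=4, seed=0):
    """Toy sparse self-attention: each query attends only to keys that
    fall into the same LSH bucket, so the attention matrix is sparse."""
    rng = np.random.default_rng(seed)
    seq_len, d = q.shape

    # Angular LSH (random-projection hashing): one rotation matrix is
    # shared by queries and keys, so similar vectors tend to collide
    # in the same bucket.
    rot = rng.normal(size=(d, n_buckets // 2))

    def bucket(x):
        proj = x @ rot                                  # (seq_len, n_buckets/2)
        return np.argmax(np.concatenate([proj, -proj], axis=-1), axis=-1)

    q_buckets, k_buckets = bucket(q), bucket(k)

    # Dense scores, then mask out every query-key pair whose buckets
    # differ; masked entries get zero weight after the softmax.
    scores = (q @ k.T) / np.sqrt(d)                     # (seq_len, seq_len)
    same_bucket = q_buckets[:, None] == k_buckets[None, :]
    np.fill_diagonal(same_bucket, True)  # let each query keep its own key
    scores = np.where(same_bucket, scores, -np.inf)

    # Row-wise softmax over the surviving (same-bucket) keys only.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Example: 16 frame-level features of a magnified sequence, dimension 32.
rng = np.random.default_rng(1)
x = rng.normal(size=(16, 32))
out = sparse_self_attention(x, x, x, n_buckets=4)
print(out.shape)  # (16, 32)
```

In this sketch, a key hashed to a different bucket than a query contributes nothing to that query's output, which mirrors the summary's claim that features from less-informative magnification degrees are excluded from attention while within-bucket sequence modelling is preserved.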