CMNet: Contrastive Magnification Network for Micro-Expression Recognition
Main Authors: | Wei, Mengting; Jiang, Xingxun; Zheng, Wenming; Zong, Yuan; Lu, Cheng; Liu, Jiateng |
---|---|
Format: | Conference Proceeding |
Language: | English |
Field | Value |
---|---|
container_end_page | 127 |
container_issue | 1 |
container_start_page | 119 |
container_title | Proceedings of the AAAI Conference on Artificial Intelligence |
container_volume | 37 |
creator | Wei, Mengting; Jiang, Xingxun; Zheng, Wenming; Zong, Yuan; Lu, Cheng; Liu, Jiateng |
description | Micro-Expression Recognition (MER) is challenging because micro-expression (ME) motion is too weak to distinguish. This hurdle can be tackled by magnifying the intensity so that movements can be captured more accurately. However, existing magnification strategies tend to treat features of facial images, which carry more than just intensity clues, as intensity features, leaving the intensity representation lacking in credibility. In addition, the intensity variation over time, which is crucial for encoding movements, is also neglected. To this end, we provide a reliable scheme that extracts intensity clues while accounting for their variation over time. First, we devise an Intensity Distillation (ID) loss that acquires intensity clues by contrasting the difference between frames, given that frames of the same video differ only in intensity. Then, the intensity clues are calibrated to follow the trend of the original video. Specifically, since the original video lacks ground-truth intensity annotations, we build the intensity tendency by assigning each intensity vacancy an uncertain value, which guides the extracted intensity clues to converge towards this trend rather than towards fixed values. A Wilcoxon rank-sum test (Wrst) method is employed to implement the calibration. Experimental results on three public ME databases, i.e., CASME II, SAMM, and SMIC-HS, validate the superiority of our method over state-of-the-art approaches. |
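The description above outlines two mechanisms: a contrastive loss over frame differences and a rank-sum-based calibration. Below is a minimal, hypothetical Python sketch of both ideas; the function names (`id_loss`, `follows_trend`), the feature shapes, and the InfoNCE-style formulation are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two ideas in the abstract: (1) an
# "Intensity Distillation"-style contrastive loss over frame
# differences, and (2) a Wilcoxon rank-sum check for calibrating
# extracted intensity clues against an assumed monotone trend.
# Names, shapes, and the InfoNCE formulation are assumptions.
import torch
import torch.nn.functional as F
from scipy.stats import ranksums


def id_loss(feat_onset: torch.Tensor,
            feat_apex: torch.Tensor,
            feat_other: torch.Tensor,
            temperature: float = 0.1) -> torch.Tensor:
    """Contrast the onset->apex difference (same video, so the
    difference should carry only intensity) against a difference
    taken across videos (which also carries identity/appearance).

    All inputs are (B, D) feature vectors from a shared encoder.
    """
    pos = F.normalize(feat_apex - feat_onset, dim=-1)   # intensity clue
    neg = F.normalize(feat_other - feat_onset, dim=-1)  # cross-video diff
    anchor = F.normalize(feat_apex, dim=-1)
    # InfoNCE-style contrast: the anchor should align with the
    # in-video difference and repel the cross-video one.
    logits = torch.stack([(anchor * pos).sum(-1),
                          (anchor * neg).sum(-1)], dim=1) / temperature
    target = torch.zeros(logits.size(0), dtype=torch.long,
                         device=logits.device)
    return F.cross_entropy(logits, target)


def follows_trend(intensities, reference, alpha: float = 0.05) -> bool:
    """Two-sided Wilcoxon rank-sum test: are the extracted per-frame
    intensities distributionally consistent with the assumed trend?"""
    _, p_value = ranksums(intensities, reference)
    return p_value > alpha  # fail to reject => consistent with trend
```

The design choice mirrors the abstract's premise: within one video the appearance is constant, so the onset-to-apex feature difference should isolate intensity, while the rank-sum test checks that per-frame intensities are statistically consistent with the assumed trend rather than being forced onto fixed targets.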
doi_str_mv | 10.1609/aaai.v37i1.25083 |
format | conference_proceeding |
identifier | ISSN: 2159-5399 |
ispartof | Proceedings of the AAAI Conference on Artificial Intelligence, 2023, Vol.37 (1), p.119-127 |
issn | 2159-5399 (ISSN); 2374-3468 (EISSN) |
language | eng |
source | Freely Accessible Journals |
title | CMNet: Contrastive Magnification Network for Micro-Expression Recognition |