Semantic Relationships Guided Representation Learning for Facial Action Unit Recognition
Facial action unit (AU) recognition is a crucial task for facial expressions analysis and has attracted extensive attention in the field of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate the semantic relationship propagation between AUs in a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of structured knowledge-graph and integrate a Gated Graph Neural Network (GGNN) in a multi-scale CNN framework to propagate node information through the graph for generating enhanced AU representation. As the learned feature involves both the appearance characteristics and the AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on the two public benchmarks demonstrate that our method outperforms the previous work and achieves state of the art performance.
Main Authors: | Li, Guanbin; Zhu, Xin; Zeng, Yirui; Wang, Qing; Lin, Liang |
---|---|
Format: | Conference Proceeding |
Language: | English |
container_end_page | 8601 |
---|---|
container_issue | 1 |
container_start_page | 8594 |
container_title | Proceedings of the ... AAAI Conference on Artificial Intelligence |
container_volume | 33 |
creator | Li, Guanbin; Zhu, Xin; Zeng, Yirui; Wang, Qing; Lin, Liang |
description | Facial action unit (AU) recognition is a crucial task for facial expressions analysis and has attracted extensive attention in the field of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate the semantic relationship propagation between AUs in a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of structured knowledge-graph and integrate a Gated Graph Neural Network (GGNN) in a multi-scale CNN framework to propagate node information through the graph for generating enhanced AU representation. As the learned feature involves both the appearance characteristics and the AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on the two public benchmarks demonstrate that our method outperforms the previous work and achieves state of the art performance. |
doi_str_mv | 10.1609/aaai.v33i01.33018594 |
format | conference_proceeding |
startdate | 2019-07-17 |
identifier | ISSN: 2159-5399 |
ispartof | Proceedings of the ... AAAI Conference on Artificial Intelligence, 2019, Vol.33 (1), p.8594-8601 |
issn | 2159-5399; 2374-3468 |
language | eng |
source | Freely Accessible Journals |
title | Semantic Relationships Guided Representation Learning for Facial Action Unit Recognition |
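The abstract's core mechanism is gated propagation of AU node states over a co-occurrence graph (a GGNN embedded in a multi-scale CNN). The following is a minimal NumPy sketch of one such GRU-style propagation scheme; the node count, feature dimension, weight names, and random relation matrix are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_step(h, A, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated (GRU-style) propagation step over the AU relation graph.

    h: (n_nodes, d) per-AU feature states; A: (n_nodes, n_nodes) relation weights.
    """
    m = A @ h                        # aggregate messages from related AU nodes
    z = sigmoid(m @ Wz + h @ Uz)     # update gate
    r = sigmoid(m @ Wr + h @ Ur)     # reset gate
    h_tilde = np.tanh(m @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde # gated blend of old and candidate states

rng = np.random.default_rng(0)
n, d = 12, 16                        # e.g. 12 AUs with 16-dim features (assumed sizes)
h = rng.standard_normal((n, d))      # stand-in for CNN regional features
A = rng.random((n, n))
A /= A.sum(axis=1, keepdims=True)    # row-normalized co-occurrence weights
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
for _ in range(3):                   # a few propagation steps through the graph
    h = ggnn_step(h, A, *Ws)
print(h.shape)                       # (12, 16)
```

In the paper's setting the initial states would come from the multi-scale CNN's regional features and `A` from the analyzed symbiosis/mutual-exclusion statistics; here both are random placeholders.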