
Recognizing action units for facial expression analysis

Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an automatic face analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper-face AUs, and ten lower-face AUs) are recognized, whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams.
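A minimal sketch (not the authors' implementation) of the pipeline the abstract describes: tracked facial features yield parametric descriptions, and those parameters are then mapped to FACS action units, reported alone or in combination. All parameter names, units, and thresholds below are hypothetical placeholders, and hand-set rules stand in for the classification step, which the abstract leaves unspecified.

from dataclasses import dataclass

@dataclass
class UpperFaceParams:
    """Parametric description of tracked upper-face features (hypothetical
    units, expressed as normalized displacements from the neutral frame)."""
    brow_height: float   # vertical brow displacement (+ raised, - lowered)
    eye_opening: float   # eyelid aperture change (+ widened)
    furrow_depth: float  # transient furrow intensity between the brows, 0..1

def recognize_upper_face_aus(p: UpperFaceParams) -> set[str]:
    """Map feature parameters to upper-face AUs (AU1, AU2, AU4, AU5, ...).
    Fixed thresholds replace the paper's learned mapping, but the interface
    is the same: parameters in, a set of co-occurring AUs out."""
    aus: set[str] = set()
    if p.brow_height > 0.2:
        aus.update({"AU1", "AU2"})   # inner and outer brow raised
    if p.brow_height < -0.2 and p.furrow_depth > 0.5:
        aus.add("AU4")               # brows lowered and drawn together
    if p.eye_opening > 0.3:
        aus.add("AU5")               # upper eyelid raised
    return aus or {"neutral"}

# Example: raised brows with widened eyes -> AU1, AU2, and AU5 in combination
print(recognize_upper_face_aus(
    UpperFaceParams(brow_height=0.3, eye_opening=0.4, furrow_depth=0.1)))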

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence, 2001-02, Vol. 23 (2), p. 97-115
Main Authors: Tian, Y.-l., Kanade, T., Cohn, J.F.
Format: Article
Language: English
Subjects: Eyes; Face recognition; Facial; Facial features; Furrows; Gold; Humans; Image analysis; Image sequence analysis; Mathematical models; Mouth; Prototypes; Recognition; Tracking; Transient analysis
DOI: 10.1109/34.908962
ISSN: 0162-8828
EISSN: 1939-3539
Online Access: Get full text