
Combining Head Pose and Eye Location Information for Gaze Estimation


Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2012-02, Vol. 21 (2), p. 802-815
Main Authors: Valenti, R., Sebe, N., Gevers, T.
Format: Article
Language:English
Description: Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
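The abstract describes a cycle in which the head-pose rotation is used to pose-normalize the eye regions before eye-center localization, and the two cues are then fused into a single gaze direction. The paper's actual fusion uses transformation matrices from its head tracker and eye locator; the sketch below is only a simplified illustration of that idea, and all function names, the angle convention, and the blending weight `alpha` are hypothetical choices, not the authors' implementation.

```python
import numpy as np

def rotation_from_pose(yaw, pitch, roll):
    """Build a 3x3 head rotation matrix from pose angles in radians
    (one common Z-Y-X convention; the paper's tracker may differ)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rz @ Ry @ Rx

def normalize_eye_region(points, R):
    """Pose-normalize 3D eye-region points (N x 3) by undoing the head
    rotation, so the eye locator sees an approximately frontal patch."""
    return (np.linalg.inv(R) @ points.T).T

def combined_gaze(head_dir, eye_offset, alpha=0.5):
    """Fuse the head-pose direction with the eye-in-head offset into a
    unit gaze vector; alpha is an illustrative blending weight."""
    g = np.asarray(head_dir) + alpha * np.asarray(eye_offset)
    return g / np.linalg.norm(g)
```

With zero head rotation and a centered eye, the fused gaze reduces to the frontal head direction; a nonzero `eye_offset` tilts the estimate toward where the eyes point, which is the intuition behind combining the two cues.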
DOI: 10.1109/TIP.2011.2162740
Publisher: IEEE, New York, NY
PMID: 21788191
CODEN: IIPRE4
ISSN: 1057-7149
EISSN: 1941-0042
Source: IEEE Electronic Library (IEL) Journals
Subjects:
Accuracy
Applied sciences
Cameras
Constraints
Estimation
Exact sciences and technology
Experimentation
Eye - anatomy & histology
Eye center location
Eye Movements - physiology
Eyes
gaze estimation
Head
Head Movements - physiology
head pose estimation
Humans
Image processing
Image Processing, Computer-Assisted - methods
Information, signal and communications theory
Localization
Locators
Position (location)
Posture - physiology
Signal processing
Solid modeling
Studies
Telecommunications and information theory
Three dimensional displays
Transformations
Visual
Visualization