Accuracy Comparison Across Face Recognition Algorithms: Where Are We on Measuring Race Bias?
Previous generations of face recognition algorithms differ in accuracy for images of different races (race bias). Here, we present the possible underlying factors (data-driven and scenario modeling) and methodological considerations for assessing race bias in algorithms. We discuss data-driven factors (e.g., image quality, image population statistics, and algorithm architecture), and scenario modeling factors that consider the role of the "user" of the algorithm (e.g., threshold decisions and demographic constraints). To illustrate how these issues apply, we present data from four face recognition algorithms (a previous-generation algorithm and three deep convolutional neural networks, DCNNs) for East Asian and Caucasian faces. First, dataset difficulty affected both overall recognition accuracy and race bias, such that race bias increased with item difficulty. Second, for all four algorithms, the degree of bias varied depending on the identification decision threshold. To achieve equal false accept rates (FARs), East Asian faces required higher identification thresholds than Caucasian faces, for all algorithms. Third, demographic constraints on the formulation of the distributions used in the test impacted estimates of algorithm accuracy. We conclude that race bias needs to be measured for individual applications and we provide a checklist for measuring this bias in face recognition algorithms.
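The threshold result summarized in the abstract lends itself to a small numerical illustration. The sketch below is not code from the paper: it assumes only that impostor (different-identity) similarity scores are available separately for each demographic group, and the group names, score distributions, and FAR target are invented for the example. It shows how to find the group-specific threshold that meets a target false accept rate, and why a single global threshold leaves the groups operating at unequal FARs.

```python
import numpy as np

def threshold_for_far(impostor_scores, target_far):
    """Smallest decision threshold at which the false accept rate (the share of
    impostor similarity scores at or above the threshold) is at most target_far."""
    scores = np.sort(np.asarray(impostor_scores))
    n = len(scores)
    k = int(np.ceil((1.0 - target_far) * n))  # cut point: ~target_far of scores lie at or above it
    return scores[min(max(k, 0), n - 1)]

def far_at_threshold(impostor_scores, threshold):
    """False accept rate of a group at a fixed decision threshold."""
    return float(np.mean(np.asarray(impostor_scores) >= threshold))

# Hypothetical impostor-pair similarity scores for two groups (illustrative only).
rng = np.random.default_rng(0)
impostor = {
    "group_A": rng.normal(0.30, 0.10, 10_000),
    "group_B": rng.normal(0.36, 0.10, 10_000),  # shifted upward: more look-alike impostor pairs
}

target_far = 1e-3
group_thresholds = {g: threshold_for_far(s, target_far) for g, s in impostor.items()}
print(f"Group-specific thresholds for FAR = {target_far}: {group_thresholds}")

# Fixing one global threshold (tuned on group_A) leaves group_B operating at a
# much higher false accept rate -- one measurable form of race bias.
global_t = group_thresholds["group_A"]
for group, scores in impostor.items():
    print(f"FAR for {group} at global threshold {global_t:.3f}: "
          f"{far_at_threshold(scores, global_t):.5f}")
```

Under these invented distributions, the group whose impostor scores sit higher needs a stricter (higher) threshold to reach the same FAR, which is the pattern the paper reports for East Asian versus Caucasian faces.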
Published in: | IEEE Transactions on Biometrics, Behavior, and Identity Science, 2021-01, Vol. 3 (1), p. 101-111 |
---|---|
Main Authors: | Cavazos, Jacqueline G.; Phillips, P. Jonathon; Castillo, Carlos D.; O'Toole, Alice J. |
Format: | Article |
Language: | English |
Subjects: | Face recognition; race bias; deep convolutional neural networks; the other-race effect; Accuracy; Algorithms; Bias; Demographics; Image quality |
container_end_page | 111 |
container_issue | 1 |
container_start_page | 101 |
container_title | IEEE transactions on biometrics, behavior, and identity science |
container_volume | 3 |
creator | Cavazos, Jacqueline G.; Phillips, P. Jonathon; Castillo, Carlos D.; O'Toole, Alice J. |
description | Previous generations of face recognition algorithms differ in accuracy for images of different races (race bias). Here, we present the possible underlying factors (data-driven and scenario modeling) and methodological considerations for assessing race bias in algorithms. We discuss data-driven factors (e.g., image quality, image population statistics, and algorithm architecture), and scenario modeling factors that consider the role of the "user" of the algorithm (e.g., threshold decisions and demographic constraints). To illustrate how these issues apply, we present data from four face recognition algorithms (a previous-generation algorithm and three deep convolutional neural networks, DCNNs) for East Asian and Caucasian faces. First, dataset difficulty affected both overall recognition accuracy and race bias, such that race bias increased with item difficulty. Second, for all four algorithms, the degree of bias varied depending on the identification decision threshold. To achieve equal false accept rates (FARs), East Asian faces required higher identification thresholds than Caucasian faces, for all algorithms. Third, demographic constraints on the formulation of the distributions used in the test impacted estimates of algorithm accuracy. We conclude that race bias needs to be measured for individual applications and we provide a checklist for measuring this bias in face recognition algorithms. |
doi_str_mv | 10.1109/TBIOM.2020.3027269 |
format | article |
identifier | ISSN: 2637-6407 |
ispartof | IEEE transactions on biometrics, behavior, and identity science, 2021-01, Vol.3 (1), p.101-111 |
issn | 2637-6407 |
language | eng |
source | IEEE Xplore (Online service) |
subjects | Accuracy; Algorithms; Artificial neural networks; Bias; Computational modeling; Convolutional neural networks; deep convolutional neural networks; Demographics; Face recognition; Face recognition algorithm; Faces; Image quality; Modelling; Object recognition; Population statistics; Prediction algorithms; Principal component analysis; Race; race bias; the other-race effect |
title | Accuracy Comparison Across Face Recognition Algorithms: Where Are We on Measuring Race Bias? |