ChildPredictor: A Child Face Prediction Framework With Disentangled Learning
The appearances of children are inherited from their parents, which makes it feasible to predict them. Predicting realistic children's faces may help address many social problems, such as age-invariant face recognition, kinship verification, and missing child identification. The task can be regarded as image-to-image translation. Existing approaches usually assume that domain information in image-to-image translation can be interpreted as "style", i.e., that image content and style can be separated. However, such a separation is improper for child face prediction, because the facial contours of children and parents are not the same. To address this issue, we propose a new disentangled learning strategy for children's face prediction. We assume that children's faces are determined by genetic factors (compact family features, e.g., face contour), external factors (facial attributes irrelevant to prediction, such as moustaches and glasses), and variety factors (individual properties of each child). On this basis, we formulate prediction as a mapping from parents' genetic factors to children's genetic factors, and disentangle them from the external and variety factors. To obtain accurate genetic factors and perform the mapping, we propose the ChildPredictor framework. It transfers human faces to genetic factors with encoders and back with generators, and learns the relationship between the genetic factors of parents and children through a mapping function. To ensure the generated faces are realistic, we collect a large Family Face Database (FF-Database) to train ChildPredictor and evaluate it on the FF-Database validation set. Experimental results demonstrate that ChildPredictor is superior to other well-known image-to-image translation methods in predicting realistic and diverse child faces. Implementation code is available at https://github.com/zhaoyuzhi/ChildPredictor .
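The abstract describes the framework only at a high level. The sketch below illustrates that pipeline in a minimal form: encoders map each parent's face to a compact genetic factor, a mapping network predicts the child's genetic factor from the two parent factors, and a generator decodes it back to a face together with a sampled variety factor. All class names, layer sizes, the toy 32×32 resolution, and the variety-factor dimension are assumptions made for illustration; this is not the authors' released implementation (see the GitHub link above for that).

```python
# Minimal, illustrative sketch of the disentangled mapping described in the
# abstract. Hypothetical module names and sizes; not the official code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Face image -> genetic factor (a compact latent vector)."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Mapping(nn.Module):
    """(father factor, mother factor) -> predicted child genetic factor."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z_father, z_mother):
        return self.net(torch.cat([z_father, z_mother], dim=1))

class Generator(nn.Module):
    """Genetic factor + variety factor -> face image."""
    def __init__(self, latent_dim: int = 128, variety_dim: int = 16):
        super().__init__()
        self.fc = nn.Linear(latent_dim + variety_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z_child, z_variety):
        h = self.fc(torch.cat([z_child, z_variety], dim=1)).view(-1, 64, 8, 8)
        return self.net(h)

# Toy forward pass: predict a 32x32 child face from 32x32 parent faces.
encoder, mapping, generator = Encoder(), Mapping(), Generator()
father = torch.randn(1, 3, 32, 32)
mother = torch.randn(1, 3, 32, 32)
z_child = mapping(encoder(father), encoder(mother))
variety = torch.randn(1, 16)       # variety factor, sampled per child
child_face = generator(z_child, variety)
print(child_face.shape)            # torch.Size([1, 3, 32, 32])
```

In the paper's formulation, external factors (e.g., glasses, moustaches) are additionally disentangled so that the mapping operates only on genetic factors; adversarial and reconstruction losses would be needed to train such a model, which this sketch omits.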
Published in: | IEEE Transactions on Multimedia, 2023, Vol. 25, pp. 3737-3752 |
---|---|
Main Authors: | Zhao, Yuzhi; Po, Lai-Man; Wang, Xuehui; Yan, Qiong; Shen, Wei; Zhang, Yujia; Liu, Wei; Wong, Chun-Kit; Pang, Chiu-Sing; Ou, Weifeng; Yu, Wing-Yin; Liu, Buhua |
Format: | Article |
Language: | English |
Subjects: | Child face prediction; Children; Children & youth; Coders; disentangled learning; Face recognition; Faces; Families & family life; generative adversarial network; Generative adversarial networks; Genetics; Glass; image-to-image translation; Learning; Mapping; Parents; Parents & parenting; Separation; Skin; Training |
DOI: | 10.1109/TMM.2022.3164785 |
ISSN: | 1520-9210 (EISSN: 1941-0077) |
Online Access: | IEEE Xplore (get full text); implementation code: https://github.com/zhaoyuzhi/ChildPredictor |