
Virtual Garment Fitting Through Parsing and Context-Aware Generative Adversarial Networks with Discriminator Group

Owing to the rapid growth of the e-commerce industry, image-based virtual try-on has become a popular research topic in recent years. Although multiple approaches have been introduced to realize this concept, there remains ample scope for research and improvement. In this regard, Generative Adversarial Networks (GANs) offer a framework with considerable potential for further development. Nonetheless, the generated images reported in the literature often exhibit blurred edges between semantic regions, which diminishes the credibility of the results. Furthermore, generated try-on images may mistakenly retain the original shape of the upper-body clothing worn by the model, such as its length and tightness around the torso, rather than adapting to the shape of the target clothing. In this paper, we propose a more comprehensive architecture to overcome these limitations of GAN-based approaches, with the following contributions. First, we introduce a new parsing and context generator that takes into account the warped binary mask of the geometrically matched image of the target clothing; this generator also produces human parsing images that correspond to the generated try-on images. Second, we design a novel discriminator group that focuses specifically on judging whether the generated image is a reasonable representation of the specific clothing being worn. According to the experimental results, our method achieves better synthesis quality and remedies the common challenges encountered when using GANs for virtual try-on.
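
Read as a high-level recipe, the abstract describes a generator that is conditioned on the warped binary mask of the geometrically matched target clothing and emits both a try-on image and a human parsing map, plus a group of discriminators that judge the try-on image together with the target clothing. The sketch below is a minimal PyTorch illustration of that structure only; the module names, channel counts, and layer choices are assumptions made for exposition and are not taken from the paper, whose actual networks, inputs, and losses are certainly richer.

```python
# Minimal, illustrative sketch (PyTorch) -- NOT the authors' implementation.
# It only mirrors the two ideas named in the abstract: a generator conditioned
# on the warped clothing mask that also outputs a human parsing map, and a
# group of discriminators that judge the try-on image against the target
# clothing. All names, channel counts, and layer choices are assumptions.
import torch
import torch.nn as nn


class ParsingContextGenerator(nn.Module):
    """Toy encoder-decoder: person image (3 ch) + warped clothing mask (1 ch)
    -> try-on image (3 ch) + per-pixel parsing logits (n_parse classes)."""

    def __init__(self, n_parse: int = 20):
        super().__init__()
        self.n_parse = n_parse
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3 + n_parse, 4, stride=2, padding=1),
        )

    def forward(self, person, warped_mask):
        x = torch.cat([person, warped_mask], dim=1)   # condition on the warped mask
        out = self.decoder(self.encoder(x))
        tryon = torch.tanh(out[:, :3])                # generated try-on image
        parsing_logits = out[:, 3:]                   # corresponding parsing map
        return tryon, parsing_logits


class ClothingConditionedDiscriminator(nn.Module):
    """One group member: patch-level real/fake scores for a
    (try-on image, target clothing) pair."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, tryon, clothing):
        return self.net(torch.cat([tryon, clothing], dim=1))


class DiscriminatorGroup(nn.Module):
    """Several clothing-conditioned discriminators; each returns its own score map."""

    def __init__(self, n_members: int = 3):
        super().__init__()
        self.members = nn.ModuleList(
            [ClothingConditionedDiscriminator() for _ in range(n_members)]
        )

    def forward(self, tryon, clothing):
        return [d(tryon, clothing) for d in self.members]


if __name__ == "__main__":
    g = ParsingContextGenerator()
    d_group = DiscriminatorGroup()
    person = torch.randn(1, 3, 128, 128)
    warped_mask = torch.rand(1, 1, 128, 128)      # warped binary mask of target clothing
    clothing = torch.randn(1, 3, 128, 128)        # target clothing image
    tryon, parsing = g(person, warped_mask)
    scores = d_group(tryon, clothing)
    print(tryon.shape, parsing.shape, [s.shape for s in scores])
```

In a training loop, each member of such a group would typically contribute its own adversarial loss term that the generator must satisfy simultaneously; this is one common way to combine multiple discriminators, and the paper's exact formulation is not reproduced here.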

Bibliographic Details
Main Authors: Su, Wei-Hong; Chen, Sze-Ann; Chin, Chen-I; Hsiao, Hsu-Feng
Format: Conference Proceeding
Language: English
Subjects: Image synthesis; Industries; Morphology; Semantics; Shape; Torso
Online Access: Request full text
DOI: 10.1109/APSIPAASC58517.2023.10317305
EISSN: 2640-0103
EISBN: 9798350300673
Published in: 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), IEEE, 2023, pp. 1732-1738
Source: IEEE Xplore All Conference Series