
DeLiGAN : Generative Adversarial Networks for Diverse and Limited Data

Bibliographic Details
Published in: arXiv.org, 2017-06
Main Authors: Swaminathan Gurumurthy; Sarvadevabhatla, Ravi Kiran; Venkatesh Babu Radhakrishnan
Format: Article
Language: English
description A class of recent approaches for generating images, called Generative Adversarial Networks (GAN), have been used to generate impressively realistic images of objects, bedrooms, handwritten digits and a variety of other image modalities. However, typical GAN-based approaches require large amounts of training data to capture the diversity across the image modality. In this paper, we propose DeLiGAN -- a novel GAN-based architecture for diverse and limited training data scenarios. In our approach, we reparameterize the latent generative space as a mixture model and learn the mixture model's parameters along with those of GAN. This seemingly simple modification to the GAN framework is surprisingly effective and results in models which enable diversity in generated samples although trained with limited data. In our work, we show that DeLiGAN can generate images of handwritten digits, objects and hand-drawn sketches, all using limited amounts of data. To quantitatively characterize intra-class diversity of generated samples, we also introduce a modified version of "inception-score", a measure which has been found to correlate well with human assessment of generated samples.
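The abstract's core idea — reparameterizing the latent space as a mixture model whose parameters are learned with the GAN — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the component count `N`, latent dimension, and initialization values are assumptions chosen for the example, and the parameters are merely initialized here rather than trained.

```python
import numpy as np

# DeLiGAN-style latent sampling (per the abstract): draw the generator's
# input z from a mixture of Gaussians whose means and standard deviations
# are trainable. N and latent_dim below are illustrative assumptions.
N, latent_dim = 50, 100
rng = np.random.default_rng(0)

# Trainable mixture parameters (only initialized in this sketch).
mu = rng.uniform(-1.0, 1.0, size=(N, latent_dim))
sigma = np.full((N, latent_dim), 0.2)

def sample_latent(batch_size):
    """Reparameterized draw: pick a component uniformly at random, then
    z = mu_i + sigma_i * eps, so gradients can flow into mu and sigma."""
    idx = rng.integers(0, N, size=batch_size)
    eps = rng.standard_normal((batch_size, latent_dim))
    return mu[idx] + sigma[idx] * eps

z = sample_latent(8)
print(z.shape)  # (8, 100)
```

The reparameterization trick (adding scaled noise to a learned mean rather than sampling directly) is what makes the mixture parameters trainable by backpropagation alongside the generator.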
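The abstract also introduces a modified inception-score to quantify intra-class diversity. A hedged sketch of one plausible reading of that measure follows: it compares the classifier distribution p(y|x_i) of each sample against p(y|x_j) of other samples, rather than against the marginal as in the standard inception-score. The exact formulation is an assumption inferred from the abstract's description, and `probs` stands in for softmax outputs of a pretrained classifier.

```python
import numpy as np

def modified_inception_score(probs, eps=1e-12):
    """Pairwise-KL variant of the inception-score (assumed formulation):
    exp of the mean KL(p(y|x_i) || p(y|x_j)) over all sample pairs.
    probs: (n_samples, n_classes) array of classifier softmax outputs."""
    p = np.clip(probs, eps, 1.0)
    n = p.shape[0]
    kl_sum = 0.0
    for i in range(n):
        for j in range(n):
            kl_sum += np.sum(p[i] * (np.log(p[i]) - np.log(p[j])))
    return float(np.exp(kl_sum / (n * n)))

# Identical class distributions give exp(0) = 1; diverse, confident
# predictions across samples push the score higher.
uniformish = np.full((4, 10), 0.1)
print(modified_inception_score(uniformish))  # 1.0
```

Under this reading, a generator that collapses to near-identical outputs scores close to 1, while one producing varied, confidently classified samples scores higher — matching the abstract's claim that the measure tracks diversity.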
identifier EISSN: 2331-8422
recordid cdi_proquest_journals_2075879695
source Publicly Available Content (ProQuest)
subjects Bedrooms
Digits
Generative adversarial networks
Handwriting
Parameter modification
Sketches
Training