Autoencoder and Its Various Variants
The concept of the autoencoder was originally proposed by LeCun in 1987, and early work on autoencoders focused on dimensionality reduction or feature learning. Recently, with the growing popularity of deep learning research, the autoencoder has been brought to the forefront of generative modeling. Many variants of the autoencoder have been proposed by different researchers and successfully applied in many fields, such as computer vision, speech recognition, and natural language processing. In this paper, we present a comprehensive survey of the autoencoder and its various variants. Furthermore, we present the lineage of the surveyed autoencoders. This paper can provide valuable help to researchers engaged in related work.
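As a concrete illustration of the basic idea covered by the surveyed paper (not code from the paper itself), the sketch below shows a minimal fully connected autoencoder in PyTorch: an encoder compresses the input to a low-dimensional code (dimensionality reduction / feature learning) and a decoder reconstructs the input from that code. The layer sizes, input dimension, and training settings are illustrative assumptions, and random data stands in for a real dataset.

```python
# Minimal autoencoder sketch (illustrative only; assumes 784-dim inputs and a 32-dim code).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: maps the input to a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder: reconstructs the input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)        # stand-in batch; a real run would use actual data
for step in range(100):        # train by minimizing reconstruction error
    recon, _ = model(x)
    loss = loss_fn(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```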
Main Authors: | Zhai, Junhai; Zhang, Sufang; Chen, Junfen; He, Qiang |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | autoencoder; Computational modeling; Data models; decoder; Decoding; deep learning; feature learning; Gallium nitride; Generative adversarial networks; generative model; Mathematical model; Training |
cited_by | cdi_FETCH-LOGICAL-c220t-be81b20f00d2355d0fe32dab774d24a678fcb828b7839db6844013854d1d5d713 |
---|---|
cites | |
container_end_page | 419 |
container_issue | |
container_start_page | 415 |
container_title | |
container_volume | |
creator | Zhai, Junhai; Zhang, Sufang; Chen, Junfen; He, Qiang |
description | The concept of the autoencoder was originally proposed by LeCun in 1987, and early work on autoencoders focused on dimensionality reduction or feature learning. Recently, with the growing popularity of deep learning research, the autoencoder has been brought to the forefront of generative modeling. Many variants of the autoencoder have been proposed by different researchers and successfully applied in many fields, such as computer vision, speech recognition, and natural language processing. In this paper, we present a comprehensive survey of the autoencoder and its various variants. Furthermore, we present the lineage of the surveyed autoencoders. This paper can provide valuable help to researchers engaged in related work. |
doi_str_mv | 10.1109/SMC.2018.00080 |
format | conference_proceeding |
fullrecord | Publisher: IEEE; Date: 2018-10; EISBN: 1538666502, 9781538666500; CODEN: IEEPAD; IEEE Xplore record: https://ieeexplore.ieee.org/document/8616075 |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2577-1655 |
ispartof | 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2018, p.415-419 |
issn | 2577-1655 |
language | eng |
recordid | cdi_ieee_primary_8616075 |
source | IEEE Xplore All Conference Series |
subjects | autoencoder; Computational modeling; Data models; decoder; Decoding; deep learning; feature learning; Gallium nitride; Generative adversarial networks; generative model; Mathematical model; Training |
title | Autoencoder and Its Various Variants |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T13%3A56%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Autoencoder%20and%20Its%20Various%20Variants&rft.btitle=2018%20IEEE%20International%20Conference%20on%20Systems,%20Man,%20and%20Cybernetics%20(SMC)&rft.au=Zhai,%20Junhai&rft.date=2018-10&rft.spage=415&rft.epage=419&rft.pages=415-419&rft.eissn=2577-1655&rft.coden=IEEPAD&rft_id=info:doi/10.1109/SMC.2018.00080&rft.eisbn=1538666502&rft.eisbn_list=9781538666500&rft_dat=%3Cieee_CHZPO%3E8616075%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c220t-be81b20f00d2355d0fe32dab774d24a678fcb828b7839db6844013854d1d5d713%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=8616075&rfr_iscdi=true |