
Hidden dimensions of the data: PCA vs autoencoders

Principal component analysis (PCA) has been a commonly used unsupervised learning method with broad applications in both descriptive and inferential analytics. It is widely used for representation learning to extract key features from a dataset and visualize them in a lower dimensional space. With more applications of neural network-based methods, autoencoders (AEs) have gained popularity for dimensionality reduction tasks. In this paper, we explore the intriguing relationship between PCA and AEs and demonstrate, through some examples, how these two approaches yield similar results in the case of the so-called linear AEs (LAEs). This study provides insights into the evolving landscape of unsupervised learning and highlights the relevance of both PCA and AEs in modern data analysis.
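The article's central observation, that a linear autoencoder (no biases, no activations) recovers the same subspace as PCA, can be sketched numerically. The snippet below is an illustrative sketch, not code from the paper: it fits a linear AE by alternating least squares (rather than gradient training) on synthetic data, then checks that the projection onto the decoder's column space matches the projection onto the top-k principal subspace. All data shapes and parameters here are made-up assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 5, 2

# Synthetic data: two strong latent directions plus small isotropic noise
latent = rng.normal(size=(n, k)) * np.array([3.0, 2.0])
B = np.linalg.qr(rng.normal(size=(d, k)))[0].T   # orthonormal (k, d) loadings
X = latent @ B + 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)                              # center the data, as PCA assumes

# PCA: projection onto the span of the top-k right singular vectors
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                                    # (d, k) principal directions
P_pca = Vk @ Vk.T

# Linear AE x -> x @ W_e @ W_d, fitted by alternating least squares
# on the squared reconstruction error ||X @ W_e @ W_d - X||^2
W_d = rng.normal(size=(k, d))                    # random decoder init
for _ in range(100):
    W_e = np.linalg.pinv(W_d)                    # best encoder for this decoder
    codes = X @ W_e                              # (n, k) latent codes
    W_d = np.linalg.lstsq(codes, X, rcond=None)[0]  # best decoder for these codes

# Projection onto the subspace spanned by the learned decoder directions
D = W_d.T                                        # (d, k)
P_ae = D @ np.linalg.pinv(D)

print(np.abs(P_ae - P_pca).max())                # tiny: the LAE found the PCA subspace
```

Note that the comparison is between projection matrices rather than weights: the learned decoder columns are generally neither orthonormal nor aligned with individual principal components, and only the spanned subspace coincides with PCA's.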

Bibliographic Details
Published in: Quality Engineering, 2023-10, Vol. 35 (4), p. 741-750
Main Authors: Cacciarelli, Davide; Kulahci, Murat
Format: Article
Language: English
Subjects: Data analysis; Machine learning; Neural networks; Principal components analysis; Quality Technology and Logistics; Unsupervised learning
DOI: 10.1080/08982112.2023.2231064
Publisher: Taylor &amp; Francis Ltd (Milwaukee)
Rights: 2023 Taylor &amp; Francis Group, LLC
ISSN: 0898-2112
EISSN: 1532-4222