Low-CP-Rank Tensor Completion via Practical Regularization
Published in: | Journal of scientific computing 2022-04, Vol.91 (1), p.18, Article 18 |
---|---|
Main Authors: | Jiang, Jiahua; Sanogo, Fatoumata; Navasca, Carmeliza |
Format: | Article |
Language: | English |
container_issue | 1 |
container_start_page | 18 |
container_title | Journal of scientific computing |
container_volume | 91 |
creator | Jiang, Jiahua; Sanogo, Fatoumata; Navasca, Carmeliza |
description | Dimension reduction comprises analytical methods for reconstructing high-order tensors whose intrinsic rank is much smaller than the dimension of the ambient measurement space. Typically, this is the case for most real-world datasets in signals, images and machine learning. The CANDECOMP/PARAFAC (CP, aka Canonical Polyadic) tensor completion is a widely used approach to find a low-rank approximation for a given tensor. In the tensor model (Sanogo and Navasca in 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp 845–849, https://doi.org/10.1109/ACSSC.2018.8645405, 2018), a sparse regularization minimization problem via the ℓ1 norm was formulated with an appropriate choice of the regularization parameter. The choice of the regularization parameter is important for the approximation accuracy. Due to the emergence of massive data, one faces an onerous computational burden when computing the regularization parameter via classical approaches (Gazzola and Sabaté Landman in GAMM-Mitteilungen 43:e202000017, 2020) such as weighted generalized cross validation (WGCV) (Chung et al. in Electr Trans Numer Anal 28:2008, 2008), the unbiased predictive risk estimator (Stein in Ann Stat 9:1135–1151, 1981; Vogel in Computational Methods for Inverse Problems, 2002), and the discrepancy principle (Morozov in Doklady Akademii Nauk, Russian Academy of Sciences, pp 510–512, 1966). In order to improve the efficiency of choosing the regularization parameter and to leverage the accuracy of the CP tensor, we propose a new algorithm for tensor completion that embeds the flexible hybrid method (Gazzola in Flexible Krylov methods for ℓp regularization) into the framework of the CP tensor. The main benefits of this method are that it incorporates the regularization automatically and efficiently, and that it improves reconstruction accuracy and algorithmic robustness. Numerical examples from image reconstruction and model order reduction demonstrate the efficacy of the proposed algorithm. An illustrative formulation and a small code sketch of this setup follow at the end of this record. |
doi_str_mv | 10.1007/s10915-022-01789-9 |
format | article |
fulltext | fulltext |
identifier | ISSN: 0885-7474 |
ispartof | Journal of scientific computing, 2022-04, Vol.91 (1), p.18, Article 18 |
issn | 0885-7474 (ISSN); 1573-7691 (EISSN) |
language | eng |
recordid | cdi_proquest_journals_2918315513 |
source | Springer Nature |
subjects | Accuracy; Algorithms; Approximation; Computational Mathematics and Numerical Analysis; Computer science; Data science; Decomposition; Image reconstruction; Inverse problems; Linear algebra; Machine learning; Mathematical and Computational Engineering; Mathematical and Computational Physics; Mathematical models; Mathematics; Mathematics and Statistics; Methods; Missing data; Model reduction; Optimization techniques; Parameters; Regularization; Robustness (mathematics); Tensors; Theoretical |
title | Low-CP-Rank Tensor Completion via Practical Regularization |
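The abstract above refers to a sparse, ℓ1-regularized formulation of CP tensor completion. As a point of reference only (an illustrative form, not necessarily the exact formulation used by the authors), such a problem is often written with the ℓ1 penalty on the vector σ of CP component weights, so that a sparse σ promotes a low CP rank:

```latex
\min_{\sigma,\,A,B,C}\;
\frac{1}{2}\left\| P_{\Omega}\!\left(\mathcal{X}
  - \sum_{r=1}^{R} \sigma_r \, a_r \circ b_r \circ c_r\right)\right\|_F^2
  \;+\; \lambda \,\|\sigma\|_1
```

Here X is the partially observed tensor, P_Ω zeroes out the unobserved entries, a_r, b_r, c_r are the columns of the factor matrices A, B, C, ∘ denotes the vector outer product, R is an upper bound on the CP rank, and λ > 0 is the regularization parameter whose automatic selection (via a flexible hybrid Krylov method) is the focus of the paper.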
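For concreteness, the following is a minimal, self-contained sketch of one way to attack a formulation of this kind: alternating least-squares updates of the factor matrices on an imputed tensor, plus a single proximal-gradient (soft-thresholding) step on the weight vector per sweep. It is not the authors' algorithm (in particular, the paper selects the regularization parameter automatically inside a flexible hybrid Krylov solver, whereas here `lam` is a fixed input), and the function names `cp_complete_l1`, `khatri_rao`, etc. are hypothetical; only NumPy is assumed.

```python
# Minimal illustrative sketch of l1-regularized CP tensor completion
# (NOT the algorithm proposed in the paper): ALS factor updates on an
# imputed tensor plus one soft-thresholding step on the component weights.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product: (I x R), (J x R) -> (I*J x R)."""
    I, R = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def cp_reconstruct(w, A, B, C):
    """Rebuild sum_r w_r * a_r o b_r o c_r as an (I, J, K) tensor."""
    return np.einsum('r,ir,jr,kr->ijk', w, A, B, C)

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cp_complete_l1(X, mask, rank, lam=0.1, n_sweeps=50, seed=0):
    """X: tensor with arbitrary values at unobserved entries; mask: True where observed."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    w = np.ones(rank)
    for _ in range(n_sweeps):
        # EM-style imputation: keep observed data, fill the rest with the model
        T = np.where(mask, X, cp_reconstruct(w, A, B, C))
        # ALS updates (weights folded into the Khatri-Rao design), then renormalize columns
        A = T.reshape(I, J * K) @ np.linalg.pinv((khatri_rao(B, C) * w).T)
        A /= np.linalg.norm(A, axis=0) + 1e-12
        B = T.transpose(1, 0, 2).reshape(J, I * K) @ np.linalg.pinv((khatri_rao(A, C) * w).T)
        B /= np.linalg.norm(B, axis=0) + 1e-12
        C = T.transpose(2, 0, 1).reshape(K, I * J) @ np.linalg.pinv((khatri_rao(A, B) * w).T)
        C /= np.linalg.norm(C, axis=0) + 1e-12
        # one proximal-gradient step on w: promotes sparsity, i.e. a low CP rank
        G = khatri_rao(khatri_rao(A, B), C)          # columns are vec(a_r o b_r o c_r)
        resid = T.reshape(-1) - G @ w
        step = 1.0 / (np.linalg.norm(G, 2) ** 2 + 1e-12)
        w = soft_threshold(w + step * (G.T @ resid), step * lam)
    return w, A, B, C

# Tiny usage example: recover a rank-2 tensor with roughly 60% of entries observed.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w0 = np.array([3.0, 1.5])
    X_true = cp_reconstruct(w0, rng.standard_normal((6, 2)),
                            rng.standard_normal((7, 2)), rng.standard_normal((8, 2)))
    mask = rng.random(X_true.shape) < 0.6
    w, A, B, C = cp_complete_l1(X_true, mask, rank=4, lam=0.05)
    err = np.linalg.norm((cp_reconstruct(w, A, B, C) - X_true)[~mask])
    print("error on unobserved entries:", err)
```

The sketch over-parameterizes the rank (rank=4 for a rank-2 target) and relies on the ℓ1 penalty to shrink the superfluous component weights toward zero, which is the role the abstract attributes to the sparse regularization; the choice of `lam` here is arbitrary rather than computed by the hybrid method the paper proposes.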