
Multi-Temporal Pixel-Based Compositing for Cloud Removal Based on Cloud Masks Developed Using Classification Techniques

Cloud is a serious problem that affects the quality of remote-sensing (RS) images. Existing cloud removal techniques suffer from notable limitations, such as being specific to certain data types, cloud conditions, and spatial extents, as well as requiring auxiliary data, which hampers their generalizability and flexibility. To address the issue, we propose a maximum-value compositing approach by generating cloud masks. We acquired 432 daily MOD09GA L2 MODIS imageries covering a vast region with persistent cloud cover and various climates and land-cover types. Labeled datasets for cloud, land, and no-data were collected from selected daily imageries. Subsequently, we trained and evaluated RF, SVM, and U-Net models to choose the best models. Accordingly, SVM and U-Net were chosen and employed to classify all the daily imageries. Then, the classified imageries were converted to two sets of mask layers to mask clouds and no-data pixels in the corresponding daily images by setting the masked pixels' values to −0.999999. After masking, we employed the maximum-value technique to generate two sets of 16-day composite products, MaxComp-1 and MaxComp-2, corresponding to SVM and U-Net-derived cloud masks, respectively. Finally, we assessed the quality of our composite products by comparing them with the reference MOD13A1 16-day composite product. Based on the land-cover classification accuracy, our products yielded a significantly higher accuracy (5–28%) than the reference MODIS product across three classifiers (RF, SVM, and U-Net), indicating the quality of our products and the effectiveness of our techniques. In particular, MaxComp-1 yielded the best results, which further implies the superiority of SVM for cloud masking. In addition, our products appear to be more radiometrically and spectrally consistent and less noisy than MOD13A1, implying that our approach is more efficient in removing shadows and noises/artifacts. Our method yields high-quality products that are vital for investigating large regions with persistent clouds and studies requiring time-series data. Moreover, the proposed techniques can be adopted for higher-resolution RS imageries, regardless of the spatial extent, data volume, and type of clouds.
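
The abstract describes a masking-then-compositing pipeline: classify each daily image into cloud, land, and no-data; set cloud and no-data pixels to a very low sentinel value (−0.999999); then take the per-pixel maximum over each 16-day window so that any clear observation wins. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the authors' code: it assumes single-band float arrays and boolean masks (in the paper the masks come from SVM and U-Net classifications of MOD09GA imagery), and all function names and toy data are invented.

```python
# Hypothetical sketch of the cloud-masking + maximum-value compositing step
# described in the abstract; array shapes, names, and data are illustrative only.
import numpy as np

NODATA_SENTINEL = -0.999999  # value assigned to cloud / no-data pixels before compositing


def apply_cloud_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Set pixels flagged as cloud or no-data (mask == True) to the sentinel value."""
    masked = image.copy()
    masked[mask] = NODATA_SENTINEL
    return masked


def max_value_composite(daily_images: list[np.ndarray],
                        daily_masks: list[np.ndarray]) -> np.ndarray:
    """Per-pixel maximum over a stack of masked daily images (e.g., a 16-day window).

    Because cloud/no-data pixels carry a very low sentinel value, any clear
    observation within the window wins the per-pixel maximum; pixels that are
    masked on every day keep the sentinel value in the output.
    """
    stack = np.stack([apply_cloud_mask(img, msk)
                      for img, msk in zip(daily_images, daily_masks)], axis=0)
    return stack.max(axis=0)


if __name__ == "__main__":
    # Toy example: three "daily" single-band 4x4 tiles with random cloud masks.
    rng = np.random.default_rng(0)
    days = [rng.random((4, 4)).astype(np.float32) for _ in range(3)]
    masks = [rng.random((4, 4)) > 0.7 for _ in range(3)]  # True = cloud / no-data
    composite = max_value_composite(days, masks)
    print(composite)
```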

Bibliographic Details
Published in: Remote sensing (Basel, Switzerland), 2024-10, Vol.16 (19), p.3665
Main Authors: Adugna, Tesfaye, Xu, Wenbo, Fan, Jinlong, Luo, Xin, Jia, Haitao
Format: Article
Language: English
Subjects: Algorithms; Classification; Cloud cover; cloud mask; cloud removal; Clouds; Deep learning; Image acquisition; Image quality; Information management; Land acquisition; Land cover; machine learning; Masking; Masks; Methods; MODIS; pixel-based compositing; Pixels; Radiation; Remote sensing; segmentation; Sensors; Spatial data
ISSN: 2072-4292
EISSN: 2072-4292
DOI: 10.3390/rs16193665
Publisher: Basel: MDPI AG
Online Access: Get full text