
Decoding lip language using triboelectric sensors with deep learning

Lip language is an effective method of voice-off communication in daily life for people with vocal cord lesions and laryngeal and lingual injuries without occupying the hands. Collection and interpretation of lip language is challenging. Here, we propose the concept of a novel lip-language decoding system with self-powered, low-cost, contact and flexible triboelectric sensors and a well-trained dilated recurrent neural network model based on prototype learning. The structural principle and electrical properties of the flexible sensors are measured and analysed. Lip motions for selected vowels, words, phrases, silent speech and voice speech are collected and compared. The prototype learning model reaches a test accuracy of 94.5% in training 20 classes with 100 samples each. The applications, such as identity recognition to unlock a gate, directional control of a toy car and lip-motion to speech conversion, work well and demonstrate great feasibility and potential. Our work presents a promising way to help people lacking a voice live a convenient life with barrier-free communication and boost their happiness, enriches the diversity of lip-language translation systems and will have potential value in many applications.

Bibliographic Details
Published in: Nature Communications, 2022-03, Vol.13 (1), p.1401-1401, Article 1401
Main Authors: Lu, Yijia, Tian, Han, Cheng, Jia, Zhu, Fei, Liu, Bin, Wei, Shanshan, Ji, Linhong, Wang, Zhong Lin
Format: Article
Language:English
Subjects:
Citations: Items that this one cites
Items that cite this one
Online Access: Get full text
container_end_page 1401
container_issue 1
container_start_page 1401
container_title Nature communications
container_volume 13
creator Lu, Yijia
Tian, Han
Cheng, Jia
Zhu, Fei
Liu, Bin
Wei, Shanshan
Ji, Linhong
Wang, Zhong Lin
description Lip language is an effective method of voice-off communication in daily life for people with vocal cord lesions and laryngeal and lingual injuries without occupying the hands. Collection and interpretation of lip language is challenging. Here, we propose the concept of a novel lip-language decoding system with self-powered, low-cost, contact and flexible triboelectric sensors and a well-trained dilated recurrent neural network model based on prototype learning. The structural principle and electrical properties of the flexible sensors are measured and analysed. Lip motions for selected vowels, words, phrases, silent speech and voice speech are collected and compared. The prototype learning model reaches a test accuracy of 94.5% in training 20 classes with 100 samples each. The applications, such as identity recognition to unlock a gate, directional control of a toy car and lip-motion to speech conversion, work well and demonstrate great feasibility and potential. Our work presents a promising way to help people lacking a voice live a convenient life with barrier-free communication and boost their happiness, enriches the diversity of lip-language translation systems and will have potential value in many applications. Lip-language decoding systems are a promising technology to help people lacking a voice live a convenient life with barrier-free communication. Here, authors propose a concept of such system integrating self-powered triboelectric sensors and a well-trained dilated RNN model based on prototype learning.
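The abstract describes classifying lip-motion signals with a dilated recurrent neural network trained via prototype learning. As a rough illustration only — not the authors' implementation — the nearest-prototype decision rule at the heart of prototype learning can be sketched over fixed-length embeddings. The function names and synthetic data below are invented for the example, and the dilated-RNN encoder that would produce real embeddings from triboelectric sensor signals is omitted:

```python
import numpy as np

def fit_prototypes(embeddings, labels):
    # One prototype per class: the mean of that class's embedding vectors.
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(embeddings, classes, protos):
    # Nearest-prototype rule: assign each sample to the class whose
    # prototype is closest in Euclidean distance.
    dists = np.linalg.norm(embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy demo with synthetic stand-ins for encoder outputs: two well-separated
# clusters of 8-dimensional "embeddings", 100 training samples per class
# (mirroring the paper's 100-samples-per-class setup, but with fake data).
rng = np.random.default_rng(0)
train_x = np.concatenate([rng.normal(0.0, 0.1, (100, 8)),
                          rng.normal(1.0, 0.1, (100, 8))])
train_y = np.array([0] * 100 + [1] * 100)

classes, protos = fit_prototypes(train_x, train_y)

test_x = np.concatenate([rng.normal(0.0, 0.1, (5, 8)),
                         rng.normal(1.0, 0.1, (5, 8))])
pred = predict(test_x, classes, protos)
```

At inference time the rule needs only one distance computation per class, which is part of what makes prototype-based classifiers attractive for small embedded systems like a wearable sensor decoder.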
doi_str_mv 10.1038/s41467-022-29083-0
format article
publisher London: Nature Publishing Group UK
publication_date 2022-03-17
pmid 35301313
identifier ISSN: 2041-1723
ispartof Nature communications, 2022-03, Vol.13 (1), p.1401-1401, Article 1401
issn 2041-1723
2041-1723
language eng
recordid cdi_doaj_primary_oai_doaj_org_article_d3def1cefe9a4024848a2b1d497391fa
source Publicly Available Content Database; Nature; Linguistics and Language Behavior Abstracts (LLBA); PubMed Central; Coronavirus Research Database; Springer Nature - nature.com Journals - Fully Open Access
subjects 639/301/1005
639/766/1130
Communication
Deep Learning
Directional control
Electric contacts
Electrical properties
Flexible components
Humanities and Social Sciences
Humans
Language
Language translation
Laryngology
Lip
Lips
multidisciplinary
Neural networks
Prototypes
Recurrent neural networks
Science
Science (multidisciplinary)
Self concept
Sensors
Speech
Voice
Voice communication
title Decoding lip language using triboelectric sensors with deep learning