Surface and underwater human pose recognition based on temporal 3D point cloud deep learning
Published in: Scientific Reports, 2024-01, Vol. 14 (1), p. 55, Article 55
Main Authors: Wang, Haijian; Wu, Zhenyu; Zhao, Xuemei
Format: Article
Language: English
container_end_page | 55 |
container_issue | 1 |
container_start_page | 55 |
container_title | Scientific reports |
container_volume | 14 |
creator | Wang, Haijian; Wu, Zhenyu; Zhao, Xuemei |
description | Airborne surface and underwater human pose recognition are crucial for various safety and surveillance applications, including the detection of individuals in distress or drowning situations. However, airborne optical cameras struggle to achieve simultaneous imaging of the surface and underwater because of limitations imposed by visible-light wavelengths. To address this problem, this study proposes the use of light detection and ranging (LiDAR) to simultaneously detect humans on the surface and underwater, whereby human poses are recognized using a neural network designed for irregular data. First, a temporal point-cloud dataset was constructed for surface and underwater human pose recognition to enhance the recognition of comparable movements. Subsequently, radius outlier removal (ROR) and statistical outlier removal (SOR) were employed to alleviate the impact of noise and outliers in the constructed dataset. Finally, different combinations of secondary sampling methods and sample sizes were tested to improve recognition accuracy using PointNet++. The experimental results show that the highest recognition accuracy reached 97.5012%, demonstrating the effectiveness of the proposed human pose detection and recognition method. |
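As a rough illustration of the processing chain the abstract outlines (LiDAR frame → ROR and SOR denoising → secondary sampling to a fixed point count → PointNet++ classification), the sketch below shows one plausible implementation of the preprocessing stages. It is not the authors' code: it assumes the Open3D library for the two outlier-removal steps, uses a plain NumPy farthest-point sampler for the secondary sampling, and every parameter value (neighbour counts, radius, standard-deviation ratio, the 1024-point sample size) is a hypothetical placeholder rather than a setting reported in the paper.

```python
# Minimal sketch (not the authors' implementation) of the preprocessing steps
# named in the abstract: radius outlier removal (ROR), statistical outlier
# removal (SOR), and secondary sampling of each LiDAR frame to a fixed point
# count before classification. All parameter values are illustrative.
import numpy as np
import open3d as o3d


def farthest_point_sample(xyz: np.ndarray, n: int) -> np.ndarray:
    """Greedy farthest-point sampling of n points from an (N, 3) array."""
    chosen = np.zeros(n, dtype=int)
    dist = np.full(len(xyz), np.inf)
    chosen[0] = np.random.randint(len(xyz))
    for i in range(1, n):
        # Distance of every point to the nearest already-chosen point.
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[chosen[i - 1]], axis=1))
        chosen[i] = int(np.argmax(dist))
    return xyz[chosen]


def denoise_and_sample(points: np.ndarray, n_points: int = 1024) -> np.ndarray:
    """Clean one (N, 3) LiDAR frame and return a fixed-size (n_points, 3) sample."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)

    # ROR: drop points with fewer than nb_points neighbours inside `radius`.
    pcd, _ = pcd.remove_radius_outlier(nb_points=8, radius=0.2)

    # SOR: drop points whose mean neighbour distance deviates from the global
    # average by more than std_ratio standard deviations.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    xyz = np.asarray(pcd.points)

    # Secondary sampling to a fixed size: farthest-point sampling when enough
    # points survive denoising, random resampling with replacement otherwise.
    if len(xyz) >= n_points:
        return farthest_point_sample(xyz, n_points)
    idx = np.random.choice(len(xyz), n_points, replace=True)
    return xyz[idx]
```

A fixed-size point set of this kind is the usual input format for a PointNet++-style classifier; the accuracy reported in the paper depends on the authors' own choice of sampling method and sample size, which this sketch does not reproduce.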
doi_str_mv | 10.1038/s41598-023-50658-4 |
format | article |
publisher | Nature Publishing Group UK (London) |
pmid | 38167475 |
date | 2024-01-02 |
rights | The Author(s) 2024; published under a CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/) |
fulltext | fulltext |
identifier | ISSN: 2045-2322 |
ispartof | Scientific reports, 2024-01, Vol.14 (1), p.55-55, Article 55 |
issn | 2045-2322 2045-2322 |
language | eng |
source | ProQuest - Publicly Available Content Database; PubMed Central; Free Full-Text Journals in Chemistry; Springer Nature - nature.com Journals - Fully Open Access |
subjects | 639/166/987; 639/705/117; Accuracy; Aluminum; Automation; Datasets; Deep Learning; Drowning; Drownings; Humanities and Social Sciences; Humans; Lidar; Light; Monitoring systems; Movement; multidisciplinary; Neural networks; Neural Networks, Computer; Sample size; Sampling methods; Science; Science (multidisciplinary); Sensors; Sparsity; Underwater; Wavelengths |
title | Surface and underwater human pose recognition based on temporal 3D point cloud deep learning |