
Cutting Pose Prediction from Point Clouds

The challenge of getting machines to understand and interact with natural objects is encountered in important areas such as medicine, agriculture, and, in our case, slaughterhouse automation. Recent breakthroughs have enabled the application of Deep Neural Networks (DNNs) directly to point clouds, an efficient and natural representation of 3D objects. The potential of these methods has mostly been demonstrated for classification and segmentation tasks involving rigid man-made objects. We present a method, based on the successful PointNet architecture, for learning to regress correct tool placement from human demonstrations, using virtual reality. Our method is applied to a challenging slaughterhouse cutting task, which requires an understanding of the local geometry, including the shape, size, and orientation. We propose an intermediate five-Degree-of-Freedom (DoF) cutting plane representation, a point and a normal vector, which eases the demonstration and learning process. A live experiment is conducted in order to unveil issues and begin to understand the required accuracy. Eleven cuts are rated by an expert, with 8/11 rated as acceptable. The error on the test set is subsequently reduced through the addition of more training data and improvements to the DNN. The result is a reduction in the average translation error from 1.5 cm to 0.8 cm and in the orientation error from 4.59° to 4.48°. The method's generalization capacity is assessed on a similar task from the slaughterhouse and on the very different public LINEMOD dataset for object pose estimation across viewpoints. In both cases, the method shows promising results. Code, datasets, and supplementary materials are available at https://github.com/markpp/PoseFromPointClouds.
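
The abstract's central technical idea is compact enough to sketch: a cutting pose is represented not as a full 6-DoF pose but as a plane given by a 3D point and a unit normal (five degrees of freedom, since rotation about the normal is left free), and prediction quality is reported as a translation error between points and an angular error between normals. The snippet below is a minimal illustration of that representation and those two metrics under these stated assumptions; it is not code from the authors' repository, and all function names are hypothetical.

```python
# Illustrative sketch only (not from the authors' repository): the 5-DoF
# cutting-plane representation described in the abstract -- a 3D point on the
# plane plus a unit normal -- and the two error measures the abstract reports
# (translation error between points, orientation error between normals).
import numpy as np

def make_cutting_plane(point, normal):
    """Return a cutting plane as (point, unit normal).

    The plane point contributes 3 DoF and the unit normal 2 DoF (its length
    is fixed), giving five degrees of freedom in total; rotation about the
    normal is deliberately left unconstrained.
    """
    normal = np.asarray(normal, dtype=float)
    return np.asarray(point, dtype=float), normal / np.linalg.norm(normal)

def translation_error(p_pred, p_true):
    """Euclidean distance between predicted and ground-truth plane points."""
    return float(np.linalg.norm(p_pred - p_true))

def orientation_error_deg(n_pred, n_true):
    """Angle in degrees between predicted and ground-truth unit normals."""
    cos_angle = np.clip(np.dot(n_pred, n_true), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))

# Example: compare a predicted plane against a demonstrated (ground-truth) one.
p_true, n_true = make_cutting_plane([10.0, 2.0, 5.0], [0.0, 0.0, 1.0])
p_pred, n_pred = make_cutting_plane([10.5, 2.2, 5.3], [0.05, 0.0, 1.0])
print(translation_error(p_pred, p_true))      # ~0.62 (cm, if inputs are in cm)
print(orientation_error_deg(n_pred, n_true))  # ~2.9 degrees
```

Constraining only five degrees of freedom, with rotation about the normal left free, is what the abstract credits with easing both the demonstration and the learning process.
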
Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2020-03-11, Vol. 20 (6), p. 1563
Main Authors: Philipsen, Mark P.; Moeslund, Thomas B.
Format: Article
Language: English
Subjects: automation; meat production; point cloud; pointnet; pose prediction
Online Access: https://doi.org/10.3390/s20061563
ISSN/EISSN: 1424-8220
DOI: 10.3390/s20061563
PMID: 32168888
Publisher: MDPI, Switzerland
Source: Publicly Available Content Database; PubMed Central
Code and Data: https://github.com/markpp/PoseFromPointClouds