LiDAR-guided object search and detection in Subterranean Environments

Detecting objects of interest, such as human survivors, safety equipment, and structure access points, is critical to any search-and-rescue operation. Robots deployed for such time-sensitive efforts rely on their onboard sensors to perform their designated tasks. However, as disaster response operations are predominantly conducted under perceptually degraded conditions, commonly utilized sensors such as visual cameras and LiDARs suffer from performance degradation. In response, this work presents a method that utilizes the complementary nature of vision and depth sensors to leverage multi-modal information to aid object detection at longer distances. In particular, depth and intensity values from sparse LiDAR returns are used to generate proposals for objects present in the environment. These proposals are then utilized by a Pan-Tilt-Zoom (PTZ) camera system to perform a directed search, adjusting its pose and zoom level to detect and classify objects in difficult environments. The proposed work has been thoroughly verified using an ANYmal quadruped robot in underground settings and on datasets collected during the DARPA Subterranean Challenge finals.
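
The abstract describes a two-stage pipeline: sparse LiDAR returns yield object proposals, and the PTZ camera is steered and zoomed toward each proposal for close-range detection. The Python sketch below is a minimal illustration of that idea, not the authors' implementation; the intensity threshold, the use of DBSCAN for clustering, and the zoom law are all assumptions made for the example.

# Minimal sketch of the proposal-then-point idea from the abstract.
# All thresholds and the zoom law are hypothetical, not from the paper.
import math

import numpy as np
from sklearn.cluster import DBSCAN


def lidar_proposals(points, intensities, intensity_thresh=0.6, eps=0.5, min_pts=5):
    """Keep high-intensity returns (reflective gear stands out in LiDAR
    intensity) and cluster them in 3D; each cluster centroid becomes one
    object proposal in the sensor frame."""
    candidates = points[intensities > intensity_thresh]   # (N, 3) xyz
    if len(candidates) < min_pts:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(candidates)
    return [candidates[labels == k].mean(axis=0)          # one centroid per cluster
            for k in set(labels) if k != -1]              # -1 = DBSCAN noise


def ptz_command(centroid, target_fov_deg=5.0, wide_fov_deg=60.0, max_zoom=30.0):
    """Turn a proposal centroid into pan/tilt angles plus a zoom factor that
    narrows the field of view as range grows, so distant proposals still
    fill enough pixels for the detector (hypothetical zoom model)."""
    x, y, z = centroid
    rng = math.sqrt(x * x + y * y + z * z)
    pan = math.degrees(math.atan2(y, x))      # yaw toward the proposal
    tilt = math.degrees(math.asin(z / rng))   # pitch toward the proposal
    zoom = min(max_zoom, max(1.0, wide_fov_deg / target_fov_deg * rng / 30.0))
    return pan, tilt, zoom


# Usage: proposals from one sparse scan drive the directed search.
scan = np.vstack([np.random.rand(2000, 3) * 40.0,              # background returns
                  np.random.randn(30, 3) * 0.1 + [12, 3, 1]])  # planted bright object
intens = np.concatenate([np.random.rand(2000) * 0.5,           # dim background
                         0.8 + np.random.rand(30) * 0.2])      # reflective object
for c in lidar_proposals(scan, intens):
    print("pan/tilt/zoom:", ptz_command(c))   # point camera, then run the detector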

Bibliographic Details
Main Authors: Patel, Manthan; Waibel, Gabriel; Khattak, Shehryar; Hutter, Marco
Format: Conference Proceeding
Language: English
Published in: 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2022, p. 41-46
Publisher: IEEE
DOI: 10.1109/SSRR56537.2022.10018684
EISSN: 2475-8426
EISBN: 9781665456807; 1665456809
Subjects: Cameras; Laser radar; Object detection; Robot vision systems; Search problems; Sensors; Visualization
Source: IEEE Xplore All Conference Series
Online Access: https://ieeexplore.ieee.org/document/10018684