
StarNet: Targeted Computation for Object Detection in Point Clouds

Bibliographic Details
Published in: arXiv.org 2019-12
Main Authors: Ngiam, Jiquan, Caine, Benjamin, Han, Wei, Yang, Brandon, Chai, Yuning, Sun, Pei, Zhou, Yin, Xi, Yi, Alsharif, Ouais, Nguyen, Patrick, Chen, Zhifeng, Shlens, Jonathon, Vasudevan, Vijay
Format: Article
Language: English
Subjects:
Online Access: Get full text
container_title arXiv.org
creator Ngiam, Jiquan
Caine, Benjamin
Han, Wei
Yang, Brandon
Chai, Yuning
Sun, Pei
Zhou, Yin
Xi, Yi
Alsharif, Ouais
Nguyen, Patrick
Chen, Zhifeng
Shlens, Jonathon
Vasudevan, Vijay
description Detecting objects from LiDAR point clouds is an important component of self-driving car technology as LiDAR provides high resolution spatial information. Previous work on point-cloud 3D object detection has re-purposed convolutional approaches from traditional camera imagery. In this work, we present an object detection system called StarNet designed specifically to take advantage of the sparse and 3D nature of point cloud data. StarNet is entirely point-based, uses no global information, has data dependent anchors, and uses sampling instead of learned region proposals. We demonstrate how this design leads to competitive or superior performance on the large Waymo Open Dataset and the KITTI detection dataset, as compared to convolutional baselines. In particular, we show how our detector can outperform a competitive baseline on Pedestrian detection on the Waymo Open Dataset by more than 7 absolute mAP while being more computationally efficient. We show how our redesign---namely using only local information and using sampling instead of learned proposals---leads to a significantly more flexible and adaptable system: we demonstrate how we can vary the computational cost of a single trained StarNet without retraining, and how we can target proposals towards areas of interest with priors and heuristics. Finally, we show how our design allows for incorporating temporal context by using detections from previous frames to target computation of the detector, which leads to further improvements in performance without additional computational cost.
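The abstract notes that StarNet replaces learned region proposals with sampling directly from the point cloud. As a rough illustration only (this is not the paper's implementation; the function name and the choice of greedy farthest-point sampling here are assumptions), proposal centers can be drawn from the raw LiDAR points like this:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k well-spread centers from an (N, 3) array of points.

    Starts from a random point, then repeatedly adds the point farthest
    from all centers chosen so far. A sketch of sampling-based proposal
    generation, not the paper's exact procedure.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    centers = [int(rng.integers(n))]      # random starting center
    dists = np.full(n, np.inf)            # distance to nearest chosen center
    for _ in range(k - 1):
        d = np.linalg.norm(points - points[centers[-1]], axis=1)
        dists = np.minimum(dists, d)      # update nearest-center distances
        centers.append(int(np.argmax(dists)))  # farthest remaining point
    return points[np.array(centers)]
```

Because the centers come from the data itself rather than a fixed grid, the number of proposals (and hence the compute cost) can be varied at inference time simply by changing `k`, which is in the spirit of the abstract's claim that computational cost can be varied without retraining.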
format article
fullrecord [escaped ProQuest machine record omitted; it duplicates the title, abstract, author list, subjects, and identifiers shown elsewhere in this record. Unique fields preserved below.]
publisher Ithaca: Cornell University Library, arXiv.org
date 2019-12-02
rights 2019. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
oa free_for_read
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2019-12
issn 2331-8422
language eng
recordid cdi_proquest_journals_2282708884
source Publicly Available Content Database
subjects Autonomous cars
Cameras
Cloud computing
Datasets
Image detection
Lidar
Object recognition
Priorities
Spatial data
title StarNet: Targeted Computation for Object Detection in Point Clouds
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T13%3A08%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=StarNet:%20Targeted%20Computation%20for%20Object%20Detection%20in%20Point%20Clouds&rft.jtitle=arXiv.org&rft.au=Ngiam,%20Jiquan&rft.date=2019-12-02&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2282708884%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_22827088843%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2282708884&rft_id=info:pmid/&rfr_iscdi=true