
Point Contrastive learning for LiDAR-based 3D object detection in autonomous driving

Current progress in 3D perception tasks for autonomous driving relies on neural network architectures whose training demands ever-growing amounts of annotated data. However, semantic annotation of 3D scenes is an expensive and labor-intensive task. In this paper, we present an approach for self-supervised, data-efficient learning in the context of point contrastive learning, using two distinct pre-training techniques to improve performance in LiDAR-based 3D object detection in autonomous driving. Our experimental work relies on the standard benchmarking datasets KITTI and Waymo. Under a comprehensive evaluation framework, it is shown that, in the absence of large annotated datasets, the proposed approach can achieve improved performance.
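This record does not include the paper's implementation details. As a general illustration only, point contrastive pre-training methods typically optimize an InfoNCE objective over features of matched points taken from two views of the same scene: each matched pair is a positive, all other pairings are negatives. The function below is a hypothetical minimal sketch of that loss in pure Python; the names, shapes, and temperature value are illustrative assumptions, not taken from the paper.

```python
import math

def info_nce(anchor_feats, positive_feats, temperature=0.1):
    """Point-level InfoNCE loss (illustrative sketch).

    anchor_feats and positive_feats are equal-length lists of feature
    vectors; index i in both lists is a matched (positive) point pair,
    and every other pairing serves as a negative.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a)) or 1.0  # guard against zero vectors

    # Temperature-scaled cosine similarity between every anchor/positive pair.
    sims = [[dot(a, p) / (norm(a) * norm(p)) / temperature
             for p in positive_feats] for a in anchor_feats]

    loss = 0.0
    for i, row in enumerate(sims):
        m = max(row)  # subtract the max for a numerically stable softmax
        log_denom = m + math.log(sum(math.exp(s - m) for s in row))
        loss += log_denom - row[i]  # -log softmax at the matched index
    return loss / len(anchor_feats)
```

In an actual pipeline the feature vectors would come from a 3D backbone applied to two augmented views of a point cloud, with the matched indices derived from known point correspondences; minimizing this loss pulls matched point features together and pushes all other pairs apart.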


Bibliographic Details
Main Authors: Karypidis, Efstathios, Zamanakos, Georgios, Tsochatzidis, Lazaros, Pratikakis, Ioannis
Format: Conference Proceeding
Language: English
Subjects: 3D Object Detection; Detectors; Digital signal processing; LiDAR; Object detection; Point cloud compression; Point Clouds; Self-Supervised Learning; Semantics; Three-dimensional displays; Training
Online Access: Request full text
description Current progress in 3D perception tasks for autonomous driving relies on neural network architectures whose training demands ever-growing amounts of annotated data. However, semantic annotation of 3D scenes is an expensive and labor-intensive task. In this paper, we present an approach for self-supervised, data-efficient learning in the context of point contrastive learning, using two distinct pre-training techniques to improve performance in LiDAR-based 3D object detection in autonomous driving. Our experimental work relies on the standard benchmarking datasets KITTI and Waymo. Under a comprehensive evaluation framework, it is shown that, in the absence of large annotated datasets, the proposed approach can achieve improved performance.
doi_str_mv 10.1109/DSP58604.2023.10167978
format conference_proceeding
date 2023-06-11
publisher IEEE
eisbn 9798350339598
identifier EISSN: 2165-3577
ispartof 2023 24th International Conference on Digital Signal Processing (DSP), 2023, p.1-5
issn 2165-3577
language eng
recordid cdi_ieee_primary_10167978
source IEEE Xplore All Conference Series
subjects 3D Object Detection
Detectors
Digital signal processing
LiDAR
Object detection
Point cloud compression
Point Clouds
Self-Supervised Learning
Semantics
Three-dimensional displays
Training
title Point Contrastive learning for LiDAR-based 3D object detection in autonomous driving