Implicit Point Function for LiDAR Super-Resolution in Autonomous Driving
Published in: IEEE Robotics and Automation Letters, 2023-11, Vol. 8 (11), pp. 7003-7009
Main Authors:
Format: Article
Language: English
Summary: LiDAR super-resolution is a relatively new problem in which we seek to fill in the gaps between measured points when only a low-resolution LiDAR is available, effectively producing a high-resolution or even resolution-free LiDAR. Several works on LiDAR super-resolution have been reported recently. However, most of them share a drawback: they first transform the 3D LiDAR point cloud into a 2D depth map and upsample the LiDAR output with an image super-resolution method, ignoring the 3D geometric information of the point cloud obtained from the LiDAR. To solve this problem, we propose a new deep learning network named the implicit point function (IPF). The basic idea of IPF is that, given a low-resolution point cloud and a query ray, we generate 3D target point embeddings on the query ray using on-the-ray positional embedding and local features, preserving the 3D geometric information of the given point cloud. We then aggregate these embeddings into a single target point via an attention mechanism. IPF thus learns a continuous representation of 3D space from a low-resolution LiDAR and can upsample a small number of layers to any desired number. Finally, we apply IPF to a large-scale synthetic dataset and a real dataset, and demonstrate its validity by comparison with previous methods.
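To make the mechanism in the abstract concrete, the PyTorch snippet below is a minimal sketch of the IPF idea, not the authors' implementation: sample candidate positions along a query ray, embed each position with an on-the-ray positional embedding, fuse it with local features of nearby low-resolution points, and aggregate the candidates into one target point with attention. The names `IPFSketch` and `fourier_embed`, the Fourier embedding, the layer sizes, the range-offset readout, and the way local features would be gathered are all assumptions made for illustration.

```python
# A minimal sketch of the IPF idea as described in the abstract, assuming
# PyTorch; it is NOT the authors' implementation. The Fourier positional
# embedding, layer sizes, and the range-offset readout are all assumptions.
import math
import torch
import torch.nn as nn


def fourier_embed(t, num_freqs=8):
    """Sin/cos embedding of scalar on-the-ray positions t of shape (B, K, 1)."""
    freqs = 2.0 ** torch.arange(num_freqs, device=t.device) * math.pi
    angles = t * freqs                                   # (B, K, num_freqs)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class IPFSketch(nn.Module):
    """Fuse on-the-ray positional embeddings with local features of nearby
    low-resolution points, then attend over the K candidate samples on the
    query ray to predict a single target point (its range)."""

    def __init__(self, feat_dim=64, num_freqs=8):
        super().__init__()
        self.num_freqs = num_freqs
        self.fuse = nn.Sequential(
            nn.Linear(2 * num_freqs + feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.attn_score = nn.Linear(feat_dim, 1)  # attention over K samples
        self.to_offset = nn.Linear(feat_dim, 1)   # per-sample range refinement

    def forward(self, t, local_feats):
        # t:           (B, K, 1) candidate ranges sampled along the query ray
        # local_feats: (B, K, F) features of low-res points near each sample
        #              (how these are gathered is outside this sketch)
        h = self.fuse(torch.cat([fourier_embed(t, self.num_freqs),
                                 local_feats], dim=-1))
        w = torch.softmax(self.attn_score(h), dim=1)     # (B, K, 1)
        # Attention-weighted aggregation into one target point per ray.
        return (w * (t + self.to_offset(h))).sum(dim=1)  # (B, 1)


# Toy usage: 4 query rays, 16 candidate samples per ray, 64-dim features.
model = IPFSketch()
t = torch.rand(4, 16, 1) * 50.0   # candidate ranges in meters
feats = torch.randn(4, 16, 64)    # placeholder local features
print(model(t, feats).shape)      # torch.Size([4, 1])
```

Reading off a softmax-weighted combination of refined range hypotheses, rather than a single direct regression, is one plausible way to realize the abstract's "aggregate them into one target point via the attention mechanism"; since the query ray can be sampled at any density and direction, this is also what would make the learned representation resolution-free.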
ISSN: 2377-3766
DOI: | 10.1109/LRA.2023.3313925 |