Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection
| Field | Value |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Subjects | |
| Online Access | Request full text |
| ISSN | 2642-9381 |
| DOI | 10.1109/WACV57701.2024.00304 |
Summary: 3D object detection networks tend to be biased towards the data they are trained on. It has been demonstrated that evaluating on datasets captured in different locations or conditions, or with sensors of different specifications than the training (source) data, results in a drop in model performance due to the domain gap with the test (target) data. Current methods for adapting to the target domain either assume access to source data during training, which may not be available due to privacy or memory concerns, or require a sequence of LiDAR frames as input. We propose a single-frame approach for source-free, unsupervised domain adaptation of LiDAR-based 3D object detectors that uses class prototypes to mitigate the effect of pseudo-label noise. Addressing the limitations of traditional feature aggregation methods for prototype computation in the presence of noisy labels, we utilize a transformer module to identify outlier regions that correspond to incorrect, over-confident annotations and compute an attentive class prototype. The losses associated with noisy pseudo-labels are down-weighted during self-training. We demonstrate our approach on two recent object detectors and show that our method outperforms recent source-free domain adaptation works as well as those that leverage source information during training. The code will be made available.
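The summary describes two algorithmic steps: computing an attention-weighted ("attentive") class prototype over noisy pseudo-labeled box features, and using that prototype to down-weight the losses of likely-incorrect pseudo-labels. Below is a minimal PyTorch sketch of that general idea, assuming per-box RoI features for a single class as input; the names `AttentivePrototype` and `pseudo_label_weights`, the tensor shapes, and the cosine-similarity weighting are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

class AttentivePrototype(torch.nn.Module):
    """Sketch: attention-weighted class prototype over pseudo-labeled box features."""

    def __init__(self, feat_dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Transformer-style self-attention over the set of box features
        # (feat_dim must be divisible by num_heads).
        self.attn = torch.nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (N, D) features of N pseudo-labeled boxes of one class.
        x = roi_feats.unsqueeze(0)               # (1, N, D)
        _, weights = self.attn(x, x, x)          # attention weights: (1, N, N)
        # Average attention each box receives across all queries; outlier
        # boxes are attended to weakly and so contribute less to the prototype.
        contrib = weights.mean(dim=1).squeeze(0)             # (N,)
        contrib = contrib / contrib.sum().clamp_min(1e-8)
        return (contrib.unsqueeze(1) * roi_feats).sum(dim=0)  # (D,) prototype


def pseudo_label_weights(roi_feats: torch.Tensor, prototype: torch.Tensor) -> torch.Tensor:
    """Down-weight per-box losses for boxes whose features disagree with the prototype."""
    sim = F.cosine_similarity(roi_feats, prototype.unsqueeze(0), dim=1)  # (N,)
    return sim.clamp(min=0.0)  # multipliers for per-box self-training losses


# Usage sketch: 12 pseudo-labeled boxes with 256-d RoI features.
feats = torch.randn(12, 256)
prototype = AttentivePrototype(256)(feats)
loss_weights = pseudo_label_weights(feats, prototype)  # multiply into per-box losses
```

Averaging the attention a box receives is one simple outlier score; any set-level attention statistic could serve the same role of suppressing over-confident but incorrect pseudo-labels.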