Zero-Shot Depth Estimation From Light Field Using a Convolutional Neural Network
Published in: IEEE Transactions on Computational Imaging, 2020, Vol. 6, pp. 682-696
Main Authors:
Format: Article
Language: English
Summary: This article proposes a zero-shot learning-based framework for light field depth estimation, which learns an end-to-end mapping solely from an input light field to the corresponding disparity map, with neither extra training data nor supervision from ground-truth depth. The proposed method overcomes two major difficulties of existing learning-based methods and is thus far more practical. First, it removes the heavy burden of collecting ground-truth depth for a variety of scenes to serve as training labels. Second, it avoids the severe domain-shift effect that arises when a trained model is applied to light fields with drastically different content, or captured under camera configurations different from the training data. Compared with conventional non-learning-based methods, on the other hand, the proposed method better exploits the correlations in the 4D light field and generates markedly superior depth results. Moreover, we extend this zero-shot learning framework to depth estimation from light field videos and demonstrate, for the first time, that more accurate and robust depth can be estimated from light field videos by jointly exploiting the correlations across the spatial, angular, and temporal dimensions. We conduct comprehensive experiments on both synthetic and real-world light field image datasets, as well as a self-collected light field video dataset. Quantitative and qualitative results validate the superior performance of our method over the state of the art, especially on challenging real-world scenes.
ISSN: 2573-0436, 2333-9403
DOI: 10.1109/TCI.2020.2967148
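
The abstract does not spell out the training objective, but a "zero-shot" framework of this kind is typically realized by fitting a small CNN to each input light field on its own, using the light field's sub-aperture views themselves as supervision. The sketch below illustrates that idea under stated assumptions: a toy disparity network is optimized per scene by warping every sub-aperture view to the centre view with the predicted disparity and minimizing a photometric L1 loss. The architecture, the warping sign convention, the loss, and all names (`TinyDispNet`, `warp_to_centre`, `zero_shot_depth`) are illustrative assumptions, not the authors' actual design.

```python
# Illustrative sketch only (not the paper's code): per-scene "zero-shot" training of a
# small CNN that maps the centre sub-aperture view of a light field to a disparity map,
# supervised solely by warping the other views back to the centre view.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDispNet(nn.Module):
    """Toy disparity regressor; the real network is assumed to be deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):        # x: (1, 3, H, W) centre view
        return self.net(x)       # (1, 1, H, W) disparity

def warp_to_centre(src_view, disp, du, dv):
    """Warp a sub-aperture view at angular offset (du, dv) to the centre view
    using the predicted disparity (sign convention is an assumption)."""
    _, _, H, W = src_view.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    xs = xs + du * disp[0, 0]
    ys = ys + dv * disp[0, 0]
    # grid_sample expects sampling coordinates normalized to [-1, 1]
    grid = torch.stack((2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1), dim=-1)
    return F.grid_sample(src_view, grid.unsqueeze(0), align_corners=True)

def zero_shot_depth(lf, iters=2000, lr=1e-3):
    """lf: 4D light field tensor of shape (U, V, 3, H, W), float32."""
    U, V, _, _, _ = lf.shape
    cu, cv = U // 2, V // 2
    centre = lf[cu, cv].unsqueeze(0)
    model = TinyDispNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        disp = model(centre)
        loss = 0.0
        for u in range(U):
            for v in range(V):
                if (u, v) == (cu, cv):
                    continue
                warped = warp_to_centre(lf[u, v].unsqueeze(0), disp, u - cu, v - cv)
                loss = loss + F.l1_loss(warped, centre)   # photometric consistency
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model(centre).detach()
```

The sketch only conveys the per-scene self-supervision principle on a single light field image; the paper additionally exploits the full 4D structure and, for light field videos, the temporal dimension, which are not reflected here.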