
Recovering Translucent Objects Using a Single Time-of-Flight Depth Camera

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2016-05, Vol. 26 (5), pp. 841-854
Main Authors: Shim, Hyunjung; Lee, Seungkyu
Format: Article
Language: English
Description
Summary: Translucency introduces great challenges to 3-D acquisition because of complicated light behaviors such as refraction and transmittance. In this paper, we describe the development of a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera: first, we capture the depth map and intensity image of an arbitrary background; then we position the translucent foreground object and record a second depth map and intensity image containing both the foreground and the background. As a result of its material characteristics, the translucent object yields systematic distortions in the depth map. We developed a new distance representation that interprets the depth distortion induced by translucency. By analyzing ToF depth sensing principles, we constructed a distance model governed by the level of translucency, the foreground depth, and the background depth. Using an analysis-by-synthesis approach, we can recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstructing a 3-D translucent object.
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2015.2397231
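
The abstract gives only the high-level shape of the distance model (a measured ToF depth governed by the level of translucency, the foreground depth, and the background depth) and of the analysis-by-synthesis recovery. The sketch below is not the authors' implementation; it illustrates the general idea under a common two-return phasor-mixing assumption for amplitude-modulated ToF cameras, and the modulation frequency F_MOD, the translucency parameter alpha, and the grid search are illustrative choices only.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]
F_MOD = 30e6        # illustrative ToF modulation frequency [Hz]

def synthesize_distorted_depth(d_fg, d_bg, alpha):
    """Depth a ToF camera would report for a translucent surface at d_fg
    in front of an opaque background at d_bg.

    alpha in [0, 1) is a hypothetical translucency level: the fraction of
    the returned signal contributed by the background seen through the
    object. The camera measures the phase of the sum of the two returns.
    """
    phi_fg = 4.0 * np.pi * F_MOD * d_fg / C
    phi_bg = 4.0 * np.pi * F_MOD * d_bg / C
    mixed = (1.0 - alpha) * np.exp(1j * phi_fg) + alpha * np.exp(1j * phi_bg)
    phi_mixed = np.angle(mixed) % (2.0 * np.pi)
    return C * phi_mixed / (4.0 * np.pi * F_MOD)

def recover_foreground_depth(d_measured, d_bg, alpha, d_grid):
    """Analysis-by-synthesis at a single pixel: pick the foreground depth
    whose synthesized (distorted) depth best matches the measurement,
    given the background depth from the first capture and a translucency
    estimate (in the paper this is constrained by the intensity images)."""
    errors = [abs(synthesize_distorted_depth(d, d_bg, alpha) - d_measured)
              for d in d_grid]
    return d_grid[int(np.argmin(errors))]

if __name__ == "__main__":
    d_true, d_bg, alpha = 1.2, 2.0, 0.4    # metres, illustrative values
    d_obs = synthesize_distorted_depth(d_true, d_bg, alpha)
    d_grid = np.linspace(0.5, d_bg, 301)   # 5 mm steps
    d_hat = recover_foreground_depth(d_obs, d_bg, alpha, d_grid)
    print(f"distorted depth reported by sensor: {d_obs:.3f} m")
    print(f"recovered foreground depth:         {d_hat:.3f} m")
```

In this toy setup, the translucent surface at 1.2 m in front of a 2.0 m background is reported at roughly 1.5 m, i.e., pushed toward the background, which is the kind of systematic, nonlinear distortion the abstract describes; with alpha held fixed, the grid search recovers the true foreground depth.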