
Single-View 3D Reconstruction: A Survey of Deep Learning Methods

Bibliographic Details
Published in: Computers & Graphics, 2021-02, Vol. 94, pp. 164-190
Main Authors: Fahim, George; Amin, Khalid; Zarif, Sameh
Format: Article
Language: English
Description

Highlights:
•Deep learning substantially improves the reconstruction of 3D shapes from single images.
•Voxel grids are the most widely used representation, but not the most efficient.
•The choice of 3D representation is crucial to the success of single-view reconstruction.
•Implicit surfaces are gaining traction in single-view object reconstruction.

Summary: The field of single-view 3D shape reconstruction and generation using deep learning techniques has seen rapid growth in the past five years. As the field reaches a stage of maturity, a plethora of methods has been proposed with the aim of pushing the state of research further. This article surveys the literature by classifying these methods according to the shape representation they use as output. Specifically, it covers each method's main contributions, degree of supervision, training paradigm, and its relation to the whole body of literature. Additionally, this survey discusses common 3D datasets, loss functions, and evaluation metrics used in the field. Finally, it provides a thorough analysis of and reflections on the current state of research, and summarizes the open problems and possible future directions. This work is an effort to introduce the field of data-driven single-view 3D reconstruction to interested researchers, while being comprehensive enough to act as a reference for those who already do research in the field.
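The summary notes that the survey covers loss functions and evaluation metrics used in single-view 3D reconstruction. As an illustrative aside (not taken from the article itself), the sketch below shows the symmetric Chamfer distance, a metric commonly used in this field to compare a reconstructed shape against ground truth when both are sampled as point clouds; exact squaring and normalisation conventions vary between papers.

    # Illustrative sketch only: symmetric Chamfer distance between two point clouds,
    # a metric widely used (with varying conventions) in single-view 3D reconstruction.
    import numpy as np

    def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
        """pred: (N, 3) predicted surface samples; gt: (M, 3) ground-truth samples."""
        # Pairwise squared distances between every predicted and ground-truth point.
        diff = pred[:, None, :] - gt[None, :, :]   # shape (N, M, 3)
        d2 = np.sum(diff ** 2, axis=-1)            # shape (N, M)
        # Average nearest-neighbour distance in both directions.
        return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

    # Usage with random stand-in point clouds.
    pred = np.random.rand(1024, 3)
    gt = np.random.rand(1024, 3)
    print(chamfer_distance(pred, gt))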
ISSN: 0097-8493, 1873-7684
DOI: 10.1016/j.cag.2020.12.004