
DV-Net: Dual-view network for 3D reconstruction by fusing multiple sets of gated control point clouds

Bibliographic Details
Published in: Pattern Recognition Letters, 2020-03, Vol. 131, pp. 376-382
Main Authors: Jia, Xin, Yang, Shourui, Peng, Yuxin, Zhang, Junchao, Chen, Shengyong
Format: Article
Language: English
Description
Summary:
• An end-to-end dual-view 3D reconstruction architecture is proposed in this paper.
• A structure feature learning network is proposed in this paper.
• A gated control point cloud fusion network is proposed in this paper.

Deep learning for 3D reconstruction has recently shown promising advantages: 3D shapes can be predicted from a single RGB image. However, such works are often limited by a single feature cue, which does not capture the 3D shape of objects well. To address this problem, this paper proposes an end-to-end 3D reconstruction approach that predicts a 3D point cloud from dual-view RGB images. The approach consists of several parts. A dual-view 3D reconstruction network predicts an object's point clouds by exploiting two RGB images with different views, avoiding the limitation of a single feature cue. A structure feature learning network extracts structure features with stronger representation ability from the point clouds. A gated control network for data fusion gathers the point clouds: it takes the two sets of point clouds from the different views as input and fuses them into one. The proposed approach is thoroughly evaluated with extensive experiments on the widely used ShapeNet dataset. Both the qualitative results and quantitative analysis demonstrate that this method not only captures the detailed geometric structures of 3D shapes for different object categories with complex topologies, but also achieves state-of-the-art performance.
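To make the gated fusion idea concrete, the sketch below is a minimal, assumption-laden illustration rather than the authors' implementation: it assumes the two views predict point sets of equal size with point-wise correspondence, computes a per-point gate from the concatenated features with a small MLP, and blends the two sets into one fused cloud. The module name `GatedPointFusion` and all layer sizes are hypothetical.

```python
# Minimal sketch (PyTorch) of gated fusion of two predicted point clouds.
# Assumptions: both views predict N points with C-dim features; the gate is a
# per-point sigmoid computed from the concatenated features. Names and layer
# sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class GatedPointFusion(nn.Module):
    def __init__(self, feat_dim: int = 3, hidden: int = 64):
        super().__init__()
        # Small shared MLP mapping concatenated per-point features to a gate in (0, 1).
        self.gate_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, pts_a: torch.Tensor, pts_b: torch.Tensor) -> torch.Tensor:
        # pts_a, pts_b: (B, N, feat_dim) point clouds predicted from views A and B.
        gate = self.gate_mlp(torch.cat([pts_a, pts_b], dim=-1))  # (B, N, 1)
        # Convex combination: the gate decides how much each view contributes per point.
        return gate * pts_a + (1.0 - gate) * pts_b


if __name__ == "__main__":
    fusion = GatedPointFusion(feat_dim=3)
    a = torch.rand(2, 1024, 3)  # point cloud predicted from view A
    b = torch.rand(2, 1024, 3)  # point cloud predicted from view B
    fused = fusion(a, b)
    print(fused.shape)  # torch.Size([2, 1024, 3])
```

The design choice illustrated here is that the gate is learned jointly from both views, so the network can favor whichever view describes a given region of the object better instead of averaging them uniformly.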
ISSN: 0167-8655
1872-7344
DOI: 10.1016/j.patrec.2020.02.001