3D Reconstruction for Multi-view Objects
Published in: Computers & Electrical Engineering, 2023-03, Vol. 106, Article 108567
Format: Article
Language: English
Summary: Deep learning-based 3D reconstruction neural networks have achieved good performance in generating 3D features from 2D features, but they often suffer feature loss during reconstruction. This paper proposes a multi-view object 3D reconstruction neural network named P2VNet. The depth estimation module in the front and back layers of P2VNet realizes a smooth transformation from 2D features to 3D features, which improves single-view reconstruction performance. A multi-scale fusion sensing module for multi-view fusion is also proposed, in which additional receptive fields generate richer context-aware features. We also introduce 3D Focal Loss to replace binary cross-entropy, addressing the unbalanced space occupation of the voxel grid and the complex division of partial grid occupation. Experimental results demonstrate that P2VNet achieves higher accuracy than existing works.
ISSN: 0045-7906, 1879-0755
DOI: 10.1016/j.compeleceng.2022.108567
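
The 3D Focal Loss mentioned in the summary builds on the standard binary focal loss, which down-weights easy voxels so training concentrates on the sparse occupied regions of the grid. The sketch below is an illustrative NumPy version under common assumptions (function name, `alpha`/`gamma` defaults, and the mean reduction are choices made here, not details from the paper):

```python
import numpy as np

def focal_loss_3d(pred, target, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss over a voxel occupancy grid (illustrative sketch).

    pred:   predicted occupancy probabilities, any shape (e.g. D x H x W)
    target: ground-truth occupancy in {0, 1}, same shape as pred
    alpha, gamma: focal-loss hyperparameters (assumed defaults)
    """
    pred = np.clip(pred, eps, 1.0 - eps)            # avoid log(0)
    pt = np.where(target == 1, pred, 1.0 - pred)    # probability of the true class
    at = np.where(target == 1, alpha, 1.0 - alpha)  # class-balancing weight
    # (1 - pt)^gamma shrinks the contribution of easy, confident voxels
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))
```

With `gamma=0` and `alpha=0.5` this reduces to a scaled binary cross-entropy; increasing `gamma` shifts the loss toward hard voxels, which is what makes it suited to grids where occupied voxels are a small minority.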