PesRec: A parametric estimation method for indoor semantic scene reconstruction from a single image

Bibliographic Details
Published in:International journal of applied earth observation and geoinformation 2024-09, Vol.133, p.104135, Article 104135
Main Authors: Cao, Xingwen, Zheng, Xueting, Zheng, Hongwei, Chen, Xi, Bao, Anming, Liu, Ying, Liu, Tie, Zhang, Haoran, Zhao, Muhua, Zhang, Zichen
Format: Article
Language:English
Description
Summary:
•PesRec improves indoor scene reconstruction quality.
•PesRec models perform cooperative training.
•A newly designed spatial layout sampling module obtains high-precision attributes.
•Improved interpretability of PesRec models.

Reconstructing semantic indoor scenes is a challenging task in augmented and virtual reality. Reconstruction quality is limited by the complexity and occlusion of indoor scenes, which make it difficult to estimate a scene's spatial structure and leave object-location inference insufficiently learned. To address these challenges, we developed PesRec, an end-to-end multi-task scene reconstruction network that parameterizes indoor semantic information. PesRec incorporates a newly designed spatial layout estimator and a 3D object detector to learn scene parameter features effectively from RGB images. We modify an object mesh generator to enhance the robustness of reconstructing occluded indoor objects through point cloud optimization. Using the estimated scene parameters and spatial structure, PesRec reconstructs an indoor scene by placing object meshes, scaled to their 3D detection boxes, inside the estimated layout cuboid. Extensive experiments on two benchmark datasets demonstrate that PesRec performs exceptionally well: it achieves an average chamfer distance of 5.24 × 10⁻³ for object reconstruction on the Pix3D dataset, along with 53.61% mAP for 3D object detection and 79.7% 3D IoU for layout estimation on the commonly used SUN RGB-D dataset. The proposed network overcomes the limitations imposed by complex indoor scenes and occlusions, improving reconstruction quality for augmented- and virtual-reality applications.
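The chamfer distance cited above is a standard metric for comparing a reconstructed point cloud against ground truth. The abstract does not give the authors' evaluation code; the following is a minimal NumPy sketch of the symmetric chamfer distance, assuming squared Euclidean nearest-neighbour distances averaged in both directions (a common convention, not necessarily the exact protocol used in the paper):

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric chamfer distance between point clouds of shape (N, 3) and (M, 3).

    For each point in one cloud, take the squared distance to its nearest
    neighbour in the other cloud; average each direction and sum the two.
    """
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M).
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(axis=-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# A cloud compared with itself has chamfer distance exactly zero.
cloud = np.random.rand(128, 3)
print(chamfer_distance(cloud, cloud))
```

For large clouds, the O(N·M) pairwise matrix is usually replaced by a k-d tree nearest-neighbour query, but the definition of the metric is the same.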
ISSN:1569-8432
DOI:10.1016/j.jag.2024.104135