Improved Panoramic Representation via Bidirectional Recurrent View Aggregation for Three-Dimensional Model Retrieval


Bibliographic Details
Published in: IEEE Computer Graphics and Applications, 2019-03, Vol. 39 (2), p. 65-76
Main Authors: Xu, Cheng, Zhang, Cheng, Zhou, Xiaochen, Leng, Biao
Format: Magazine article
Language:English
Summary: In a view-based three-dimensional (3-D) model retrieval task, extracting discriminative high-level features of models from projected images is considered an effective approach. The challenge of view-based 3-D shape retrieval is that the shape information in each view is limited, owing to the information lost in projection. Traditional methods in this direction mostly convert the model into a single panoramic view, making it hard to recognize the original shape. To resolve this problem, we propose a novel deep neural network, the recurrent panorama network (RePanoNet), which learns to build a panoramic representation from view sequences. A view sequence is rendered on a circle around the model to provide sufficient panoramic information. For each view sequence, RePanoNet employs a bidirectional long short-term memory to capture the spatial correlations between adjacent views and construct a panoramic feature. In experiments on ModelNet and ShapeNet Core55, RePanoNet outperforms state-of-the-art methods, demonstrating its effectiveness.
ISSN: 0272-1716; 1558-1756
DOI: 10.1109/MCG.2018.2884861
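The abstract's core aggregation idea can be sketched: render a sequence of views on a circle around the model, encode each view as a feature vector, then run a bidirectional LSTM over the sequence so spatial correlations between adjacent views fold into one panoramic feature. Since only the abstract is available here, the following NumPy sketch is an illustrative assumption, not the authors' implementation: the hidden size, random initialization, mean pooling, and all names (`LSTMCell`, `panoramic_feature`) are hypothetical choices made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with random (untrained) weights, for illustration only."""
    def __init__(self, input_dim, hidden_dim, rng):
        # One stacked matrix covering the 4 gates: input, forget, cell, output.
        self.W = rng.standard_normal((4 * hidden_dim, input_dim)) * 0.1
        self.U = rng.standard_normal((4 * hidden_dim, hidden_dim)) * 0.1
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ x + self.U @ h + self.b
        H = self.hidden_dim
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell state
        o = sigmoid(z[3 * H:4 * H])  # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def run_lstm(cell, seq):
    """Run one direction over the view sequence; return all hidden states."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    states = []
    for x in seq:
        h, c = cell.step(x, h, c)
        states.append(h)
    return np.stack(states)

def panoramic_feature(view_feats, hidden_dim=64, seed=0):
    """Aggregate per-view features into one panoramic descriptor via a BiLSTM.

    view_feats: (num_views, feat_dim) array, one row per view rendered on the
    circle around the model (assumed to be CNN features in the paper's setup).
    """
    rng = np.random.default_rng(seed)
    d = view_feats.shape[1]
    fwd = LSTMCell(d, hidden_dim, rng)
    bwd = LSTMCell(d, hidden_dim, rng)
    h_fwd = run_lstm(fwd, view_feats)            # clockwise pass
    h_bwd = run_lstm(bwd, view_feats[::-1])[::-1]  # counterclockwise pass
    # Concatenate both directions per view, then mean-pool over the sequence.
    return np.concatenate([h_fwd, h_bwd], axis=1).mean(axis=0)

# 12 views on a circle, each already encoded as a 128-D feature vector.
views = np.random.default_rng(1).standard_normal((12, 128))
feat = panoramic_feature(views)
print(feat.shape)  # (128,)
```

Because both passes see the whole circular sequence, each position's hidden state is informed by its neighbors on either side, which is the property the abstract relies on for recognizing spatial correlations between adjacent views; a trained version would learn the weight matrices end to end rather than sample them randomly.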