A dimensional reduction guiding deep learning architecture for 3D shape retrieval

Bibliographic Details
Published in: Computers & Graphics, 2019-06, Vol. 81, pp. 82-91
Main Authors: Wang, Zihao; Lin, Hongwei; Yu, Xiaofeng; Hamza, Yusuf Fatihu
Format: Article
Language:English
Description
Summary:
•A method for extracting short descriptors from lengthy descriptors is developed.
•The dimensionality reduction results are strengthened by an attraction/repulsion model.
•A deep residual network is trained to generate the short descriptors.
•The short descriptors greatly improve the retrieval speed.

State-of-the-art shape descriptors are usually lengthy in order to achieve high retrieval precision. With the rapidly growing number of 3D models, retrieval speed has become a prominent problem in shape retrieval. In this paper, by exploiting the capabilities of dimensionality reduction methods and the deep convolutional residual network (ResNet), we developed a method for extracting short shape descriptors (consisting of just 2 real numbers, named 2-descriptors) from lengthy descriptors, while keeping or even improving the retrieval precision of the original lengthy descriptors. Specifically, an attraction and repulsion model is devised to strengthen the direct dimensionality reduction results, so that these results become suitable labels for training the ResNet. Moreover, to extract the 2-descriptors with the ResNet, we formulated the task as a classification problem: the range of each of the two components of the dimensionality reduction results is uniformly divided into n intervals, each corresponding to a class. Experiments on 3D shape retrieval show that our method not only greatly accelerates retrieval but also improves the retrieval precision of the original shape descriptors.
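As a concrete illustration of the discretization step mentioned in the abstract, the sketch below shows how each component of a 2-descriptor could be binned into n uniform intervals to obtain per-component class labels. This is a minimal, assumed reading of that step, not the authors' implementation; the function name, the choice n = 8, and the min/max-based range estimation are illustrative assumptions.

```python
import numpy as np

def discretize_2descriptors(coords, n=8):
    """Map 2-D dimensionality-reduction coordinates to per-component class labels.

    coords : (num_shapes, 2) array of reduced coordinates (the 2-descriptors).
    n      : number of uniform intervals (classes) per component; the value 8
             is an illustrative choice, not taken from the paper.
    Returns a (num_shapes, 2) integer array of class indices in [0, n-1].
    """
    coords = np.asarray(coords, dtype=float)
    lo = coords.min(axis=0)                 # per-component minimum of the range
    hi = coords.max(axis=0)                 # per-component maximum of the range
    widths = (hi - lo) / n                  # uniform interval width per component
    labels = np.floor((coords - lo) / widths).astype(int)
    # Points lying exactly on the upper bound fall into the last interval.
    return np.clip(labels, 0, n - 1)

# Toy usage: 5 shapes with made-up 2-descriptors
toy = np.array([[0.1, -1.2], [0.9, 0.3], [-0.4, 2.0], [0.0, 0.0], [1.5, -0.7]])
print(discretize_2descriptors(toy, n=8))
```

Each shape then carries two class labels (one per component), which is consistent with the paper's idea of turning 2-descriptor regression into an n-class classification problem per component.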
ISSN: 0097-8493, 1873-7684
DOI: 10.1016/j.cag.2019.04.002