Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55

Bibliographic Details
Published in: arXiv.org, 2017-10
Main Authors: Li, Yi, Shao, Lin, Savva, Manolis, Huang, Haibin, Zhou, Yang, Wang, Qirui, Graham, Benjamin, Engelcke, Martin, Klokov, Roman, Lempitsky, Victor, Gan, Yuan, Wang, Pengyu, Liu, Kun, Yu, Fenggen, Panpan Shui, Hu, Bingyang, Zhang, Yan, Li, Yangyan, Bu, Rui, Sun, Mingchao, Wu, Wei, Jeong, Minki, Choi, Jaehoon, Kim, Changick, Angom Geetchandra, Murthy, Narasimha, Bhargava Ramu, Bharadwaj Manda, Ramanathan, M, Kumar, Gautam, Preetham, P, Srivastava, Siddharth, Bhugra, Swati, Lall, Brejesh, Haene, Christian, Tulsiani, Shubham, Malik, Jitendra, Lafer, Jared, Jones, Ramsey, Li, Siyuan, Lu, Jie, Shi, Jin, Yu, Jingyi, Huang, Qixing, Kalogerakis, Evangelos, Savarese, Silvio, Hanrahan, Pat, Funkhouser, Thomas, Su, Hao, Guibas, Leonidas
Format: Article
Language: English
Description
Summary: We introduce a large-scale 3D shape understanding benchmark using data and annotations from the ShapeNet 3D object database. The benchmark consists of two tasks: part-level segmentation of 3D shapes and 3D reconstruction from single-view images. Ten teams participated in the challenge, and the best-performing teams outperformed state-of-the-art approaches on both tasks. Several novel deep learning architectures operating on a variety of 3D representations were proposed for both tasks. We report the techniques used by each team and their corresponding performance. In addition, we summarize the major findings from the reported results and possible trends for future work in the field.
ISSN: 2331-8422