MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations
With the emergence of LLMs and their integration with other data modalities, multi-modal 3D perception has attracted increasing attention due to its connection to the physical world and has made rapid progress. However, limited by existing datasets, previous works mainly focus on understanding object properties...
Published in: | arXiv.org 2024-06 |
---|---|
Main Authors: | Lyu, Ruiyuan; Wang, Tai; Lin, Jingli; Yang, Shuai; Mao, Xiaohan; Chen, Yilun; Xu, Runsen; Huang, Haifeng; Zhu, Chenming; Lin, Dahua; Pang, Jiangmiao |
Format: | Article |
Language: | English |
Subjects: | Annotations; Benchmarks; Datasets; Space perception |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Lyu, Ruiyuan; Wang, Tai; Lin, Jingli; Yang, Shuai; Mao, Xiaohan; Chen, Yilun; Xu, Runsen; Huang, Haifeng; Zhu, Chenming; Lin, Dahua; Pang, Jiangmiao |
description | With the emergence of LLMs and their integration with other data modalities, multi-modal 3D perception has attracted increasing attention due to its connection to the physical world and has made rapid progress. However, limited by existing datasets, previous works mainly focus on understanding object properties or inter-object spatial relationships in a 3D scene. To tackle this problem, this paper builds the largest multi-modal 3D scene dataset and benchmark to date with hierarchical grounded language annotations, MMScan. It is constructed based on a top-down logic, from the region to the object level and from single targets to inter-target relationships, covering holistic aspects of spatial and attribute understanding. The overall pipeline incorporates powerful VLMs via carefully designed prompts to initialize the annotations efficiently and further involves human correction in the loop to ensure the annotations are natural, correct, and comprehensive. Built upon existing 3D scanning data, the resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions, as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks. We evaluate representative baselines on our benchmarks, analyze their capabilities in different aspects, and showcase the key problems to be addressed in the future. Furthermore, we use this high-quality dataset to train state-of-the-art 3D visual grounding models and LLMs and obtain remarkable performance improvements both on existing benchmarks and in in-the-wild evaluation. Codes, datasets, and benchmarks will be available at https://github.com/OpenRobotLab/EmbodiedScan. (A hypothetical sketch of the annotation layout appears after this record.) |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3068241029 |
source | Publicly Available Content Database |
subjects | Annotations; Benchmarks; Datasets; Space perception |
title | MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations |
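The description above outlines MMScan's hierarchical annotations: object- and region-level captions plus samples for 3D visual grounding and question answering. The sketch below shows one plausible way such data could be organized in code. It is a minimal illustration only; every class name, field name, and value here is an assumption and is not taken from the official release at https://github.com/OpenRobotLab/EmbodiedScan.

```python
# Hypothetical sketch of MMScan-style hierarchical annotations.
# All names and layouts below are assumptions for illustration; consult the
# official EmbodiedScan/MMScan repository for the actual schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Caption:
    """A meta-annotated caption at either the object or the region level."""
    level: str       # "object" or "region"
    target_id: str   # identifier of the annotated object or region within a scan
    text: str        # description covering spatial and attribute aspects


@dataclass
class GroundingSample:
    """A 3D visual grounding sample: a language query referring to one or more targets."""
    scan_id: str
    query: str       # may describe a single target or inter-target relations
    target_boxes: List[List[float]] = field(default_factory=list)  # one 3D box per referred target


@dataclass
class QASample:
    """A 3D question-answering sample grounded in the same scene."""
    scan_id: str
    question: str
    answer: str


def split_by_level(captions: List[Caption]) -> Dict[str, List[Caption]]:
    """Group captions by their hierarchy level (region vs. object)."""
    buckets: Dict[str, List[Caption]] = {}
    for cap in captions:
        buckets.setdefault(cap.level, []).append(cap)
    return buckets


if __name__ == "__main__":
    captions = [
        Caption("object", "obj_0012", "A wooden chair beside the desk, facing the window."),
        Caption("region", "region_03", "A study corner with a desk, a chair, and a bookshelf."),
    ]
    grounding = GroundingSample(
        scan_id="scan_0001",
        query="the chair that is closest to the window",
        target_boxes=[[1.2, 0.4, 0.5, 0.6, 0.6, 1.0, 0.0, 0.0, 0.0]],  # e.g. a 9-DoF box
    )
    qa = QASample("scan_0001", "What is next to the desk?", "A wooden chair.")
    print({level: len(caps) for level, caps in split_by_level(captions).items()})
    print(grounding.query, "->", len(grounding.target_boxes), "target box(es)")
    print(qa.question, qa.answer)
```

Keeping captions, grounding samples, and QA samples as separate record types mirrors the three kinds of data named in the abstract (meta-annotated captions, 3D visual grounding, and question answering); the actual released format may differ.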