Language-driven Grasp Detection

Grasp detection is a persistent and intricate challenge with various industrial applications. Recently, many methods and datasets have been proposed to tackle the grasp detection problem. However, most of them do not consider using natural language as a condition to detect the grasp poses. In this...

Bibliographic Details
Main Authors: Vuong, An Dinh; Vu, Minh Nhat; Huang, Baoru; Nguyen, Nghia; Le, Hieu; Vo, Thieu; Nguyen, Anh
Format: Conference Proceeding
Language: English
Subjects: Benchmark testing; Computer vision; contrastive learning; Diffusion models; grasp detection; Grasping; Natural languages; Noise reduction; Training
container_start_page 17902
container_end_page 17912
creator Vuong, An Dinh; Vu, Minh Nhat; Huang, Baoru; Nguyen, Nghia; Le, Hieu; Vo, Thieu; Nguyen, Anh
description Grasp detection is a persistent and intricate challenge with various industrial applications. Recently, many methods and datasets have been proposed to tackle the grasp detection problem. However, most of them do not consider using natural language as a condition to detect the grasp poses. In this paper, we introduce Grasp-Anything++, a new language-driven grasp detection dataset featuring 1M samples, over 3M objects, and upwards of 10M grasping instructions. We utilize foundation models to create a large-scale scene corpus with corresponding images and grasp prompts. We approach the language-driven grasp detection task as a conditional generation problem. Drawing on the success of diffusion models in generative tasks and given that language plays a vital role in this task, we propose a new language-driven grasp detection method based on diffusion models. Our key contribution is the contrastive training objective, which explicitly contributes to the denoising process to detect the grasp pose given the language instructions. We illustrate that our approach is theoretically supportive. The intensive experiments show that our method outperforms state-of-the-art approaches and allows real-world robotic grasping. Finally, we demonstrate our large-scale dataset enables zero-shot grasp detection and is a challenging benchmark for future work.
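The description frames language-driven grasp detection as conditional denoising: a diffusion model is conditioned on a grasp instruction, and a contrastive objective shapes the denoising process toward the pose matching that instruction. Below is a minimal sketch of that combination in PyTorch. It is not the authors' implementation: the 5-D grasp-rectangle pose (x, y, w, h, theta), the MLP denoiser, the frozen stand-in text embeddings, and the InfoNCE-style contrastive term over in-batch instructions are all illustrative assumptions.

```python
# Sketch only: language-conditioned diffusion denoising of grasp poses with an
# auxiliary contrastive term. Parameterization and architecture are assumptions,
# not the method released with the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraspDenoiser(nn.Module):
    """Predict the noise added to a grasp pose, conditioned on a text embedding."""

    def __init__(self, pose_dim=5, text_dim=512, hidden=256, steps=1000):
        super().__init__()
        self.time_embed = nn.Embedding(steps, hidden)  # one embedding per diffusion step
        self.net = nn.Sequential(
            nn.Linear(pose_dim + text_dim + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_pose, text_emb, t):
        h = torch.cat([noisy_pose, text_emb, self.time_embed(t)], dim=-1)
        return self.net(h)  # predicted noise epsilon_hat


def training_loss(model, pose, text_emb, alphas_cumprod, tau=0.1):
    """Standard epsilon-prediction loss plus a contrastive term over in-batch instructions."""
    b = pose.size(0)
    t = torch.randint(0, len(alphas_cumprod), (b,), device=pose.device)
    noise = torch.randn_like(pose)
    a = alphas_cumprod[t].unsqueeze(-1)               # [b, 1]
    noisy = a.sqrt() * pose + (1 - a).sqrt() * noise  # forward process q(x_t | x_0)

    denoise_loss = F.mse_loss(model(noisy, text_emb, t), noise)

    # Contrastive term (assumed InfoNCE form): denoise every noisy pose with every
    # instruction in the batch; the pose reconstructed under its own instruction
    # should be closest to the ground truth, mismatched instructions farther away.
    noisy_ij = noisy.unsqueeze(1).expand(-1, b, -1).reshape(b * b, -1)    # pose i over all j
    text_ij = text_emb.unsqueeze(0).expand(b, -1, -1).reshape(b * b, -1)  # instruction j per i
    eps_ij = model(noisy_ij, text_ij, t.repeat_interleave(b)).reshape(b, b, -1)
    a3 = a.unsqueeze(1)                                                   # [b, 1, 1]
    x0_hat = (noisy.unsqueeze(1) - (1 - a3).sqrt() * eps_ij) / a3.sqrt()  # reconstructed x_0
    logits = -((x0_hat - pose.unsqueeze(1)) ** 2).sum(-1) / tau           # [b, b]
    contrastive_loss = F.cross_entropy(logits, torch.arange(b, device=pose.device))

    return denoise_loss + contrastive_loss


# Hypothetical usage with a linear beta schedule and random stand-in data.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
model = GraspDenoiser()
pose = torch.randn(8, 5)    # stand-in for ground-truth grasp rectangles (x, y, w, h, theta)
text = torch.randn(8, 512)  # stand-in for frozen text-encoder embeddings
training_loss(model, pose, text, alphas_cumprod).backward()
```

Under these assumptions, the diagonal of `logits` holds each pose reconstructed with its own instruction, so the cross-entropy term rewards denoising that is faithful to the matching language condition, which is one way a contrastive objective can "explicitly contribute to the denoising process" as the abstract describes.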
doi_str_mv 10.1109/CVPR52733.2024.01695
format conference_proceeding
identifier EISSN: 2575-7075
ispartof 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, p.17902-17912
issn 2575-7075
language eng
recordid cdi_ieee_primary_10657374
source IEEE Xplore All Conference Series
subjects Benchmark testing
Computer vision
contrastive learning
Diffusion models
grasp detection
Grasping
Natural languages
Noise reduction
Training
title Language-driven Grasp Detection