
RGB-(D) scene labeling: Features and algorithms


Bibliographic Details
Main Authors: Xiaofeng Ren, Liefeng Bo, D. Fox
Format: Conference Proceeding
Language: English
Description: Scene labeling research has mostly focused on outdoor scenes, leaving the harder case of indoor scenes poorly understood. Microsoft Kinect dramatically changed the landscape, showing great potentials for RGB-D perception (color+depth). Our main objective is to empirically understand the promises and challenges of scene labeling with RGB-D. We use the NYU Depth Dataset as collected and analyzed by Silberman and Fergus [30]. For RGB-D features, we adapt the framework of kernel descriptors that converts local similarities (kernels) to patch descriptors. For contextual modeling, we combine two lines of approaches, one using a superpixel MRF, and the other using a segmentation tree. We find that (1) kernel descriptors are very effective in capturing appearance (RGB) and shape (D) similarities; (2) both superpixel MRF and segmentation tree are useful in modeling context; and (3) the key to labeling accuracy is the ability to efficiently train and test with large-scale data. We improve labeling accuracy on the NYU Dataset from 56.6% to 76.1%. We also apply our approach to image-only scene labeling and improve the accuracy on the Stanford Background Dataset from 79.4% to 82.9%.
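The contextual modeling described in the abstract couples per-superpixel label scores with a smoothness prior over the superpixel adjacency graph. A minimal sketch of that idea, assuming a Potts pairwise term and a simple iterated-conditional-modes (ICM) solver rather than the authors' actual inference; all function and variable names here are hypothetical:

```python
import numpy as np

def icm_superpixel_labeling(unary, edges, pairwise_weight=1.0, n_iters=10):
    """Greedy ICM on a superpixel adjacency graph with a Potts smoothness prior.

    unary : (n_superpixels, n_labels) label costs, e.g. negative classifier scores.
    edges : iterable of (i, j) index pairs for adjacent superpixels.
    """
    n_sp, n_labels = unary.shape
    labels = unary.argmin(axis=1)          # start from the per-superpixel best label
    nbrs = [[] for _ in range(n_sp)]       # build an adjacency list
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_iters):
        changed = False
        for i in range(n_sp):
            cost = unary[i].copy()
            for j in nbrs[i]:
                # Potts term: pay pairwise_weight for disagreeing with neighbor j
                cost += pairwise_weight * (np.arange(n_labels) != labels[j])
            best = cost.argmin()
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:                    # converged: no label flipped this sweep
            break
    return labels

# A weakly labeled middle superpixel is smoothed by its confident neighbors:
unary = np.array([[0.0, 5.0],   # superpixel 0: strongly label 0
                  [0.6, 0.4],   # superpixel 1: weakly label 1
                  [0.0, 5.0]])  # superpixel 2: strongly label 0
print(icm_superpixel_labeling(unary, [(0, 1), (1, 2)]).tolist())  # [0, 0, 0]
```

In the paper's setting the unary costs would come from classifiers over the RGB-D kernel descriptors; ICM here stands in for whatever MRF inference the authors actually used.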
DOI: 10.1109/CVPR.2012.6247999
ISSN: 1063-6919
ISBN: 9781467312264
Published in: 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2759-2766
Source: IEEE Electronic Library (IEL) Conference Proceedings
Subjects: Accuracy; Context modeling; Image color analysis; Image segmentation; Kernel; Labeling; Vegetation