
Detecting interaction above digital tabletops using a single depth camera

Digital tabletop environments offer a huge potential to realize application scenarios where multiple users interact simultaneously or aim to solve collaborative tasks. So far, research in this field focuses on touch and tangible interaction, which only takes place on the tabletop's surface. First approaches aim at involving the space above the surface, e.g., by employing freehand gestures. However, these are either limited to specific scenarios or employ obtrusive tracking solutions. In this paper, we propose an approach to unobtrusively segment and detect interaction above a digital surface using a depth-sensing camera. To achieve this, we adapt a previously presented approach that segments arms in depth data from a front-view to a top-view setup, facilitating the detection of hand positions. Moreover, we propose a novel algorithm to merge segments and give a comparison to the original segmentation algorithm. Since the algorithm involves a large number of parameters, estimating the optimal configuration is necessary. To accomplish this, we describe a low-effort approach to estimate the parameter configuration based on simulated annealing. An evaluation of our system to detect hands shows that a repositioning precision of approximately 1 cm is achieved. This accuracy is sufficient to reliably realize interaction metaphors above a surface.
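The parameter estimation the abstract mentions can be illustrated with a generic simulated-annealing loop. This is a minimal sketch, not the authors' implementation: the `cost`, `neighbor`, cooling schedule, and all parameter values here are placeholder assumptions for illustration only.

```python
import math
import random

def simulated_annealing(cost, initial, neighbor, t_start=1.0, t_end=1e-3,
                        alpha=0.95, steps_per_temp=20, seed=None):
    """Minimize `cost` over a parameter configuration by simulated annealing.

    cost:     maps a configuration to a scalar error
    initial:  starting configuration
    neighbor: (config, rng) -> randomly perturbed configuration
    """
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(steps_per_temp):
            candidate = neighbor(current, rng)
            c = cost(candidate)
            # Always accept improvements; accept worse moves with
            # probability exp(-delta / t), which shrinks as t cools.
            if c < current_cost or rng.random() < math.exp((current_cost - c) / t):
                current, current_cost = candidate, c
                if c < best_cost:
                    best, best_cost = candidate, c
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```

In practice `cost` would wrap a run of the segmentation algorithm and return an error measure against ground truth; here any callable works, e.g. a one-dimensional quadratic.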


Bibliographic Details
Published in: Machine vision and applications, 2013-11, Vol. 24 (8), p. 1575-1587
Main Authors: Haubner, Nadia, Schwanecke, Ulrich, Dörner, Ralf, Lehmann, Simon, Luderschmidt, Johannes
Format: Article
Language:English
DOI: 10.1007/s00138-013-0538-5
ISSN: 0932-8092
EISSN: 1432-1769
Source: Springer Nature: Jisc Collections: Springer Nature Read and Publish 2023-2025: Springer Reading List
Subjects: Algorithms
Applied sciences
Artificial intelligence
Cameras
Communications Engineering
Computer Science
Computer science; control theory; systems
Computer simulation
Computer systems and distributed systems. User interface
Configurations
Exact sciences and technology
Image Processing and Computer Vision
Metaphor
Networks
Original Paper
Parameter estimation
Pattern Recognition
Pattern recognition. Digital image processing. Computational geometry
Segmentation
Segments
Simulated annealing
Software
Vision systems