
How Object Information Improves Skeleton-based Human Action Recognition in Assembly Tasks


Bibliographic Details
Main Authors: Aganian, Dustin, Köhler, Mona, Baake, Sebastian, Eisenbach, Markus, Groß, Horst-Michael
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 09
container_start_page 01
creator Aganian, Dustin
Köhler, Mona
Baake, Sebastian
Eisenbach, Markus
Groß, Horst-Michael
description As the use of collaborative robots (cobots) in industrial manufacturing continues to grow, human action recognition for effective human-robot collaboration becomes increasingly important. This ability is crucial for cobots to act autonomously and assist in assembly tasks. Recently, skeleton-based approaches are often used as they tend to generalize better to different people and environments. However, when processing skeletons alone, information about the objects a human interacts with is lost. Therefore, we present a novel approach of integrating object information into skeleton-based action recognition. We enhance two state-of-the-art methods by treating object centers as further skeleton joints. Our experiments on the assembly dataset IKEA ASM show that our approach improves the performance of these state-of-the-art methods to a large extent when combining skeleton joints with objects predicted by a state-of-the-art instance segmentation model. Our research sheds light on the benefits of combining skeleton joints with object information for human action recognition in assembly tasks. We analyze the effect of the object detector on the combination for action classification and discuss the important factors that must be taken into account.
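The core idea in the description, treating detected object centers as additional skeleton joints before feeding the pose sequence to an action classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the fixed number of object slots, and the zero-padding scheme are assumptions made for the sketch.

```python
import numpy as np

def extend_skeleton_with_objects(skeleton, object_centers, num_object_slots=3):
    """Append detected object centers to a skeleton as extra 'joints'.

    skeleton:        (T, J, 2) array of 2D joint coordinates over T frames.
    object_centers:  list of length T; entry t is an (N_t, 2) array of
                     object-center coordinates detected in frame t.
    Returns a (T, J + num_object_slots, 2) array. Frames with fewer
    detections than slots are zero-padded; surplus detections are dropped.
    """
    T, J, _ = skeleton.shape
    out = np.zeros((T, J + num_object_slots, 2), dtype=skeleton.dtype)
    out[:, :J, :] = skeleton  # original joints stay in their slots
    for t, centers in enumerate(object_centers):
        k = min(len(centers), num_object_slots)
        if k:
            out[t, J : J + k, :] = np.asarray(centers)[:k]
    return out
```

A skeleton-based model that consumes a (T, joints, 2) tensor can then be applied to the extended tensor unchanged, which is what makes this style of fusion attractive.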
doi_str_mv 10.1109/IJCNN54540.2023.10191686
format conference_proceeding
publisher IEEE
startdate 2023-06-18
identifier EISBN: 1665488670
identifier EISBN: 9781665488679
orcidid https://orcid.org/0009-0006-3925-6718
fulltext fulltext_linktorsrc
identifier EISSN: 2161-4407
ispartof 2023 International Joint Conference on Neural Networks (IJCNN), 2023, p.01-09
issn 2161-4407
language eng
recordid cdi_ieee_primary_10191686
source IEEE Xplore All Conference Series
subjects Collaboration
Detectors
Manufacturing
Neural networks
Predictive models
Service robots
Skeleton
title How Object Information Improves Skeleton-based Human Action Recognition in Assembly Tasks
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T17%3A42%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=How%20Object%20Information%20Improves%20Skeleton-based%20Human%20Action%20Recognition%20in%20Assembly%20Tasks&rft.btitle=2023%20International%20Joint%20Conference%20on%20Neural%20Networks%20(IJCNN)&rft.au=Aganian,%20Dustin&rft.date=2023-06-18&rft.spage=01&rft.epage=09&rft.pages=01-09&rft.eissn=2161-4407&rft_id=info:doi/10.1109/IJCNN54540.2023.10191686&rft.eisbn=1665488670&rft.eisbn_list=9781665488679&rft_dat=%3Cieee_CHZPO%3E10191686%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i204t-d28081ebecff520485f7ec080ab4638610bf63a08038bee6b4bcca161bff03d93%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10191686&rfr_iscdi=true