
Voxel Based Motion Prediction for Efficient HRC Utilizing Latent Space

Bibliographic Details
Main Authors: Spielbauer, Niklas, Reichard, Daniel, Bolano, Gabriele, Stelzer, Annett, Suppa, Michael, Leske, Michael, Steinbronn, Janus, Rothe, Diana, Roennau, Arne, Dillmann, Rüdiger
Format: Conference Proceeding
Language:English
Subjects:
Online Access:Request full text
Description
Summary:Safe human-robot collaboration requires the robot not to pose a risk to the operator in the shared workspace. To avoid collisions, different approaches such as skeleton-model-based motion prediction of the human operator have been explored. Those approaches limit themselves to the motion of the operator and neglect other dynamic or movable objects in the workspace. We propose an approach that utilizes a purely voxel-based prediction of motion in an arbitrary workspace, without the use of models, to predict any type of motion. This is done by encoding the voxelized 3D space as a latent vector and predicting future occupation of the workspace by predicting possible future latent vectors. To achieve this, a combination of a Variational Autoencoder (VAE) and a GRU-based prediction network is utilized. Due to the nature of the latent vectors, it is possible to remove noise and complete hidden voxels in the input data, and to interpolate between similar voxel configurations. Interpolation is key to enabling a meaningful decoding of predicted latent vectors back into the voxel domain. The VAE and GRU training are evaluated on two complex workspaces without prior knowledge. With our approach we can reconstruct complex environments with high accuracy, without the need for any models, and can predict human and object motion. As a second contribution, the extensive workspace datasets will be made publicly available.
ISSN:2161-8089
DOI:10.1109/CASE56687.2023.10260680
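
The abstract describes a two-stage architecture: a Variational Autoencoder compresses each voxelized workspace frame into a latent vector, and a GRU-based network predicts future latent vectors that are then decoded back into voxel occupancy. Below is a minimal PyTorch sketch of that pipeline, not the authors' implementation: the grid resolution (32³), layer sizes, latent dimension, and loss weighting are illustrative assumptions, since the record does not specify them.

```python
# Sketch of the VAE + latent-GRU motion prediction pipeline from the abstract.
# All hyperparameters (32^3 grid, latent_dim=128, channel counts) are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VoxelVAE(nn.Module):
    """Encodes a binary occupancy grid (B, 1, 32, 32, 32) into a latent vector."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 4 * 4 * 4, latent_dim)
        self.fc_logvar = nn.Linear(64 * 4 * 4 * 4, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),  # occupancy logits
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 4, 4, 4)
        return self.decoder(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


class LatentGRUPredictor(nn.Module):
    """Predicts the next latent vector from a history of latent vectors."""

    def __init__(self, latent_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.gru = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z_seq):  # z_seq: (B, T, latent_dim)
        out, _ = self.gru(z_seq)
        return self.head(out[:, -1])  # predicted latent for the next time step


def vae_loss(recon_logits, target, mu, logvar, beta: float = 1.0):
    # Standard VAE objective: voxel-wise reconstruction plus KL regularization.
    bce = F.binary_cross_entropy_with_logits(recon_logits, target, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + beta * kld


if __name__ == "__main__":
    vae = VoxelVAE()
    predictor = LatentGRUPredictor()
    # Dummy sequence of 5 voxelized workspace frames (sparse random occupancy).
    frames = (torch.rand(1, 5, 1, 32, 32, 32) > 0.95).float()
    with torch.no_grad():
        mu, _ = vae.encode(frames.view(-1, 1, 32, 32, 32))
        z_next = predictor(mu.view(1, 5, -1))         # predict next latent vector
        occ_next = torch.sigmoid(vae.decode(z_next))  # decode back to voxel occupancy
    print(occ_next.shape)  # torch.Size([1, 1, 32, 32, 32])
```

In this sketch the VAE would be trained first on individual voxel frames with `vae_loss`, and the GRU afterwards on sequences of encoded latent vectors; the smoothness of the latent space is what makes the predicted vectors decodable into plausible occupancy grids, matching the interpolation argument in the abstract.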