Learning accurate kinematic control of cable-driven surgical robots using data cleaning and Gaussian Process Regression
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Precise control of industrial automation systems with non-linear kinematics due to joint elasticity, variation in cable tensioning, or backlash is challenging, especially in systems that can only be controlled through an interface with an imprecise internal kinematic model. Cable-driven Robotic Surgical Assistants (RSAs) are one example of such an automation system, as they are designed for master-slave teleoperation. We consider the problem of learning a function to modify commands to the inaccurate control interface such that executing the modified command on the system results in a desired state. To achieve this, we must learn a mapping that accounts for the non-linearities in the kinematic chain that are not accounted for by the system's internal model. Gaussian Process Regression (GPR) is a data-driven technique that can estimate this non-linear correction in a task-specific region of state space, but it is sensitive to corruption of training examples due to partial occlusion or lighting changes. In this paper, we extend the use of GPR to learn a non-linear correction for cable-driven surgical robots by using (i) velocity as a feature in the regression and (ii) removing corrupted training observations based on rotation limits and the magnitude of velocity. We evaluate this approach on the Raven II Surgical Robot on the task of grasping foam "damaged tissue" fragments, using the PhaseSpace LED-based motion capture system to track the Raven end-effector. Our main result is a reduction in the norm of the mean position error from 2.6 cm to 0.2 cm and the norm of the mean angular error from 20.6 degrees to 2.8 degrees when correcting commands for a set of held-out trajectories. We also use the learned mapping to achieve a 3.8× speedup over past results on the task of autonomous surgical debridement.
ISSN: 2161-8070, 2161-8089
DOI: 10.1109/CoASE.2014.6899377
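
As a rough illustration of the pipeline described in the summary, the sketch below combines the two ingredients the abstract names: cleaning motion-capture observations using rotation limits and velocity magnitude, then fitting a GPR that uses velocity as a feature to predict the error between commanded and observed poses so commands can be corrected before execution. This is not the authors' implementation; it assumes scikit-learn's GaussianProcessRegressor, a position-only end-effector state, and placeholder thresholds (ROTATION_LIMIT_DEG, MAX_SPEED) chosen only for illustration.

```python
"""Sketch of data cleaning + GPR command correction (assumptions noted above)."""
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder limits for the data-cleaning step; the paper's actual
# rotation limits and velocity threshold are not given here.
ROTATION_LIMIT_DEG = 90.0   # reject observations with implausible rotations
MAX_SPEED = 0.05            # reject observations with implausible speed (m/s)


def clean_observations(commanded, observed, velocities, rotations_deg):
    """Drop training examples likely corrupted by occlusion or lighting.

    A sample is kept only if every tracked rotation angle is within the
    limit and the velocity magnitude is physically plausible.
    """
    speed = np.linalg.norm(velocities, axis=1)
    keep = (np.abs(rotations_deg) < ROTATION_LIMIT_DEG).all(axis=1) & (speed < MAX_SPEED)
    return commanded[keep], observed[keep], velocities[keep]


def fit_correction_model(commanded, observed, velocities):
    """Fit a GPR mapping (commanded state, velocity) -> observed pose error."""
    X = np.hstack([commanded, velocities])   # velocity included as a feature
    Y = observed - commanded                 # systematic error of the kinematic model
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, Y)
    return gp


def corrected_command(gp, desired, velocity):
    """Modify a desired command so the executed pose lands near the target.

    Subtracting the predicted error is one plausible correction scheme;
    the paper's exact formulation may differ.
    """
    x = np.hstack([desired, velocity]).reshape(1, -1)
    predicted_error = gp.predict(x)[0]
    return desired - predicted_error
```

In use, one would collect commanded poses, tracked poses, velocities, and rotation angles from the motion-capture system, pass them through `clean_observations`, fit the model once per task-specific region of state space, and then call `corrected_command` on each new command before sending it to the robot's control interface.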