
Eye-in-Hand Visual Servoing of Concentric Tube Robots

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2018-07, Vol. 3 (3), p. 2315-2321
Main Authors: Kudryavtsev, Andrey V., Chikhaoui, Mohamed Taha, Liadov, Aleksandr, Rougeot, Patrick, Spindler, Fabien, Rabenorosoa, Kanty, Burgner-Kahrs, Jessica, Tamadazte, Brahim, Andreff, Nicolas
Format: Article
Language: English
Description
Summary: This letter deals with the development of a vision-based controller for a continuum robot architecture. More precisely, the controlled robotic structure is a three-tube concentric tube robot (CTR), an emerging paradigm for designing accurate, miniaturized, and flexible endoscopic robots. This approach has grown considerably in recent years, finding applications in numerous surgical disciplines. In contrast to conventional robotic structures, CTR kinematics raise many challenges for optimal control, such as friction, torsion, shear, and nonlinear constitutive behavior. To ensure efficient and reliable control, it is therefore important not only to compute a complete analytical kinematic model but also to close the control loop. To do so, we developed an eye-in-hand visual servoing scheme using a millimeter-sized camera embedded at the robot's tip. Both the kinematic model and the visual servoing controller were successfully validated in simulation with a visual servoing platform and on an experimental setup. The results show satisfactory performance for three-degree-of-freedom positioning and path-following tasks with adaptive gain control.
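For context, a classical image-based visual servoing law with adaptive gain, of the kind such an eye-in-hand controller typically builds on, can be sketched as follows (standard textbook notation, not necessarily the letter's own symbols):

\[
\mathbf{v}_c = -\lambda(\lVert \mathbf{e} \rVert)\, \widehat{\mathbf{L}}_{\mathbf{s}}^{+}\, \mathbf{e},
\qquad \mathbf{e} = \mathbf{s} - \mathbf{s}^{*},
\qquad
\lambda(x) = (\lambda_0 - \lambda_\infty)\, e^{-\frac{\dot{\lambda}_0}{\lambda_0 - \lambda_\infty}\, x} + \lambda_\infty
\]

where \(\mathbf{s}\) and \(\mathbf{s}^{*}\) are the current and desired visual features, \(\widehat{\mathbf{L}}_{\mathbf{s}}^{+}\) is the pseudo-inverse of an estimate of the interaction matrix, \(\mathbf{v}_c\) is the commanded camera velocity (mapped to the CTR actuation velocities through the robot Jacobian), and \(\lambda_0\), \(\lambda_\infty\), \(\dot{\lambda}_0\) set the gain at zero error, at large error, and its initial slope, respectively. Such an adaptive gain keeps convergence fast when the feature error is large while avoiding oscillations near the goal.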
ISSN: 2377-3766
DOI: 10.1109/LRA.2018.2807592