A new approach for detection of weld joint by image segmentation with deep learning-based TransUNet

Bibliographic Details
Published in: The International Journal of Advanced Manufacturing Technology, 2024-10, Vol. 134 (11-12), pp. 5225-5240
Main Authors: Eren, Berkay; Demir, Mehmet Hakan; Mistikoglu, Selcuk
Format: Article
Language: English
Description
Summary: In recent years, seam tracking has become a key focus in autonomous intelligent robotic welding. Accurate detection and recognition of the weld seam are crucial for effective tracking by welding robots. Passive vision technology, favored for its simplicity and cost-effectiveness, is widely used in industry. However, because passive vision systems do not use external light sources, software-based improvements are necessary to achieve high-precision weld joint detection. To overcome this problem, this study recasts weld joint detection in the image as an image segmentation problem and proposes a TransUNet architecture, which combines convolutional and transformer structures, to recover the shape of the weld joint. The proposed method's detection performance was tested under various lighting conditions, and an augmented joint image set was created by applying different contrast values and noise. During training, various loss functions were compared to find the best detection performance, and the proposed model was compared against several alternative architectures. The model's performance was further analyzed by adjusting certain model parameters and modifying the image dataset. Experimental results indicate that the proposed method is robust against different lighting and noise conditions, with TransUNet achieving the highest accuracy rates.
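The augmented image set described in the abstract (varied contrast values plus added noise) could be produced with a transform along the following lines. This is a minimal sketch, not the authors' implementation: the function name, the mid-gray contrast pivot, and the parameter values are assumptions, since the abstract does not give the exact augmentation settings.

```python
import numpy as np

def augment_weld_image(img, contrast=1.0, noise_std=0.0, seed=None):
    """Apply a contrast change and additive Gaussian noise to a
    grayscale weld image given as a float array in [0, 1].

    contrast > 1 stretches intensities, contrast < 1 flattens them;
    noise_std is the standard deviation of zero-mean Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    # Scale pixel intensities around the mid-gray level 0.5
    # (an assumed pivot; other contrast models are possible).
    out = 0.5 + contrast * (img - 0.5)
    # Add zero-mean Gaussian noise to mimic sensor and arc-light disturbance.
    out = out + rng.normal(0.0, noise_std, size=img.shape)
    # Clip back into the valid intensity range.
    return np.clip(out, 0.0, 1.0)

# Example: reduce contrast slightly and add mild noise.
img = np.full((8, 8), 0.7)
aug = augment_weld_image(img, contrast=0.8, noise_std=0.05, seed=0)
```

Sweeping `contrast` and `noise_std` over a grid of values and keeping each result alongside the original would yield an augmented set of the kind used to test robustness to lighting and noise.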
ISSN: 0268-3768, 1433-3015
DOI: 10.1007/s00170-024-14459-x