
DenseFusion-DA2: End-to-End Pose-Estimation Network Based on RGB-D Sensors and Multi-Channel Attention Mechanisms


Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2024-10, Vol. 24 (20), p. 6643
Main Authors: Li, Hanqi, Wan, Guoyang, Li, Xuna, Wang, Chengwen, Zhang, Hong, Liu, Bingyou
Format: Article
Language: English
Description
Summary: 6D pose estimation is a critical technology that enables robots to perceive and interact with their operational environment. However, occlusion causes a loss of local features, which, in turn, restricts estimation accuracy. To address these challenges, this paper proposes DenseFusion-DA2, an end-to-end pose-estimation network based on a multi-channel attention mechanism. Firstly, a multi-channel attention mechanism, designated "DA2Net", was devised using A2-Nets as its foundation. This mechanism is constructed in two steps. In the first step, the essential characteristics are extracted from the global feature space through second-order attention pooling. In the second step, a feature map is generated by integrating position and channel attention. Subsequently, the extracted key features are assigned to each position of the feature map, enhancing both the feature representation capacity and the overall performance. Secondly, the designed attention mechanism is introduced into both the feature-fusion and iterative pose-refinement networks to strengthen the network's capacity to acquire local features, thus improving its overall performance. The experimental results demonstrated that the estimation accuracy of DenseFusion-DA2 on the LineMOD dataset was approximately 3.4% higher than that of DenseFusion. Furthermore, the estimation accuracy surpassed that of PoseCNN, PVNet, SSD6D, and PointFusion by 8.3%, 11.1%, 20.3%, and 23.8%, respectively. The estimation accuracy also showed a significant advantage on the Occluded LineMOD and HR-Vision datasets. This research not only presents a more efficient solution for robot perception but also introduces novel ideas and methods for technological advancements and applications in related fields.
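
The two-step gather-and-distribute pattern described in the summary follows the A2-Nets double-attention design on which DA2Net is said to be built. The PyTorch sketch below illustrates that pattern only; it is not the authors' implementation. The layer names, channel sizes, and residual connection are assumptions, and the paper's further integration of position and channel attention is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleAttention(nn.Module):
    # Illustrative A2-Nets-style block: gather global descriptors by
    # second-order attention pooling, then distribute them to every position.
    def __init__(self, in_channels: int, c_m: int, c_n: int):
        super().__init__()
        self.conv_a = nn.Conv2d(in_channels, c_m, kernel_size=1)   # features to gather
        self.conv_b = nn.Conv2d(in_channels, c_n, kernel_size=1)   # gathering attention
        self.conv_v = nn.Conv2d(in_channels, c_n, kernel_size=1)   # distribution attention
        self.conv_out = nn.Conv2d(c_m, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        a = self.conv_a(x).view(b, -1, h * w)                           # (b, c_m, hw)
        attn_b = F.softmax(self.conv_b(x).view(b, -1, h * w), dim=-1)   # softmax over positions
        attn_v = F.softmax(self.conv_v(x).view(b, -1, h * w), dim=1)    # softmax over descriptors
        # Step 1: second-order attention pooling gathers c_n global descriptors.
        g = torch.bmm(a, attn_b.transpose(1, 2))                        # (b, c_m, c_n)
        # Step 2: assign the gathered descriptors to each spatial position.
        z = torch.bmm(g, attn_v).view(b, -1, h, w)                      # (b, c_m, h, w)
        return x + self.conv_out(z)                                     # residual connection

# Example: refine a fused RGB-D feature map before pose regression
# (feature sizes are hypothetical).
feat = torch.randn(2, 64, 32, 32)
out = DoubleAttention(64, c_m=64, c_n=16)(feat)
print(out.shape)   # torch.Size([2, 64, 32, 32])

In the paper's pipeline, a block of this kind is reportedly inserted into the feature-fusion and iterative pose-refinement stages so that occluded regions can borrow context from the whole feature map rather than relying only on local cues.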
ISSN: 1424-8220
DOI: 10.3390/s24206643