Adversarial Unsupervised Domain Adaptation for Hand Gesture Recognition using Thermal Images

Bibliographic Details
Published in: IEEE Sensors Journal, 2023-02, Vol. 23 (4), p. 1-1
Main Authors: Dayal, Aveen, Aishwarya, M., Abhilash, S., Krishna Mohan, C., Kumar, Abhinav, Cenkeramaddi, Linga Reddy
Format: Article
Language: English
Summary: Hand gesture recognition has a wide range of applications, including in the automotive and industrial sectors, health assistive systems, authentication, and so on. Thermal images are more resistant to environmental changes than red-green-blue (RGB) images for hand gesture recognition. However, one disadvantage of using thermal images for this task is the scarcity of labeled thermal datasets. To tackle this problem, we propose a method that combines unsupervised domain adaptation (UDA) techniques with deep learning (DL) to remove the need for labeled target data in the learning process. There are several families of UDA methods, with adversarial UDA being one of the most common. In this paper, for the first time in this field, we propose a novel adversarial UDA model that uses channel attention and bottleneck layers to learn domain-invariant features across the RGB and thermal domains. The proposed model thus leverages the information in the labeled RGB data to solve the hand gesture recognition task on thermal images. We evaluate the proposed model on two hand gesture datasets, namely Sign Digit Classification and Alphabet Gesture Classification, and compare it to other benchmark models in terms of accuracy, model size, and number of model parameters. Our model outperforms the other state-of-the-art methods on both datasets, achieving 91.32% and 80.91% target test accuracy, respectively.
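
The abstract names the main architectural ingredients: a feature extractor with channel attention and a bottleneck layer, trained adversarially so that features from labeled RGB (source) images and unlabeled thermal (target) images become indistinguishable. Below is a minimal PyTorch sketch of that idea, assuming a DANN-style gradient-reversal objective; the layer sizes, module names, and the squeeze-and-excitation form of the channel attention are illustrative assumptions, not the paper's exact architecture.

# Hypothetical sketch of adversarial UDA with channel attention and a
# bottleneck layer; all dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    # Gradient reversal layer: identity in the forward pass, negated gradient
    # in the backward pass, so the feature extractor learns to fool the
    # domain discriminator.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention (an assumed variant).
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)  # per-channel weights in (0, 1)
        return x * w

class UDAGestureNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(32),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Bottleneck layer compresses features before the two heads.
        self.bottleneck = nn.Sequential(nn.Linear(64, 32), nn.ReLU(inplace=True))
        self.classifier = nn.Linear(32, num_classes)  # gesture labels (source only)
        self.domain_head = nn.Linear(32, 2)           # RGB vs. thermal

    def forward(self, x, lambd=1.0):
        z = self.bottleneck(self.features(x))
        return self.classifier(z), self.domain_head(GradReverse.apply(z, lambd))

In training, the classification loss would be computed only on labeled RGB batches, while the domain loss is computed on both RGB and thermal batches; the reversed gradient pushes the shared features toward domain invariance. For example (thermal frames assumed replicated to three channels):

model = UDAGestureNet()
rgb = torch.randn(4, 3, 64, 64)        # labeled source batch
thermal = torch.randn(4, 3, 64, 64)    # unlabeled target batch
cls_logits, dom_src = model(rgb)
_, dom_tgt = model(thermal)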
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2023.3235379