Cross-Domain Similarity in Domain Adaptation for Human Activity Recognition
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Human Activity Recognition (HAR) is a difficult machine learning problem, even for state-of-the-art deep learning models, due to HAR data's within-domain and cross-domain heterogeneity. Our research addresses the challenge of closed-set domain adaptation in heterogeneous, parameter-based, and transductive transfer learning on HAR datasets. We use a Bidirectional Long Short-Term Memory (BLSTM)-based model that, in addition to training for classification accuracy using only labeled data from the source domain, also jointly trains on the source and unlabeled target datasets to reduce the discrepancy between source and target domains, using cross-domain similarity as an additional loss function. Our work contributes to existing research in the area of domain adaptation for HAR by evaluating the performance of the following cross-domain similarity metrics as loss functions for improving model classification accuracy: 1) Maximum Mean Discrepancy (MMD), which uses feature means to measure similarity between two domains; 2) Kernel Canonical Correlation Analysis (KCCA), which utilizes canonical correlations for similarity determination; and 3) Cosine Similarity, a metric that uses the cosine of the angle between two vectors as a similarity measure. Our results demonstrate that MMD as a cross-domain similarity metric not only outperforms KCCA and Cosine Similarity in domain adaptation, but also results in a mean F1 score improvement of 45% over results where a model is trained solely on the target dataset.
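To make the joint objective described in the abstract concrete, the sketch below shows one common way to combine a source-domain classification loss with a Gaussian-kernel MMD penalty between source and target features from a BLSTM encoder. This is an illustrative assumption, not the authors' implementation: the kernel choice, bandwidth `sigma`, trade-off weight `lam`, and the names `BLSTMClassifier`, `mmd_loss`, and `joint_loss` are all hypothetical and not specified in the record.

```python
import torch
import torch.nn as nn

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of x and rows of y,
    # mapped through a Gaussian (RBF) kernel.
    dists = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-dists / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    # Biased estimate of squared MMD: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)].
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

class BLSTMClassifier(nn.Module):
    # Bidirectional LSTM encoder over sensor windows plus a linear classifier head.
    def __init__(self, n_channels, hidden_size, n_classes):
        super().__init__()
        self.blstm = nn.LSTM(n_channels, hidden_size,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, n_classes)

    def features(self, x):
        # x: (batch, time, channels); concatenate the final hidden states
        # of the forward and backward directions.
        _, (h, _) = self.blstm(x)
        return torch.cat([h[0], h[1]], dim=1)

    def forward(self, x):
        return self.head(self.features(x))

def joint_loss(model, x_src, y_src, x_tgt, lam=1.0):
    # Cross-entropy on labeled source windows plus an MMD penalty that pulls
    # the source and target feature distributions together (unlabeled target).
    feats_src = model.features(x_src)
    feats_tgt = model.features(x_tgt)
    clf = nn.functional.cross_entropy(model.head(feats_src), y_src)
    return clf + lam * mmd_loss(feats_src, feats_tgt)
```

In this sketch, swapping `mmd_loss` for a KCCA- or cosine-similarity-based discrepancy would correspond to the other two metrics compared in the paper, with the rest of the training loop unchanged.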
ISSN: 2161-4407
DOI: 10.1109/IJCNN54540.2023.10191305