Food Intake Detection in the Face of Limited Sensor Signal Annotations
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Food intake detection from wearable sensor signals is essential in applications such as dietary monitoring. Traditional annotation methods for sensor datasets are notably labor-intensive and time-consuming. Addressing this, our paper presents a novel methodology that employs a denoising convolutional autoencoder, trained on an extensive unannotated dataset, to extract discriminative features from the sensor data. A small, selectively annotated subset of this dataset is then used to train a LightGBM classifier, which leverages the features extracted by the autoencoder to distinguish eating from non-eating events. Our research uses the AIM-2 sensor system, which mounts a 3D accelerometer and an optical sensor on eyeglasses to monitor muscle movement and head motion. Data were collected from 17 participants, each wearing the eyeglasses for a varying duration. The LightGBM classifier was validated with a cross-validation scheme in which each participant's data served once as the test set while the data from eight randomly selected other participants formed the training set; repeating this for every participant ensured that each individual's data contributed to the validation. The final performance metrics, averaged across all participants, were an F1 score of 83.71%, a specificity of 88.43%, and a sensitivity of 86.26%. These results underscore the efficacy of our approach, which substantially reduces the need for extensive data annotation.
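
The abstract describes a two-stage pipeline: a denoising convolutional autoencoder is first trained on unannotated sensor windows, and its encoder then feeds a LightGBM classifier trained on a small labeled subset. The record does not disclose the network architecture, window length, channel count, or noise model, so the sketch below is only a rough illustration of that idea; the layer sizes, the 128-sample window, the four input channels, the noise scale, and helper names such as `DenoisingConvAE` and `extract_features` are assumptions, not the authors' implementation.

```python
# Hedged sketch of the two-stage pipeline outlined in the abstract:
# (1) train a denoising 1-D convolutional autoencoder on unannotated windows,
# (2) use its encoder as a feature extractor for a LightGBM classifier.
# Window length, channel count, layer sizes, and noise level are assumptions.
import torch
import torch.nn as nn
from lightgbm import LGBMClassifier

WINDOW = 128       # assumed samples per window
N_CHANNELS = 4     # assumed: 3 accelerometer axes + 1 optical channel

class DenoisingConvAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (WINDOW // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * (WINDOW // 4)), nn.ReLU(),
            nn.Unflatten(1, (32, WINDOW // 4)),
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, N_CHANNELS, kernel_size=5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_autoencoder(unlabeled_batches, epochs=10, noise_std=0.1, lr=1e-3):
    """Denoising objective: corrupt the input, reconstruct the clean window."""
    model = DenoisingConvAE()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for clean in unlabeled_batches:      # tensors of shape (batch, channels, window)
            noisy = clean + noise_std * torch.randn_like(clean)
            opt.zero_grad()
            loss_fn(model(noisy), clean).backward()
            opt.step()
    return model

def extract_features(model, windows):
    """Encode windows into the latent feature space used by the classifier."""
    with torch.no_grad():
        return model.encoder(windows).numpy()

def train_classifier(model, X_labeled, y):
    """Stage 2: LightGBM on the small annotated subset (1 = eating, 0 = non-eating)."""
    clf = LGBMClassifier(n_estimators=200)
    clf.fit(extract_features(model, X_labeled), y)
    return clf
```

For evaluation, the scheme in the abstract (each participant tested once, eight randomly chosen others used for training) resembles a grouped cross-validation; with scikit-learn one might approximate it using `LeaveOneGroupOut` over participant IDs and report F1, sensitivity (recall), and specificity per fold before averaging, as the paper does.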
ISSN: 2836-4392
DOI: 10.1109/ICCE62051.2024.10634684