Identification of Characteristic Points in Multivariate Physiological Signals by Sensor Fusion and Multi-Task Deep Networks
Published in: Sensors (Basel, Switzerland), 2022-03, Vol. 22 (7), p. 2684
Main Authors:
Format: Article
Language: English
Summary: Identification of characteristic points in physiological signals, such as the peak of the R wave in the electrocardiogram and the peak of the systolic wave of the photoplethysmogram, is a fundamental step for the quantification of clinical parameters, such as the pulse transit time. In this work, we present a novel neural architecture, called eMTUnet, to automate point identification in multivariate signals acquired with a chest-worn device. The eMTUnet consists of a single deep network capable of performing three tasks simultaneously: (i) localization in time of characteristic points (labeling task); (ii) evaluation of the quality of signals (classification task); (iii) estimation of the reliability of classification (reliability task). Preliminary results in overnight monitoring showcased the ability to detect characteristic points in the four signals with a recall index of about 1.00, 0.90, 0.90, and 0.80, respectively. The accuracy of the signal quality classification was about 0.90, on average over four different classes. The average confidence of the correctly classified signals, against the misclassifications, was 0.93 vs. 0.52, proving the worthiness of the confidence index, which may better qualify the point identification. From the achieved outcomes, we point out that high-quality segmentation and classification are both ensured, which brings the use of a multi-modal framework, composed of wearable sensors and artificial intelligence, incrementally closer to clinical translation.
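The summary describes a single network with three simultaneous output heads: per-sample point labeling, four-class signal-quality classification, and a reliability (confidence) estimate. The authors' eMTUnet architecture is not reproduced in this record; the following is only a minimal numpy sketch of that three-head structure, with a toy random projection standing in for the network's trunk (all weights and dimensions here are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy multivariate window: 4 channels x 256 samples (stand-in for the
# chest-worn device's multivariate signals)
C, T = 4, 256
x = rng.standard_normal((C, T))

# Shared "encoder": one linear projection per time step -- a placeholder
# for the deep trunk shared by the three tasks
W_enc = 0.1 * rng.standard_normal((16, C))
h = np.tanh(W_enc @ x)                        # (16, T) shared features

# Head 1 -- labeling task: per-sample probability of a characteristic point
w_lab = 0.1 * rng.standard_normal(16)
p_point = sigmoid(w_lab @ h)                  # (T,)

# Head 2 -- classification task: quality over 4 classes, from pooled features
W_cls = 0.1 * rng.standard_normal((4, 16))
p_quality = softmax(W_cls @ h.mean(axis=1))   # (4,) sums to 1

# Head 3 -- reliability task: scalar confidence in the classification
w_rel = 0.1 * rng.standard_normal(16)
confidence = sigmoid(w_rel @ h.mean(axis=1))  # scalar in (0, 1)

print(p_point.shape, p_quality.shape, float(confidence))
```

In a trained model each head would have its own loss (e.g. per-sample and per-window cross-entropy), summed over tasks; the sketch only shows how one shared representation can feed all three outputs.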
ISSN: 1424-8220
DOI: 10.3390/s22072684