Bootstrap each lead’s latent: A novel method for self-supervised learning of multilead electrocardiograms
Published in: Computer Methods and Programs in Biomedicine, 2024-12, Vol. 257, Article 108452
Format: Article
Language: English
Summary: The electrocardiogram (ECG) is one of the most important diagnostic tools for cardiovascular diseases (CVDs). Recent studies show that deep learning models can be trained on labeled ECGs to detect CVDs automatically, assisting cardiologists in diagnosis. However, these models rely heavily on labels during training, and manual labeling is costly and time-consuming. This paper proposes a new self-supervised learning (SSL) method for multilead ECGs, bootstrap each lead's latent (BELL), to reduce this reliance and boost model performance in various tasks, especially when training data are insufficient.
BELL is a variant of the well-known bootstrap your own latent (BYOL). It aims to learn prior knowledge from unlabeled ECGs through pretraining, benefiting downstream tasks, and it exploits the characteristics of multilead ECGs. First, BELL uses a multiple-branch skeleton, which is more effective for processing multilead ECGs. Second, it introduces intra-lead and inter-lead mean square error (MSE) losses to guide pretraining, and fusing them yields better performance. Additionally, BELL inherits the main advantage of BYOL: no negative pairs are used in pretraining, making it more efficient.
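The exact loss formulation is not given in this record; the sketch below is a minimal NumPy illustration, under stated assumptions, of how a BYOL-style normalized-MSE objective could be applied per lead (intra-lead) and across lead pairs (inter-lead) and then fused with a weighting coefficient. The function names (`byol_mse`, `bell_loss`), the array layout, and the weight `alpha` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def byol_mse(p, z):
    """BYOL-style loss: MSE between L2-normalized online prediction p and
    (stop-gradient) target projection z; equivalent to 2 - 2*cos(p, z)."""
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    return np.mean(np.sum((p - z) ** 2, axis=-1))

def bell_loss(online_preds, target_projs, alpha=0.5):
    """Hypothetical fusion of intra-lead and inter-lead objectives.

    online_preds / target_projs: arrays of shape (L, B, D) -- one latent per
    lead (L leads) per sample in the batch (B), with embedding size D.
    The two arrays are assumed to come from two augmented views of the same
    ECGs, processed by the online and target branches respectively.
    """
    L = online_preds.shape[0]
    # Intra-lead term: each lead's online latent predicts the target latent
    # of the same lead under the other augmentation.
    intra = np.mean([byol_mse(online_preds[i], target_projs[i])
                     for i in range(L)])
    # Inter-lead term: each lead's online latent also predicts the target
    # latents of the other leads, encouraging cross-lead consistency.
    inter = np.mean([byol_mse(online_preds[i], target_projs[j])
                     for i in range(L) for j in range(L) if i != j])
    return alpha * intra + (1.0 - alpha) * inter

# Toy usage with random latents: 12 leads, batch of 4, 128-dim embeddings.
rng = np.random.default_rng(0)
online = rng.normal(size=(12, 4, 128))
target = rng.normal(size=(12, 4, 128))
print(bell_loss(online, target))
```

As in BYOL, only positive pairs appear in the objective; the two terms differ only in whether the online and target latents come from the same lead or from different leads.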
In the experiments, BELL surpasses previous works in most cases. More importantly, pretraining improves model performance by 0.69%–8.89% on downstream tasks when only 10% of the training data are available. Furthermore, BELL shows excellent adaptability to uncurated ECG data from a real-world hospital. Only slight performance degradation occurs (…).
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2024.108452