Abstract 5361: Deep learning models capture high dimensional features for cell morphology analysis
Published in: Cancer research (Chicago, Ill.), 2023-04, Vol. 83 (7_Supplement), p. 5361
Format: Article
Language: English
Summary: Many methods for imaging and sorting tumor cells require biomarker labels that perturb cells and may create a selection bias. The Deepcell platform can characterize and sort cells using only brightfield images of unlabeled single cells, thereby enabling more comprehensive and unbiased assessment of tumor cell morphology and heterogeneity. Cells are imaged with a high-speed camera during microfluidic flow, and the brightfield images are analyzed in real time by deep learning models to generate quantitative embeddings: reproducible, multi-dimensional descriptions of cell morphology. A key technical challenge in building this platform was developing an AI model that extracts features from images of diverse human cells without prior knowledge of cell type, sample preparation, or other application-specific information, enabling an exploratory approach.
Deepcell’s ‘Human Foundation Model’ (HFM) is a feature encoder that transforms cell images into 128-dimensional embedding vectors. The model backbone, responsible for extracting image features, is based on the ResNet18 convolutional neural network architecture. Training uses a multi-task, semi-supervised framework that combines the VicReg self-supervised learning objective, which learns image features without labels, with supervised auxiliary models trained on labeled data. These sub-models enable the backbone to recognize specific cell attributes such as granulation, pigmentation, characteristics of malignancy, and attributes of cellular states such as apoptosis and necrosis. We augment the deep learning embeddings with classical computer-vision features, such as area, perimeter, intensity, and texture, to improve model interpretability and accuracy.
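To make the VicReg objective concrete, here is a minimal numpy sketch of its three loss terms (invariance, variance, covariance) applied to two batches of embeddings from augmented views of the same cells. The term weights and the variance target follow the published VicReg defaults; the function name and signature are illustrative, not Deepcell's implementation:

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """VicReg loss on two (n, d) batches of embeddings of augmented views."""
    n, d = z_a.shape
    # Invariance: embeddings of two views of the same cell should match.
    sim = np.mean((z_a - z_b) ** 2)

    # Variance: hinge keeps the std of each embedding dimension above gamma,
    # which prevents the trivial collapsed solution (all embeddings equal).
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))

    # Covariance: penalize off-diagonal covariance so dimensions decorrelate
    # and carry non-redundant morphological information.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return (off_diag ** 2).sum() / d

    var = var_term(z_a) + var_term(z_b)
    cov = cov_term(z_a) + cov_term(z_b)
    return sim_w * sim + var_w * var + cov_w * cov
```

Because no labels appear anywhere in the loss, the backbone can be trained on unlabeled brightfield images; the supervised auxiliary heads mentioned above are trained alongside it with ordinary supervised losses on the subsets where labels exist.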
Here, we describe how the HFM self-supervised backbone was trained, the discriminatory power added by the supervised tasks, and validation of the reproducibility and generalization capabilities of the resulting model. We also demonstrate how the resulting embeddings can be visualized in the Deepcell Cloud software, and how clusters of cells in the embedding space can be related to cell images, interpretable morphological features, and actionable biological characteristics. Potential applications of the Deepcell platform are diverse and promising, including label-free enrichment of malignant cells and discovery of morphologically distinct subpopulations in heterogeneous cancer cell samples.
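The step of augmenting deep embeddings with interpretable morphology features can be sketched as follows. The particular features (area, 4-connected perimeter, mean intensity) and the z-scoring before concatenation are illustrative assumptions, not the platform's actual feature set:

```python
import numpy as np

def classical_features(image, mask):
    """Area, perimeter, and mean intensity of one segmented cell.

    image: 2-D grayscale array; mask: boolean array of the same shape.
    Perimeter is approximated by counting 4-connected boundary pixels.
    """
    area = int(mask.sum())
    # A boundary pixel is inside the mask with at least one 4-neighbor outside.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    mean_intensity = float(image[mask].mean())
    return np.array([area, perimeter, mean_intensity])

def augment_embedding(embedding, features, feat_mean, feat_std):
    """Concatenate a deep embedding with z-scored classical features."""
    return np.concatenate([embedding, (features - feat_mean) / feat_std])
```

Z-scoring puts the hand-crafted features on a scale comparable to the learned embedding dimensions, so that downstream clustering in the combined space is not dominated by large raw values such as pixel area.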
ISSN: 1538-7445
DOI: 10.1158/1538-7445.AM2023-5361