DeepHCS++: Bright-field to fluorescence microscopy image conversion using multi-task learning with adversarial losses for label-free high-content screening
Published in: Medical Image Analysis, 2021-05, Vol. 70, Article 101995
Main Authors: , , ,
Format: Article
Language: English
Summary:
- An image-translation method that transforms a bright-field microscopy image into three different fluorescence images, visualizing dead cells, cell nuclei, and cell cytoplasm, respectively.
- Through multi-task learning across the three correlated tasks, complementary information from the additional sources is captured in shared feature maps, improving performance over previous work.
- Employing an adversarial loss helps the proposed method generate more realistic fluorescence images.
- A live-cell experiment shows that the generated images can be used for live-cell image analysis as well as image-based drug-response analysis.
- The proposed method is the first image-based HCS workflow using a deep learning approach.
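The multi-task objective described above can be sketched as a sum, over the three fluorescence channels, of a pixel-wise reconstruction term plus a weighted adversarial term. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the non-saturating generator loss, and the weight `lambda_adv` are assumptions.

```python
import numpy as np

# The three correlated translation tasks named in the summary.
TASKS = ("dead_cells", "nuclei", "cytoplasm")

def l1_loss(pred, target):
    # Pixel-wise reconstruction term for one fluorescence channel.
    return float(np.mean(np.abs(pred - target)))

def adversarial_loss(d_score_on_fake):
    # Non-saturating generator loss -log D(G(x)), averaged over a
    # patch-wise discriminator score map (values in (0, 1)).
    eps = 1e-8
    return float(-np.mean(np.log(d_score_on_fake + eps)))

def multitask_objective(preds, targets, d_scores, lambda_adv=0.01):
    # Sum reconstruction + weighted adversarial terms over all tasks,
    # so gradients from each task shape the shared feature maps.
    total = 0.0
    for t in TASKS:
        total += l1_loss(preds[t], targets[t])
        total += lambda_adv * adversarial_loss(d_scores[t])
    return total
```

In a pix2pix-style setup each term would be computed on generator outputs and discriminator score maps; here plain arrays stand in for both.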
In this paper, we propose a novel microscopy image translation method that transforms a bright-field microscopy image into three different fluorescence images, visualizing dead (apoptotic) cells, cell nuclei, and cell cytoplasm, respectively. These biomarkers are commonly used in high-content drug screening to analyze drug response. The main contribution of the proposed work is the automatic generation of three fluorescence images from a conventional bright-field image, which can greatly reduce the time-consuming and laborious tissue-preparation process and improve the throughput of screening. Our method requires only pairs of a single bright-field image and its corresponding fluorescence images to train an end-to-end deep convolutional neural network; from such pairs it produces synthetic fluorescence images comparable in accuracy to real fluorescence microscopy images. The model uses multi-task learning with adversarial losses to generate more accurate and realistic microscopy images. We assess the efficacy of the proposed method on real bright-field and fluorescence microscopy image datasets from patient-derived glioblastoma samples, and validate its accuracy with several quality metrics: cell number correlation (CNC), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), cell viability correlation (CVC), error maps, and R² correlation.
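Two of the quality metrics named above can be sketched directly in numpy. This is an illustrative sketch only: the helper names are assumptions, and CNC/CVC are assumed here to be Pearson correlations between per-sample measurements (e.g. cell counts or viability values) derived from real versus synthetic images, as is typical for such metrics.

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    # Peak signal-to-noise ratio (dB) between a synthetic and a real
    # fluorescence image with intensities in [0, data_range].
    mse = np.mean((np.asarray(pred) - np.asarray(target)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))

def pearson_r(a, b):
    # Pearson correlation coefficient; CNC and CVC would apply this to
    # paired per-sample cell counts / viability values.
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])
```

SSIM involves local luminance, contrast, and structure comparisons and is best taken from an established implementation (e.g. scikit-image's `structural_similarity`) rather than re-derived here.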
ISSN: 1361-8415, 1361-8423
DOI: 10.1016/j.media.2021.101995