Learning to predict RNA sequence expressions from whole slide images with applications for search and classification

Bibliographic Details
Published in: Communications biology 2023-03, Vol. 6 (1), p. 304, Article 304
Main Authors: Alsaafin, Areej, Safarpoor, Amir, Sikaroudi, Milad, Hipp, Jason D., Tizhoosh, H. R.
Format: Article
Language: English
Description
Summary: Deep learning methods are widely applied in digital pathology to address clinical challenges such as prognosis and diagnosis. As one of the most recent applications, deep models have also been used to extract molecular features from whole slide images. Although molecular tests carry rich information, they are often expensive, time-consuming, and require additional tissue sampling. In this paper, we propose tRNAsformer, an attention-based topology that simultaneously learns to predict bulk RNA-seq from an image and to represent the whole slide image of a glass slide. tRNAsformer uses multiple instance learning to solve a weakly supervised problem in which pixel-level annotations are not available for an image. We conducted several experiments and achieved better performance and faster convergence than state-of-the-art algorithms. The proposed tRNAsformer can serve as a computational pathology tool to facilitate a new generation of search and classification methods by combining the tissue morphology and the molecular fingerprint of biopsy samples. tRNAsformer enables prediction of bulk RNA-seq from histological slides using machine learning approaches.
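
In this setting, a whole slide image is treated as a bag of patch embeddings with only a slide-level label (the bulk RNA-seq profile), and attention pools the bag into a single slide vector that is regressed onto gene expression. The following is a minimal sketch of such an attention-based multiple instance learning regressor in PyTorch; the class name, gated-attention pooling, embedding sizes, and gene count are illustrative assumptions, not the authors' actual tRNAsformer architecture.

```python
import torch
import torch.nn as nn

class AttentionMILRegressor(nn.Module):
    """Illustrative attention-pooled MIL regressor (hypothetical, not tRNAsformer).

    A slide is a bag of patch embeddings with no pixel-level labels; attention
    weights aggregate the bag into one slide vector, which is regressed onto a
    bulk gene-expression profile.
    """

    def __init__(self, embed_dim: int = 1024, hidden_dim: int = 256,
                 n_genes: int = 20000):  # all sizes are assumed placeholders
        super().__init__()
        # Gated attention scoring: one scalar score per patch.
        self.attn_v = nn.Linear(embed_dim, hidden_dim)
        self.attn_u = nn.Linear(embed_dim, hidden_dim)
        self.attn_w = nn.Linear(hidden_dim, 1)
        # Regression head mapping the slide embedding to gene expression.
        self.head = nn.Linear(embed_dim, n_genes)

    def forward(self, patches: torch.Tensor):
        # patches: (n_patches, embed_dim), one bag per slide.
        scores = self.attn_w(torch.tanh(self.attn_v(patches)) *
                             torch.sigmoid(self.attn_u(patches)))  # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)
        slide_vec = (weights * patches).sum(dim=0)      # slide-level embedding
        return self.head(slide_vec), slide_vec          # expression pred, embedding

# Example: 500 patch embeddings from one slide; weak label = bulk RNA-seq vector.
model = AttentionMILRegressor()
bag = torch.randn(500, 1024)
pred_expr, slide_embedding = model(bag)
loss = nn.functional.mse_loss(pred_expr, torch.randn(20000))  # placeholder target
```

Returning the slide-level vector alongside the predicted expression is what supports the search and classification use case described in the abstract: slides can be indexed and compared by this embedding in addition to the predicted molecular profile.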
ISSN: 2399-3642
DOI: 10.1038/s42003-023-04583-x