
Multi-Prototype Few-shot Learning in Histopathology

Bibliographic Details
Main Authors: Deuschel, Jessica, Firmbach, Daniel, Geppert, Carol I., Eckstein, Markus, Hartmann, Arndt, Bruns, Volker, Kuritcyn, Petr, Dexl, Jakob, Hartmann, David, Perrin, Dominik, Wittenberg, Thomas, Benz, Michaela
Format: Conference Proceeding
Language: English
Description
Summary: The ability to adapt quickly to a new task or data distribution based on only a few examples is a central challenge in AI and highly relevant for various domains. In digital pathology, slight variations in the scanning and staining process can lead to a distribution shift that causes significant performance degradation of classical neural networks on tasks like tissue cartography, where reliable classification is essential. To overcome this problem, we propose a few-shot learning technique, specifically a k-means extension of Prototypical Networks, to train a highly flexible model that adapts to new, unseen scanner data based on only a few examples. We evaluate our approach on a multi-scanner database comprising a total of 356 annotated whole slide images, digitized by a base scanner for training and five additional scanners for evaluation. We verify our method's effectiveness by comparing it to a classically trained benchmark and to Prototypical Networks, both trained on the same data. A particular focus is the investigation of the support set used for adapting the prototypes, in order to provide recommended actions for digital pathology. The best results are obtained by employing multiple prototypes per class, calculated from a distributed support set, together with domain-specific data augmentation. This yields 86.9-88.2% accuracy on a seven-class tissue classification task on unseen, shifted data from the automated scanners, which is almost equal to the 89.2% accuracy on the in-distribution data.
ISSN:2473-9944
DOI:10.1109/ICCVW54120.2021.00075
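
The abstract above describes a k-means extension of Prototypical Networks: instead of a single mean embedding per class, several prototypes per class are obtained by clustering the support-set embeddings, and queries are assigned to the class of their nearest prototype. Below is a minimal NumPy sketch of that multi-prototype step under assumptions not stated in the record: the embedding backbone is treated as a black box, Euclidean distance is used, k = 3 prototypes per class, and all function names and toy data are illustrative only, not the authors' implementation.

```python
import numpy as np

def kmeans(x, k, iters=10, seed=0):
    """Plain k-means on embeddings x of shape (n, d); returns k centroids."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each embedding to its nearest centroid
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):  # keep the old centroid if a cluster is empty
                centroids[j] = x[assign == j].mean(axis=0)
    return centroids

def build_prototypes(support_emb, support_lbl, k=3):
    """Compute multiple prototypes per class by clustering support embeddings."""
    protos, proto_lbl = [], []
    for c in np.unique(support_lbl):
        xc = support_emb[support_lbl == c]
        kc = min(k, len(xc))              # a class may have fewer shots than k
        protos.append(kmeans(xc, kc))
        proto_lbl.extend([c] * kc)
    return np.concatenate(protos), np.array(proto_lbl)

def classify(query_emb, protos, proto_lbl):
    """Label each query with the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return proto_lbl[dists.argmin(axis=1)]

# Toy usage: in practice the embeddings would come from the trained backbone,
# and the support set would be a few annotated patches from the new scanner.
rng = np.random.default_rng(1)
support = rng.normal(size=(35, 64))      # e.g. 5 shots for each of 7 tissue classes
labels = np.repeat(np.arange(7), 5)
queries = rng.normal(size=(10, 64))
protos, proto_lbl = build_prototypes(support, labels, k=3)
print(classify(queries, protos, proto_lbl))
```

The sketch only covers the adaptation step; how the backbone is trained episodically and how the distributed support set and data augmentation are chosen follow the paper itself.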