
Test-time bi-directional adaptation between image and model for robust segmentation



Bibliographic Details
Published in: Computer Methods and Programs in Biomedicine, 2023-05, Vol. 233, Article 107477
Main Authors: Huang, Xiaoqiong, Yang, Xin, Dou, Haoran, Huang, Yuhao, Zhang, Li, Liu, Zhendong, Yan, Zhongnuo, Liu, Lian, Zou, Yuxin, Hu, Xindi, Gao, Rui, Zhang, Yuanji, Xiong, Yi, Xue, Wufeng, Ni, Dong
Format: Article
Language: English
Description
Summary:
• An effective test-time bi-directional adaptation strategy is proposed to achieve robust segmentation.
• A window-based order statistics alignment module is presented to adapt appearance-agnostic test images to existing learned models.
• An augmented self-supervised learning scheme is developed to adapt the segmentation model to images with unknown appearance shifts.
• The method generalizes well across multi-vendor/center datasets.

Deep learning models often suffer performance degradation when deployed in real clinical environments due to appearance shifts between training and testing images. Most existing methods rely on training-time adaptation, which typically requires target-domain samples during the training phase. However, such solutions are constrained by the training process and cannot guarantee accurate predictions on test samples with unforeseen appearance shifts; moreover, it is impractical to collect target samples in advance. In this paper, we provide a general method for making existing segmentation models robust to samples with unknown appearance shifts when deployed in daily clinical practice. Our proposed test-time bi-directional adaptation framework combines two complementary strategies. First, our image-to-model (I2M) adaptation strategy adapts appearance-agnostic test images to the learned segmentation model using a novel plug-and-play statistical alignment style transfer module during testing. Second, our model-to-image (M2I) adaptation strategy adapts the learned segmentation model to test images with unknown appearance shifts: an augmented self-supervised learning module fine-tunes the learned model with proxy labels that the model itself generates, a procedure adaptively constrained by our novel proxy consistency criterion. Together, the complementary I2M and M2I strategies achieve robust segmentation against unknown appearance shifts using existing deep-learning models.
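The I2M idea, restyling a test image so its intensity statistics match those seen in training, can be illustrated with a simple moment-matching transform. This is a deliberately simplified sketch in NumPy: it aligns only the global mean and standard deviation, whereas the paper's module aligns order statistics within local windows; `src_mean` and `src_std` are hypothetical placeholders for statistics gathered from the training distribution.

```python
import numpy as np

def align_statistics(test_img: np.ndarray, src_mean: float, src_std: float) -> np.ndarray:
    """Shift/scale a test image so its mean and std match source statistics.

    Global moment matching only — a simplification of the paper's
    window-based order statistics alignment.
    """
    t_mean, t_std = test_img.mean(), test_img.std()
    aligned = (test_img - t_mean) / (t_std + 1e-8)  # standardize the test image
    return aligned * src_std + src_mean             # re-style with source stats

# Toy usage: a bright, high-contrast "test" image mapped to source statistics.
rng = np.random.default_rng(0)
test = rng.normal(loc=180.0, scale=40.0, size=(64, 64))
out = align_statistics(test, src_mean=100.0, src_std=20.0)
print(round(out.mean(), 1), round(out.std(), 1))  # → 100.0 20.0
```

Because the transform is affine and applied only to the input, it is plug-and-play in the sense that the segmentation model's weights stay untouched.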
Extensive experiments on 10 datasets containing fetal ultrasound, chest X-ray, and retinal fundus images demonstrate that our proposed method achieves promising robustness and efficiency in segmenting images with unknown appearance shifts. To address the appearance shift problem in clinically acquired medical images, we provide robust segmentation using two complementary strategies. Our solution is general and amenable to deployment in clinical settings.
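The M2I direction, generating proxy labels from the model's own predictions under augmentation and gating them with a consistency check before fine-tuning, can be sketched as follows. All names here (`predict`, the augmentations, the mean pairwise Dice score, the threshold `tau`) are illustrative assumptions standing in for the paper's augmented self-supervised module and proxy consistency criterion, not its actual implementation.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return float((2 * inter + eps) / (a.sum() + b.sum() + eps))

def proxy_label_with_consistency(image, predict, augments, tau=0.9):
    """Build a proxy label by majority vote over augmented predictions,
    and accept it only if the mean pairwise Dice agreement exceeds tau."""
    preds = [predict(aug(image)) for aug in augments]
    pairs = [(i, j) for i in range(len(preds)) for j in range(i + 1, len(preds))]
    consistency = float(np.mean([dice(preds[i], preds[j]) for i, j in pairs]))
    proxy = np.mean(preds, axis=0) > 0.5  # majority vote over augmentations
    return (proxy, consistency) if consistency >= tau else (None, consistency)

# Toy usage: a threshold "segmenter" and mild intensity-shift augmentations.
rng = np.random.default_rng(1)
img = rng.uniform(size=(32, 32))
img[8:24, 8:24] += 1.0                       # bright foreground square
predict = lambda x: x > 0.8                  # stand-in for a trained model
augments = [lambda x: x, lambda x: x * 1.05, lambda x: x + 0.02]
proxy, score = proxy_label_with_consistency(img, predict, augments)
```

In the paper's framework the accepted proxy labels would then supervise a fine-tuning step on the test image; the consistency gate plays the role of the proxy consistency criterion, suppressing self-training when augmented predictions disagree.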
ISSN: 0169-2607
ISSN: 1872-7565
DOI: 10.1016/j.cmpb.2023.107477