Poster in Workshop: Medical Imaging meets NeurIPS
Learn Complementary Pseudo-label for Source-free Domain Adaptive Medical Segmentation
Wanqing Xie · Mingzhen Li · Jinzhou Wu · Yongsong HUANG · Yuanyuan Bu · Jane You · Xiaofeng Liu
Source-free unsupervised domain adaptation (SFUDA) has become a predominant solution for transferring the knowledge inherent in model parameters trained on a privately labeled source domain to an unlabeled target domain. Without access to the labeled source data, unfortunately, conventional SFUDA approaches can easily fall into the pitfall of "winner takes all", i.e., in a class-imbalanced task the majority class dominates the predictions of the deep segmentation model while the minority classes are overlooked. In this work, we propose a complementary self-training (CST) approach for SFUDA segmentation to overcome these challenges, since it can be much easier to exclude classes with low probabilities than to predict the correct one. Specifically, we resort to the complementary pseudo-label, which is easier to learn and keeps the noise level low. Its superior performance is evidenced in a CT-to-MR cardiac anatomical segmentation task with thorough quantitative evaluation.
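To illustrate the complementary pseudo-label idea described above, here is a minimal PyTorch sketch: each pixel is tagged with a class it is very unlikely to belong to (the lowest-probability class under the source-trained model), and a negative-learning loss pushes that probability down. The function names, the lowest-probability selection rule, and the exact loss form are illustrative assumptions, not the paper's actual CST formulation.

```python
import torch

def complementary_pseudo_labels(logits, k=1):
    """Mark the k least-probable classes per pixel as complementary labels.

    logits: (B, C, H, W) predictions of the source-trained segmentation model.
    Returns a boolean mask (B, C, H, W); True means "this pixel is NOT this class".
    """
    probs = torch.softmax(logits, dim=1)
    _, low_idx = probs.topk(k, dim=1, largest=False)  # least likely classes
    comp_mask = torch.zeros_like(probs, dtype=torch.bool)
    comp_mask.scatter_(1, low_idx, torch.ones_like(low_idx, dtype=torch.bool))
    return comp_mask

def complementary_loss(logits, comp_mask, eps=1e-6):
    """Negative-learning loss: minimise -log(1 - p_c) for each complementary class c,
    i.e. suppress the probability of classes the pixel is known not to be."""
    probs = torch.softmax(logits, dim=1)
    loss = -torch.log(1.0 - probs + eps)
    return loss[comp_mask].mean()

# Toy usage: 2 images, 4 anatomical classes, 8x8 segmentation maps.
logits = torch.randn(2, 4, 8, 8, requires_grad=True)
mask = complementary_pseudo_labels(logits.detach(), k=1)
loss = complementary_loss(logits, mask)
loss.backward()
```

Because a complementary label only asserts what a pixel is not, it is far less likely to be wrong than an ordinary pseudo-label on minority classes, which is the intuition behind the low noise level claimed above.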