Poster
Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection
Geng Yu · Jianing Zhu · Jiangchao Yao · Bo Han
Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications. Recent advances in CLIP-based OOD detection have shown promising results by regularizing prompt tuning with OOD features extracted from ID data. However, the irrelevant context mined from ID data can be spurious due to inaccurate foreground-background decomposition, which limits OOD detection performance. In this work, we propose a novel framework, Self-Calibrated Tuning (SCT), to mitigate this problem and achieve effective OOD detection with only the given few-shot ID data. Specifically, SCT introduces modulating factors on each of the two components of the original learning objective. During training, it adaptively directs the model's attention toward the suitable task for data with different prediction uncertainty, thereby calibrating the influence of OOD regularization; the scheme is compatible with many prompt-tuning-based OOD detection methods. Extensive experiments and analyses have been conducted to characterize and verify the effectiveness of the proposed SCT.
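The abstract describes modulating factors that reweight the ID classification loss and the OOD regularization term according to each sample's prediction uncertainty. The following is a minimal, hypothetical sketch of such a self-calibrated objective; the function name `sct_loss`, the focal-style factors `(1 - conf)**gamma` and `conf**gamma`, and the uniform-target OOD regularizer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sct_loss(logits_id, logits_ood, labels, gamma=1.0):
    """Hypothetical sketch of an uncertainty-calibrated objective.

    Combines the standard ID classification loss with an OOD
    regularization term; modulating factors derived from per-sample
    prediction confidence decide which component dominates
    (assumption: confidence on the true class proxies uncertainty).
    """
    n = len(labels)
    probs = softmax(logits_id)
    conf = probs[np.arange(n), labels]          # confidence on true class

    # ID task: cross-entropy on labeled few-shot data
    ce = -np.log(conf + 1e-12)

    # OOD regularization: push surrogate-OOD predictions toward uniform
    log_p_ood = np.log(softmax(logits_ood) + 1e-12)
    ood_reg = -log_p_ood.mean(axis=-1)

    # modulating factors on the two components of the objective:
    # uncertain samples emphasize ID learning, confident ones OOD reg
    w_id = (1.0 - conf) ** gamma
    w_ood = conf ** gamma
    return float(np.mean(w_id * ce + w_ood * ood_reg))
```

In this sketch, a sample the model already classifies confidently contributes mostly to the OOD regularizer, while an uncertain sample contributes mostly to the classification loss, which mirrors the paper's stated idea of directing attention to the suitable task per sample.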