Poster in Workshop: 5th Workshop on Self-Supervised Learning: Theory and Practice
Uncovering the Risk of Model Collapsing in Self-Supervised Continual Test-time Adaptation
Trung Hieu Hoang · MinhDuc Vo · Minh Do
Test-time adaptation (TTA) approaches have emerged as a promising solution to continual domain shift in machine learning. However, updating model parameters at test time via self-supervised learning (SSL) on unlabeled test data can open the door to unforeseen security vulnerabilities. This work highlights two such scenarios. The first arises in a recurring TTA setting, where an extensive testing stream reveals the risk of lifelong performance degradation of a TTA model over rounds of adaptation. The second is Reusing Incorrect Prediction (RIP), a surprisingly simple scheme in which attackers intentionally submit malicious samples to silently degrade TTA model performance. We extensively benchmark the most recent continual TTA approaches against these risks, provide theoretical insights into the phenomenon, and propose best practices that can strengthen the robustness of SSL in future continual TTA systems.
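The failure mode behind RIP can be illustrated with a minimal toy sketch (this is an illustrative assumption, not the paper's actual protocol or model): a 1-D logistic classifier adapts at test time by fitting its own hard pseudo-labels, and an attacker repeatedly submits one borderline sample the model already mislabels. Each self-supervised update reinforces the incorrect prediction until even clearly clean inputs are misclassified.

```python
import math

# Hypothetical toy setup: true decision rule is "class 1 if x > 0",
# and the initial model is slightly biased (b = -0.3).
w, b, lr = 1.0, -0.3, 0.5

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def adapt(x):
    """One self-supervised TTA step: fit the model to its own pseudo-label."""
    global w, b
    p = predict(x)
    pseudo = 1.0 if p > 0.5 else 0.0   # model reuses its own (possibly wrong) prediction
    grad = p - pseudo                  # cross-entropy gradient w.r.t. the logit
    w -= lr * grad * x
    b -= lr * grad

clean_before = predict(1.0)            # clearly class 1; initially predicted correctly

# Attack: x = 0.2 is truly class 1, but the biased model predicts class 0.
# Every adaptation round on this single sample pushes the boundary further off.
for _ in range(50):
    adapt(0.2)

clean_after = predict(1.0)
print(clean_before > 0.5, clean_after < 0.5)  # True True
```

The point of the sketch is that no label flipping or gradient access is needed: the attacker only submits inputs, and the SSL objective itself amplifies the model's existing mistake.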