Poster Session
in
Workshop: Scientific Methods for Understanding Neural Networks
Hiding in a Plain Sight: Out-of-Distribution Data in the Logit Space Embeddings
Vangjush Komini · Sarunas Girdzijauskas
Out-of-distribution (OOD) data are detrimental to the performance of deep learning (DL) classifiers, which has led to extensive research on their detection. Current state-of-the-art OOD detection methods employ a scoring technique designed to assign lower scores to OOD samples than to in-distribution (ID) ones. Nevertheless, these approaches lack insight into how OOD and ID data are actually arranged in the latent space, instead making an implicit assumption about their inherent separation. As a result, most OOD detection methods rely on complicated and hard-to-validate scoring techniques. This study conducts a thorough analysis of the logit embedding landscape, revealing that ID and OOD data each exhibit a distinct trend. Specifically, we demonstrate that OOD data tends to reside near the center of the logit space. In contrast, ID data tends to be situated farther from the center, predominantly in the positive regions of the logit space, forming class-wise clusters along the orthogonal axes that span the logit space. This study highlights the critical role of the DL classifier in differentiating between ID and OOD logits.
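The geometric picture described in the abstract suggests a simple distance-from-center score. The following is a minimal illustrative sketch (not the authors' method) using synthetic logits that match the described geometry: ID samples have one strongly positive class coordinate, while OOD samples cluster near the origin; the magnitudes and noise levels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, n = 10, 500

# Hypothetical ID logits: a strong positive activation on the true-class
# axis plus small noise, forming class-wise clusters along orthogonal axes
# far from the origin (as the study describes).
id_logits = rng.normal(0.0, 0.5, size=(n, num_classes))
id_logits[np.arange(n), rng.integers(0, num_classes, n)] += 8.0

# Hypothetical OOD logits: no class fires strongly, so samples sit
# near the center of the logit space.
ood_logits = rng.normal(0.0, 0.5, size=(n, num_classes))

# Score each sample by its distance from the center (L2 norm of logits);
# under the described geometry, ID samples receive higher scores.
id_scores = np.linalg.norm(id_logits, axis=1)
ood_scores = np.linalg.norm(ood_logits, axis=1)

print(id_scores.mean() > ood_scores.mean())
```

Thresholding such a norm-based score would separate the two populations if the logit-space geometry behaves as the study reports.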