

Poster in Workshop: UniReps: Unifying Representations in Neural Models

Task-Relevant Covariance from Manifold Capacity Theory Improves Robustness in Deep Networks

William Yang · Chi-Ning Chou · SueYeon Chung

Keywords: [ Deep learning ] [ Out-of-distribution generalization ] [ Neural manifolds ] [ Domain adaptation ] [ Representational geometry ] [ Deep neural networks ]


Abstract:

Analysis of high-dimensional representations in neuroscience and deep learning traditionally places equal importance on all points in a representation, potentially leading to significant information loss. Recent advances in manifold capacity theory offer a principled framework for identifying the computationally relevant points on neural manifolds. In this work, we introduce the concept of task-relevant class covariance to identify directions in representation space that support class discriminability. We demonstrate that scaling representations along these directions markedly improves simulated accuracy under distribution shift. Building on these insights, we propose AnchorBlocks, architectural modules that use task-relevant class covariance to align representations with a task-relevant eigenspace. By appending one AnchorBlock onto ResNet18, we achieve competitive performance on a standard domain adaptation benchmark (CIFAR-10C) against much larger robustness-promoting architectures. Our findings provide insight into neural population geometry and offer methods to interpret and build robust deep learning systems.
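The abstract does not specify how the task-relevant eigenspace is computed or how an AnchorBlock is implemented; the sketch below is only a minimal illustration of the general idea of rescaling representations along class-discriminative covariance directions. The function name `task_relevant_scaling`, the use of a between-class covariance, and the parameters `top_k` and `scale` are assumptions for illustration, not the authors' method.

```python
import torch

def task_relevant_scaling(feats: torch.Tensor, labels: torch.Tensor,
                          top_k: int = 32, scale: float = 2.0) -> torch.Tensor:
    """Hypothetical sketch: amplify representations along a task-relevant eigenspace.

    feats:  (N, D) representations from some layer of a network.
    labels: (N,)   integer class labels.
    """
    classes = labels.unique()
    global_mean = feats.mean(dim=0)

    # Between-class covariance as a stand-in for a "task-relevant class covariance".
    class_means = torch.stack([feats[labels == c].mean(dim=0) for c in classes])
    centered = class_means - global_mean
    between_cov = centered.T @ centered / len(classes)          # (D, D)

    # Eigendirections with the largest eigenvalues span the assumed task-relevant eigenspace.
    _, eigvecs = torch.linalg.eigh(between_cov)                  # eigenvalues ascending
    top_dirs = eigvecs[:, -top_k:]                               # (D, top_k)

    # Rescale the component of each representation lying in that eigenspace.
    proj = feats @ top_dirs                                      # (N, top_k) coordinates
    return feats + (scale - 1.0) * proj @ top_dirs.T
```

In this reading, an AnchorBlock-style module would apply a transformation of this kind inside the network (e.g., after a ResNet18 feature stage), but the actual architecture and the capacity-theoretic weighting of points are described in the paper, not here.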
