Poster in Workshop: Attributing Model Behavior at Scale (ATTRIB)
Evaluating Sparse Autoencoders on Targeted Concept Removal Tasks
Adam Karvonen · Can Rager · Samuel Marks · Neel Nanda
Sparse Autoencoders (SAEs) are an interpretability technique aimed at decomposing neural network activations into interpretable units. However, a major bottleneck for SAE development has been the lack of high-quality performance metrics, with prior work largely relying on unsupervised proxies. In this work, we introduce a family of evaluations based on SHIFT, a downstream task from Marks et al. that measures an SAE's ability to disentangle concepts and remove spurious correlations. To create an automated evaluation, we extend SHIFT by replacing human judgment with LLMs. Additionally, we introduce the Targeted Probe Perturbation (TPP) metric, which quantifies an SAE's ability to disentangle similar concepts, effectively scaling SHIFT to a wider range of datasets. We apply both SHIFT and TPP to multiple open-source models, demonstrating that these metrics reliably differentiate between various SAE training hyperparameters and architectures.
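To make the TPP idea concrete, the sketch below shows one plausible way to aggregate probe results into a single score: ablating the SAE latents selected for concept i should lower the accuracy of the probe for concept i while leaving probes for other concepts largely intact. The function name `tpp_score`, the input matrix `acc_drop`, and the exact aggregation (targeted drop minus mean off-target drop) are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def tpp_score(acc_drop: np.ndarray) -> float:
    """Illustrative TPP-style disentanglement score (assumed aggregation).

    acc_drop[i, j] is the drop in accuracy of the linear probe for concept j
    after ablating the SAE latents selected for concept i. A well-disentangled
    SAE produces large diagonal entries (the targeted probe degrades) and
    small off-diagonal entries (other probes are unaffected).
    """
    n = acc_drop.shape[0]
    targeted = np.diag(acc_drop)                             # effect on the intended concept
    off_target = (acc_drop.sum(axis=1) - targeted) / (n - 1)  # mean effect on other concepts
    return float(np.mean(targeted - off_target))

# Toy example with 3 concepts: ablating latents for concept i mostly hurts probe i.
acc_drop = np.array([
    [0.30, 0.02, 0.01],
    [0.03, 0.25, 0.02],
    [0.01, 0.04, 0.28],
])
print(f"TPP score: {tpp_score(acc_drop):.3f}")  # 0.255
```

Under this reading, higher scores indicate more targeted interventions, which is what lets the metric separate SAE architectures and training hyperparameters without any human judgment in the loop.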