

Poster Session in Workshop: Scientific Methods for Understanding Neural Networks

Training Neural Networks for Modularity aids Interpretability

Satvik Golechha · Dylan Cope · Nandi Schoots

Sun 15 Dec 11:20 a.m. PST — 12:20 p.m. PST

Abstract:

One approach to improving network interpretability is clusterability, i.e., splitting a model into disjoint clusters that can be studied independently. We find that pretrained models are highly unclusterable, so we instead train models to be more modular using an "enmeshment loss" that encourages the formation of non-interacting clusters. Using automated interpretability measures, we show that our method finds clusters that learn different, disjoint, and smaller circuits for CIFAR-10 labels. Our approach offers a promising direction for making neural networks easier to interpret and thereby control.
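The abstract does not spell out the form of the enmeshment loss. As a rough illustration only, the sketch below assumes a common realization of the idea: an L1 penalty on weights that connect neurons assigned to different clusters, with fixed integer cluster labels per neuron. The name enmeshment_loss, the shapes, and the fixed assignments are all assumptions for this sketch, not the authors' implementation.

    import torch
    import torch.nn as nn

    def enmeshment_loss(weight, in_clusters, out_clusters):
        # weight: (out_dim, in_dim) matrix of a linear layer
        # in_clusters / out_clusters: integer cluster label per neuron
        # Mask is True where an edge connects neurons in different clusters.
        cross = out_clusters.unsqueeze(1) != in_clusters.unsqueeze(0)
        # L1-penalize only the cross-cluster weights; within-cluster weights
        # are unconstrained, pushing the network toward non-interacting clusters.
        return weight.abs()[cross].sum()

    # Usage: add the penalty to the ordinary task loss during training.
    layer = nn.Linear(128, 64)
    in_c = torch.randint(0, 4, (128,))   # 4 clusters, fixed assignment (assumed)
    out_c = torch.randint(0, 4, (64,))
    penalty = enmeshment_loss(layer.weight, in_c, out_c)

How the cluster assignments themselves are chosen (fixed in advance, learned, or derived from a clustering of the pretrained weights) is a separate design choice that this sketch does not cover.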
