

Poster

DisCEdit: Model Editing by Identifying Discriminative Components

Chaitanya Murti · Chiranjib Bhattacharyya

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Model editing is a growing area of research that is particularly valuable in contexts where modifying key model components, such as neurons or filters, can significantly impact a model's performance. The key challenge lies in identifying the components most important to the model's predictions. We apply model editing to two active areas of research: structured pruning and selective class forgetting. In this work, we adopt a distributional approach to identifying important components, leveraging the recently proposed "discriminative filters hypothesis", which states that well-trained (convolutional) models possess discriminative filters that are essential to prediction. To do so, we define discriminative ability in terms of the Bayes error rate associated with the feature distributions, which is equivalent to computing the Total Variation (TV) distance between the distributions. Because computing the TV distance is intractable, we derive novel witness function-based lower bounds on the TV distance that require no assumptions on the underlying distributions; these bounds generalize prior work such as TVSPrune, which relied on unrealistic Gaussianity assumptions on the feature distributions. With these bounds, we discover critical subnetworks responsible for classwise predictions, and we derive DisCEdit-SP and DisCEdit-U, algorithms for structured pruning (requiring no access to the training data or loss function) and selective class forgetting, respectively. Applying DisCEdit-U to selective class forgetting on models trained on CIFAR-10 and CIFAR-100, we reduce accuracy on a single class by over 80% while improving test accuracy on the remaining classes by 1.2%. On structured pruning, we obtain 40.8% sparsity on ResNet50 on ImageNet with only a 2.6% drop in accuracy after minimal fine-tuning.
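The witness-function lower bound mentioned in the abstract can be illustrated with the standard variational characterization of TV distance, TV(P, Q) = sup over functions f with values in [0, 1] of |E_P[f] − E_Q[f]|: plugging in any fixed witness f yields a valid lower bound. The sketch below is illustrative only — the Gaussian toy data and threshold witness are our own assumptions, not the paper's construction or its actual bounds on feature distributions.

```python
import numpy as np

def tv_lower_bound(samples_p, samples_q, witness):
    """Empirical lower bound on TV(P, Q) from a fixed witness function.

    For any f mapping to [0, 1], |E_P[f] - E_Q[f]| <= TV(P, Q), so the
    empirical mean difference under a chosen witness is a (noisy) lower bound.
    """
    fp = np.clip(witness(samples_p), 0.0, 1.0)
    fq = np.clip(witness(samples_q), 0.0, 1.0)
    return float(abs(fp.mean() - fq.mean()))

# Toy example (hypothetical, not from the paper): features from two classes
# modeled as N(0, 1) and N(2, 1); the true TV distance is 2*Phi(1) - 1 ~ 0.683.
rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 10_000)
q = rng.normal(2.0, 1.0, 10_000)

# A simple threshold witness at the midpoint between the two means.
witness = lambda x: (x > 1.0).astype(float)
lb = tv_lower_bound(p, q, witness)
```

Here the midpoint threshold happens to be the optimal witness for this pair of Gaussians, so the bound is nearly tight; for real feature distributions the quality of the bound depends on the witness class one optimizes over.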
