
Poster
in
Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability

Hierarchical Unlearning Framework for Multi-Class Classification

Abraham Chan · Arpan Gujarati · Karthik Pattabiraman · Sathish Gopalakrishnan


Abstract:

As machine learning (ML) systems require increasing quantities of training data, regulations around ML privacy and fairness have emerged. Machine unlearning (MU) aims to comply with such regulations by fulfilling data deletion requests on trained ML models. In multi-class classification, existing MU techniques often shift the weight of the forget data to other classes through fine-tuning. However, such techniques do not scale well when a large number of classes is present. We propose HUF, a hierarchical unlearning framework that processes MU requests efficiently by adopting a hierarchical classification architecture. We evaluate the efficiency of HUF by measuring the number of epochs needed to reach an MU efficacy similar to that of a retrained model, under both random-data and class-wise forgetting. We find that HUF unlearns with fewer epochs than a single unlearned model, while sustaining the test accuracy of the original model.
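The abstract does not specify HUF's implementation, but the core idea of a hierarchical classification architecture for unlearning can be sketched as follows. In this hypothetical toy (all class names, group names, and the `NearestCentroid` helper are illustrative assumptions, not HUF's actual components), classes are partitioned into groups: a root router selects a group, and a per-group sub-classifier selects the class. Forgetting a class then only requires refitting the one sub-classifier whose group contains it, rather than fine-tuning a single flat model over all classes.

```python
class NearestCentroid:
    """Toy stand-in for a trained classifier: predicts the label of the
    closest class centroid. HUF would use real (fine-tuned) models here."""

    def fit(self, data):  # data: {label: [feature vectors]}
        self.centroids = {
            label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in data.items()
        }
        return self

    def predict(self, x):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda lbl: sq_dist(self.centroids[lbl]))


class HierarchicalClassifier:
    """Two-level hierarchy: a router over groups, plus one leaf classifier
    per group (an assumed structure; HUF's exact hierarchy may differ)."""

    def __init__(self, groups):  # groups: {group_name: {label: [vectors]}}
        self.groups = groups
        # Router is trained to map an input to its group.
        self.router = NearestCentroid().fit(
            {g: [v for vecs in d.values() for v in vecs] for g, d in groups.items()}
        )
        # One leaf classifier per group, over that group's classes only.
        self.leaves = {g: NearestCentroid().fit(d) for g, d in groups.items()}

    def predict(self, x):
        return self.leaves[self.router.predict(x)].predict(x)

    def forget_class(self, group, label):
        # Class-wise forgetting: drop the class's data and refit ONLY the
        # affected leaf; the router and all other leaves are untouched.
        del self.groups[group][label]
        self.leaves[group] = NearestCentroid().fit(self.groups[group])
```

The scalability intuition is that the retraining cost of `forget_class` is bounded by the size of one group, not the total number of classes, which is why a hierarchical layout can need fewer unlearning epochs than a single flat model.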
