

Poster in Workshop: 3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)

Sparse Transfer Learning Accelerates and Enhances Certified Robustness: A Comprehensive Study

Zhangheng LI · Tianlong Chen · Linyi Li · Bo Li · Zhangyang "Atlas" Wang

Keywords: [ Deep Learning ] [ certified robustness ] [ sparse neural network ] [ transfer learning ]


Abstract: Certified robustness is a critical measure for assessing the reliability of machine learning systems. The computational burden of certifying model robustness has traditionally posed a substantial challenge, and it grows with the continued expansion of model sizes. In this paper, we introduce an approach that expedites the verification of $L_2$-norm certified robustness through sparse transfer learning. Our approach is both efficient and effective: it reuses verification results obtained on the pre-training task and applies sparse parameter updates on top of them. To enhance performance, we incorporate dynamic sparse mask selection and introduce a novel stability-based regularizer, DiffStab. Empirical results demonstrate that our method accelerates verification on downstream tasks by as much as 70-80%, with only slight reductions in certified accuracy compared to dense parameter updates. We further show that this speedup is even more pronounced in the few-shot transfer learning setting.
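To make the sparse-update idea concrete, below is a minimal PyTorch sketch of fine-tuning a pretrained model while restricting updates to a small, gradient-selected subset of weights. The mask-selection heuristic, the helper names (`topk_update_masks`, `sparse_finetune_step`), and the weight-drift penalty used as a stand-in for DiffStab are all illustrative assumptions, not the paper's exact algorithm.

```python
import copy
import torch
import torch.nn as nn

def topk_update_masks(model: nn.Module, density: float) -> dict:
    """Build per-parameter binary masks keeping the `density` fraction of
    entries with the largest current gradient magnitude (an assumed,
    simple stand-in for the paper's dynamic sparse mask selection)."""
    masks = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        k = max(1, int(density * p.numel()))
        # k-th largest |grad| is the (numel - k + 1)-th smallest value
        threshold = p.grad.abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.grad.abs() >= threshold).float()
    return masks

def sparse_finetune_step(model, pretrained, masks, optimizer, x, y,
                         criterion, stab_weight=0.1):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # Hypothetical stability term: penalize drift of the updated weights
    # from their pretrained values (a placeholder, not DiffStab itself).
    stab = sum(((p - q.detach()) ** 2).sum()
               for p, q in zip(model.parameters(), pretrained.parameters()))
    (loss + stab_weight * stab).backward()
    # Zero out gradients outside the sparse mask so only the selected
    # entries change; all other weights stay at their pretrained values.
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])
    optimizer.step()
    return loss.item()

# Usage sketch on a toy model:
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
pretrained = copy.deepcopy(model)            # frozen reference weights
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
criterion = nn.CrossEntropyLoss()
criterion(model(x), y).backward()            # populate grads for mask selection
masks = topk_update_masks(model, density=0.05)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(10):
    sparse_finetune_step(model, pretrained, masks, optimizer, x, y, criterion)
```

Because only a small, fixed set of weights differs from the pretrained model, a verifier could in principle reuse pre-training certification results and re-verify only the affected computations; re-selecting the mask every few steps would approximate the dynamic mask selection described above.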
