Poster

Provable Editing of Deep Neural Networks using Parametric Linear Relaxation

Zhe Tao · Aditya V Thakur

Poster Room - TBD
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Ensuring that a DNN satisfies a desired property is critical when deploying DNNs in safety-critical applications. There are efficient methods that can verify whether a DNN satisfies a property, as seen in the annual DNN verification competition (VNN-COMP). However, the problem of provably editing a DNN to satisfy a property remains challenging. We present HRDNN, the first efficient technique for provable editing of DNNs. Specifically, given a DNN $\mathcal{N}$ with parameters $\theta$, input polytope $P$, and output polytope $Q$, HRDNN finds new parameters $\theta'$ such that $\forall \mathrm{x} \in P.\ \mathcal{N}(\mathrm{x}; \theta') \in Q$ while minimizing the change $\lVert \theta' - \theta \rVert$. Given a DNN and a property it violates from the VNN-COMP benchmarks, HRDNN is able to provably edit the DNN to satisfy this property within 45 seconds. HRDNN is efficient because it relaxes the NP-hard provable editing problem to solving a linear program (LP). The key contribution is the novel notion of Parametric Linear Relaxation, which enables HRDNN to construct tight bounds of the DNN that are parameterized by the new parameters $\theta'$. We demonstrate that HRDNN is more efficient and effective than prior DNN editing approaches a) using the VNN-COMP benchmarks, b) by editing CIFAR10 and TinyImageNet image-recognition DNNs and BERT sentiment-classification DNNs for local robustness, and c) by training a DNN to model a geodynamics process while satisfying physics constraints.
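To make the problem formulation concrete, here is a minimal sketch of the one case where the editing problem is exactly an LP with no relaxation needed: a single affine layer $\mathcal{N}(\mathrm{x}; W, b) = W\mathrm{x} + b$, an input polytope $P$ given by its vertices, and an output polytope $Q = \{y : Ay \le c\}$, since the output constraint is then linear in the new parameters. This is my own illustration under those assumptions, not the authors' HRDNN implementation; the paper's Parametric Linear Relaxation is what extends LP-based editing through deep networks with nonlinear activations, which this sketch does not attempt.

```python
# Hypothetical sketch: provable editing of a single linear layer as an LP.
# Assumptions (mine): N(x; W, b) = W x + b, P given by its vertices,
# Q = { y : A y <= c }. We minimize the L-infinity parameter change
# ||theta' - theta||_inf via an epigraph variable t.
import numpy as np
from scipy.optimize import linprog


def edit_linear_layer(W, b, P_vertices, A, c):
    """Return (W', b') with A (W' v + b') <= c for every vertex v of P,
    minimizing max(|W' - W|, |b' - b|) entrywise (an LP)."""
    m, n = W.shape
    q = A.shape[0]
    n_params = m * n + m          # vec(dW) followed by db
    t_idx = n_params              # epigraph variable t

    rows, rhs = [], []
    # Output constraints: A (dW v + db) <= c - A (W v + b), linear in (dW, db).
    for v in P_vertices:
        slack = c - A @ (W @ v + b)
        for j in range(q):
            row = np.zeros(n_params + 1)
            row[:m * n] = np.kron(A[j], v)   # a_j^T dW v with row-major vec(dW)
            row[m * n:n_params] = A[j]       # a_j^T db
            rows.append(row)
            rhs.append(slack[j])
    # Epigraph constraints: -t <= each parameter change <= t.
    for p in range(n_params):
        for sign in (+1.0, -1.0):
            row = np.zeros(n_params + 1)
            row[p] = sign
            row[t_idx] = -1.0
            rows.append(row)
            rhs.append(0.0)

    obj = np.zeros(n_params + 1)
    obj[t_idx] = 1.0                          # minimize t
    bounds = [(None, None)] * n_params + [(0, None)]
    res = linprog(obj, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bounds)
    assert res.success, res.message
    dW = res.x[:m * n].reshape(m, n)
    db = res.x[m * n:n_params]
    return W + dW, b + db


# Toy usage: force the output below 0.5 on the unit square.
W = np.array([[1.0, 1.0]])
b = np.array([0.0])
P = [np.array(v, float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
A = np.array([[1.0]])
c = np.array([0.5])                           # Q = { y : y <= 0.5 }
W2, b2 = edit_linear_layer(W, b, P, A, c)
print(W2, b2)
print(max((A @ (W2 @ v + b2) - c)[0] for v in P))  # <= 0: property holds on P
```

Checking the property on the vertices of $P$ suffices here because an affine map attains its extremes over a polytope at its vertices; for deep nonlinear networks this no longer holds, which is why HRDNN instead constructs tight bounds parameterized by $\theta'$.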
