

Poster

Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications

Leonard Berrada · Sumanth Dathathri · Krishnamurthy Dvijotham · Robert Stanforth · Rudy Bunel · Jonathan Uesato · Sven Gowal · M. Pawan Kumar

Keywords: [ Robustness ] [ Optimization ] [ Deep Learning ] [ Adversarial Robustness and Security ]


Abstract: Most real-world applications require dealing with stochasticity, such as sensor noise or predictive uncertainty, where formal specifications of desired behavior are inherently probabilistic. Despite the promise of formal verification for ensuring the reliability of neural networks, progress on probabilistic specifications has been limited. To address this, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations). We then propose a general technique to verify such specifications by generalizing the notion of Lagrangian duality, replacing standard Lagrangian multipliers with "functional multipliers" that can be arbitrary functions of the activations at a given layer. We show that an optimal choice of functional multipliers leads to exact verification (i.e., sound and complete verification), and for specific forms of multipliers, we develop tractable practical verification algorithms. We empirically validate our algorithms by applying them to Bayesian Neural Networks (BNNs) and MC-Dropout networks, and certifying properties such as adversarial robustness and robust detection of out-of-distribution (OOD) data. On these tasks, we provide significantly stronger guarantees than prior work -- for instance, for a VGG-64 MC-Dropout CNN trained on CIFAR-10 in a verification-agnostic manner, we improve the certified AUC (a verified lower bound on the true AUC) for robust OOD detection (on CIFAR-100) from $0 \% \rightarrow 29\%$. Similarly, for a BNN trained on MNIST, we improve the $\ell_\infty$ robust accuracy from $60.2 \% \rightarrow 74.6\%$. Further, on a novel specification -- distributionally robust OOD detection -- we improve the certified AUC from $5\% \rightarrow 23\%$.
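For intuition, here is a minimal sketch of the kind of layer-decomposed bound that functional multipliers can yield; the notation below is illustrative and not taken verbatim from the paper, and we assume for this sketch that the layer randomness is independent across layers. Write $z_0 \in \mathcal{X}_0$ for the input, $z_{k+1} = h_k(z_k, \theta_k)$ for the $k$-th layer with random parameters $\theta_k$, $\mathcal{X}_k$ for any set guaranteed to contain the reachable activations at layer $k$, and $\psi$ for the specification objective. A telescoping-sum argument then shows that, for any choice of functional multipliers $\lambda_1, \dots, \lambda_K$,

$$\max_{z_0 \in \mathcal{X}_0} \mathbb{E}_{\theta}\big[\psi(z_K)\big] \;\le\; \max_{z_0 \in \mathcal{X}_0} \mathbb{E}_{\theta_0}\big[\lambda_1(h_0(z_0, \theta_0))\big] \;+\; \sum_{k=1}^{K-1} \max_{z_k \in \mathcal{X}_k} \Big( \mathbb{E}_{\theta_k}\big[\lambda_{k+1}(h_k(z_k, \theta_k))\big] - \lambda_k(z_k) \Big) \;+\; \max_{z_K \in \mathcal{X}_K} \big( \psi(z_K) - \lambda_K(z_K) \big).$$

Any tractable family of multipliers (e.g., linear or quadratic functions of the activations) thus yields a certified bound, while an optimal, unrestricted choice of the $\lambda_k$ closes the gap, which is the sense in which verification with functional multipliers can be exact.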
