

Spotlight
in
Workshop: Algorithmic Fairness through the lens of Metrics and Evaluation

Verifiable evaluations of machine learning models using zkSNARKs

Tobin South · Alexander Camuto · Shrey Jain · Robert Mahari · Christian Paquin · Jason Morton · Alex 'Sandy' Pentland

Keywords: [ Audits ] [ Interdisciplinary considerations ] [ Evaluation Methods and Techniques ]

Sat 14 Dec 5:27 p.m. PST — 5:30 p.m. PST
 
presentation: Algorithmic Fairness through the lens of Metrics and Evaluation
Sat 14 Dec 9 a.m. PST — 5:30 p.m. PST

Abstract:

In a world of increasingly closed-source commercial machine learning models, model evaluations from developers, including fairness assessments and bias checks, must be taken at face value. These benchmark results are traditionally impossible for a model end-user to verify without the costly or impossible process of re-performing the benchmark on black-box model outputs. This work presents a method for verifiable model evaluation that runs model inference inside zkSNARKs. The resulting zero-knowledge computational proofs of model outputs over datasets can be packaged into verifiable evaluation attestations, showing that a model with fixed private weights achieves stated performance or fairness metrics over public inputs. We present a flexible proving system that enables verifiable attestations to be generated for any standard neural network model, with varying compute requirements. For the first time, we demonstrate this across a sample of real-world models and highlight key challenges and design solutions. This presents a new transparency paradigm for the verifiable evaluation of private models.
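
To make the attestation workflow concrete, the sketch below outlines one plausible way to structure the prover and verifier sides: each public benchmark input is proven to have been run through the model under committed private weights, and the verifier recomputes the claimed fairness or performance metric from the proven outputs. This is a minimal illustration only; the `prover`/`verifier` objects and all function names (`prove_inference`, `verify_inference`, `commit`, `commit_weights`) are assumed interfaces, not the paper's actual proving system.

```python
# Hypothetical sketch of a verifiable evaluation attestation built from
# per-example zkSNARK inference proofs. The zkSNARK backend is abstracted
# behind assumed `prover` / `verifier` objects; none of these APIs are
# taken from the paper or from any specific library.

from dataclasses import dataclass
from typing import List


@dataclass
class InferenceProof:
    input_commitment: bytes   # commitment (e.g. hash) of the public benchmark input
    output: int               # public model output, e.g. a predicted label
    proof: bytes              # zkSNARK proof that output = f_W(input) for committed weights W


@dataclass
class EvaluationAttestation:
    weight_commitment: bytes      # binds the private weights W without revealing them
    dataset_id: str               # identifier of the public benchmark dataset
    metric_name: str              # e.g. a fairness gap or accuracy
    metric_value: float           # metric computed over the proven outputs
    proofs: List[InferenceProof]  # one inference proof per benchmark example


def build_attestation(model, dataset, metric_fn, prover) -> EvaluationAttestation:
    """Prover side: prove each benchmark inference and package the results."""
    proofs, outputs, labels = [], [], []
    for x, y in dataset:
        output, proof = prover.prove_inference(model, x)  # assumed API
        proofs.append(InferenceProof(prover.commit(x), output, proof))
        outputs.append(output)
        labels.append(y)
    return EvaluationAttestation(
        weight_commitment=prover.commit_weights(model),
        dataset_id=dataset.name,
        metric_name=metric_fn.__name__,
        metric_value=metric_fn(outputs, labels),
        proofs=proofs,
    )


def check_attestation(attestation, dataset, metric_fn, verifier) -> bool:
    """Verifier side: check every proof against the committed weights and the
    public inputs, then recompute the claimed metric from the proven outputs."""
    outputs = []
    for (x, _), ip in zip(dataset, attestation.proofs):
        valid = verifier.verify_inference(
            attestation.weight_commitment, ip.input_commitment, ip.output, ip.proof
        )
        if not valid or ip.input_commitment != verifier.commit(x):
            return False
        outputs.append(ip.output)
    labels = [y for _, y in dataset]
    return abs(metric_fn(outputs, labels) - attestation.metric_value) < 1e-9
```

In this layout the weights never leave the prover: the verifier sees only a commitment to them, the public inputs, the proven outputs, and the claimed metric, which is what lets the stated fairness or performance numbers be checked without re-running the black-box model.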
