Poster in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations
Non-Interactive and Publicly Verifiable Zero-Knowledge Proof for Fair Decision Trees
Elisaweta Masserova · Antigoni Polychroniadou · Akira Takahashi
AI systems are ubiquitous and extend their reach even into highly sensitive domains such as finance. Despite these fields being heavily regulated, it has been demonstrated time and again that deployed AI systems tend to discriminate against legally protected groups. Ensuring fairness is a complex problem, made even more complex by privacy-related concerns. For example, while a company may wish to demonstrate that its proprietary model meets specific fairness standards – either to comply with regulations or to build customer trust – it is often reluctant to disclose the model to facilitate verification. In our work we address this problem for the case of decision trees. We utilize zero-knowledge proofs – a well-known technique from cryptography – to design a new algorithm which proves that a proprietary model satisfies a given fairness constraint without revealing anything about the model. While the use of zero-knowledge proofs in the context of privacy-preserving ML is not new, our protocols are tailor-made for the specific case of proving fairness of decision trees, and we provide the first non-interactive solution for this scenario, where "non-interactive" means the prover sends a single message to the verifier. Our protocol improves upon previous zero-knowledge proofs of fairness in terms of communication bandwidth. While our prover time is higher, we believe that the non-interactive and public-verifiability features offer greater practical utility, as they enable the creation of a compact, reusable certificate that multiple verifiers can validate asynchronously.
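To make the kind of statement being proven concrete, the following is a minimal sketch in plain Python, not the paper's protocol: it assumes demographic parity as the fairness constraint and a hand-rolled tree encoding, both hypothetical choices for illustration. In the zero-knowledge setting, the prover would convince verifiers that a check like `demographic_parity_gap(tree, data) <= eps` holds for public `data` and `eps`, without revealing `tree`.

```python
from dataclasses import dataclass
from typing import Optional, List, Tuple


@dataclass
class Node:
    """A binary decision-tree node: internal nodes split on x[feature] <= threshold."""
    feature: Optional[int] = None      # index into the feature vector (None for leaves)
    threshold: float = 0.0
    left: Optional["Node"] = None      # branch taken when x[feature] <= threshold
    right: Optional["Node"] = None
    label: int = 0                     # prediction at a leaf (0 or 1)


def predict(tree: Node, x: List[float]) -> int:
    """Evaluate the (private) decision tree on one feature vector."""
    node = tree
    while node.feature is not None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label


def demographic_parity_gap(tree: Node,
                           data: List[Tuple[List[float], int]]) -> float:
    """|P(pred=1 | group=0) - P(pred=1 | group=1)| over an audit dataset.

    `data` holds (features, protected_group) pairs; a verifier would learn only
    that this gap is below a public bound, never the tree itself.
    """
    rates = []
    for group in (0, 1):
        members = [x for x, g in data if g == group]
        positives = sum(predict(tree, x) for x in members)
        rates.append(positives / len(members))
    return abs(rates[0] - rates[1])


if __name__ == "__main__":
    # Toy tree: predict 1 ("approve") when feature 0 <= 5.0, otherwise 0.
    tree = Node(feature=0, threshold=5.0,
                left=Node(label=1), right=Node(label=0))
    audit = [([3.0], 0), ([4.0], 0), ([6.0], 0),
             ([2.0], 1), ([7.0], 1), ([8.0], 1)]
    eps = 0.4
    gap = demographic_parity_gap(tree, audit)
    print(f"gap = {gap:.2f}, satisfies eps={eps}: {gap <= eps}")
```

In the non-interactive setting described above, this check would be compiled into a single proof message: the model owner publishes one compact certificate attesting that the gap is below the agreed bound, and any number of verifiers can later validate it asynchronously.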