Incorporating Interpretable Output Constraints in Bayesian Neural Networks
Wanqian Yang, Lars Lorch, Moritz Graule, Himabindu Lakkaraju, Finale Doshi-Velez
Spotlight presentation: Orals & Spotlights Track 35: Neuroscience/Probabilistic
on 2020-12-10T19:50:00-08:00 - 2020-12-10T20:00:00-08:00
Poster Session 7
on 2020-12-10T21:00:00-08:00 - 2020-12-10T23:00:00-08:00
GatherTown: Probabilistic Methods (Town B0 - Spot B3)
Only if the poster is crowded, join Zoom. Authors have to start the Zoom call from their Profile page / Presentation History.
Abstract: Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference, which operates in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets spanning multiple domains, such as healthcare, criminal justice, and credit scoring.
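To make the abstract's central idea concrete, below is a minimal NumPy sketch of the general notion of an output-constrained prior: a standard Gaussian prior over network weights combined with a soft penalty on weight settings whose predictions violate a known output constraint at sampled inputs. The tiny MLP, the quadratic penalty, and every name and hyperparameter here are illustrative assumptions for exposition only, not the authors' exact formulation or code.

# Illustrative sketch of an output-constrained prior for a BNN.
# Not the authors' formulation; architecture, penalty form, and
# hyperparameters are assumptions made purely for illustration.
import numpy as np

def mlp_forward(weights, x, sizes=(1, 16, 1)):
    # Forward pass of a tiny MLP whose parameters are packed in `weights`.
    idx, h = 0, x
    for i in range(len(sizes) - 1):
        n_in, n_out = sizes[i], sizes[i + 1]
        W = weights[idx:idx + n_in * n_out].reshape(n_in, n_out)
        idx += n_in * n_out
        b = weights[idx:idx + n_out]
        idx += n_out
        h = h @ W + b
        if i < len(sizes) - 2:
            h = np.tanh(h)
    return h

def log_base_prior(weights, scale=1.0):
    # Standard isotropic Gaussian prior over the network weights.
    return -0.5 * np.sum((weights / scale) ** 2)

def constraint_penalty(weights, cx, lower, upper, sharpness=10.0):
    # Soft penalty on outputs that leave [lower, upper] at constrained inputs cx.
    y = mlp_forward(weights, cx)
    violation = np.maximum(0.0, y - upper) + np.maximum(0.0, lower - y)
    return sharpness * np.sum(violation ** 2)

def log_constrained_prior(weights, cx, lower, upper):
    # Unnormalized log-density: base prior minus the constraint-violation penalty.
    return log_base_prior(weights) - constraint_penalty(weights, cx, lower, upper)

# Example: score a random weight vector against the (hypothetical) constraint
# "outputs on x in [0, 1] must lie in [-1, 1]".
rng = np.random.default_rng(0)
sizes = (1, 16, 1)
n_params = sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))
w = rng.normal(size=n_params)
cx = np.linspace(0.0, 1.0, 32).reshape(-1, 1)   # sampled constrained inputs
print(log_constrained_prior(w, cx, lower=-1.0, upper=1.0))

Because this quantity is an unnormalized log-density over weights, it could in principle be handed to any black-box sampler or variational scheme in place of the usual log-prior, which is the sense in which such a prior stays compatible with standard BNN inference.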