Poster in Workshop: Bayesian Deep Learning
Pathologies in Priors and Inference for Bayesian Transformers
Tristan Cinquin · Alexander Immer · Max Horn · Vincent Fortuin
In recent years, the transformer has established itself as a workhorse in many applications, ranging from natural language processing to reinforcement learning. Similarly, Bayesian deep learning has become the gold standard for uncertainty estimation in safety-critical applications, where robustness and calibration are crucial. Surprisingly, there have been no successful attempts to improve the predictive uncertainty of transformer models using Bayesian inference. In this work, we study this curiously underpopulated area of Bayesian transformers. We find that weight-space inference in transformers does not work well, regardless of the approximate posterior. We also find that the prior is at least partially at fault, but that it is very hard to find well-specified weight priors for these models. We hypothesize that these problems stem from the difficulty of obtaining a meaningful mapping from weight-space to function-space distributions in the transformer. Therefore, moving closer to function space, we propose a novel method based on the implicit reparameterization of the Dirichlet distribution to apply variational inference directly to the attention weights. We find that this proposed method performs competitively with our baselines.
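The core idea of attention-level inference can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: all names are illustrative. Instead of computing a deterministic softmax, each row of attention weights is drawn from a Dirichlet distribution whose concentration parameters are derived from the attention logits; a Dirichlet sample is obtained by normalizing independent Gamma draws.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_attention(queries, keys, values, eps=1e-6):
    """One stochastic attention head with Dirichlet-distributed weights.

    Illustrative sketch: the mapping from logits to concentration
    parameters (here a simple exp) is an assumption, not the paper's.
    """
    d_k = queries.shape[-1]
    logits = queries @ keys.T / np.sqrt(d_k)   # (n_q, n_k) scaled dot-product logits
    concentration = np.exp(logits) + eps       # positive Dirichlet parameters
    # A Dirichlet(alpha) sample is a vector of independent Gamma(alpha_i, 1)
    # draws, normalized to sum to one.
    gammas = rng.gamma(shape=concentration)    # (n_q, n_k)
    weights = gammas / gammas.sum(axis=-1, keepdims=True)
    return weights @ values, weights

q = rng.normal(size=(4, 8))
k = rng.normal(size=(6, 8))
v = rng.normal(size=(6, 8))
out, w = dirichlet_attention(q, k, v)
# Each row of w is a valid attention distribution: nonnegative, summing to 1.
```

In a variational treatment, gradients with respect to the concentration parameters would flow through the sample via implicit reparameterization (as implemented, e.g., in `torch.distributions.Dirichlet.rsample`); this NumPy forward pass illustrates only the sampling step, not the gradient computation.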