

Poster in Workshop: Safe Generative AI

Waste not, want not; Recycled Gumbel noise improves consistency in natural language generation

Damien de Mijolla · Hannan Saddiq · Kim Moore


Abstract:

Consistency in the output of language models is critical for their reliability and practical utility. Due to their training objective, language models learn to model the full space of possible continuations, leading to outputs that can vary significantly in style, content, and tone, even for similar inputs. To address this, we propose a novel decoding algorithm that enhances response consistency across different prompts without degrading response quality. By incorporating a latent variable into the next-token sampling process based on the Gumbel reparametrisation trick, our method outperforms standard sampling by up to 10% across semantic and stylistic consistency benchmarks. Additionally, our approach integrates seamlessly with existing sampling methods with negligible computational overhead, providing a practical solution for improving the reliability of language model outputs.
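For intuition, the sketch below shows the Gumbel-max trick with a single "recycled" noise vector shared across sampling calls. It is a minimal illustration under stated assumptions, not the authors' exact algorithm: the vocabulary size, the seeding, and the policy of reusing one noise vector across prompts and decoding steps are all hypothetical choices made here for clarity.

```python
import numpy as np

def sample_gumbel(shape, rng):
    """Draw standard Gumbel(0, 1) noise via the inverse-CDF transform."""
    u = rng.uniform(low=1e-12, high=1.0, size=shape)
    return -np.log(-np.log(u))

# Recycled noise: one Gumbel vector over the vocabulary, drawn once and
# reused across sampling calls so that token choices stay correlated for
# similar inputs. (Hypothetical setup; the paper's actual scheme for
# sharing noise may differ.)
VOCAB_SIZE = 50_257  # illustrative vocabulary size
rng = np.random.default_rng(seed=0)
recycled_noise = sample_gumbel((VOCAB_SIZE,), rng)

def gumbel_max_sample(logits, noise):
    """Gumbel-max trick: for fresh i.i.d. Gumbel noise, argmax(logits + noise)
    is an exact sample from Categorical(softmax(logits))."""
    return int(np.argmax(logits + noise))

# Usage: two nearby logit vectors sampled with the same recycled noise
# tend to pick the same token, unlike independent sampling.
logits = rng.normal(size=VOCAB_SIZE)          # stand-in for model logits
token_id = gumbel_max_sample(logits, recycled_noise)
```

Because the argmax over logits plus fresh Gumbel noise is an exact draw from the softmax distribution, fixing the noise keeps each individual sample correctly distributed while making choices correlated across similar prompts, which is plausibly where the consistency gain comes from; how the noise is refreshed or reused across decoding steps is left unspecified in this sketch.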
