Poster
Variable-rate hierarchical CPC leads to acoustic unit discovery in speech
Santiago Cuervo · Adrian Lancucki · Ricard Marxer · Paweł Rychlikowski · Jan Chorowski
Hall J (level 1) #605
Keywords: [ Self-supervised learning ] [ acoustic unit discovery ] [ contrastive predictive coding ] [ spoken language ] [ hierarchical learning ] [ speech ] [ Representation Learning ] [ (Application) Signal and Speech Processing ]
The success of deep learning comes from its ability to capture the hierarchical structure of data by learning high-level representations defined in terms of low-level ones. In this paper we explore self-supervised learning of hierarchical representations of speech by applying multiple levels of Contrastive Predictive Coding (CPC). We observe that simply stacking two CPC models does not yield significant improvements over single-level architectures. Inspired by the fact that speech is often described as a sequence of discrete units unevenly distributed in time, we propose a model in which the output of a low-level CPC module is non-uniformly downsampled to directly minimize the loss of a high-level CPC module. The high-level module additionally imposes a prior of separability and discreteness on its representations: focused negative sampling encourages successive high-level representations to be dissimilar, and the prediction targets are quantized. Accounting for the structure of the speech signal improves upon single-level CPC features and enhances the disentanglement of the learned representations, as measured by downstream speech recognition tasks, while yielding a meaningful segmentation of the signal that closely matches phone boundaries.
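The overall architecture can be illustrated with a minimal sketch: two stacked CPC modules, where the low-level context vectors are pooled into variable-length segments before the high-level module predicts over them. This is not the authors' implementation; the tiny linear encoder and GRU, the cosine-dissimilarity boundary heuristic (the paper instead learns the downsampling to directly minimize the high-level loss), and the simplified InfoNCE objective are all illustrative assumptions.

```python
# Toy two-level CPC with non-uniform downsampling between levels (sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CPCModule(nn.Module):
    """Single-level CPC: encoder + autoregressive context + InfoNCE prediction."""
    def __init__(self, in_dim, hid_dim, n_pred=3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)       # stand-in for a conv encoder
        self.context = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.predictors = nn.ModuleList(
            [nn.Linear(hid_dim, hid_dim) for _ in range(n_pred)]
        )

    def forward(self, x):
        z = self.encoder(x)                              # (B, T, H) latent codes
        c, _ = self.context(z)                           # (B, T, H) context vectors
        loss = 0.0
        for k, pred in enumerate(self.predictors, start=1):
            if z.size(1) <= k:
                continue
            pred_k = pred(c[:, :-k])                     # predict z_{t+k} from c_t
            target = z[:, k:]
            # InfoNCE: other time steps in the same sequence act as negatives.
            logits = torch.einsum("bth,bsh->bts", pred_k, target)
            labels = torch.arange(logits.size(1), device=x.device)
            loss = loss + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                labels.repeat(x.size(0)),
            )
        return z, c, loss


def segment_average(c, boundaries):
    """Average low-level contexts within segments delimited by boundary indices."""
    segments, start = [], 0
    for end in boundaries + [c.size(1)]:
        if end > start:
            segments.append(c[:, start:end].mean(dim=1))
            start = end
    return torch.stack(segments, dim=1)                  # (B, S, H) with S <= T


# Usage: here boundaries come from a crude cosine-dissimilarity threshold shared
# across the batch; in the paper the segmentation is driven by the high-level loss.
x = torch.randn(2, 50, 40)                               # e.g. 40-dim frame features
low, high = CPCModule(40, 64), CPCModule(64, 64)
z, c, low_loss = low(x)
dissim = 1 - F.cosine_similarity(c[:, 1:], c[:, :-1], dim=-1).mean(0)
boundaries = (dissim > dissim.mean()).nonzero().squeeze(-1).tolist()
pooled = segment_average(c, boundaries)                  # non-uniform downsampling
_, _, high_loss = high(pooled)
total_loss = low_loss + high_loss
```

The key design point reflected above is that the high-level CPC never sees frame-rate features: it operates on segment-level summaries, so its predictive task is defined over unit-like chunks rather than uniformly spaced frames.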