Poster in Workshop: Intrinsically Motivated Open-ended Learning (IMOL)

Prioritizing Compression Explains Human Perceptual Preferences

Francisco López · Bertram Shi · Jochen Triesch

Keywords: [ Efficient coding ] [ Compression ] [ Unsupervised learning ] [ Perceptual fluency ] [ Human preferences ]


Abstract:

We present prioritized representation learning (PRL), a method to enhance unsupervised representation learning by drawing inspiration from active learning and intrinsic motivations. PRL re-weights training samples based on an intrinsic priority function embodying preferences for certain inputs. We show how common human perceptual biases across different sensory modalities emerge through a priority function promoting compression and demonstrate the effects of biased early exposure on individual preferences. Our results reveal that PRL can mimic the results of active unsupervised learning even in the absence of active control over the input.
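The core idea of re-weighting training samples with a compression-based priority can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it trains a small linear autoencoder on a toy mixture of compressible (low-rank) and incompressible (full-rank noise) inputs, and samples training examples with a probability given by a softmax over negative reconstruction error, so that easily compressed inputs are trained on more often. The data, the softmax priority form, and the temperature `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: half the samples lie on a 3-D subspace ("compressible"),
# half are full-rank noise ("incompressible"); scales are matched.
n, d, k = 200, 10, 3
basis = rng.normal(size=(k, d))
easy = rng.normal(size=(n // 2, k)) @ basis / np.sqrt(k)
hard = rng.normal(size=(n // 2, d))
X = np.vstack([easy, hard])

# Small linear autoencoder (encoder W_e, decoder W_d), trained by SGD.
W_e = rng.normal(scale=0.1, size=(k, d))
W_d = rng.normal(scale=0.1, size=(d, k))

def errors(X, W_e, W_d):
    """Per-sample squared reconstruction error."""
    return ((X - X @ W_e.T @ W_d.T) ** 2).sum(axis=1)

lr, beta = 0.02, 0.5
for step in range(4000):
    if step % 100 == 0:
        # Intrinsic priority: softmax over negative reconstruction error,
        # so compressible inputs are sampled more often (an illustrative
        # choice standing in for the paper's priority function).
        e = errors(X, W_e, W_d)
        p = np.exp(-beta * (e - e.min()))
        p /= p.sum()
    i = rng.choice(n, p=p)
    x = X[i]
    z = W_e @ x
    r = W_d @ z - x                       # reconstruction residual
    W_d -= lr * np.outer(r, z)            # decoder gradient step
    W_e -= lr * np.outer(W_d.T @ r, x)    # encoder gradient step

e = errors(X, W_e, W_d)
print("easy:", e[: n // 2].mean(), "hard:", e[n // 2 :].mean())
```

After training, the compressible half of the data ends up with much lower reconstruction error than the noise half: the priority function concentrates learning on inputs the model can compress, without any active control over which inputs arrive.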