Poster

When does perceptual alignment benefit vision representations?

Shobhita Sundaram · Stephanie Fu · Lukas Muttenthaler · Netanel Tamir · Lucy Chai · Simon Kornblith · Trevor Darrell · Phillip Isola

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Humans judge perceptual similarity according to diverse visual attributes, including scene layout, subject location, and camera pose. Existing vision models understand a wide range of semantic abstractions but improperly weigh these attributes and thus make inferences misaligned with human perception. While vision representations have previously benefited from human preference alignment in contexts like image generation, the utility of perceptually aligned representations in more general-purpose settings remains unclear. Here, we investigate how aligning vision model representations to human perceptual judgments impacts their usability in standard computer vision tasks. We finetune state-of-the-art models on a dataset of human similarity judgments for synthetic image triplets and evaluate them across diverse computer vision tasks. We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks, including counting, semantic segmentation, depth estimation, instance retrieval, and retrieval-augmented generation. In addition, we find that performance is broadly preserved on other tasks, including specialized out-of-distribution domains such as medical imaging and 3D environment frames. Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can make them better representation learners.
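The abstract does not spell out the training objective, but a common way to align embeddings with triplet judgments of this kind (a human picks which of two images is more similar to a reference) is a softmax cross-entropy over embedding similarities. The sketch below is an illustrative assumption, not the paper's actual implementation; the function names and the temperature value are invented for this example:

```python
import numpy as np

def cosine_sim(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def triplet_alignment_loss(ref, img_a, img_b, human_choice, temperature=0.1):
    """Cross-entropy loss encouraging the embedding space to agree with a
    human judgment on an image triplet.

    ref, img_a, img_b: embedding vectors for the reference and the two
        candidate images (hypothetical; produced by some vision backbone).
    human_choice: 0 if humans judged A more similar to the reference,
        1 if they judged B more similar.
    """
    # Treat the two similarities as logits over the binary choice.
    logits = np.array([cosine_sim(ref, img_a), cosine_sim(ref, img_b)]) / temperature
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return float(-log_probs[human_choice])
```

Minimizing this loss over many triplets pushes the model's similarity structure toward human perceptual judgments; in a real finetuning setup the gradients would flow back into the backbone producing the embeddings.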
