Poster
in
Workshop: Attributing Model Behavior at Scale (ATTRIB)
Attributing Statistics to Synthesis Quality in Correlation-Based Texture Models
Vasha DuTell · Anne Harrington · Zeyu Yun · Mark Hamilton · Christian Koevesdi · Edward Adelson · Bill Freeman · Ruth Rosenholtz
Learning strong and interpretable representations for textures is fundamental to many computer vision tasks, particularly texture synthesis, where the aim is to match the intricate statistical patterns of one texture to generate new syntheses. Modern deep learning architectures achieve strong task performance, but their highly over-parameterized representation spaces make it difficult to obtain interpretable and attributable features. Traditional approaches to representing texture, on the other hand, rely on highly interpretable, hand-picked statistic sets, but often at the cost of performance. To bridge the gap between these two approaches and obtain performant yet interpretable texture features, we introduce a new texture representation model. Our method combines the interpretable neuroscience-based multi-scale pyramid filter structure of traditional, well-tested texture models with the power of pairwise-correlation approaches. This analysis-by-synthesis model generates texture images with quality similar to style-transfer-based approaches. With our interpretable approach, we create an organizational structure for our statistics, breaking them into families and evaluating the contribution of these families to synthesis quality. We then use contrastive learning to identify which statistics are most and least important for differentiating textures, and show that this ordering transfers to synthesis quality. By attributing synthesis quality to a subset of interpretable statistics, we reduce the number of parameters below that of previous methods while retaining similar or better synthesis quality.
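To make the "pairwise-correlation" idea concrete, here is a minimal sketch of computing correlation statistics across a bank of filter responses. The two-filter derivative bank below is an illustrative stand-in for the model's multi-scale pyramid (all function names and filter choices here are hypothetical, not the authors' implementation); the key object is the symmetric matrix of pairwise correlations between filter channels, the kind of statistic family the abstract describes attributing synthesis quality to.

```python
import numpy as np

def filter_bank_responses(img):
    """Toy bank of responses: raw intensity plus horizontal and vertical
    gradients. A stand-in for a multi-scale oriented pyramid."""
    dx = img[:, 1:] - img[:, :-1]   # horizontal gradient
    dy = img[1:, :] - img[:-1, :]   # vertical gradient
    h = min(dx.shape[0], dy.shape[0])
    w = min(dx.shape[1], dy.shape[1])
    # Crop all channels to a common spatial size and stack: (n_filters, H, W)
    return np.stack([img[:h, :w], dx[:h, :w], dy[:h, :w]])

def correlation_statistics(responses):
    """Pairwise correlation matrix across filter channels: each entry is
    an interpretable statistic tied to a specific pair of filters."""
    flat = responses.reshape(responses.shape[0], -1)
    return np.corrcoef(flat)  # symmetric (n_filters, n_filters) matrix

rng = np.random.default_rng(0)
texture = rng.standard_normal((64, 64))
stats = correlation_statistics(filter_bank_responses(texture))
```

In an analysis-by-synthesis setting, a synthesized image would be optimized until its `stats` matrix (and the other statistic families) matches that of the target texture; dropping rows/columns of this matrix corresponds to ablating a statistic family.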