

Poster

Self-supervised Transformation Learning for Equivariant Representations

Jaemyung Yu · Jaehyun Choi · DongJae Lee · HyeongGwon Hong · Junmo Kim

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Unsupervised representation learning has significantly advanced various machine learning tasks. In the computer vision domain, state-of-the-art approaches use transformations such as random cropping and color jittering to learn invariant representations, embedding semantically identical inputs to similar representations regardless of the applied transformation. However, this can degrade performance on tasks requiring precise features, such as localization or flower classification. To address this, recent work incorporates equivariant representation learning, which captures transformation-sensitive information. However, these methods depend on transformation labels and therefore struggle with interdependent and complex transformations. We propose Self-supervised Transformation Learning (STL), which replaces transformation labels with transformation representations derived from image pairs. STL ensures that the transformation representation is sample-invariant and learns the corresponding equivariant transformations, improving performance without increasing batch complexity. Experimentally, STL excels at classification and detection tasks: it outperforms existing methods on 7 of 11 classification tasks and achieves superior average performance. Furthermore, incorporating AugMix into STL yields better performance across all tasks, and visualizations of the transformation representations show that STL successfully captures interdependencies between transformations.
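
The sketch below illustrates the core idea described in the abstract: inferring a transformation representation from an (original, transformed) image pair instead of using an explicit transformation label, encouraging that representation to be sample-invariant, and using it to drive an equivariance objective. This is a minimal PyTorch-style sketch under assumed design choices; the module names (TransEncoder, EquivariantPredictor), architectures, and loss composition are hypothetical simplifications for exposition, not the authors' implementation.

```python
# Minimal illustrative sketch of the STL idea (hypothetical simplification).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransEncoder(nn.Module):
    """Infers a transformation representation from an (original, transformed) pair."""
    def __init__(self, rep_dim=128, t_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * rep_dim, 256), nn.ReLU(),
                                 nn.Linear(256, t_dim))

    def forward(self, z, z_t):
        return self.net(torch.cat([z, z_t], dim=-1))


class EquivariantPredictor(nn.Module):
    """Predicts the transformed representation given (representation, transformation rep)."""
    def __init__(self, rep_dim=128, t_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(rep_dim + t_dim, 256), nn.ReLU(),
                                 nn.Linear(256, rep_dim))

    def forward(self, z, r):
        return self.net(torch.cat([z, r], dim=-1))


def stl_losses(encoder, trans_enc, predictor, x_a, x_b, transform):
    """x_a, x_b: batches of different images; transform: one shared augmentation."""
    z_a, z_b = encoder(x_a), encoder(x_b)
    z_at, z_bt = encoder(transform(x_a)), encoder(transform(x_b))

    # Transformation representations read off from each pair,
    # replacing explicit transformation labels.
    r_a = trans_enc(z_a, z_at)
    r_b = trans_enc(z_b, z_bt)

    # Sample-invariance: the same transformation inferred from different
    # images should yield (nearly) the same transformation representation.
    inv_loss = 1.0 - F.cosine_similarity(r_a, r_b, dim=-1).mean()

    # Equivariance: conditioned on the transformation representation, a
    # predictor should map the original representation to the transformed one.
    equi_loss = F.mse_loss(predictor(z_a, r_b.detach()), z_at)

    return inv_loss + equi_loss
```

Because the transformation representation is computed from data pairs rather than from labels, the same machinery applies to composed or interdependent augmentations that would be hard to encode as discrete labels; the hedged loss weighting above (a plain sum) is an assumption, not part of the paper.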
