Poster
Soft Tensor Product Representations for Fully Continuous, Compositional Visual Representations
Bethia Sun · Maurice Pagnucco · Yang Song
Since the inception of the classicalist/connectionist debate, it has been argued that the ability to systematically combine symbol-like entities into compositional representations is necessary for much of intelligent human behaviour. The field of disentangled representation learning has emerged to address this need in connectionist systems by producing explicitly compositional, vector-valued representations. By treating the overall representation as a concatenation of the inferred factors of variation (FoVs), however, conventional disentanglement approaches provide a fundamentally symbolic, string-like treatment of compositional structure. We hypothesise that the fundamental incompatibility between such symbolic representations of compositional structure and the continuous vector spaces underlying deep learning systems produces suboptimal behaviour in both the representation learner and downstream models that use these representations. To fully align compositional structure with continuous vector spaces, we extend Smolensky's Tensor Product Representation (TPR) framework and propose a new type of inherently continuous compositional representation, the Soft TPR. We further introduce a novel, weakly supervised method of learning the representation we propose. Our framework confers demonstrable benefits for both the representation learner (state-of-the-art disentanglement and increased representation learning convergence), and downstream models (improved sample efficiency and superior low sample regime performance), offering strong support for our central hypothesis.
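For readers unfamiliar with the contrast the abstract draws, the sketch below illustrates the difference between a conventional, concatenation-style disentangled representation and classical TPR binding from Smolensky's framework, which the Soft TPR extends. The factor names, dimensionalities, and vectors are illustrative placeholders, not details from the paper; the Soft TPR itself relaxes this classical scheme in ways the sketch does not capture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two factors of variation (FoVs), e.g. colour and shape,
# each encoded as a "filler" vector (sizes chosen for illustration).
fillers = {"colour": rng.normal(size=4), "shape": rng.normal(size=4)}

# Conventional disentanglement: the representation is a string-like
# concatenation of the inferred FoV codes, one slot per factor.
concat_rep = np.concatenate([fillers["colour"], fillers["shape"]])  # shape (8,)

# Classical TPR: bind each filler f_i to a role vector r_i via the
# outer product and sum the bindings, giving a single fully
# distributed object in a continuous vector space.
roles = {"colour": rng.normal(size=3), "shape": rng.normal(size=3)}
tpr = sum(np.outer(fillers[k], roles[k]) for k in fillers)  # shape (4, 3)

# With linearly independent roles, each filler is exactly recoverable
# by unbinding with the dual (pseudo-inverse) role vectors.
R = np.stack([roles["colour"], roles["shape"]])  # (2, 3) role matrix
U = np.linalg.pinv(R)                            # (3, 2) dual roles
recovered_colour = tpr @ U[:, 0]
assert np.allclose(recovered_colour, fillers["colour"])
```

The key point of the sketch is that the TPR superimposes all factor bindings in one continuous tensor rather than assigning each factor its own symbolic slot, while still permitting exact factor recovery via unbinding.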