Invited Talk #3
Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision
Speaker: Irina Higgins
Talk: Disentanglement for Controllable Image Generation
Abstract: Unsupervised disentangled representation learning can be very helpful for generating diverse and plausible complex visual scenes from interpretable interfaces using deep learning. These methods automatically discover the semantically meaningful attributes of a dataset and represent them in a human-interpretable, low-dimensional representation that can be manipulated to generate a large range of new plausible visual scenes. Disentangled representations are also conducive to semantic analogy-making and sample-efficient language grounding, which enables diverse language-controlled image manipulation and rendering. In this talk we will cover the strengths and limitations of current methods for disentangled representation learning, and touch on the frontiers of this line of research, where radically new approaches are starting to emerge based on causal, physics-inspired, geometric, and contrastive frameworks.
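The abstract's core idea, that a disentangled representation exposes each semantic factor as a separate latent dimension that can be swept independently to control generation, can be sketched as a latent traversal. The sketch below is illustrative only: `toy_decoder` is a hypothetical stand-in for a trained decoder (e.g. from a beta-VAE-style model), with one latent dimension controlling object position and another controlling brightness.

```python
import numpy as np

def toy_decoder(z, size=8):
    """Hypothetical stand-in for a trained decoder: maps a low-dimensional
    latent vector z to a small grayscale 'image'. By construction, z[0]
    controls horizontal position and z[1] controls brightness."""
    xs = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(xs, xs)
    # A Gaussian blob whose x-position and intensity are set by the latents.
    img = np.exp(-((xx - z[0]) ** 2 + yy ** 2) / 0.1)
    return np.clip(img * (0.5 + 0.5 * z[1]), 0.0, 1.0)

# Latent traversal: vary one disentangled dimension, hold the rest fixed.
base = np.zeros(2)
traversal = []
for v in np.linspace(-0.8, 0.8, 5):
    z = base.copy()
    z[0] = v  # sweep only the "position" factor
    traversal.append(toy_decoder(z))

# Across the sweep the blob's peak column moves rightward while
# brightness stays constant: only the manipulated factor changes.
peaks = [int(np.argmax(f.max(axis=0))) for f in traversal]
print(peaks)
```

With a genuinely disentangled model, the same loop applied to a real decoder produces the interpretable, single-attribute image edits the abstract describes.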
Bio: Irina is a Staff Research Scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Center for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also developed poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research.