Learning Semantic-aware Normalization for Generative Adversarial Networks
Heliang Zheng, Jianlong Fu, Yanhong Zeng, Jiebo Luo, Zheng-Jun Zha
Spotlight presentation: Orals & Spotlights Track 07: Vision Applications
on 2020-12-08T07:30:00-08:00 - 2020-12-08T07:40:00-08:00
Abstract: The recent advances in image generation have been achieved by style-based image generators. Such approaches learn to disentangle latent factors at different image scales and encode latent factors as "styles" to control image synthesis. However, existing approaches cannot further disentangle fine-grained semantics from each other, which are often conveyed by different feature channels. In this paper, we propose a novel image synthesis approach by learning Semantic-aware relative importance for feature channels in Generative Adversarial Networks (SariGAN). Such a model disentangles latent factors according to the semantics of feature channels by channel- and group-wise fusion of latent codes and feature channels. In particular, we learn to cluster feature channels by semantics and propose an adaptive group-wise normalization (AdaGN) to independently control the styles of different channel groups. For example, we can adjust the statistics of channel groups for a human face to control the opening and closing of the mouth, while keeping other facial features unchanged. We propose to use adversarial training, a channel grouping loss, and a mutual information loss for joint optimization, which not only enables high-fidelity image synthesis but also leads to superior interpretable properties. Extensive experiments show that our approach outperforms state-of-the-art style-based approaches on both unconditional image generation and conditional image inpainting tasks.
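To make the normalization idea concrete, below is a minimal PyTorch sketch of an adaptive group-wise normalization layer in the spirit of AdaGN: channels are normalized per group, then each group receives its own scale and shift predicted from the latent code. This is an illustrative reconstruction, not the authors' released code; the fixed contiguous channel-to-group assignment and the single linear style head are assumptions (the paper learns the grouping via a channel grouping loss).

import torch
import torch.nn as nn

class AdaGN(nn.Module):
    """Sketch of adaptive group-wise normalization (AdaGN-style).

    Channels are split into groups, each group is normalized to zero
    mean / unit variance, and a per-group scale and shift predicted
    from the latent code re-styles that group independently.
    Assumption: channels are assigned to groups contiguously; the
    paper instead learns the grouping by semantics.
    """

    def __init__(self, num_channels: int, num_groups: int, latent_dim: int):
        super().__init__()
        assert num_channels % num_groups == 0
        self.num_groups = num_groups
        # Map the latent code to one (gamma, beta) pair per group.
        self.to_style = nn.Linear(latent_dim, 2 * num_groups)

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        n, c, h, wd = x.shape
        g = self.num_groups
        # Normalize each channel group independently.
        x = x.view(n, g, c // g, h, wd)
        mean = x.mean(dim=(2, 3, 4), keepdim=True)
        var = x.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
        x = (x - mean) / torch.sqrt(var + 1e-8)
        # Inject a learned per-group style from the latent code w.
        gamma, beta = self.to_style(w).chunk(2, dim=1)  # each (n, g)
        x = (1 + gamma.view(n, g, 1, 1, 1)) * x + beta.view(n, g, 1, 1, 1)
        return x.view(n, c, h, wd)

# Usage: style one group of a 512-channel feature map per latent code.
# adagn = AdaGN(num_channels=512, num_groups=8, latent_dim=512)
# y = adagn(torch.randn(2, 512, 16, 16), torch.randn(2, 512))

Editing a single group's (gamma, beta) then corresponds to the mouth open/close example in the abstract: one semantic group is re-styled while the statistics of all other groups stay fixed.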