
Augmentation-Aware Self-Supervision for Data-Efficient GAN Training

Authors:
Hou, Liang
Cao, Qi
Yuan, Yige
Zhao, Songtao
Ma, Chongyang
Pan, Siyuan
Wan, Pengfei
Wang, Zhongyuan
Shen, Huawei
Cheng, Xueqi
Publication Year:
2022

Abstract

Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting. Previously proposed differentiable augmentation improves the data efficiency of GAN training. However, the augmentation implicitly introduces an undesired invariance to augmentation in the discriminator, since it ignores the change of semantics in the label space caused by data transformation; this may limit the representation learning ability of the discriminator and ultimately affect the generative modeling performance of the generator. To mitigate the negative impact of this invariance while inheriting the benefits of data augmentation, we propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data. In particular, the prediction targets of real data and generated data are kept distinct, since they differ during training. We further encourage the generator to learn adversarially from the self-supervised discriminator by generating augmentation-predictable real, rather than fake, data. This formulation connects the learning objective of the generator to the arithmetic-harmonic mean divergence under certain assumptions. We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures on data-limited CIFAR-10, CIFAR-100, FFHQ, LSUN-Cat, and five low-shot datasets. Experimental results demonstrate significant improvements of our method over SOTA methods in training data-efficient GANs.

Comment: NeurIPS 2023
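To make the abstract's core mechanism concrete, the following is a minimal sketch of an augmentation-aware self-supervised discriminator: alongside the usual real/fake output, it predicts the parameters of the augmentation applied to its input, with separate prediction heads for real and generated data so their targets stay distinguished. This is an illustrative reconstruction, not the authors' implementation; all names (AugAwareDiscriminator, aug_param_dim, the aug() callable returning augmented images plus their sampled parameters) are hypothetical, and the specific losses (non-saturating GAN loss, MSE regression on augmentation parameters) are assumed choices rather than the paper's exact objectives.

```python
# Hypothetical sketch of augmentation-aware self-supervision for a GAN
# discriminator; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AugAwareDiscriminator(nn.Module):
    """Discriminator with an extra head predicting augmentation parameters.

    Real and fake inputs use separate prediction heads, keeping their
    self-supervised targets distinct as described in the abstract.
    """

    def __init__(self, backbone: nn.Module, feat_dim: int, aug_param_dim: int):
        super().__init__()
        self.backbone = backbone                                 # shared feature extractor
        self.adv_head = nn.Linear(feat_dim, 1)                   # real/fake logit
        self.aug_head_real = nn.Linear(feat_dim, aug_param_dim)  # aug params of real data
        self.aug_head_fake = nn.Linear(feat_dim, aug_param_dim)  # aug params of fake data

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.aug_head_real(h), self.aug_head_fake(h)


def d_loss(D, aug, x_real, x_fake):
    """Discriminator loss: adversarial term on augmented data plus
    self-supervised regression onto the sampled augmentation parameters."""
    xr_aug, w_real = aug(x_real)   # aug() returns augmented batch and its parameters
    xf_aug, w_fake = aug(x_fake)
    logit_r, pred_r, _ = D(xr_aug)
    logit_f, _, pred_f = D(xf_aug)
    adv = F.softplus(-logit_r).mean() + F.softplus(logit_f).mean()
    ss = F.mse_loss(pred_r, w_real) + F.mse_loss(pred_f, w_fake)
    return adv + ss


def g_loss(D, aug, x_fake):
    """Generator loss: fool the adversarial head and make generated data
    augmentation-predictable under the real head rather than the fake one."""
    xf_aug, w = aug(x_fake)
    logit_f, pred_real_head, pred_fake_head = D(xf_aug)
    adv = F.softplus(-logit_f).mean()
    ss = F.mse_loss(pred_real_head, w) - F.mse_loss(pred_fake_head, w)
    return adv + ss
```

In a training loop, d_loss would be minimized with respect to the discriminator's parameters and g_loss with respect to the generator's, with the same differentiable augmentation pipeline applied to both real and generated batches, mirroring the differentiable-augmentation setup the abstract builds on.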

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2205.15677
Document Type: Working Paper