# Differentiable Augmentation for Data-Efficient GAN Training

Shengyu Zhao (IIIS, Tsinghua University and MIT), Zhijian Liu (MIT), Ji Lin (MIT), Jun-Yan Zhu (Adobe and CMU), Song Han (MIT)

## Abstract

The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128 and 2–4× reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.

## 1 Introduction

Big data has enabled deep learning algorithms to achieve rapid advancements. In particular, state-of-the-art generative adversarial networks (GANs) [11] are able to generate high-fidelity natural images of diverse categories [2, 18], enabling many computer vision and graphics applications [32, 43, 53]. However, this success comes at the cost of a tremendous amount of computation and data. Recently, researchers have proposed promising techniques to improve the computational efficiency of model inference [22, 36], while data efficiency remains a fundamental challenge.

GANs heavily rely on vast quantities of diverse and high-quality training examples. To name a few, the FFHQ dataset [17] contains 70,000 selectively post-processed high-resolution images of human faces; the ImageNet dataset [6] annotates more than a million images with various object categories. Collecting such large-scale datasets requires months or even years of considerable human effort along with prohibitive annotation costs. In some cases, it is not even possible to have that many examples, e.g., images of rare species or photos of a specific person or landmark. Thus, it is of critical importance to eliminate the need for immense datasets in GAN training.

However, reducing the amount of training data results in drastic degradation of performance. For example, in Figure 1, given only 10% or 20% of the CIFAR-10 data, the training accuracy of the discriminator saturates quickly (to nearly 100%); however, its validation accuracy keeps decreasing (to lower than 30%), suggesting that the discriminator is simply memorizing the entire training set. This severe overfitting problem disrupts the training dynamics and leads to degraded image quality.
A widely used strategy to reduce overfitting in image classification is data augmentation [20, 38, 42], which can increase the diversity of training data without collecting new samples. Transformations such as cropping, flipping, scaling, color jittering [20], and region masking (Cutout) [8] are commonly used augmentations for vision models. However, applying data augmentation to GANs is fundamentally different. If the transformation is only added to the real images, the generator would be encouraged to match the distribution of the augmented images. As a consequence, the outputs suffer from distribution shift and the introduced artifacts (e.g., a region being masked, unnatural color; see Figure 5a). Alternatively, we can augment both the real and generated images when training the discriminator; however, this would break the subtle balance between the generator and discriminator, leading to poor convergence as they are optimizing completely different objectives (see Figure 5b).

Figure 1: BigGAN heavily deteriorates given a limited amount of data. Left: with 10% of CIFAR-10 data, FID increases shortly after the training starts, and the model then collapses. Middle: the training accuracy of the discriminator D quickly saturates. Right: the validation accuracy of D dramatically falls, indicating that D has memorized the exact training set and fails to generalize.

Figure 2: Unconditional generation results on CIFAR-10. StyleGAN2's performance drastically degrades given less training data. With DiffAugment, we are able to roughly match its FID and outperform its Inception Score (IS) using only 20% training data. FID and IS are measured using 10k samples; the validation set is used as the reference distribution for FID calculation.

To combat it, we introduce a simple but effective method, DiffAugment, which applies the same differentiable augmentation to both real and fake images for both generator and discriminator training. It enables the gradients to be propagated through the augmentation back to the generator, regularizes the discriminator without manipulating the target distribution, and maintains the balance of training dynamics. Experiments on a variety of GAN architectures and datasets consistently demonstrate the effectiveness of our method. With DiffAugment, we improve BigGAN and achieve a Fréchet Inception Distance (FID) of 6.80 with an Inception Score (IS) of 100.8 on ImageNet 128×128 without the truncation trick [2], and reduce the StyleGAN2 baseline's FID by 2–4× given 1,000 images on the FFHQ and LSUN datasets. We also match the top performance on CIFAR-10 and CIFAR-100 using only 20% training data (see Figure 2). Furthermore, our method can generate high-quality images with only 100 examples (see Figure 3).
Without any pre-training, we achieve competitive performance with existing transfer learning algorithms that used to require tens of thousands of training images.

## 2 Related Work

**Generative Adversarial Networks.** Following the pioneering work of GANs [11], researchers have explored different ways to improve their performance and training stability. Recent efforts are centered on more stable objective functions [1, 12, 26, 27, 35], more advanced architectures [28, 29, 33, 48], and better training strategies [7, 15, 24, 49]. As a result, both the visual fidelity and diversity of generated images have increased significantly. For example, BigGAN [2] is able to synthesize natural images of a wide range of object classes at high resolution, and StyleGAN [17, 18] can produce photorealistic face portraits with large variety, often indistinguishable from natural photos. However, the above work pays less attention to the data efficiency aspect. Recent attempts [3, 25] leverage semi- and self-supervised learning to reduce the amount of human annotation required for training. In this paper, we study a more challenging scenario where both data and labels are limited.

Figure 3: Low-shot generation without pre-training. Our method can generate high-fidelity images using only 100 Obama portraits (top) from our collected 100-shot datasets, 160 cats (middle) or 389 dogs (bottom) from the AnimalFace dataset [37] at 256×256 resolution. See Figure 7 for the interpolation results; nearest neighbor tests are provided in the supplementary material.

**Regularization for GANs.** GAN training often requires additional regularization as it is highly unstable. To stabilize the training dynamics, researchers have proposed several techniques, including instance noise [39], Jensen-Shannon regularization [34], gradient penalties [12, 27], spectral normalization [28], adversarial defense regularization [52], and consistency regularization [50]. All of these regularization techniques implicitly or explicitly penalize sudden changes in the discriminator's output within a local region of the input. In this paper, we provide a different perspective, data augmentation, and we encourage the discriminator to perform well under different types of augmentation. In Section 4, we show that our method is complementary to these regularization techniques in practice.

**Data Augmentation.** Many deep learning models adopt label-preserving transformations to reduce overfitting: e.g., color jittering [20], region masking [8], flipping, rotation, cropping [20, 42], data mixing [47], and local and affine distortion [38]. Recently, AutoML [40, 54] has been used to explore adaptive augmentation policies for a given dataset and task [4, 5, 23]. However, applying data augmentation to generative models, such as GANs, remains an open question. Different from classifier training, where the label is invariant to transformations of the input, the goal of generative models is to learn the data distribution itself; directly applying augmentation would inevitably alter that distribution. We present a simple strategy to circumvent this concern. Concurrent with our work, several methods [16, 41, 51] independently proposed data augmentation for training GANs. We urge the readers to check out their work for more details.

## 3 Method

Generative adversarial networks (GANs) aim to model the distribution of a target dataset via a generator G and a discriminator D.
The generator G maps an input latent vector z, typically drawn from a Gaussian distribution, to its output G(z). The discriminator D learns to distinguish generated samples G(z) from real observations x. The standard GAN training algorithm alternately optimizes the discriminator's loss $\mathcal{L}_D$ and the generator's loss $\mathcal{L}_G$ given loss functions $f_D$ and $f_G$:

$$\mathcal{L}_D = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[f_D(-D(x))\big] + \mathbb{E}_{z \sim p(z)}\big[f_D(D(G(z)))\big], \tag{1}$$

$$\mathcal{L}_G = \mathbb{E}_{z \sim p(z)}\big[f_G(-D(G(z)))\big]. \tag{2}$$

Here, different loss functions can be used, such as the non-saturating loss [11], where $f_D(x) = f_G(x) = \log(1 + e^{x})$, and the hinge loss [28], where $f_D(x) = \max(0, 1 + x)$ and $f_G(x) = x$. Despite extensive ongoing efforts on better GAN architectures and loss functions, a fundamental challenge still exists: the discriminator tends to memorize the observations as training progresses. An overfitted discriminator penalizes any generated samples other than the exact training data points, provides uninformative gradients due to poor generalization, and usually leads to training instability.

Figure 4: Overview of DiffAugment for updating D (left) and G (right). DiffAugment applies the augmentation T to both the real samples x and the generated output G(z). When we update G, gradients need to be back-propagated through T, which requires T to be differentiable w.r.t. the input.

Table 1: DiffAugment vs. vanilla augmentation strategies on CIFAR-10 with 100% training data. "Augment reals only" applies augmentation T to the reals (i) only (see Figure 4) and corresponds to Equations (3)-(4); "Augment D only" applies T to both reals (i) and fakes (ii), but not in the G update (iii), and corresponds to Equations (5)-(6); DiffAugment applies T to reals (i), fakes (ii), and the G update (iii), which requires T to be differentiable since gradients are back-propagated through T to G, and corresponds to Equations (7)-(8). IS and FID are measured using 10k samples; the validation set is the reference distribution. We select the snapshot with the best FID for each method. Results are averaged over 5 evaluation runs; all standard deviations are less than 1% relatively.

| Method | Where to apply T? | Color + Transl. + Cutout (IS / FID) | Transl. + Cutout (IS / FID) | Translation (IS / FID) |
|---|---|---|---|---|
| BigGAN (baseline) | — | 9.06 / 9.59 | 9.06 / 9.59 | 9.06 / 9.59 |
| Aug. reals only | (i) | 5.94 / 49.38 | 6.51 / 37.95 | 8.40 / 19.16 |
| Aug. reals + fakes (D only) | (i), (ii) | 3.00 / 126.96 | 3.76 / 114.14 | 3.50 / 100.13 |
| DiffAugment (D + G, ours) | (i), (ii), (iii) | 9.25 / 8.59 | 9.16 / 8.70 | 9.07 / 9.04 |

**Challenge: Discriminator Overfitting.** Here we analyze the performance of BigGAN [2] with different amounts of data on CIFAR-10. As plotted in Figure 1, even given 100% of the data, the gap between the discriminator's training and validation accuracy keeps increasing, suggesting that the discriminator is simply memorizing the training images. This happens not only on limited data but also on the large-scale ImageNet dataset, as observed by Brock et al. [2]. BigGAN already adopts Spectral Normalization [28], a widely used regularization technique for both the generator and discriminator architectures, but still suffers from severe overfitting.
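For concreteness, the following is a minimal PyTorch sketch of the loss conventions in Equations (1)-(2) with the hinge loss, together with a simple overfitting diagnostic in the spirit of Figure 1. Model definitions, optimizers, and data loading are omitted, and all names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def gan_hinge_losses(D, G, x_real, z):
    """Hinge-loss instantiation of Eq. (1)-(2): f_D(t) = max(0, 1 + t), f_G(t) = t."""
    # Eq. (1): f_D(-D(x)) + f_D(D(G(z))); detach so the D step does not update G
    d_loss = (F.relu(1.0 - D(x_real)).mean()
              + F.relu(1.0 + D(G(z).detach())).mean())
    # Eq. (2): f_G(-D(G(z))) = -D(G(z)), i.e., G tries to raise D's score on fakes
    g_loss = -D(G(z)).mean()
    return d_loss, g_loss

def d_real_accuracy(D, x):
    """A plausible diagnostic: treat D(x) > 0 as a 'real' prediction.
    The exact protocol behind the curves in Figure 1 may differ."""
    with torch.no_grad():
        return (D(x) > 0).float().mean().item()
```

A widening gap between this accuracy on the training reals and on held-out reals is exactly the memorization symptom discussed above.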
### 3.1 Revisiting Data Augmentation

Data augmentation is a commonly used strategy to reduce overfitting in many recognition tasks; it has an irreplaceable role and can also be applied in conjunction with other regularization techniques, e.g., weight decay. We have shown that the discriminator suffers from a similar overfitting problem as a binary classifier. However, data augmentation is seldom used in the GAN literature compared to explicit regularizations on the discriminator [12, 27, 28]. In fact, a recent work [50] observes that directly applying data augmentation to GANs does not improve the baseline. So, we would like to ask: what prevents us from simply applying data augmentation to GANs? Why is augmenting GANs not as effective as augmenting classifiers?

**Augment reals only.** The most straightforward way of augmenting GANs would be to directly apply the augmentation T to the real observations x, which we call "Augment reals only":

$$\mathcal{L}_D = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[f_D(-D(T(x)))\big] + \mathbb{E}_{z \sim p(z)}\big[f_D(D(G(z)))\big], \tag{3}$$

$$\mathcal{L}_G = \mathbb{E}_{z \sim p(z)}\big[f_G(-D(G(z)))\big]. \tag{4}$$

However, Augment reals only deviates from the original purpose of generative modeling, as the model is now learning a different data distribution, that of T(x) instead of x. This prevents us from applying any augmentation that significantly alters the distribution of the real images. The choices that meet this requirement, although strongly dependent on the specific dataset, can only be horizontal flips in most cases. We find that applying random horizontal flips does increase the performance moderately, and we use it in all our experiments to make our baselines stronger. We demonstrate the side effects of enforcing stronger augmentations quantitatively in Table 1 and qualitatively in Figure 5a. As expected, the model learns to produce the unwanted color and geometric distortions (e.g., unnatural color, cutout holes) introduced by these augmentations, resulting in significantly worse performance (see "Aug. reals only" in Table 1).

Figure 5: Understanding why vanilla augmentation strategies fail. (a) Augment reals only: the same augmentation artifacts appear on the generated images; the model mimics the data distortion introduced by the augmentations, e.g., the translation padding, the Cutout square, and the color artifacts (shown for Translation + Cutout and Color + Translation + Cutout). (b) Augment D only: the unbalanced optimization between G and D cripples training; D perfectly classifies the augmented images (both T(x) and T(G(z))) but barely recognizes G(z), i.e., the fake images without augmentation, from which G receives gradients.

**Augment D only.** Augment reals only applies one-sided augmentation to the real samples, and hence convergence can be achieved only if the generated distribution matches the manipulated real distribution. From the discriminator's perspective, it may be tempting to augment both real and fake samples when we update D:

$$\mathcal{L}_D = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[f_D(-D(T(x)))\big] + \mathbb{E}_{z \sim p(z)}\big[f_D(D(T(G(z))))\big], \tag{5}$$

$$\mathcal{L}_G = \mathbb{E}_{z \sim p(z)}\big[f_G(-D(G(z)))\big]. \tag{6}$$

Here, the same function T is applied to both real samples x and fake samples G(z). If the generator successfully models the distribution of x, then T(G(z)) and T(x) should be indistinguishable to the discriminator, as should G(z) and x. However, this strategy leads to even worse results (see "Aug. reals + fakes (D only)" in Table 1). Figure 5b plots the training dynamics of Augment D only with Translation applied. Although D classifies the augmented images (both T(G(z)) and T(x)) almost perfectly, with an accuracy above 90%, it fails to recognize G(z), the generated images without augmentation, with an accuracy below 10%. As a result, the generator completely fools the discriminator with G(z) and cannot obtain useful information from the discriminator.
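To make the imbalance concrete, here is a minimal hinge-loss sketch of Equations (5)-(6); D, G, the augmentation T, a real batch `x_real`, and latents `z` are assumed to be given, and the function name is illustrative.

```python
import torch.nn.functional as F

def augment_d_only_losses(D, G, T, x_real, z):
    """Hinge-loss version of the failing 'Augment D only' strategy (Eq. 5-6)."""
    # D update (Eq. 5): T wraps everything the discriminator is trained on
    d_loss = (F.relu(1.0 - D(T(x_real))).mean()
              + F.relu(1.0 + D(T(G(z).detach()))).mean())
    # G update (Eq. 6): no T here, so G is graded on un-augmented samples
    # that D barely sees during its own updates
    g_loss = -D(G(z)).mean()
    return d_loss, g_loss
```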
This suggests that any attempt to break the delicate balance between the generator G and the discriminator D is prone to failure.

### 3.2 Differentiable Augmentation for GANs

The failure of Augment reals only motivates us to augment both real and fake samples, while the failure of Augment D only warns us that the generator should not neglect the augmented samples. Therefore, to propagate gradients through the augmented samples to G, the augmentation T must be differentiable, as depicted in Figure 4. We call this Differentiable Augmentation (DiffAugment):

$$\mathcal{L}_D = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[f_D(-D(T(x)))\big] + \mathbb{E}_{z \sim p(z)}\big[f_D(D(T(G(z))))\big], \tag{7}$$

$$\mathcal{L}_G = \mathbb{E}_{z \sim p(z)}\big[f_G(-D(T(G(z))))\big]. \tag{8}$$

Note that T is required to be the same (random) function, but not necessarily with the same random seed, across the three places illustrated in Figure 4. Throughout the paper, we demonstrate the effectiveness of DiffAugment using three simple choices of transformation and their composition: Translation (within [−1/8, 1/8] of the image size, padded with zeros), Cutout [8] (masking with a random square of half the image size), and Color (including random brightness within [−0.5, 0.5], contrast within [0.5, 1.5], and saturation within [0, 2]).

As shown in Table 1, BigGAN can be improved using the simple Translation policy and further boosted using a composition of Cutout and Translation; it is also robust to the strongest policy that additionally combines Color. Figure 6 shows that stronger DiffAugment policies generally maintain a higher validation accuracy for the discriminator at the cost of a lower training accuracy, alleviating the overfitting problem and eventually achieving better convergence.

Figure 6: Analysis of different types of DiffAugment on CIFAR-10 with 100% training data. A stronger DiffAugment can dramatically reduce the gap between the discriminator's training accuracy (middle) and validation accuracy (right), leading to better convergence (left).
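The three transformations described above (Translation, Cutout, Color) are straightforward to express as differentiable tensor operations. The sketch below is an independent, simplified PyTorch version written from the description in this section, not the authors' released implementation (which exposes a similar policy-based interface but is vectorized); per-sample loops are used for clarity, and placement details may differ. The final helper shows exactly where T enters the hinge-loss versions of Equations (7)-(8).

```python
import torch
import torch.nn.functional as F

def rand_brightness(x):
    # random brightness shift in [-0.5, 0.5], one value per sample
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)

def rand_saturation(x):
    # random saturation factor in [0, 2] around the per-pixel channel mean
    mean = x.mean(dim=1, keepdim=True)
    return (x - mean) * (torch.rand(x.size(0), 1, 1, 1, device=x.device) * 2) + mean

def rand_contrast(x):
    # random contrast factor in [0.5, 1.5] around the per-image mean
    mean = x.mean(dim=[1, 2, 3], keepdim=True)
    return (x - mean) * (torch.rand(x.size(0), 1, 1, 1, device=x.device) + 0.5) + mean

def rand_translation(x, ratio=0.125):
    # shift each image by up to `ratio` (1/8) of its size, padding with zeros
    b, _, h, w = x.shape
    ph, pw = int(h * ratio), int(w * ratio)
    x_pad = F.pad(x, [pw, pw, ph, ph])  # pad (left, right, top, bottom)
    out = []
    for i in range(b):
        dy = int(torch.randint(-ph, ph + 1, (1,)))
        dx = int(torch.randint(-pw, pw + 1, (1,)))
        out.append(x_pad[i:i + 1, :, ph + dy:ph + dy + h, pw + dx:pw + dx + w])
    return torch.cat(out, dim=0)

def rand_cutout(x, ratio=0.5):
    # zero out a random square of half the image size (kept inside the image here)
    b, _, h, w = x.shape
    ch, cw = int(h * ratio), int(w * ratio)
    mask = torch.ones_like(x)
    for i in range(b):
        top = int(torch.randint(0, h - ch + 1, (1,)))
        left = int(torch.randint(0, w - cw + 1, (1,)))
        mask[i, :, top:top + ch, left:left + cw] = 0
    return x * mask

AUGMENT_FNS = {
    'color': [rand_brightness, rand_saturation, rand_contrast],
    'translation': [rand_translation],
    'cutout': [rand_cutout],
}

def diff_augment(x, policy='color,translation,cutout'):
    # every op above is differentiable w.r.t. x, so gradients reach G through T
    for name in filter(None, policy.split(',')):
        for fn in AUGMENT_FNS[name]:
            x = fn(x)
    return x

def diffaugment_hinge_losses(D, G, x_real, z, policy='color,translation,cutout'):
    # Eq. (7): the same T (fresh randomness per call) wraps both reals and fakes
    d_loss = (F.relu(1.0 - D(diff_augment(x_real, policy))).mean()
              + F.relu(1.0 + D(diff_augment(G(z).detach(), policy))).mean())
    # Eq. (8): T also wraps the fakes in the G update and stays in the graph
    g_loss = -D(diff_augment(G(z), policy)).mean()
    return d_loss, g_loss
```

Because `diff_augment` draws fresh randomness on every call, the three applications of T in Figure 4 share the same function but not the same random seed, as required.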
## 4 Experiments

We conduct extensive experiments on ImageNet [6], CIFAR-10 [19], CIFAR-100, FFHQ [17], and LSUN-Cat [46] based on the leading class-conditional BigGAN [2] and unconditional StyleGAN2 [18]. We use the common evaluation metrics Fréchet Inception Distance (FID) [13] and Inception Score (IS) [35]. In addition, we apply our method to low-shot generation both with and without pre-training in Section 4.4. Finally, we perform analysis studies in Section 4.5.

### 4.1 ImageNet

We follow the top-performing model BigGAN [2] on the ImageNet dataset at 128×128 resolution. Additionally, we augment real images with random horizontal flips, yielding the best reimplementation of BigGAN to our knowledge (FID: 7.6 for ours vs. 8.7 in the original paper [2]). We use the simple Translation DiffAugment for all the data percentage settings. As shown in Table 2, our method achieves significant gains, especially under the 25% data setting, in which the baseline model undergoes an early collapse, and advances the state-of-the-art FID and IS with 100% data available.

Table 2: ImageNet 128×128 results without the truncation trick [2]. IS and FID are measured using 50k samples; the validation set is used as the reference distribution for FID. We select the snapshot with the best FID for each method. We report means and standard deviations over 3 evaluation runs.

| Method | 100% data (IS / FID) | 50% data (IS / FID) | 25% data (IS / FID) |
|---|---|---|---|
| BigGAN [2] | 94.5 ± 0.4 / 7.62 ± 0.02 | 89.9 ± 0.2 / 9.64 ± 0.04 | 46.5 ± 0.4 / 25.37 ± 0.07 |
| + DiffAugment | 100.8 ± 0.2 / 6.80 ± 0.02 | 91.9 ± 0.5 / 8.88 ± 0.06 | 74.2 ± 0.5 / 13.28 ± 0.07 |

### 4.2 FFHQ and LSUN-Cat

We further experiment with StyleGAN2 [18] on the FFHQ portrait dataset [17] and the LSUN-Cat dataset [46] at 256×256 resolution. We investigate different limited-data settings, with 1k, 5k, 10k, and 30k training images available. We apply the strongest Color + Translation + Cutout DiffAugment to all the StyleGAN2 baselines without any hyperparameter changes. The real images are also augmented with random horizontal flips, as commonly applied in StyleGAN2 [18]. Results are shown in Table 3. Our performance gains are considerable under all the data percentage settings. Moreover, with the fixed policies used in DiffAugment, our performance is on par with ADA [16], a concurrent work based on an adaptive augmentation strategy.

Table 3: FFHQ and LSUN-Cat results (FID) with 1k, 5k, 10k, and 30k training samples. With the fixed Color + Translation + Cutout DiffAugment, our method improves the StyleGAN2 baseline and is on par with the concurrent work ADA [16]. FID is measured using 50k generated samples; the full training set is used as the reference distribution. We select the snapshot with the best FID for each method. Results are averaged over 5 evaluation runs; all standard deviations are less than 1% relatively.

| Method | FFHQ 30k | FFHQ 10k | FFHQ 5k | FFHQ 1k | LSUN-Cat 30k | LSUN-Cat 10k | LSUN-Cat 5k | LSUN-Cat 1k |
|---|---|---|---|---|---|---|---|---|
| ADA [16] | 5.46 | 8.13 | 10.96 | 21.29 | 10.50 | 13.13 | 16.95 | 43.25 |
| StyleGAN2 [18] | 6.16 | 14.75 | 26.60 | 62.16 | 10.12 | 17.93 | 34.69 | 182.85 |
| + DiffAugment | 5.05 | 7.86 | 10.45 | 25.66 | 9.68 | 12.07 | 16.11 | 42.26 |

Table 4: CIFAR-10 and CIFAR-100 results (FID). We select the snapshot with the best FID for each method. Results are averaged over 5 evaluation runs; all standard deviations are less than 1% relatively. We use 10k samples and the validation set as the reference distribution for FID calculation, as done in prior work [50]. Concurrent works [14, 16] use a different protocol: 50k samples and the training set as the reference distribution. If we adopt this evaluation protocol, our BigGAN + DiffAugment achieves an FID of 4.61, CR-BigGAN + DiffAugment achieves an FID of 4.30, and StyleGAN2 + DiffAugment achieves an FID of 5.79.

| Method | CIFAR-10 100% data | CIFAR-10 20% data | CIFAR-10 10% data | CIFAR-100 100% data | CIFAR-100 20% data | CIFAR-100 10% data |
|---|---|---|---|---|---|---|
| BigGAN [2] | 9.59 | 21.58 | 39.78 | 12.87 | 33.11 | 66.71 |
| + DiffAugment | 8.70 | 14.04 | 22.40 | 12.00 | 22.14 | 33.70 |
| CR-BigGAN [50] | 9.06 | 20.62 | 37.45 | 11.26 | 36.91 | 47.16 |
| + DiffAugment | 8.49 | 12.84 | 18.70 | 11.25 | 20.28 | 26.90 |
| StyleGAN2 [18] | 11.07 | 23.08 | 36.02 | 16.54 | 32.30 | 45.87 |
| + DiffAugment | 9.89 | 12.15 | 14.50 | 15.22 | 16.65 | 20.75 |

### 4.3 CIFAR-10 and CIFAR-100

We experiment with the class-conditional BigGAN [2] and CR-BigGAN [50] and the unconditional StyleGAN2 [18] models. For a fair comparison, we also augment real images with random horizontal flips for all the baselines. The baseline models already adopt advanced regularization techniques, including Spectral Normalization [28], Consistency Regularization [50], and R1 regularization [27]; however, none of them achieves satisfactory results under the 10% data setting.
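Since the fixed policies are applied without touching the baselines' losses or hyperparameters, one convenient integration pattern is to wrap the discriminator so that every input it sees, real or fake, in either update, is augmented first; this realizes Equations (7)-(8) as long as the augmentation draws fresh randomness on each call. The sketch below is illustrative and is not the authors' released integration; `augment_fn` can be any differentiable augmentation, e.g., the `diff_augment` helper sketched in Section 3.2.

```python
import torch.nn as nn

class AugmentedDiscriminator(nn.Module):
    """Apply a differentiable augmentation to every discriminator input."""

    def __init__(self, discriminator, augment_fn):
        super().__init__()
        self.discriminator = discriminator
        self.augment_fn = augment_fn  # e.g. lambda x: diff_augment(x, 'translation,cutout')

    def forward(self, x, *args, **kwargs):
        # reals in the D step, fakes in the D step, and fakes in the G step
        # all pass through the same (randomized) augmentation
        return self.discriminator(self.augment_fn(x), *args, **kwargs)
```

With the wrapper in place, an existing training loop can keep computing its usual losses unchanged, which is the sense in which no hyperparameter changes are needed.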
For DiffAugment, we adopt Translation + Cutout for the BigGAN models, Color + Cutout for StyleGAN2 with 100% data, and Color + Translation + Cutout for StyleGAN2 with 10% or 20% data. As summarized in Table 4, our method improves all the baselines, independently of the baseline architectures, regularizations, and loss functions (hinge loss in BigGAN and non-saturating loss in StyleGAN2), without any hyperparameter changes. We refer the readers to the supplementary material for the complete tables with IS. The improvements are considerable especially when limited data is available. This is, to our knowledge, the new state of the art on CIFAR-10 and CIFAR-100 for both class-conditional and unconditional generation under all of the 10%, 20%, and 100% data settings.

### 4.4 Low-Shot Generation

For a certain person, object, or landmark, it is often tedious, if not completely impossible, to collect a large-scale dataset. To address this, researchers have recently exploited few-shot learning [9, 21] in the setting of image generation. Wang et al. [45] use fine-tuning to transfer the knowledge of models pre-trained on external large-scale datasets. Several works propose to fine-tune only part of the model [30, 31, 44]. Below, we show that our method not only produces competitive results without using external datasets or models but also is orthogonal to the existing transfer learning methods.

We replicate the recent transfer learning algorithms [30, 31, 44, 45] using the same codebase as Mo et al. [30] on their datasets (AnimalFace [37] with 160 cats and 389 dogs), based on the StyleGAN model pre-trained on the FFHQ face dataset [17]. To further demonstrate the data efficiency, we collect the 100-shot Obama, grumpy cat, and panda datasets, and train the StyleGAN2 model on each dataset using only 100 images without pre-training. For DiffAugment, we adopt Color + Translation + Cutout for StyleGAN2, and Color + Cutout for both the vanilla fine-tuning algorithm TransferGAN [45] and FreezeD [30], which freezes the first several layers of the discriminator. Table 5 shows that DiffAugment achieves consistent gains, independently of the training algorithm, on all the datasets. Without any pre-training, we still achieve results on par with the existing transfer learning algorithms that require tens of thousands of images, with an exception on the 100-shot Obama dataset, where pre-training with human faces clearly leads to better generalization.

Table 5: Low-shot generation results (FID). With only 100 (Obama, Grumpy cat, Panda), 160 (Cat), or 389 (Dog) training images, our method is on par with the transfer learning algorithms that are pre-trained with 70,000 images. FID is measured using 5k generated samples; the training set is the reference distribution. We select the snapshot with the best FID for each method.

| Method | Pre-training? | 100-shot Obama | 100-shot Grumpy cat | 100-shot Panda | AnimalFace Cat | AnimalFace Dog |
|---|---|---|---|---|---|---|
| Scale/shift [31] | Yes | 50.72 | 34.20 | 21.38 | 54.83 | 83.04 |
| MineGAN [44] | Yes | 50.63 | 34.54 | 14.84 | 54.45 | 93.03 |
| TransferGAN [45] | Yes | 48.73 | 34.06 | 23.20 | 52.61 | 82.38 |
| + DiffAugment | Yes | 39.85 | 29.77 | 17.12 | 49.10 | 65.57 |
| FreezeD [30] | Yes | 41.87 | 31.22 | 17.95 | 47.70 | 70.46 |
| + DiffAugment | Yes | 35.75 | 29.34 | 14.50 | 46.07 | 61.03 |
| StyleGAN2 [18] | No | 80.20 | 48.90 | 34.27 | 71.71 | 130.19 |
| + DiffAugment | No | 46.87 | 27.08 | 12.06 | 42.44 | 58.85 |

Figure 7: Style space interpolation of our method for low-shot generation without pre-training. The smooth interpolation results suggest little overfitting of our method even given small datasets.
See Figure 3 and the supplementary material for qualitative comparisons. While there might be a concern that the generator is likely to overfit the tiny datasets (i.e., generate identical training images), Figure 7 suggests little overfitting of our method via linear interpolation in the style space [17]; please refer to the supplementary material for the nearest neighbor tests.

### 4.5 Analysis

Below, we investigate whether a smaller model or stronger regularization would similarly reduce overfitting and whether DiffAugment still helps. Finally, we analyze additional choices of DiffAugment.

Figure 8: Analysis of smaller models or stronger regularization on CIFAR-10 with 10% training data. (a) Impact of model size: smaller models reduce overfitting for the BigGAN baseline, while our method dominates its performance at all model capacities. (b) Impact of the R1 regularization strength γ: over a wide sweep of γ for the baseline StyleGAN2, its best FID (26.87) is still much worse than ours (14.50).

**Model Size Matters?** We reduce the model capacity of BigGAN by progressively halving the number of channels for both G and D. As plotted in Figure 8a, the baseline heavily overfits on CIFAR-10 with 10% training data when using the full model and achieves a minimum FID of 29.02 at 1/4 channels. However, it is surpassed by our method over all model capacities. With 1/4 channels, our model achieves a significantly better FID of 21.57, and the gap increases monotonically as the model becomes larger. We refer the readers to the supplementary material for the IS plot.

**Stronger Regularization Matters?** As StyleGAN2 adopts the R1 regularization [27] to stabilize training, we increase its strength from γ = 0.1 up to 10⁴ and plot the FID curves in Figure 8b. While we initially find that γ = 0.1 works best under the 100% data setting, the choice of γ = 10³ boosts the baseline's performance from 34.05 to 26.87 under the 10% data setting. When γ = 10⁴, within 750k iterations, we only observe a minimum FID of 29.14 at 440k iterations, and the performance deteriorates after that. However, its best FID is still 1.8× worse than ours (which uses the default γ = 0.1). This shows that DiffAugment is more effective than explicitly regularizing the discriminator.

Figure 9: Various types of DiffAugment (Color + Transl. + Cutout, optionally combined with 90° rotations, bilinear translation, scaling, rotation, or shearing) consistently outperform the baseline. We report StyleGAN2's FID on CIFAR-10 with 10% training data.

**Choice of DiffAugment Matters?** We investigate additional choices of DiffAugment in Figure 9, including random 90° rotations ({−90°, 0°, 90°}, each with 1/3 probability), Gaussian noise (with a standard deviation of 0.1), and general geometric transformations that involve bilinear interpolation, such as bilinear translation (within [−0.25, 0.25]), bilinear scaling (within [0.75, 1.25]), bilinear rotation (within [−30°, 30°]), and bilinear shearing (within [−0.25, 0.25]). While all these policies consistently outperform the baseline, we find that the Color + Translation + Cutout DiffAugment is especially effective. Its simplicity also makes it easier to deploy.
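The bilinear policies above rely on differentiable resampling, so gradients can still reach the generator. As a minimal sketch (assuming PyTorch; the ±30° range follows the text above, while the padding and sampling details are illustrative), a random bilinear rotation can be written with an affine grid and grid sampling:

```python
import math
import torch
import torch.nn.functional as F

def rand_bilinear_rotation(x, max_deg=30.0):
    # Rotate each image by a random angle in [-max_deg, max_deg] degrees using
    # an affine grid + bilinear sampling, so the op stays differentiable w.r.t. x.
    b = x.size(0)
    theta = (torch.rand(b, device=x.device) * 2 - 1) * max_deg * math.pi / 180.0
    cos, sin = torch.cos(theta), torch.sin(theta)
    mat = torch.zeros(b, 2, 3, device=x.device)
    mat[:, 0, 0], mat[:, 0, 1] = cos, -sin
    mat[:, 1, 0], mat[:, 1, 1] = sin, cos
    grid = F.affine_grid(mat, list(x.shape), align_corners=False)
    return F.grid_sample(x, grid, mode='bilinear',
                         padding_mode='zeros', align_corners=False)
```

Scaling and shearing follow the same pattern with a different affine matrix.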
## 5 Conclusion

We present DiffAugment for data-efficient GAN training. DiffAugment reveals two valuable observations: augmenting both real and fake samples effectively prevents the discriminator from overfitting, and the augmentation must be differentiable to enable both generator and discriminator training. Extensive experiments consistently demonstrate its benefits with different network architectures (StyleGAN2 and BigGAN), supervision settings, and objective functions, across multiple datasets (ImageNet, CIFAR, FFHQ, LSUN, and the 100-shot datasets). Our method is especially effective when limited data is available. Our code, datasets, and models are available for future comparisons.

## Broader Impact

In this paper, we investigate GANs from the data efficiency perspective, aiming to make generative modeling accessible to more people (e.g., visual artists and novice users) and research fields that have no access to abundant data. In real-world scenarios, there can be various reasons for a limited amount of available data, such as rare incidents, privacy concerns, and historical visual data [10]. DiffAugment provides a promising way to alleviate these issues and make AI more accessible to everyone.

## Acknowledgments

We thank NSF Career Award #1943349, MIT-IBM Watson AI Lab, Google, Adobe, and Sony for supporting this research. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). We thank William S. Peebles and Yijun Li for helpful comments.

## References

[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein Generative Adversarial Networks. In International Conference on Machine Learning (ICML), 2017.

[2] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In International Conference on Learning Representations (ICLR), 2018.

[3] Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, and Neil Houlsby. Self-supervised GANs via auxiliary rotation loss. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

[4] Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning Augmentation Policies from Data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

[5] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. RandAugment: Practical Automated Data Augmentation with a Reduced Search Space. arXiv, 2019.

[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

[7] Emily L. Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Conference on Neural Information Processing Systems (NeurIPS), 2015.

[8] Terrance DeVries and Graham W. Taylor. Improved regularization of convolutional neural networks with Cutout. arXiv, 2017.

[9] Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 28(4):594–611, 2006.

[10] Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, and Alexei A. Efros. A century of portraits: A visual historical record of American high school yearbooks. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2015.

[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets.
In Conference on Neural Information Processing Systems (NeurIPS), 2014.

[12] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In Conference on Neural Information Processing Systems (NeurIPS), 2017.

[13] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Conference on Neural Information Processing Systems (NeurIPS), 2017.

[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv, 2020.

[15] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), 2018.

[16] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. arXiv, 2020.

[17] Tero Karras, Samuli Laine, and Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

[18] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and Improving the Image Quality of StyleGAN. arXiv, 2019.

[19] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.

[20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Conference on Neural Information Processing Systems (NeurIPS), 2012.

[21] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.

[22] Muyang Li, Ji Lin, Yaoyao Ding, Zhijian Liu, Jun-Yan Zhu, and Song Han. GAN Compression: Efficient Architectures for Interactive Conditional GANs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

[23] Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast AutoAugment. In Conference on Neural Information Processing Systems (NeurIPS), 2019.

[24] Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, and Antonio Torralba. Diverse image generation via self-conditioned GANs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

[25] Mario Lučić, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, and Sylvain Gelly. High-Fidelity Image Generation With Fewer Labels. In International Conference on Machine Learning (ICML), 2019.

[26] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[27] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In International Conference on Machine Learning (ICML), 2018.

[28] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2018.

[29] Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In International Conference on Learning Representations (ICLR), 2018.
[30] Sangwoo Mo, Minsu Cho, and Jinwoo Shin. Freeze the Discriminator: A Simple Baseline for Fine-tuning GANs. arXiv, 2020.

[31] Atsuhiro Noguchi and Tatsuya Harada. Image generation from small datasets via batch statistics adaptation. In IEEE International Conference on Computer Vision (ICCV), 2019.

[32] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

[33] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016.

[34] Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, and Thomas Hofmann. Stabilizing training of generative adversarial networks through regularization. In Conference on Neural Information Processing Systems (NeurIPS), pages 2018–2028, 2017.

[35] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Conference on Neural Information Processing Systems (NeurIPS), 2016.

[36] Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, and Chang Xu. Co-Evolutionary Compression for Unpaired Image Translation. In IEEE International Conference on Computer Vision (ICCV), 2019.

[37] Zhangzhang Si and Song-Chun Zhu. Learning Hybrid Image Templates (HIT) by Information Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2011.

[38] Patrice Y. Simard, David Steinkraus, and John C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the International Conference on Document Analysis and Recognition, 2003.

[39] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv, 2016.

[40] Kenneth O. Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002.

[41] Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Trung-Kien Nguyen, and Ngai-Man Cheung. Towards good practices for data augmentation in GAN training. arXiv, 2020.

[42] Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using DropConnect. In International Conference on Machine Learning (ICML), 2013.

[43] Xiaolong Wang, Abhinav Shrivastava, and Abhinav Gupta. A-Fast-RCNN: Hard positive generation via adversary for object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[44] Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, and Joost van de Weijer. MineGAN: Effective knowledge transfer from GANs to target domains with few images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

[45] Yaxing Wang, Chenshen Wu, Luis Herranz, Joost van de Weijer, Abel Gonzalez-Garcia, and Bogdan Raducanu. Transferring GANs: Generating images from limited data. In European Conference on Computer Vision (ECCV), 2018.

[46] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv, 2015.

[47] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond Empirical Risk Minimization.
In International Conference on Learning Representations (ICLR), 2018.

[48] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-Attention Generative Adversarial Networks. In International Conference on Machine Learning (ICML), 2019.

[49] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.

[50] Han Zhang, Zizhao Zhang, Augustus Odena, and Honglak Lee. Consistency regularization for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2020.

[51] Zhengli Zhao, Zizhao Zhang, Ting Chen, Sameer Singh, and Han Zhang. Image augmentations for GAN training. arXiv, 2020.

[52] Brady Zhou and Philipp Krähenbühl. Don't let your discriminator be fooled. In International Conference on Learning Representations (ICLR), 2019.

[53] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.

[54] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR), 2017.