# Lipschitz Generative Adversarial Nets

Zhiming Zhou¹, Jiadong Liang², Yuxuan Song¹, Lantao Yu³, Hongwei Wang³, Weinan Zhang¹, Yong Yu¹, Zhihua Zhang²

¹Shanghai Jiao Tong University, ²Peking University, ³Stanford University. Correspondence to: Zhiming Zhou.

Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).

Abstract

In this paper we show that generative adversarial networks (GANs) without restriction on the discriminative function space commonly suffer from the problem that the gradient produced by the discriminator is uninformative to guide the generator. By contrast, Wasserstein GAN (WGAN), where the discriminative function is restricted to be 1-Lipschitz, does not suffer from such a gradient uninformativeness problem. We further show in the paper that the model with a compact dual form of the Wasserstein distance, where the Lipschitz condition is relaxed, may also theoretically suffer from this issue. This implies the importance of the Lipschitz condition and motivates us to study the general formulation of GANs with Lipschitz constraint, which leads to a new family of GANs that we call Lipschitz GANs (LGANs). We show that LGANs guarantee the existence and uniqueness of the optimal discriminative function as well as the existence of a unique Nash equilibrium. We prove that LGANs are generally capable of eliminating the gradient uninformativeness problem. According to our empirical analysis, LGANs are more stable and generate consistently higher-quality samples than WGAN.

1. Introduction

Generative adversarial networks (GANs) (Goodfellow et al., 2014), as one of the most successful generative models, have shown promising results in various challenging tasks. GANs are popular and widely used, but they are notoriously hard to train (Goodfellow, 2016). The underlying obstacles, though heavily studied (Arjovsky & Bottou, 2017; Lucic et al., 2017; Heusel et al., 2017; Mescheder et al., 2017; 2018; Yadav et al., 2017), are still not fully understood.

The objective of a GAN is usually defined as a distance metric between the real distribution $P_r$ and the generative distribution $P_g$, which implies that $P_r = P_g$ is the unique global optimum. The nonconvergence of traditional GANs has been regarded as a result of an ill-behaving distance metric (Arjovsky & Bottou, 2017), i.e., the distance between $P_r$ and $P_g$ remains constant when their supports are disjoint. Arjovsky et al. (2017) accordingly suggested using the Wasserstein distance, which properly measures the distance between two distributions no matter whether their supports are disjoint.

In this paper, we conduct a further study on the convergence of GANs from the perspective of the informativeness of the gradient of the optimal discriminative function $f^*$. We show that for GANs that place no restriction on the discriminative function space, e.g., the vanilla GAN and most of its variants, $f^*(x)$ is related only to the densities at the local point $x$ and does not reflect any information about other points in the distributions. We demonstrate that under these circumstances, the gradient of the optimal discriminative function with respect to its input, based on which the generator updates generated samples, usually tells nothing about the real distribution.
We refer to this phenomenon as gradient uninformativeness; it is substantially different from gradient vanishing and is a fundamental cause of the nonconvergence of GANs. According to the analysis of Gulrajani et al. (2017), Wasserstein GAN can avoid the gradient uninformativeness problem. Meanwhile, we show in the paper that the Lipschitz constraint in the Kantorovich-Rubinstein dual of the Wasserstein distance can be relaxed, leading to a new, equivalent dual; with this new dual form, the gradient may again fail to reflect any information about how to refine $P_g$ towards $P_r$. This suggests that the Lipschitz condition is a vital element for resolving the gradient uninformativeness problem.

Motivated by the above analysis, we investigate the general formulation of GANs with Lipschitz constraint. We show that under a mild condition, penalizing the Lipschitz constant guarantees the existence and uniqueness of the optimal discriminative function as well as the existence of a unique Nash equilibrium between $f$ and $P_g$, at which $P_r = P_g$. This leads to a new family of GANs that we call Lipschitz GANs (LGANs).

Table 1: Comparison of different objectives in GANs.

| Model | $\phi$ | $\varphi$ | $\mathcal{F}$ | $f^*(x)$ | Gradient Vanishing | Gradient Uninformative | $f^*(x)$ Uniqueness |
|---|---|---|---|---|---|---|---|
| Vanilla GAN | $-\log(\sigma(-x))$ | $-\log(\sigma(x))$ | $\{f: \mathbb{R}^n \to \mathbb{R}\}$ | $\log\frac{P_r(x)}{P_g(x)}$ | Yes | Yes | Yes |
| Least-Squares GAN | $(x-\alpha)^2$ | $(x-\beta)^2$ | $\{f: \mathbb{R}^n \to \mathbb{R}\}$ | $\frac{\alpha P_g(x)+\beta P_r(x)}{P_r(x)+P_g(x)}$ | No | Yes | Yes |
| $\mu$-Fisher GAN | $x$ | $-x$ | $\{f: \mathbb{R}^n \to \mathbb{R},\ \mathbb{E}_{x\sim\mu}|f(x)|^2 \le 1\}$ | $\frac{1}{F_\mu(P_r,P_g)}\cdot\frac{P_r(x)-P_g(x)}{\mu(x)}$ | No | Yes | Yes |
| Wasserstein GAN | $x$ | $-x$ | $\{f: \mathbb{R}^n \to \mathbb{R},\ k(f) \le 1\}$ | N/A | No | No | No |
| Lipschitz GAN | any satisfying Eq. (11) | any satisfying Eq. (11) | $\{f: \mathbb{R}^n \to \mathbb{R}\}$; $k(f)$ is penalized | N/A | No | No | Yes |

We show that LGANs are generally capable of eliminating gradient uninformativeness, in the sense that with the optimal discriminative function, the gradient for each generated sample, if nonzero, points towards some real sample. This process continues until the Nash equilibrium $P_r = P_g$ is reached.

The remainder of this paper is organized as follows. In Section 2, we provide some preliminaries that will be used in this paper. In Section 3, we study the gradient uninformativeness issue in detail. In Section 4, we present LGANs and their theoretical analysis. We conduct the empirical analysis in Section 5. Finally, we discuss related work in Section 6 and conclude the paper in Section 7.

2. Preliminaries

In this section we first give some notions and then present a general formulation for generative adversarial networks.

2.1. Notation and Notions

Given two metric spaces $(X, d_X)$ and $(Y, d_Y)$, a function $f: X \to Y$ is said to be Lipschitz continuous if there exists a constant $k \ge 0$ such that

$$d_Y(f(x_1), f(x_2)) \le k \, d_X(x_1, x_2), \quad \forall x_1, x_2 \in X. \tag{1}$$

In this paper, as in most existing GANs, the metrics $d_X$ and $d_Y$ are by default the Euclidean distance, which we also denote by $\|\cdot\|$. The smallest such constant $k$ is called the (best) Lipschitz constant of $f$, denoted by $k(f)$.

The first-order Wasserstein distance $W_1$ between two probability distributions is defined as

$$W_1(P_r, P_g) = \inf_{\pi \in \Pi(P_r, P_g)} \mathbb{E}_{(x,y)\sim\pi}[d(x, y)], \tag{2}$$

where $\Pi(P_r, P_g)$ denotes the set of all probability measures with marginals $P_r$ and $P_g$. It can be interpreted as the minimum cost of transporting the distribution $P_g$ to the distribution $P_r$. We use $\pi^*$ to denote the optimal transport plan, and let $S_r$ and $S_g$ denote the supports of $P_r$ and $P_g$, respectively. We say two distributions are disjoint if their supports are disjoint.
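As a quick numerical illustration of these definitions (ours, not from the paper), the following Python sketch estimates $W_1$ between two one-dimensional empirical distributions and the Lipschitz constant of a simple function; it assumes NumPy and SciPy are available, and all numbers are arbitrary.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two 1-D empirical distributions: samples from N(1, 0.1^2) and N(0, 0.1^2).
x_r = rng.normal(loc=1.0, scale=0.1, size=10_000)   # stand-in for P_r
x_g = rng.normal(loc=0.0, scale=0.1, size=10_000)   # stand-in for P_g

# First-order Wasserstein distance (Eq. (2)) between the two empirical measures.
print(f"estimated W1(Pr, Pg) = {wasserstein_distance(x_r, x_g):.3f}")   # roughly |1 - 0| = 1

# Empirical estimate of the (best) Lipschitz constant k(f) of f(x) = 2x + sin(x),
# via the maximum slope over random pairs of points (Eq. (1)); the true k(f) is 3.
f = lambda x: 2.0 * x + np.sin(x)
a, b = rng.uniform(-5.0, 5.0, size=(2, 100_000))
slopes = np.abs(f(a) - f(b)) / (np.abs(a - b) + 1e-12)
print(f"empirical k(f) = {slopes.max():.3f}")
```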
The Kantorovich-Rubinstein (KR) duality (Villani, 2008) provides a more efficient way of computing the Wasserstein distance. The duality states that

$$W_1(P_r, P_g) = \sup_f \ \mathbb{E}_{x\sim P_r}[f(x)] - \mathbb{E}_{x\sim P_g}[f(x)], \quad \text{s.t.}\ f(x) - f(y) \le d(x, y),\ \forall x, y. \tag{3}$$

The constraint in Eq. (3) implies that $f$ is Lipschitz continuous with $k(f) \le 1$. Interestingly, there is a more compact dual form of the Wasserstein distance:

$$W_1(P_r, P_g) = \sup_f \ \mathbb{E}_{x\sim P_r}[f(x)] - \mathbb{E}_{x\sim P_g}[f(x)], \quad \text{s.t.}\ f(x) - f(y) \le d(x, y),\ \forall x \in S_r,\ y \in S_g. \tag{4}$$

The proof for this dual form is given in Appendix A.5. We see that this new dual relaxes the Lipschitz continuity condition of the dual form in Eq. (3).

2.2. Generative Adversarial Networks (GANs)

Typically, GANs can be formulated as

$$\min_{f \in \mathcal{F}} \ J_D \triangleq \mathbb{E}_{z\sim P_z}[\phi(f(g(z)))] + \mathbb{E}_{x\sim P_r}[\varphi(f(x))], \qquad \min_{g \in \mathcal{G}} \ J_G \triangleq \mathbb{E}_{z\sim P_z}[\psi(f(g(z)))], \tag{5}$$

where $P_z$ is the source distribution of the generator in $\mathbb{R}^m$ and $P_r$ is the target (real) distribution in $\mathbb{R}^n$. The generative function $g: \mathbb{R}^m \to \mathbb{R}^n$ learns to output samples that share the same dimension as samples from $P_r$, while the discriminative function $f: \mathbb{R}^n \to \mathbb{R}$ learns to output a score indicating the authenticity of a given sample. Here $\mathcal{F}$ and $\mathcal{G}$ denote the discriminative and generative function spaces, respectively, and $\phi, \varphi, \psi: \mathbb{R} \to \mathbb{R}$ are loss metrics. We denote the implicit distribution of the generated samples by $P_g$. We list the choices of $\mathcal{F}$, $\phi$ and $\varphi$ for some representative GAN models in Table 1.

In these GANs, the gradient that the generator receives from the discriminator with respect to (w.r.t.) a generated sample $x \in S_g$ is

$$\nabla_x J_G(x) \triangleq \nabla_x \psi(f(x)) = \nabla_{f(x)} \psi(f(x)) \cdot \nabla_x f(x), \tag{6}$$

where the first term $\nabla_{f(x)} \psi(f(x))$ is a step-related scalar, and the second term $\nabla_x f(x)$ is a vector with the same dimension as $x$, which indicates the direction the generator should follow when optimizing the generated sample $x$. We use $f^*$ to denote the optimal discriminative function, i.e., $f^* \triangleq \arg\min_{f\in\mathcal{F}} J_D$. For further notation, we let $J_D(x) \triangleq P_g(x)\phi(f(x)) + P_r(x)\varphi(f(x))$, so that $J_D = \int J_D(x)\, dx$.

2.3. The Gradient Vanishing

The gradient vanishing problem has typically been regarded as a key factor causing the nonconvergence of GANs: the gradient becomes zero when the discriminator is perfectly trained. Goodfellow et al. (2014) addressed this problem by using an alternative objective for the generator; in effect, only the scalar $\nabla_{f(x)}\psi(f(x))$ is changed. The Least-Squares GAN (Mao et al., 2016), which also aims at addressing the gradient vanishing problem, likewise focuses on $\nabla_{f(x)}\psi(f(x))$.

Arjovsky & Bottou (2017) provided a new perspective for understanding gradient vanishing. They argued that $S_r$ and $S_g$ are usually disjoint and that gradient vanishing stems from the ill behavior of traditional distance metrics, i.e., the distance between $P_r$ and $P_g$ remains constant when they are disjoint. The Wasserstein distance was thus used (Arjovsky et al., 2017) as an alternative metric, which properly measures the distance between two distributions no matter whether they are disjoint.
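Before moving on, here is a minimal PyTorch-style sketch (our illustration, not code from the paper) that makes the general formulation in Eq. (5) and the gradient decomposition in Eq. (6) concrete; the networks, dimensions, optimizers and the particular choice of $\phi$, $\varphi$, $\psi$ are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks: f maps R^n -> R (discriminator), g maps R^m -> R^n (generator).
n, m = 2, 8
f = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
g = nn.Sequential(nn.Linear(m, 64), nn.ReLU(), nn.Linear(64, n))

# One possible choice of loss metrics for Eq. (5): the vanilla-GAN-style losses.
phi    = lambda t: F.softplus(t)       # phi(t)    = -log(sigma(-t)), applied to fake scores
varphi = lambda t: F.softplus(-t)      # varphi(t) = -log(sigma(t)),  applied to real scores
psi    = lambda t: -t                  # generator loss metric

opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-4)

x_real = torch.randn(128, n) + 1.0     # stand-in batch from P_r
z = torch.randn(128, m)                # batch from the source distribution P_z

# Discriminator step:  min_f  J_D = E_z[phi(f(g(z)))] + E_x[varphi(f(x))].
J_D = phi(f(g(z).detach())).mean() + varphi(f(x_real)).mean()
opt_f.zero_grad(); J_D.backward(); opt_f.step()

# Generator step:  min_g  J_G = E_z[psi(f(g(z)))].
x_fake = g(z)
J_G = psi(f(x_fake)).mean()
opt_g.zero_grad(); J_G.backward(); opt_g.step()

# Eq. (6): w.r.t. a generated sample x, the gradient factors into a step-related
# scalar (d psi / d f(x) = -1 here) times the direction term grad_x f(x).
x = x_fake.detach().requires_grad_(True)
grad_x = torch.autograd.grad(psi(f(x)).sum(), x)[0]    # shape (128, n)
```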
3. The Gradient Uninformativeness

In this paper we focus on the gradient direction of the optimal discriminative function, i.e., $\nabla_x f^*(x)$, along which a generated sample $x$ is updated. We show that for many distance metrics, such a gradient may fail to carry any useful information about $P_r$; consequently, $P_g$ is not guaranteed to converge to $P_r$. We name this phenomenon gradient uninformativeness and argue that it is a fundamental cause of nonconvergence and instability in the training of traditional GANs.

Gradient uninformativeness is substantially different from gradient vanishing. Gradient vanishing concerns the scalar term $\nabla_{f(x)}\psi(f(x))$ in $\nabla_x J_G(x)$, or the overall scale of $\nabla_x J_G(x)$, while gradient uninformativeness concerns the direction of $\nabla_x J_G(x)$, which is determined by $\nabla_x f^*(x)$. The two issues are orthogonal, though they sometimes occur simultaneously. See Table 1 for a summary of these issues for representative GANs.

Next, we discuss gradient uninformativeness according to a taxonomy of restrictions on the discriminative function space $\mathcal{F}$. We will show that for unrestricted GANs, gradient uninformativeness commonly exists; for restricted GANs, the issue might still exist; and with the Lipschitz condition, it generally does not exist.

3.1. Unrestricted GANs

For many GAN models, there is no restriction on $\mathcal{F}$. Typical cases include $f$-divergence based GANs, such as the vanilla GAN (Goodfellow et al., 2014), Least-Squares GAN (Mao et al., 2016) and $f$-GAN (Nowozin et al., 2016). In these GANs, the value of the optimal discriminative function at each point, $f^*(x)$, is independent of other points and reflects only the local densities $P_r(x)$ and $P_g(x)$:

$$f^*(x) = \arg\min_{f(x) \in \mathbb{R}} \ P_g(x)\phi(f(x)) + P_r(x)\varphi(f(x)), \quad \forall x.$$

Hence, for each generated sample $x$ that is not surrounded by real samples (i.e., there exists $\epsilon > 0$ such that for all $y$ with $0 < \|y - x\| < \epsilon$ it holds that $y \notin S_r$), $f^*$ in the neighborhood of $x$ contains no information about $P_r$. Thus $\nabla_x f^*(x)$, the gradient that $x$ receives from the optimal discriminative function, does not reflect any information about $P_r$. A typical situation is that $S_r$ and $S_g$ are disjoint, which is common in practice according to Arjovsky & Bottou (2017).

To further distinguish gradient uninformativeness from gradient vanishing, we consider an idealized case: $S_r$ and $S_g$ totally overlap and both consist of $n$ discrete points, but their probability masses over these points differ. In this case, $\nabla_x f^*(x)$ for each generated sample is still uninformative, yet the gradient does not vanish.

3.2. Restricted GANs: Fisher GAN as an Instance

Some GANs impose restrictions on $\mathcal{F}$. Typical instances are the Integral Probability Metric (IPM) based GANs (Mroueh & Sercu, 2017; Mroueh et al., 2017; Bellemare et al., 2017) and the Wasserstein GAN (Arjovsky et al., 2017). We next show that GANs with a restriction on $\mathcal{F}$ might also suffer from gradient uninformativeness.

The optimal discriminative function of the $\mu$-Fisher IPM $F_\mu(P_r, P_g)$, the generalized objective of the Fisher GAN (Mroueh et al., 2017), has the following form:

$$f^*(x) = \frac{1}{F_\mu(P_r, P_g)} \cdot \frac{P_r(x) - P_g(x)}{\mu(x)},$$

where $\mu$ is a distribution whose support covers $S_r$ and $S_g$, and $\frac{1}{F_\mu(P_r, P_g)}$ is a constant. It can be observed that the $\mu$-Fisher IPM also defines $f^*(x)$ at each point according to the local densities and does not reflect information about other locations. Similarly to the above, we can conclude that for each generated sample that is not surrounded by real samples, $\nabla_x f^*(x)$ is uninformative.

3.3. The Wasserstein GAN

As shown by Gulrajani et al. (2017), the gradient of the optimal discriminative function in the KR dual form of the Wasserstein distance has the following property:

Proposition 1. Let $\pi^*$ be the optimal transport plan in Eq. (2) and $x_t = t x + (1-t) y$ with $0 \le t \le 1$. If the optimal discriminative function $f^*$ in Eq.
(3) is differentiable and $\pi^*(x, x) = 0$ for all $x$, then it holds that

$$\mathbb{P}_{(x,y)\sim\pi^*}\!\left[\nabla_{x_t} f^*(x_t) = \frac{y - x}{\|y - x\|}\right] = 1.$$

This proposition indicates: (i) for each generated sample $x$, there exists a real sample $y$ such that $\nabla_{x_t} f^*(x_t) = \frac{y - x}{\|y - x\|}$ for all linear interpolations $x_t$ between $x$ and $y$, i.e., the gradient at any $x_t$ points towards the real sample $y$; (ii) these $(x, y)$ pairs match the optimal coupling $\pi^*$ from the optimal transport perspective. This implies that WGAN is able to overcome gradient uninformativeness as well as gradient vanishing.

Our concern turns to the reason why WGAN avoids gradient uninformativeness. To address this question, we alternatively apply the compact dual of the Wasserstein distance in Eq. (4) and study the optimal discriminative function. Since there is generally no closed-form solution for $f^*$ in Eq. (4), we take an illustrative example, but the conclusion is general. Let $Z \sim U[0, 1]$ be a uniform random variable on the interval $[0, 1]$, let $P_r$ be the distribution of $(1, Z)$ in $\mathbb{R}^2$, and let $P_g$ be the distribution of $(0, Z)$ in $\mathbb{R}^2$. According to Eq. (4), an optimal $f^*$ is

$$f^*(x) = \begin{cases} 1, & x \in S_r, \\ 0, & x \in S_g. \end{cases} \tag{9}$$

Though it carries the constraint $f(x) - f(y) \le d(x, y),\ \forall x \in S_r, y \in S_g$, the Wasserstein distance in this dual form also only defines the values of $f^*(x)$ on $S_r$ and $S_g$. For each generated sample $x$ that is isolated or at the boundary (i.e., there does not exist $\epsilon > 0$ such that $y \in S_r \cup S_g$ for all $y$ with $0 < \|y - x\| < \epsilon$), the gradient of $f^*$ at $x$ is theoretically undefined and thus cannot provide useful information about $P_r$. To make this clearer, one can consider the more extreme case where $S_g$ consists of isolated points.

These examples imply that the Lipschitz condition is critical for resolving the gradient uninformativeness problem. Motivated by this, we study the general formulation of GANs with Lipschitz constraint, which leads to a family of more general GANs that we call Lipschitz GANs. We will see that in Lipschitz GANs, the similarity measure between $P_r$ and $P_g$ might not be a Wasserstein distance, but they still perform very well.

4. Lipschitz GANs

Lipschitz continuity has recently become popular in GANs. It has been observed that introducing Lipschitz continuity as a regularization of the discriminator leads to improved stability and sample quality (Arjovsky et al., 2017; Kodali et al., 2017; Fedus et al., 2017; Miyato et al., 2018; Qi, 2017). In this paper, we investigate the general formulation of GANs with Lipschitz constraint, where the Lipschitz constant of the discriminative function is penalized via a quadratic loss, so as to theoretically analyze the properties of such GANs. In particular, we define the Lipschitz Generative Adversarial Nets (LGANs) as

$$\min_{f \in \mathcal{F}} \ \mathbb{E}_{z\sim P_z}[\phi(f(g(z)))] + \mathbb{E}_{x\sim P_r}[\varphi(f(x))] + \lambda \, k(f)^2, \qquad \min_{g \in \mathcal{G}} \ \mathbb{E}_{z\sim P_z}[\psi(f(g(z)))]. \tag{10}$$

In this work, we further assume that the loss functions $\phi$ and $\varphi$ satisfy the following conditions:

$$\phi'(x) > 0, \quad \varphi'(x) < 0, \quad \phi''(x) \ge 0, \quad \varphi''(x) \ge 0, \quad \exists a,\ \phi'(a) + \varphi'(a) = 0. \tag{11}$$

These assumptions on the losses $\phi$ and $\varphi$ are very mild. Note that WGAN uses $\phi(x) = \varphi(-x) = x$, which satisfies Eq. (11). There are many other instances, such as $\phi(x) = \varphi(-x) = -\log(\sigma(-x))$, $\phi(x) = \varphi(-x) = x + \sqrt{x^2 + 1}$ and $\phi(x) = \varphi(-x) = \exp(x)$. Meanwhile, there also exist losses used in GANs that do not satisfy Eq. (11), e.g., the quadratic loss (Mao et al., 2016) and the hinge loss (Zhao et al., 2016; Lim & Ye, 2017; Miyato et al., 2018). To devise a loss for LGANs, it is practical to let $\phi$ be an increasing function with non-decreasing derivative and set $\varphi(x) = \phi(-x)$.
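As an aside (ours, not from the paper), the sketch below defines the $(\phi, \varphi)$ pairs listed above with $\varphi(x) = \phi(-x)$ and numerically spot-checks the conditions of Eq. (11) by finite differences; the interval, step size and tolerances are arbitrary choices.

```python
import numpy as np

# Candidate phi's from the text; in each case we take varphi(x) = phi(-x).
phis = {
    "x  (WGAN)":          lambda x: x,
    "-log(sigma(-x))":    lambda x: np.logaddexp(0.0, x),     # softplus(x)
    "x + sqrt(x^2 + 1)":  lambda x: x + np.sqrt(x ** 2 + 1.0),
    "exp(x)":             lambda x: np.exp(x),
}

xs, h = np.linspace(-3.0, 3.0, 601), 1e-4

def d1(f, x):  # central-difference first derivative
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x):  # central-difference second derivative
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

for name, phi in phis.items():
    varphi = lambda x, phi=phi: phi(-x)
    ok = (
        np.all(d1(phi, xs) > 0) and np.all(d1(varphi, xs) < 0)                  # phi' > 0, varphi' < 0
        and np.all(d2(phi, xs) >= -1e-6) and np.all(d2(varphi, xs) >= -1e-6)    # phi'' >= 0, varphi'' >= 0
        and abs(d1(phi, 0.0) + d1(varphi, 0.0)) < 1e-6                          # phi'(a) + varphi'(a) = 0 at a = 0
    )
    print(f"{name:>20s}: satisfies Eq. (11) on [-3, 3]? {ok}")
```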
Moreover, linear combinations of such losses still satisfy Eq. (11). Figure 13 illustrates some of these loss metrics. Note that $\phi(x) = \varphi(-x) = -\log(\sigma(-x))$ corresponds to the objective of the vanilla GAN. As we have shown, the vanilla GAN suffers from the gradient uninformativeness problem; however, as we will show next, when the Lipschitz regularization is imposed, the resulting model, as a specific case of LGANs, behaves very well.

4.1. Theoretical Analysis

We now present the theoretical analysis of LGANs. First, we consider the existence and uniqueness of the optimal discriminative function.

Theorem 1. Under Assumption (11), and if $\phi$ or $\varphi$ is strictly convex, the optimal discriminative function $f^*$ of Eq. (10) exists and is unique.

Note that although WGAN does not satisfy the condition in Theorem 1, its solution still exists but is not unique. Specifically, if $f^*$ is an optimal solution, then $f^* + \alpha$ for any $\alpha \in \mathbb{R}$ is also an optimal solution.

The following theorems can be regarded as a generalization of Proposition 1 to LGANs.

Theorem 2. Assume $\phi'(x) > 0$, $\varphi'(x) < 0$, and that the optimal discriminator $f^*$ exists and is smooth. We have:

(a) For all $x \in S_r \cup S_g$, if $\nabla_{f^*(x)} J_D(x) \ne 0$, then there exists $y \in S_r \cup S_g$ with $y \ne x$ such that $|f^*(y) - f^*(x)| = k(f^*) \, \|y - x\|$;

(b) For all $x \in (S_r \cup S_g) \setminus (S_r \cap S_g)$, there exists $y \in S_r \cup S_g$ with $y \ne x$ such that $|f^*(y) - f^*(x)| = k(f^*) \, \|y - x\|$;

(c) If $S_r = S_g$ and $P_r \ne P_g$, then there exists a pair $(x, y)$ with both points in $S_r \cup S_g$ and $y \ne x$ such that $|f^*(y) - f^*(x)| = k(f^*) \, \|y - x\|$ and $\nabla_{f^*(x)} J_D(x) \ne 0$;

(d) There is a unique Nash equilibrium between $P_g$ and $f$ under the objective $J_D + \lambda \, k(f)^2$, at which $P_r = P_g$ and $k(f^*) = 0$.

The proof is given in Appendix A.2. This theorem states the basic properties of LGANs, including the existence of a unique Nash equilibrium where $P_r = P_g$ and the existence of bounding relationships in the optimal discriminative function (i.e., of $y \ne x$ such that $|f^*(y) - f^*(x)| = k(f^*)\|y - x\|$). The former ensures that the objective is a well-defined distance metric; the latter, as we will show next, eliminates the gradient uninformativeness problem.

It is worth noticing that the penalty on $k(f)$ is in fact necessary for Properties (c) and (d). The reason is the existence of cases where $\nabla_{f^*(x)} J_D(x) = 0$ while $P_r(x) \ne P_g(x)$. Minimizing $k(f)$ guarantees that the only Nash equilibrium is achieved when $P_r = P_g$. In WGAN, minimizing $k(f)$ is not necessary. However, if $k(f)$ is not minimized towards zero, $\nabla_x f^*(x)$ is not guaranteed to be zero at the convergence state $P_r = P_g$, where any function subject to the 1-Lipschitz constraint is an optimal $f^*$ in WGAN. This implies that minimizing $k(f)$ also benefits WGAN.

4.2. Refining the Bounding Relationship

From Theorem 2, we know that for any point $x$, as long as $J_D(x)$ does not have a zero gradient with respect to $f^*(x)$, $f^*(x)$ must be bounded by another point $y$ in the sense that $|f^*(y) - f^*(x)| = k(f^*)\|y - x\|$. We further clarify that when such a bounding relationship exists, it must involve both real sample(s) and fake sample(s). More formally, we have:

Theorem 3. Under the conditions of Theorem 2, we have:

1) For any $x \in S_g$, if $\nabla_{f^*(x)} J_D(x) > 0$, then there must exist some $y \in S_r$ with $y \ne x$ such that $f^*(y) - f^*(x) = k(f^*)\|y - x\|$ and $\nabla_{f^*(y)} J_D(y) < 0$;

2) For any $y \in S_r$, if $\nabla_{f^*(y)} J_D(y) < 0$, then there must exist some $x \in S_g$ with $y \ne x$ such that $f^*(y) - f^*(x) = k(f^*)\|y - x\|$ and $\nabla_{f^*(x)} J_D(x) > 0$.

The intuition behind the above theorem is that samples from the same distribution (e.g., the fake samples) will not bound each other in a way that violates the optimality of $J_D(x)$. So, when there is a strict bounding relationship (i.e., one involving points with $\nabla_{f^*(x)} J_D(x) \ne 0$), it must involve both real and fake samples. It is worth noticing that unless $S_r$ and $S_g$ overlap, all fake samples satisfy $\nabla_{f^*(x)} J_D(x) > 0$, while all real samples satisfy $\nabla_{f^*(y)} J_D(y) < 0$. Note that a set of real and fake samples might bound one another. Under the Lipschitz continuity condition, the bounding relationship on the value surface of $f^*$ is the basic building block that connects $P_r$ and $P_g$, and each fake sample with $\nabla_{f^*(x)} J_D(x) \ne 0$ lies in at least one such bounding relationship. Next, we further interpret the implication of the bounding relationship and show that it guarantees a meaningful $\nabla_x f^*(x)$ for all involved points.
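As a toy sanity check of these bounding relationships (our construction, not an experiment from the paper), the sketch below optimizes the discriminator values on a finite support, one fake point and two real points, under $J_D + \lambda k(f)^2$, approximating $k(f)$ by the maximum pairwise slope; the points, masses, losses and $\lambda$ are arbitrary.

```python
import torch
import torch.nn.functional as F

# Finite supports: one fake sample at (0, 0) carrying all of P_g, and two real
# samples at (1, 0) and (0, 5), each carrying half of P_r.  We optimize the
# discriminator values on these points directly under J_D + lambda * k(f)^2,
# approximating k(f) by the maximum pairwise slope over the support.
pts = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 5.0]])
pg  = torch.tensor([1.0, 0.0, 0.0])                  # P_g masses
pr  = torch.tensor([0.0, 0.5, 0.5])                  # P_r masses
f   = torch.zeros(3, requires_grad=True)             # f values on the three points

phi    = lambda t: F.softplus(t)                     # phi(x)    = -log(sigma(-x))
varphi = lambda t: F.softplus(-t)                    # varphi(x) = phi(-x)
lam    = 1.0

i, j = torch.triu_indices(3, 3, offset=1)            # point pairs (0,1), (0,2), (1,2)
dist = (pts[i] - pts[j]).norm(dim=1)

opt = torch.optim.Adam([f], lr=1e-2)
for _ in range(10000):
    J_D = (pg * phi(f) + pr * varphi(f)).sum()
    k = ((f[i] - f[j]).abs() / dist).max()           # k(f) on the finite support
    loss = J_D + lam * k ** 2
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    slopes = (f[i] - f[j]).abs() / dist
    print("f values:", [round(v, 3) for v in f.tolist()])
    print("slopes (0,1), (0,2), (1,2):", [round(v, 3) for v in slopes.tolist()])
# Expected picture: the two fake-real pairs attain (approximately) the maximal
# slope k, i.e. the fake sample is in a bounding relationship with real samples,
# while the real-real pair stays strictly below k, in line with Theorem 3.
```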
4.3. The Implication of Bounding Relationship

Recall that Proposition 1 states that $\nabla_{x_t} f^*(x_t) = \frac{y - x}{\|y - x\|}$. We next show that this is in fact a direct consequence of the bounding relationship between $x$ and $y$. We formally state it as follows:

Theorem 4. Assume the function $f$ is differentiable and its Lipschitz constant is $k$. Then for all $x$ and $y$ satisfying $y \ne x$ and $f(y) - f(x) = k \|y - x\|$, we have

$$\nabla_{x_t} f(x_t) = k \, \frac{y - x}{\|y - x\|} \quad \text{for all } x_t = t x + (1 - t) y \text{ with } 0 \le t \le 1.$$

In other words, if two points $x$ and $y$ bound each other in the sense that $f(y) - f(x) = k\|y - x\|$, then the value surface of $f$ is a straight line between $x$ and $y$. Every point on this line attains the maximum gradient slope $k$, and the gradient direction at every point on this line is along the direction from $x$ to $y$. The proof is provided in Appendix A.4.

Combining Theorems 2 and 3, we conclude that when $S_r$ and $S_g$ are disjoint, the gradient $\nabla_x f^*(x)$ for each generated sample $x \in S_g$ points towards some real sample $y \in S_r$, which guarantees that updating based on $\nabla_x f^*(x)$ pulls $P_g$ towards $P_r$ at every step. In fact, Theorem 2 provides a further guarantee on convergence. Property (b) implies that for any generated sample $x \in S_g$ that does not lie in $S_r$, its gradient $\nabla_x f^*(x)$ must point towards some real sample $y \in S_r$. In the fully overlapping case, according to Property (c), unless $P_r = P_g$ there must exist at least one pair $(x, y)$ in a strict bounding relationship, and $\nabla_x f^*(x)$ pulls $x$ towards $y$. Finally, Property (d) guarantees that the only Nash equilibrium is $P_r = P_g$, where $\nabla_x f^*(x) = 0$ for all generated samples.
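The following self-contained PyTorch sketch (ours; it is not the paper's released code and merely foreshadows the toy setup of Section 5.2) trains a small discriminator with a Lipschitz penalty on fixed 2-D distributions and then checks that the gradient at each fake sample roughly points towards a real sample; the data, architecture, penalty estimator and hyper-parameters are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Fixed toy distributions in R^2: P_r is a mixture of two Gaussian blobs and
# P_g is a single blob close to one of them (cf. the setup of Section 5.2).
x_real = torch.cat([torch.randn(256, 2) * 0.2 + torch.tensor([ 2.0, 2.0]),
                    torch.randn(256, 2) * 0.2 + torch.tensor([-2.0, 2.0])])
x_fake = torch.randn(256, 2) * 0.2 + torch.tensor([2.0, -1.0])

f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
phi, varphi = (lambda t: F.softplus(t)), (lambda t: F.softplus(-t))
lam = 1.0
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

def max_slope(points, values):
    """Estimate k(f) on a finite set of points as the maximum pairwise slope."""
    d = torch.cdist(points, points) + torch.eye(len(points))   # avoid 0/0 on the diagonal
    return ((values - values.T).abs() / d).max()

xs = torch.cat([x_real, x_fake])
for _ in range(3000):
    J_D = phi(f(x_fake)).mean() + varphi(f(x_real)).mean()
    loss = J_D + lam * max_slope(xs, f(xs)) ** 2                # J_D + lambda * k(f)^2
    opt.zero_grad(); loss.backward(); opt.step()

# Check of Section 4.3: at an (approximately) optimal f, the gradient at each
# fake sample should point towards some real sample; as a proxy we compare it
# with the direction to the nearest real sample.
x = x_fake.clone().requires_grad_(True)
grad = torch.autograd.grad(f(x).sum(), x)[0]
nearest = x_real[torch.cdist(x_fake, x_real).argmin(dim=1)]
cos = F.cosine_similarity(grad, nearest - x_fake, dim=1)
print(f"mean cosine(grad_x f, direction to nearest real) = {cos.mean().item():.3f}")  # should be close to 1
```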
5. Empirical Analysis

In this section, we empirically study the gradient uninformativeness problem and the performance of various objectives of Lipschitz GANs. The anonymized code is provided in the supplemental material.

5.1. Gradient Uninformativeness in Practice

According to our analysis, $\nabla_x f^*(x)$ for most traditional GANs is uninformative. Here we investigate the practical behaviors of gradient uninformativeness. Since the behaviors of GANs without restriction on $\mathcal{F}$ are essentially identical, we choose the Least-Squares GAN, whose $f^*$ is relatively simple, as the representative, and study it with a set of synthetic experiments that ease visualization. The results are shown in Figure 1.

Figure 1: Practical behaviors of gradient uninformativeness: noisy gradient. Local greedy gradient leads to mode collapse. (a) Disjoint case; (b) Overlapping case; (c) Mode collapse.

We find that the gradient is very noisy, which we believe is the typical practical behavior of gradient uninformativeness. Given the nondeterministic nature of $f^*(x)$ at points outside $S_r \cup S_g$, $\nabla_x f^*(x)$ is highly sensitive to the hyper-parameters. We in fact conduct the same experiments with a set of different hyper-parameters; the remaining results are provided in Appendix B.

In Section 3, we discussed gradient uninformativeness in the circumstance where a fake sample is not surrounded by real samples. Actually, the problem with $\nabla_x f^*(x)$ in traditional GANs is more general and can also be regarded as gradient uninformativeness. For example, in the case of Figure 1b, where the real and fake samples are both evenly distributed over two regions with different densities, $f^*(x)$ is constant in each region and undefined outside. Theoretically, $\nabla_x f^*(x)$ is zero for interior points and undefined for boundary points; in practice, both also behave as noisy gradients. We note that in the totally overlapping and continuous case, $\nabla_x f^*(x)$ is also ill-behaved, which appears to be an intrinsic cause of mode collapse, as illustrated in Figure 1c, where $P_r$ and $P_g$ are both devised to be Gaussian(s).

5.2. Verifying $\nabla_x f^*(x)$ of LGANs

One important theoretical benefit of LGANs is that $\nabla_x f^*(x)$ for each generated sample is guaranteed to point towards some real sample. We here verify the gradient direction of $\nabla_x f^*(x)$ with a set of $\phi$ and $\varphi$ that satisfy Eq. (11). The tested objectives include: (a) $\phi(x) = \varphi(-x) = x$; (b) $\phi(x) = \varphi(-x) = -\log(\sigma(-x))$; (c) $\phi(x) = \varphi(-x) = x + \sqrt{x^2 + 1}$; (d) $\phi(x) = \varphi(-x) = \exp(x)$. They are tested in two scenarios: two-dimensional toy data and real-world high-dimensional data. In the two-dimensional case, $P_r$ consists of two Gaussians and $P_g$ is fixed as one Gaussian which is close to one of the two real Gaussians, as illustrated in Figure 2. For the latter case, we use the CIFAR-10 training set. To make solving $f^*$ feasible, we use ten CIFAR-10 images as $P_r$ and ten fixed noise images as $P_g$. Note that we fix $P_g$ on purpose: to verify the direction of $\nabla_x f^*(x)$, learning $P_g$ is not necessary.

The results are shown in Figures 2 and 3, respectively. In Figure 2, we can see that the gradient of each generated sample points towards some real sample. For the high-dimensional case, visualizing the gradient direction is nontrivial; hence, we plot the gradient and the corresponding increments. In Figure 3, the leftmost image in each row is a sample $x$ from $P_g$ and the second is its gradient $\nabla_x f(x)$. The interior images are $x + \epsilon \, \nabla_x f(x)$ with increasing $\epsilon$, and the rightmost is the nearest real sample $y$ from $P_r$. This result visually demonstrates that the gradient of a generated sample points towards a real sample. Note that the final results in Figure 3 remain almost identical when varying the loss metrics $\phi$ and $\varphi$ within the family of LGANs.

Figure 2: $\nabla_x f^*(x)$ in LGANs points towards real samples.

Figure 3: $\nabla_x f^*(x)$ gradation with CIFAR-10.

5.3. Stabilized Discriminative Functions

The Wasserstein distance is a very special case that has a solution under the Lipschitz constraint: it is the only case where both $\phi$ and $\varphi$ have constant derivatives. As a result, $f^*$ under the Wasserstein distance has a free offset, i.e., given some $f^*$, $f^* + \alpha$ for any $\alpha \in \mathbb{R}$ is also optimal. In practice, this manifests as oscillations of $f(x)$ during training. The oscillations affect the practical performance of WGAN; Karras et al. (2017) and Adler & Lunz (2018) introduced regularization of the discriminative function to prevent $f(x)$ from drifting during training. By contrast, other instances of LGANs do not have this problem. We illustrate the practical difference in Figure 5.
5.4. Max Gradient Penalty (Max GP)

LGANs impose a penalty on the Lipschitz constant of the discriminative function. There are works that investigate different implementations of Lipschitz continuity in GANs, such as the gradient penalty (GP) (Gulrajani et al., 2017), the Lipschitz penalty (LP) (Petzka et al., 2017) and spectral normalization (SN) (Miyato et al., 2018). However, these existing regularization methods do not directly penalize the Lipschitz constant. According to Adler & Lunz (2018), the Lipschitz constant $k(f)$ is equivalent to the maximum scale of $\|\nabla_x f(x)\|$. Both GP and LP penalize all gradients whose scales are larger than a given target Lipschitz constant $k_0$. SN directly restricts the Lipschitz constant by normalizing the network weights by their largest singular values; however, it is currently unclear how to effectively penalize the Lipschitz constant with SN. To directly penalize the Lipschitz constant, we approximate $k(f)$ in Eq. (10) with the maximum sampled gradient scale:

$$k(f) \approx \max_x \|\nabla_x f(x)\|. \tag{12}$$

Practically, we follow Gulrajani et al. (2017) and sample $x$ as random interpolations of real and fake samples. We provide more details of this algorithm (Max GP) in Appendix C.
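Below is a minimal PyTorch-style sketch of how Eq. (12) could be implemented with the interpolation-based sampling of Gulrajani et al. (2017); it is our illustration rather than the authors' released implementation, and the function and variable names are placeholders.

```python
import torch

def max_gradient_penalty(f, x_real, x_fake):
    """Approximate k(f)^2 by the largest sampled gradient norm, as in Eq. (12).

    x is sampled as random interpolations of real and fake samples, following
    Gulrajani et al. (2017); compared with GP/LP, only the aggregation differs:
    the squared *maximum* gradient norm is penalized, rather than every norm
    exceeding a target k0.
    """
    eps = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x = (eps * x_real + (1.0 - eps) * x_fake).requires_grad_(True)
    grad = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(start_dim=1).norm(dim=1)
    return grad_norm.max() ** 2                      # (max_x ||grad_x f(x)||)^2  ~=  k(f)^2

# Usage inside the discriminator update of Eq. (10), with placeholder phi/varphi/lam:
#   J_D = phi(f(x_fake)).mean() + varphi(f(x_real)).mean() \
#         + lam * max_gradient_penalty(f, x_real, x_fake)
```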
According to our experiments, Max GP is in practice usually comparable with GP and LP. However, in some of our synthetic experiments, we find that Max GP is able to achieve the optimal discriminative function while GP and LP fail, e.g., when solving $f^*$ in Figure 3. Also, in some real-data experiments, we find that training with GP or LP diverges but converges if we switch to Max GP, e.g., training with the metric $\phi(x) = \varphi(-x) = \exp(x)$.

5.5. Benchmark with Unsupervised Image Generation

To quantitatively compare the performance of different objectives under the Lipschitz constraint, we test them on unsupervised image generation tasks. In this part of the experiments, we also include the hinge loss $\phi(x) = \varphi(-x) = \max(0, x + \alpha)$ and the quadratic loss (Mao et al., 2016), which do not fit the assumption of strict monotonicity. For the quadratic loss, we set $\phi(x) = \varphi(-x) = (x + \alpha)^2$. To keep the comparison simple, we fix $\psi(x)$ in the generator objective as $-x$. We set $\alpha = 1.0$ in the experiments.

The strict monotonicity assumption on $\phi$ and $\varphi$ is critical in Theorem 2 for theoretically guaranteeing the existence of bounding relationships for arbitrary data. But if we further assume $S_r$ and $S_g$ are bounded, it is possible that there exists a suitable $\lambda$ such that all real and fake samples lie in a strictly monotone region of $\phi$ and $\varphi$: for the hinge loss, this would mean $2\alpha < k(f)\,\|y - x\|$ for all $y \in S_r$ and $x \in S_g$.

The results in terms of Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017) are presented in Table 2. For all experiments, we adopt the network structures and hyper-parameter settings from Gulrajani et al. (2017), where WGAN-GP in our implementation achieves IS 7.71 ± 0.03 and FID 18.86 ± 0.13 on CIFAR-10. We use Max GP for all experiments and search for the best $\lambda$ in {0.01, 0.1, 1.0, 10.0}. We use 200,000 iterations for better convergence and 500k samples to evaluate IS and FID for better stability. We note that IS is remarkably unstable during training and across different initializations; by contrast, FID is fairly stable.

Table 2: Quantitative comparisons on unsupervised image generation.

| Objective | CIFAR-10 IS | CIFAR-10 FID | Tiny ImageNet IS | Tiny ImageNet FID |
|---|---|---|---|---|
| $x$ | 7.68 ± 0.03 | 18.35 ± 0.12 | 8.66 ± 0.04 | 16.47 ± 0.04 |
| $\exp(x)$ | 8.03 ± 0.03 | 15.64 ± 0.07 | 8.67 ± 0.04 | 14.90 ± 0.07 |
| $-\log(\sigma(-x))$ | 7.95 ± 0.04 | 16.47 ± 0.11 | 8.70 ± 0.04 | 15.05 ± 0.07 |
| $x + \sqrt{x^2 + 1}$ | 7.97 ± 0.03 | 16.03 ± 0.09 | 8.82 ± 0.03 | 15.11 ± 0.06 |
| $(x + 1)^2$ | 7.97 ± 0.04 | 15.90 ± 0.09 | 8.53 ± 0.04 | 15.72 ± 0.11 |
| $\max(0, x + 1)$ | 7.91 ± 0.04 | 16.52 ± 0.12 | 8.63 ± 0.04 | 15.75 ± 0.06 |

From Table 2, we can see that LGANs generally work better than WGAN. Different LGANs have relatively similar final results, while the objectives $\phi(x) = \varphi(-x) = \exp(x)$ and $\phi(x) = \varphi(-x) = x + \sqrt{x^2 + 1}$ achieve the best performance. The hinge loss and quadratic loss with a suitable $\lambda$ also turn out to work quite well. We plot the training curves in terms of FID in Figures 4 and 6. Due to the page limit, we leave more results and details in Appendix D.

Figure 4: Training curves on CIFAR-10.

Figure 5: $f^*(x)$ in LGANs is more stable. Left: WGAN. Right: LGANs.

Figure 6: Training curves on Tiny ImageNet.

6. Related Work

WGAN (Arjovsky et al., 2017), based on the KR dual, does not suffer from the gradient uninformativeness problem. We have shown that the Lipschitz constraint in the KR dual of the Wasserstein distance can be relaxed, and that with the new dual form the resulting model does suffer from gradient uninformativeness. We have also shown that the Lipschitz constraint is able to ensure convergence for a family of GAN objectives, which is not limited to the Wasserstein distance. For example, Lipschitz continuity has also been introduced to the vanilla GAN (Miyato et al., 2018; Kodali et al., 2017; Fedus et al., 2017), achieving improvements in the quality of generated samples. As a matter of fact, the vanilla GAN objective $\phi(x) = \varphi(-x) = -\log(\sigma(-x))$ is a special case of our LGANs; thus our analysis explains why and how it works. Farnia & Tse (2018) also provide some analysis of how $f$-divergences behave when combined with the Lipschitz constraint; however, their analysis is limited to symmetric $f$-divergences.

Fedus et al. (2017) also argued that divergence is not the primary guide of GAN training. However, they held that the vanilla GAN with a non-saturating generator objective somehow works. According to our analysis, given the optimal $f^*$, the vanilla GAN has no guarantee on its convergence. Unterthiner et al. (2017) provided some arguments on the unreliability of $\nabla_x f^*(x)$ in traditional GANs, which motivated their proposal of Coulomb GAN. However, their arguments are not thorough. By contrast, we identify the gradient uninformativeness problem and link it to restrictions on $\mathcal{F}$. Moreover, we accordingly propose a new solution, i.e., Lipschitz GANs.

Some work studies the suboptimal convergence of GANs (Mescheder et al., 2017; 2018; Arora et al., 2017; Liu et al., 2017; Farnia & Tse, 2018), which is another important direction for theoretically understanding GANs. Although the behaviors at suboptimal solutions can differ, we believe the optimum should be well-behaved in the first place, e.g., with an informative gradient and a stable Nash equilibrium. Researchers have also found that applying the Lipschitz continuity condition to the generator benefits the quality of generated samples (Zhang et al., 2018; Odena et al., 2018). In addition, Qi (2017) studied the Lipschitz condition from the perspective of loss sensitivity, under a Lipschitz data density assumption.

7. Conclusion

In this paper we have studied one fundamental cause of failure in the training of GANs, i.e., the gradient uninformativeness issue. In particular, for generated samples that are not surrounded by real samples, the gradient of the optimal discriminative function $\nabla_x f^*(x)$ tells nothing about $P_r$.
That is, there is no guarantee that $P_g$ will converge to $P_r$. A typical case is that $P_r$ and $P_g$ are disjoint, which is common in practice. Gradient uninformativeness is common for unrestricted GANs and also appears in restricted GANs. To address the nonconvergence problem caused by an uninformative $\nabla_x f^*(x)$, we have proposed LGANs and shown that they make $\nabla_x f^*(x)$ informative in the sense that the gradient for each generated sample points towards some real sample. We have also shown that in LGANs the optimal discriminative function exists and is unique, and the only Nash equilibrium is achieved when $P_r = P_g$, where $k(f^*) = 0$. Our experiments show that LGANs lead to more stable discriminative functions and achieve higher sample quality.

Acknowledgements

This work is sponsored by the APEX-YITU Joint Research Program. The corresponding authors Zhiming Zhou and Weinan Zhang thank the support of the National Natural Science Foundation of China (61702327, 61772333, 61632017), the Shanghai Sailing Program (17YF1428200), and the helpful discussions with Dachao Lin. Jiadong Liang and Zhihua Zhang have been supported by the Beijing Municipal Commission of Science and Technology under Grant No. 181100008918005, and by the Beijing Academy of Artificial Intelligence (BAAI).

References

Adler, J. and Lunz, S. Banach Wasserstein GAN. arXiv preprint arXiv:1806.06621, 2018.

Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. In ICLR, 2017.

Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.

Arora, S., Ge, R., Liang, Y., Ma, T., and Zhang, Y. Generalization and equilibrium in generative adversarial nets (GANs). arXiv preprint arXiv:1703.00573, 2017.

Bellemare, M. G., Danihelka, I., Dabney, W., Mohamed, S., Lakshminarayanan, B., Hoyer, S., and Munos, R. The Cramer distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.

Farnia, F. and Tse, D. A convex duality framework for GANs. In Advances in Neural Information Processing Systems 31, 2018.

Fedus, W., Rosca, M., Lakshminarayanan, B., Dai, A. M., Mohamed, S., and Goodfellow, I. Many paths to equilibrium: GANs do not need to decrease a divergence at every step. arXiv preprint arXiv:1710.08446, 2017.

Goodfellow, I. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637, 2017.

Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.

Kodali, N., Abernethy, J., Hays, J., and Kira, Z. On convergence and stability of GANs. arXiv preprint arXiv:1705.07215, 2017.

Lim, J. H. and Ye, J. C. Geometric GAN. arXiv preprint arXiv:1705.02894, 2017.

Liu, S., Bousquet, O., and Chaudhuri, K. Approximation and convergence properties of generative adversarial learning. In Advances in Neural Information Processing Systems, pp. 5545–5553, 2017.
Lucic, M., Kurach, K., Michalski, M., Gelly, S., and Bousquet, O. Are GANs created equal? A large-scale study. arXiv preprint arXiv:1711.10337, 2017.

Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., and Smolley, S. P. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.

Mescheder, L., Nowozin, S., and Geiger, A. The numerics of GANs. In Advances in Neural Information Processing Systems, pp. 1825–1835, 2017.

Mescheder, L., Geiger, A., and Nowozin, S. Which training methods for GANs do actually converge? In International Conference on Machine Learning, pp. 3478–3487, 2018.

Miyato, T., Kataoka, T., Koyama, M., and Yoshida, Y. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.

Mroueh, Y. and Sercu, T. Fisher GAN. In Advances in Neural Information Processing Systems, pp. 2510–2520, 2017.

Mroueh, Y., Li, C., Sercu, T., Raj, A., and Cheng, Y. Sobolev GAN. arXiv preprint arXiv:1711.04894, 2017.

Nowozin, S., Cseke, B., and Tomioka, R. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.

Odena, A., Buckman, J., Olsson, C., Brown, T. B., Olah, C., Raffel, C., and Goodfellow, I. Is generator conditioning causally related to GAN performance? arXiv preprint arXiv:1802.08768, 2018.

Petzka, H., Fischer, A., and Lukovnicov, D. On the regularization of Wasserstein GANs. arXiv preprint arXiv:1709.08894, 2017.

Qi, G.-J. Loss-sensitive generative adversarial networks on Lipschitz densities. arXiv preprint arXiv:1701.06264, 2017.

Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.

Unterthiner, T., Nessler, B., Klambauer, G., Heusel, M., Ramsauer, H., and Hochreiter, S. Coulomb GANs: Provably optimal Nash equilibria via potential fields. arXiv preprint arXiv:1708.08819, 2017.

Villani, C. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.

Yadav, A., Shah, S., Xu, Z., Jacobs, D., and Goldstein, T. Stabilizing adversarial nets with prediction methods. arXiv preprint arXiv:1705.07364, 2017.

Zhang, H., Goodfellow, I., Metaxas, D., and Odena, A. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.

Zhao, J., Mathieu, M., and LeCun, Y. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.