First Order Generative Adversarial Networks

Calvin Seward 1 2, Thomas Unterthiner 2, Urs Bergmann 1, Nikolay Jetchev 1, Sepp Hochreiter 2

1 Zalando Research, Mühlenstraße 25, 10243 Berlin, Germany. 2 LIT AI Lab & Institute of Bioinformatics, Johannes Kepler University Linz, Austria. Correspondence to: Calvin Seward.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).

Abstract

GANs excel at learning high dimensional distributions, but they can update generator parameters in directions that do not correspond to the steepest descent direction of the objective. Prominent examples of problematic update directions include those used in both Goodfellow's original GAN and the WGAN-GP. To formally describe an optimal update direction, we introduce a theoretical framework which allows the derivation of requirements on both the divergence and the corresponding method for determining an update direction, with these requirements guaranteeing unbiased mini-batch updates in the direction of steepest descent. We propose a novel divergence which approximates the Wasserstein distance while regularizing the critic's first order information. Together with an accompanying update direction, this divergence fulfills the requirements for unbiased steepest descent updates. We verify our method, the First Order GAN, with image generation on CelebA, LSUN and CIFAR-10, and set a new state of the art on the One Billion Word language generation task.

1. Introduction

Generative adversarial networks (GANs) (Goodfellow et al., 2014) excel at learning generative models of complex distributions, such as images (Radford et al., 2016; Ledig et al., 2017), textures (Jetchev et al., 2016; Bergmann et al., 2017; Jetchev et al., 2017), and even texts (Gulrajani et al., 2017; Heusel et al., 2017).

GANs learn a generative model G that maps samples from multivariate random noise into a high dimensional space. The goal of GAN training is to update G such that the generative model approximates a target probability distribution. In order to determine how close the generated and target distributions are, a class of divergences, the so-called adversarial divergences, was defined and explored by Liu et al. (2017). This class is broad enough to encompass most popular GAN methods, such as the original GAN (Goodfellow et al., 2014), f-GANs (Nowozin et al., 2016), moment matching networks (Li et al., 2015), Wasserstein GANs (Arjovsky et al., 2017) and the tractable version thereof, the WGAN-GP (Gulrajani et al., 2017).

GANs learn a generative model with distribution $Q$ by minimizing an objective function $\tau(P\|Q)$ measuring the similarity between the target distribution $P$ and the generated distribution $Q$. In most GAN settings, the objective function to be minimized is an adversarial divergence (Liu et al., 2017), where a critic function is learned that distinguishes between target and generated data. For example, in the classic GAN (Goodfellow et al., 2014) the critic $f$ classifies data as real or generated, and the generator G is encouraged to generate samples that $f$ will classify as real. Unfortunately, in GAN training the generated distribution often fails to converge to the target distribution.
Many popular GAN methods are unsuccessful with toy examples, for example failing to generate all modes of a mixture of Gaussians (Srivastava et al., 2017; Metz et al., 2017) or failing to learn the distribution of data on a one-dimensional line in a high dimensional space (Fedus et al., 2017). In these situations, updates to the generator don't significantly reduce the divergence between generated and target distributions; if there always were a significant reduction in the divergence, then the generated distribution would converge to the target.

The key to successful neural network training lies in the ability to efficiently obtain unbiased estimates of the gradients of a network's parameters with respect to some loss. With GANs, this idea can be applied to the generative setting. There, the generator G is parameterized by some values $\theta \in \mathbb{R}^m$. If an unbiased estimate of the gradient of the divergence between target and generated distributions with respect to $\theta$ can be obtained during mini-batch learning, then SGD can be applied to learn G.

In GAN learning, intuition would dictate updating the generated distribution by moving $\theta$ in the direction of steepest descent $\nabla_\theta \tau(P\|Q_\theta)$. Unfortunately, $\nabla_\theta \tau(P\|Q_\theta)$ is generally intractable, therefore $\theta$ is updated according to a tractable method; in most cases a critic $f$ is learned and the gradient of the expected critic value, $\nabla_\theta \mathbb{E}_{Q_\theta}[f]$, is used as the update direction for $\theta$. Usually, this update direction and the direction of steepest descent $\nabla_\theta \tau(P\|Q_\theta)$ don't coincide, and therefore learning isn't optimal. As we see later, popular methods such as the WGAN-GP (Gulrajani et al., 2017) are affected by this issue. Therefore we set out to answer a simple but fundamental question: Is there an adversarial divergence and corresponding method that produces unbiased estimates of the direction of steepest descent in a mini-batch setting?

In this paper, under reasonable assumptions, we identify a path to such an adversarial divergence and accompanying update method. Similar to the WGAN-GP, this divergence also penalizes a critic's gradients, and thereby ensures that the critic's first order information can be used directly to obtain an update direction in the direction of steepest descent.

This program places four requirements on the adversarial divergence and the accompanying update rule for calculating the update direction that haven't, to the best of our knowledge, been formulated together. This paper will give rigorous definitions of these requirements, but for now we suffice with intuitive and informal definitions:

A. The divergence used must decrease as the target and generated distributions approach each other. For example, if we define the trivial distance between two probability distributions to be 0 if the distributions are equal, and 1 otherwise, i.e.
$$\tau_{\mathrm{trivial}}(P\|Q) := \begin{cases} 0 & P = Q \\ 1 & \text{otherwise,} \end{cases}$$
then even as $Q$ gets close to $P$, $\tau_{\mathrm{trivial}}(P\|Q)$ doesn't change. Without this requirement, $\nabla_\theta \tau(P\|Q_\theta) = 0$ and every direction is a direction of steepest descent.

B. Critic learning must be tractable.

C. The gradient $\nabla_\theta \tau(P\|Q_\theta)$ and the result of an update rule must be well defined.

D. The optimal critic enables an update which is an estimate of $\nabla_\theta \tau(P\|Q_\theta)$.

In order to formalize these requirements, we define in Section 2 the notions of adversarial divergences and optimal critics. In Section 3 we will apply the adversarial divergence paradigm and begin to formalize the requirements above and better understand existing GAN methods.
The last requirement is defined precisely in Section 4, where we explore criteria for an update rule guaranteeing a low variance unbiased estimate of the true gradient $\nabla_\theta \tau(P\|Q_\theta)$. After stating these conditions, we devote Section 5 to defining a divergence, the Penalized Wasserstein Divergence, that fulfills the first two basic requirements. In this setting, a critic is learned that, similarly to the WGAN-GP critic, pushes real and generated data as far apart as possible while being penalized if the critic violates a Lipschitz condition.

As we will discover, an optimal critic for the Penalized Wasserstein Divergence between two distributions need not be unique. In fact, this divergence only specifies the values that the optimal critic assumes on the supports of the generated and target distributions. Therefore, for many distributions, multiple critics with different gradients on the support of the generated distribution can all be optimal.

We apply this insight in Section 6 and add a gradient penalty to define the First Order Penalized Wasserstein Divergence. This divergence enforces not just correct values for the critic, but also ensures that the critic's gradient, its first order information, assumes values that allow for an easy formulation of an update rule. Together, this divergence and update rule fulfill all four requirements. We hope that this gradient penalty trick will be applied to other popular GAN methods and ensure that they too return better generator updates. Indeed, Fedus et al. (2017) improve existing GAN methods by adding a gradient penalty. Finally, in Section 7, the effectiveness of our method is demonstrated by generating images and texts.

2. Notation, Definitions and Assumptions

In (Liu et al., 2017) an adversarial divergence is defined:

Definition 1 (Adversarial Divergence). Let $X$ be a topological space, $C(X^2)$ the set of all continuous real valued functions over the Cartesian product of $X$ with itself, and let $\mathcal{G} \subseteq C(X^2)$, $\mathcal{G} \neq \emptyset$. An adversarial divergence $\tau(\cdot\|\cdot)$ over $X$ is a function
$$\mathcal{P}(X) \times \mathcal{P}(X) \to \mathbb{R} \cup \{+\infty\}, \quad (P, Q) \mapsto \tau(P\|Q) = \sup_{g \in \mathcal{G}} \mathbb{E}_{P \otimes Q}[g].$$

The function class $\mathcal{G} \subseteq C(X^2)$ must be carefully selected if $\tau(\cdot\|\cdot)$ is to be reasonable. For example, if $\mathcal{G} = C(X^2)$ then the divergence between two Dirac distributions $\tau(\delta_0\|\delta_1) = \infty$, and if $\mathcal{G} = \{0\}$, i.e. $\mathcal{G}$ contains only the constant function which assumes zero everywhere, then $\tau(\cdot\|\cdot) \equiv 0$.

Many existing GAN procedures can be formulated as an adversarial divergence. For example, setting
$$\mathcal{G} = \{(x, y) \mapsto \log(u(x)) + \log(1 - u(y)) \mid u \in \mathcal{V}\}, \quad \mathcal{V} = (0, 1)^X \cap C(X),$$
where $(0, 1)^X$ denotes all functions mapping $X$ to $(0, 1)$, results in $\tau_G(P\|Q) = \sup_{g \in \mathcal{G}} \mathbb{E}_{P \otimes Q}[g]$, the divergence in Goodfellow's original GAN (Goodfellow et al., 2014). See (Liu et al., 2017) for further examples.

For convenience, we'll restrict ourselves to analyzing a special case of the adversarial divergence (similar to Theorem 4 of (Liu et al., 2017)), and use the notation:

Definition 2 (Critic Based Adversarial Divergence). Let $X$ be a topological space, $\mathcal{F} \subseteq C(X)$, $\mathcal{F} \neq \emptyset$. Further, for $f \in \mathcal{F}$ let $m_f : X \times X \to \mathbb{R}$, $m_f : (x, y) \mapsto m_1(f(x)) - m_2(f(y))$ and $r_f \in C(X^2)$. Then define
$$\tau : \mathcal{P}(X) \times \mathcal{P}(X) \times \mathcal{F} \to \mathbb{R} \cup \{+\infty\}, \quad (P, Q, f) \mapsto \tau(P\|Q; f) = \mathbb{E}_{P \otimes Q}[m_f - r_f] \qquad (1)$$
and set $\tau(P\|Q) = \sup_{f \in \mathcal{F}} \tau(P\|Q; f)$.

For example, the $\tau_G$ from above can be equivalently defined by setting $\mathcal{F} = (0, 1)^X \cap C(X)$, $m_1(x) = \log(x)$, $m_2(x) = -\log(1 - x)$ and $r_f = 0$. Then
$$\tau_G(P\|Q) = \sup_{f \in \mathcal{F}} \mathbb{E}_{P \otimes Q}[m_f - r_f] \qquad (2)$$
is a critic based adversarial divergence.
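To make Definition 2 concrete, here is a minimal sketch (not from the paper; the critic and data below are illustrative placeholders) of a mini-batch estimate of Eq. 2, the classic GAN objective $\tau_G(P\|Q; f)$ with $m_1 = \log$, $m_2(t) = -\log(1 - t)$ and $r_f = 0$. Since $m_f$ separates into a term in $x$ and a term in $y$, the expectation over $P \otimes Q$ reduces to two independent sample means.

```python
import numpy as np

def tau_G_estimate(f, x_real, x_fake):
    """Mini-batch estimate of tau_G(P||Q; f) = E_{P x Q}[m_f - r_f]
    with m_1 = log, m_2(t) = -log(1 - t) and r_f = 0, i.e.
    E_P[log f(x)] + E_Q[log(1 - f(y))].  f must map samples into (0, 1)."""
    fx = f(x_real)  # critic values on samples from the target distribution P
    fy = f(x_fake)  # critic values on samples from the generated distribution Q
    return np.mean(np.log(fx)) + np.mean(np.log(1.0 - fy))

# Toy usage with a fixed, hand-picked critic (purely illustrative):
rng = np.random.default_rng(0)
x_real = rng.normal(2.0, 1.0, size=(512, 1))  # stand-in samples from P
x_fake = rng.normal(0.0, 1.0, size=(512, 1))  # stand-in samples from Q
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
critic = lambda x: sigmoid(x[:, 0] - 1.0)     # some critic mapping into (0, 1)
print(tau_G_estimate(critic, x_real, x_fake))
```

Critic training would ascend this estimate in the critic's parameters; the adversarial divergence $\tau_G(P\|Q)$ itself is the supremum of this quantity over all admissible critics.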
An example with a non-zero $r_f$ is the WGAN-GP (Gulrajani et al., 2017), which is a critic based adversarial divergence when $\mathcal{F} = C^1(X)$, the set of all differentiable real functions on $X$, $m_1(x) = m_2(x) = x$, $\lambda > 0$ and
$$r_f(x, y) = \lambda\, \mathbb{E}_{\alpha \sim U([0,1])}\big[(\|\nabla_z f(z)|_{z = \alpha x + (1-\alpha)y}\| - 1)^2\big].$$
Then the WGAN-GP divergence $\tau_I(P\|Q)$ is:
$$\tau_I(P\|Q) = \sup_{f \in \mathcal{F}} \tau_I(P\|Q; f) = \sup_{f \in \mathcal{F}} \mathbb{E}_{P \otimes Q}[m_f - r_f]. \qquad (3)$$

While Definition 1 is more general, Definition 2 is more in line with most GAN models. In most GAN settings, a critic in the simpler $C(X)$ space is learned that separates real and generated data while reducing some penalty term $r_f$ which depends on both real and generated data. For this reason, we use exclusively the notation from Definition 2.

One desirable property of an adversarial divergence is that $\tau(P\|P')$ obtains its infimum if and only if $P' = P$, leading to the following definition adapted from (Liu et al., 2017):

Definition 3 (Strict adversarial divergence). Let $\tau$ be an adversarial divergence over a topological space $X$. $\tau$ is called a strict adversarial divergence if for any $P, P' \in \mathcal{P}(X)$,
$$\tau(P\|P') = \inf_{P'' \in \mathcal{P}(X)} \tau(P\|P'') \;\Rightarrow\; P' = P.$$

In order to analyze GANs that minimize a critic based adversarial divergence, we introduce the set of optimal critics.

Definition 4 (Optimal Critic, $\mathrm{OC}_\tau(P, Q)$). Let $\tau$ be a critic based adversarial divergence over a topological space $X$ and $P, Q \in \mathcal{P}(X)$, $\mathcal{F} \subseteq C(X)$, $\mathcal{F} \neq \emptyset$. Define $\mathrm{OC}_\tau(P, Q)$ to be the set of critics in $\mathcal{F}$ that maximize $\tau(P\|Q; \cdot)$. That is,
$$\mathrm{OC}_\tau(P, Q) := \{f \in \mathcal{F} \mid \tau(P\|Q; f) = \tau(P\|Q)\}.$$

Note that $\mathrm{OC}_\tau(P, Q) = \emptyset$ is possible (Arjovsky & Bottou, 2017). In this paper, we will always assume that if $\mathrm{OC}_\tau(P, Q) \neq \emptyset$, then an optimal critic $f^* \in \mathrm{OC}_\tau(P, Q)$ is known. Although this is an unrealistic assumption, see (Bińkowski et al., 2018), it is a good starting point for a rigorous GAN analysis. We hope further works can extend our insights to more realistic cases of approximate critics.

Finally, we assume that generated data is distributed according to a probability distribution $Q_\theta$ parameterized by $\theta \in \Theta \subseteq \mathbb{R}^m$ satisfying the mild regularity Assumption 1. Furthermore, we assume that $P$ and $Q_\theta$ both have compact and disjoint support in Assumption 2. Although we conjecture that weaker assumptions can be made, we decide for the stronger assumptions to simplify the proofs.

Assumption 1 (Adapted from (Arjovsky et al., 2017)). Let $\Theta \subseteq \mathbb{R}^m$. We say $Q_\theta \in \mathcal{P}(X)$, $\theta \in \Theta$, satisfies Assumption 1 if there is a locally Lipschitz function $g : \Theta \times \mathbb{R}^d \to X$ which is differentiable in the first argument and a distribution $Z$ with bounded support in $\mathbb{R}^d$ such that for all $\theta \in \Theta$ it holds that $Q_\theta \sim g(\theta, z)$ where $z \sim Z$.

Assumption 2 (Compact and Disjoint Distributions). Using $\Theta \subseteq \mathbb{R}^m$ from Assumption 1, we say that $P$ and $(Q_\theta)_{\theta \in \Theta}$ satisfy Assumption 2 if for all $\theta \in \Theta$ it holds that the supports of $P$ and $Q_\theta$ are compact and disjoint.

3. Requirements Derived From Related Work

With the concept of an Adversarial Divergence now formally defined, we can investigate existing GAN methods from an Adversarial Divergence minimization standpoint. During the last few years, weaknesses in existing GAN frameworks have been highlighted and new frameworks have been proposed to mitigate or eliminate these weaknesses. In this section we'll trace this history and formalize requirements for adversarial divergences and optimal updates.

Although using two competing neural networks for unsupervised learning isn't a new concept (Schmidhuber, 1992), recent interest in the field started when (Goodfellow et al., 2014) generated images with the divergence $\tau_G$ defined in Eq. 2.
However, (Arjovsky & Bottou, 2017) shows that if $P$, $Q_\theta$ have compact disjoint support then $\nabla_\theta \tau_G(P\|Q_\theta) = 0$, preventing the use of gradient based learning methods. In response to this impediment, the Wasserstein GAN was proposed in (Arjovsky et al., 2017) with the divergence
$$\tau_W(P\|Q) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x' \sim Q}[f(x')],$$
where $\|f\|_L$ is the Lipschitz constant of $f$. The following example shows the advantage of $\tau_W$. Consider a sequence of Dirac measures $(\delta_{1/n})_{n > 0}$. Then $\tau_W(\delta_0\|\delta_{1/n}) = \frac{1}{n}$ while $\tau_G(\delta_0\|\delta_{1/n}) = 1$. As $\delta_{1/n}$ approaches $\delta_0$, the Wasserstein divergence decreases while $\tau_G(\delta_0\|\delta_{1/n})$ remains constant.

This issue is explored in (Liu et al., 2017) by creating a weak ordering, the so-called strength, of divergences. A divergence $\tau_1$ is said to be stronger than $\tau_2$ if for any sequence of probability measures $(P_n)_{n \in \mathbb{N}}$ and any target probability measure $P$ the convergence $\tau_1(P\|P_n) \xrightarrow{n \to \infty} \inf_{P' \in \mathcal{P}(X)} \tau_1(P\|P')$ implies $\tau_2(P\|P_n) \xrightarrow{n \to \infty} \inf_{P' \in \mathcal{P}(X)} \tau_2(P\|P')$. The divergences $\tau_1$ and $\tau_2$ are equivalent if $\tau_1$ is stronger than $\tau_2$ and $\tau_2$ is stronger than $\tau_1$. The Wasserstein distance $\tau_W$ is the weakest divergence in the class of strict adversarial divergences (Liu et al., 2017), leading to the following requirement:

Requirement 1 (Equivalence to $\tau_W$). An adversarial divergence $\tau$ is said to fulfill Requirement 1 if $\tau$ is a strict adversarial divergence which is weaker than $\tau_W$.

The issue of the zero gradients was side stepped in (Goodfellow et al., 2014) (and the option more rigorously explored in (Fedus et al., 2017)) by not updating with $\nabla_\theta \mathbb{E}_{x' \sim Q_\theta}[\log(1 - f(x'))]$ but instead using the gradient $\nabla_\theta \mathbb{E}_{x' \sim Q_\theta}[f(x')]$. As will be shown in Section 4, this update direction doesn't generally move $\theta$ in the direction of steepest descent.

Although using the Wasserstein distance as a divergence between probability measures solves many theoretical problems, it requires that critics are Lipschitz continuous with Lipschitz constant 1. Unfortunately, no tractable algorithm has yet been found that is able to learn the optimal Lipschitz continuous critic (or a close approximation thereof). This is due in part to the fact that if the critic is parameterized by a neural network $f_\vartheta$, $\vartheta \in \Theta_C \subseteq \mathbb{R}^c$, then the set of admissible parameters $\{\vartheta \in \Theta_C \mid \|f_\vartheta\|_L \le 1\}$ is highly non-convex. Thus critic learning is a non-convex optimization problem (as is generally the case in neural network learning) with non-convex constraints on the parameters. Since neural network learning is generally an unconstrained optimization problem, adding complex non-convex constraints makes learning intractable with current methods. Thus, finding an optimal Lipschitz continuous critic is a problem that can not yet be solved, leading to the second requirement:

Requirement 2 (Convex Admissible Critic Parameter Set). Assume $\tau$ is a critic based adversarial divergence where critics are chosen from a set $\mathcal{F}$. Assume further that in training, a parameterization $\vartheta \in \mathbb{R}^c$ of the critic function $f_\vartheta$ is learned. The critic based adversarial divergence $\tau$ is said to fulfill Requirement 2 if the set of admissible parameters $\{\vartheta \in \mathbb{R}^c \mid f_\vartheta \in \mathcal{F}\}$ is convex.

Table 1. Comparing existing GAN methods with regard to the four Requirements formulated in this paper. The methods compared are the classic GAN (Goodfellow et al., 2014), WGAN (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), WGAN-LP (Petzka et al., 2018), DRAGAN (Kodali et al., 2017), PWGAN (our method) and FOGAN (our method).

          Req. 1   Req. 2   Req. 3   Req. 4
GAN       no       yes      yes      no
WGAN      yes      no       yes      yes
WGAN-GP   yes      yes      no       no
WGAN-LP   yes      yes      no       no
DRAGAN    no       yes      yes      yes
PWGAN     yes      yes      no       no
FOGAN     yes      yes      yes      yes
It was reasoned in (Gulrajani et al., 2017) that since a Wasserstein critic must have gradients of norm at most 1 everywhere, a reasonable strategy would be to transform the constrained optimization into an unconstrained optimization problem by penalizing the divergence when a critic has non-unit gradients. With this strategy, the so-called Improved Wasserstein GAN or WGAN-GP divergence defined in Eq. 3 is obtained. The generator parameters are updated by training an optimal critic $f^*$ and updating with $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$. Although this method has impressive experimental results, it is not yet ideal. (Petzka et al., 2018) showed that an optimal critic for $\tau_I$ has undefined gradients on the support of the generated distribution $Q_\theta$. Thus, the update direction $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ is undefined; even if a direction were chosen from the subgradient field (meaning the update direction is defined but random), the update direction won't generally point in the direction of steepest gradient descent. This naturally leads to the next requirement:

Requirement 3 (Well Defined Update Rule). An update rule is said to fulfill Requirement 3 on a target distribution $P$ and a family of generated distributions $(Q_\theta)_{\theta \in \Theta}$ if for every $\theta \in \Theta$ the update rule at $P$ and $Q_\theta$ is well defined.

Note that kernel methods such as (Dziugaite et al., 2015) and (Li et al., 2015) provide exciting theoretical guarantees and may well fulfill all four requirements. Since these guarantees come at a cost in scalability, we won't analyze them further.

4. Correct Update Rule Requirement

In the previous section, we stated a bare minimum requirement for an update rule (namely that it is well defined). In this section, we'll go further and explore criteria for a good update rule. For example, in Lemma 8 in Section A of the Appendix, it is shown that there exists a target $P$ and a family of generated distributions $(Q_\theta)_{\theta \in \Theta}$ fulfilling Assumptions 1 and 2 such that for the optimal critic $f^*_{\theta_0} \in \mathrm{OC}_{\tau_I}(P, Q_{\theta_0})$ there is no $\gamma \in \mathbb{R}$ so that $\nabla_\theta \tau_I(P\|Q_\theta)|_{\theta_0} = \gamma\, \nabla_\theta \mathbb{E}_{Q_\theta}[f^*_{\theta_0}]|_{\theta_0}$ for all $\theta_0 \in \Theta$, if all terms are well defined. Thus, the update rule used in the WGAN-GP setting, although well defined for this specific $P$ and $Q_{\theta_0}$, isn't guaranteed to move $\theta$ in the direction of steepest descent. In fact, (Mescheder et al., 2018) shows that the WGAN-GP does not converge for specific classes of distributions. Therefore, the question arises: what well defined update rule also moves $\theta$ in the direction of steepest descent?

The most obvious candidate for an update rule is simply to use the direction $\nabla_\theta \tau(P\|Q_\theta)$, but since in the adversarial divergence setting $\tau(P\|Q_\theta)$ is the supremum over a set of infinitely many possible critics, calculating $\nabla_\theta \tau(P\|Q_\theta)$ directly is generally intractable. One strategy to address this issue is to use an envelope theorem (Milgrom & Segal, 2002). Assuming all terms are well defined, then for every optimal critic $f^* \in \mathrm{OC}_\tau(P, Q_{\theta_0})$ it holds that $\nabla_\theta \tau(P\|Q_\theta)|_{\theta_0} = \nabla_\theta \tau(P\|Q_\theta; f^*)|_{\theta_0}$. This strategy is outlined in detail in (Arjovsky et al., 2017) when proving the Wasserstein GAN update rule, and explored in the context of the classic GAN divergence $\tau_G$ in (Arjovsky & Bottou, 2017). Yet in many GAN settings (Goodfellow et al., 2014; Arjovsky et al., 2017; Salimans et al., 2016; Petzka et al., 2018), the update rule is to train an optimal critic $f^*$ and then take a step in the direction of $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$.
In the critic based adversarial divergence setting (Definition 2), a direct result of Eq. 1 together with Theorem 1 from (Milgrom & Segal, 2002) is that for every $f^* \in \mathrm{OC}_\tau(P, Q_{\theta_0})$,
$$\nabla_\theta \tau(P\|Q_\theta)\big|_{\theta_0} = \nabla_\theta \tau(P\|Q_\theta; f^*)\big|_{\theta_0} = -\nabla_\theta\big(\mathbb{E}_{Q_\theta}[m_2(f^*)] + \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]\big)\big|_{\theta_0} \qquad (4)$$
when all terms are well defined. Thus, the update direction $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ only points in the direction of steepest descent for special choices of $m_2$ and $r_f$. One such example is the Wasserstein GAN, where $m_2(x) = x$ and $r_f = 0$.

Most popular GAN methods don't employ functions $m_2$ and $r_f$ such that the update direction $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ points in the direction of steepest descent. For example, with the classic GAN, $m_2(x) = -\log(1 - x)$ and $r_f = 0$, so the update direction $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ clearly is not oriented in the direction of steepest descent $\nabla_\theta \mathbb{E}_{Q_\theta}[\log(1 - f^*)]$. The WGAN-GP is similar, since as we see in Lemma 8 in the Appendix, Section A, $\nabla_\theta \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]$ is not generally a multiple of $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$.

The question arises why this direction is used instead of directly calculating the direction of steepest descent. Using the correct update rule in Eq. 4 above involves estimating $\nabla_\theta \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]$, which requires sampling from both $P$ and $Q_\theta$. GAN learning happens in mini-batches, therefore $\nabla_\theta \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]$ isn't calculated directly, but estimated based on samples, which can lead to variance in the estimate. To analyze this issue, we use the notation from (Bellemare et al., 2017) where $X_m := X_1, X_2, \ldots, X_m$ are samples from $P$ and the empirical distribution $\hat{P}_m$ is defined by $\hat{P}_m := \hat{P}_m(X_m) := \frac{1}{m} \sum_{i=1}^m \delta_{X_i}$. Further, let $V_{X_m \sim P}$ be the element-wise variance. Now with mini-batch learning we get
$$V_{X_m \sim P}\big[\nabla_\theta \mathbb{E}_{\hat{P}_m \otimes Q_\theta}[m_{f^*} - r_{f^*}]\big|_{\theta_0}\big] = V_{X_m \sim P}\big[\nabla_\theta\big(\mathbb{E}_{\hat{P}_m}[m_1(f^*)] - \mathbb{E}_{Q_\theta}[m_2(f^*)] - \mathbb{E}_{\hat{P}_m \otimes Q_\theta}[r_{f^*}]\big)\big|_{\theta_0}\big] = V_{X_m \sim P}\big[\nabla_\theta \mathbb{E}_{\hat{P}_m \otimes Q_\theta}[r_{f^*}]\big|_{\theta_0}\big]$$
(the first term drops out because $\mathbb{E}_{\hat{P}_m}[m_1(f^*)]$ doesn't depend on $\theta$, so $\nabla_\theta \mathbb{E}_{\hat{P}_m}[m_1(f^*)] = 0$; in the same way, because $\mathbb{E}_{Q_\theta}[m_2(f^*)]$ doesn't depend on the mini-batch $X_m$ sampled, $V_{X_m \sim P}[\nabla_\theta \mathbb{E}_{Q_\theta}[m_2(f^*)]] = 0$). Therefore, estimation of $\nabla_\theta \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]$ is an extra source of variance.

Our solution to both these problems chooses the critic based adversarial divergence $\tau$ in such a way that there exists a $\gamma \in \mathbb{R}$ so that for all optimal critics $f^* \in \mathrm{OC}_\tau(P, Q_{\theta_0})$ it holds that
$$\nabla_\theta \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]\big|_{\theta_0} \approx \gamma\, \nabla_\theta \mathbb{E}_{Q_\theta}[m_2(f^*)]\big|_{\theta_0}. \qquad (5)$$
In Theorem 2 we see conditions on $P$, $Q_\theta$ such that equality holds. Now using Eq. 5 we see that
$$\nabla_\theta \tau(P\|Q_\theta)\big|_{\theta_0} = -\nabla_\theta\big(\mathbb{E}_{Q_\theta}[m_2(f^*)] + \mathbb{E}_{P \otimes Q_\theta}[r_{f^*}]\big)\big|_{\theta_0} \approx -\nabla_\theta\big(\mathbb{E}_{Q_\theta}[m_2(f^*)] + \gamma\, \mathbb{E}_{Q_\theta}[m_2(f^*)]\big)\big|_{\theta_0} = -(1 + \gamma)\, \nabla_\theta \mathbb{E}_{Q_\theta}[m_2(f^*)]\big|_{\theta_0},$$
making $-(1 + \gamma)\, \nabla_\theta \mathbb{E}_{Q_\theta}[m_2(f^*)]$ a low variance approximation of the direction of steepest descent. We're then able to have the best of both worlds. On the one hand, when $r_f$ serves as a penalty term, training of a critic neural network can happen in an unconstrained optimization fashion as with the WGAN-GP. At the same time, the direction of steepest descent can be approximated by calculating $\nabla_\theta \mathbb{E}_{Q_\theta}[m_2(f^*)]$, and as in the Wasserstein GAN we get reliable gradient update steps. With this motivation, Eq. 5 forms the basis of our final requirement:

Requirement 4 (Low Variance Update Rule). An adversarial divergence $\tau$ is said to fulfill Requirement 4 if $\tau$ is a critic based adversarial divergence and every optimal critic $f^* \in \mathrm{OC}_\tau(P, Q_{\theta_0})$ fulfills Eq. 5.

It should be noted that the WGAN-GP achieves impressive experimental results; we conjecture that in many cases $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ is close enough to the true direction of steepest descent. Nevertheless, as the experiments in Section 7 show, our gradient estimates lead to better convergence in a challenging language modeling task.

5. Penalized Wasserstein Divergence

We now attempt to find an adversarial divergence that fulfills all four requirements.
We start by formulating an adversarial divergence $\tau_P$ and a corresponding update rule that can be shown to comply with Requirements 1 and 2. Subsequently, in Section 6, $\tau_P$ will be refined to make its update rule practical and conform to all four requirements.

The divergence $\tau_P$ is inspired by the Wasserstein distance, where for an optimal critic between two Dirac distributions $f^* \in \mathrm{OC}_{\tau_W}(\delta_a, \delta_b)$ it holds that $f^*(a) - f^*(b) = |a - b|$. Now if we look at
$$\tau_{\mathrm{simple}}(\delta_a\|\delta_b) := \sup_{f \in \mathcal{F}}\; f(a) - f(b) - \frac{(f(a) - f(b))^2}{|a - b|}, \qquad (6)$$
it's easy to calculate that $\tau_{\mathrm{simple}}(\delta_a\|\delta_b) = \frac{1}{4}|a - b|$, which is the same up to a constant (in this simple setting) as the Wasserstein distance, without being a constrained optimization problem. See Figure 1 for an example.

This has another intuitive explanation: Eq. 6 can be reformulated as
$$\tau_{\mathrm{simple}}(\delta_a\|\delta_b) = \sup_{f \in \mathcal{F}}\; f(a) - f(b) - |a - b| \left(\frac{f(a) - f(b)}{|a - b|}\right)^2,$$
which is a tug of war between the objective $f(a) - f(b)$ and the squared Lipschitz penalty $\left(\frac{|f(a) - f(b)|}{|a - b|}\right)^2$ weighted by $|a - b|$. This $|a - b|$ term is important (and missing from (Gulrajani et al., 2017) and (Petzka et al., 2018)), because otherwise the slope of the optimal critic between $a$ and $b$ will depend on $|a - b|$.

The Penalized Wasserstein Divergence $\tau_P$ is a straightforward adaptation of $\tau_{\mathrm{simple}}$ to the multi dimensional case.

Definition 5 (Penalized Wasserstein Divergence). Assume $X \subseteq \mathbb{R}^n$ and $P, Q \in \mathcal{P}(X)$ are probability measures over $X$, $\lambda > 0$ and $\mathcal{F} = C^1(X)$. Set
$$\tau_P(P\|Q; f) := \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x' \sim Q}[f(x')] - \lambda\, \mathbb{E}_{x \sim P,\, x' \sim Q}\!\left[\frac{(f(x) - f(x'))^2}{\|x - x'\|}\right].$$
Define the Penalized Wasserstein Divergence as $\tau_P(P\|Q) = \sup_{f \in \mathcal{F}} \tau_P(P\|Q; f)$.

This divergence is updated by picking an optimal critic $f^* \in \mathrm{OC}_{\tau_P}(P, Q_{\theta_0})$ and taking a step in the direction of $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]|_{\theta_0}$. This formulation is similar to the WGAN-GP (Gulrajani et al., 2017), restated here in Eq. 3.

Theorem 1. Assume $X \subseteq \mathbb{R}^n$, and $P, Q_\theta \in \mathcal{P}(X)$ are probability measures over $X$ fulfilling Assumptions 1 and 2. Then for every $\theta_0 \in \Theta$ the Penalized Wasserstein Divergence with its corresponding update direction fulfills Requirements 1 and 2. Further, there exists an optimal critic $f^* \in \mathrm{OC}_{\tau_P}(P, Q_{\theta_0})$ that fulfills Eq. 5 and thus Requirement 4.

Proof. See Appendix, Section A.

Note that this theorem isn't unique to $\tau_P$. For example, for the penalty in Eq. 8 of (Petzka et al., 2018) we conjecture that a similar result can be shown. The divergence $\tau_P$ is still very useful because, as will be shown in the next section, $\tau_P$ can be modified slightly to obtain a new divergence $\tau_F$, for which every optimal critic fulfills Requirements 1 to 4.

Since $\tau_P$ only constrains the value of a critic on the supports of $P$ and $Q_\theta$, many different critics are optimal, and in general $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ depends on the optimal critic choice and is thus not well defined. With this, Requirements 3 and 4 are not fulfilled. See Figure 1 for a simple example.

In theory, $\tau_P$'s critic could be trained with a modified sampling procedure so that $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ is well defined and Eq. 5 holds, as is done in both (Kodali et al., 2017) and (Unterthiner et al., 2018). By using a method similar to (Bishop et al., 1998), one can minimize the divergence $\tau_P(P\|\hat{Q}_\theta)$ where $\hat{Q}_\theta$ is the distribution of $x' + \epsilon$, where $x'$ is sampled from $Q_\theta$ and $\epsilon$ is zero-mean uniformly distributed noise. In this way the support of $\hat{Q}_\theta$ lives in the full space $X$ and not on the submanifold $\mathrm{supp}(Q_\theta)$. Unfortunately, while this method works in theory, the number of samples required for accurate gradient estimates scales with the dimensionality of the underlying space $X$, not with the dimensionality of the data or generated submanifolds $\mathrm{supp}(P)$ or $\mathrm{supp}(Q_\theta)$.
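For concreteness, here is a minimal PyTorch-style sketch (ours, not the paper's released implementation) of a mini-batch estimate of the $\tau_P$ critic objective from Definition 5, together with a generator loss whose gradient is the update direction $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$. The network modules, the pairing of the $i$-th real with the $i$-th generated sample, and the value of $\lambda$ are illustrative assumptions.

```python
import torch

def critic_loss_tau_p(critic, x_real, x_fake, lam=1.0):
    """Negative mini-batch estimate of tau_P(P||Q; f) from Definition 5:
    E_P[f(x)] - E_Q[f(x')] - lam * E_{P,Q}[(f(x) - f(x'))^2 / ||x - x'||],
    estimated here by pairing the i-th real with the i-th generated sample."""
    f_real = critic(x_real).view(-1)   # f(x),  x  ~ P
    f_fake = critic(x_fake).view(-1)   # f(x'), x' ~ Q_theta
    dist = (x_real - x_fake).flatten(1).norm(dim=1).clamp_min(1e-12)
    penalty = lam * ((f_real - f_fake).pow(2) / dist).mean()
    tau_p = f_real.mean() - f_fake.mean() - penalty
    return -tau_p                      # the critic maximizes tau_P, so minimize -tau_P

def generator_loss(critic, generator, z):
    """The generator step uses only the critic values on generated samples,
    i.e. the update direction grad_theta E_{Q_theta}[f*]."""
    return -critic(generator(z)).mean()
```

Note that the penalty term appears only in the critic objective; the generator loss touches nothing but $\mathbb{E}_{Q_\theta}[f^*]$, which is exactly the low variance update that Requirement 4 asks for.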
In response to these difficulties with $\tau_P$, we propose the First Order Penalized Wasserstein Divergence.

6. First Order Penalized Wasserstein Divergence

As was seen in the last section, since $\tau_P$ only constrains the value of optimal critics on the supports of $P$ and $Q_\theta$, the gradient $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]$ is not well defined. A natural method to refine $\tau_P$ to achieve a well defined gradient is to enforce two things:

- $f^*$ should be optimal on a larger manifold, namely the manifold $Q'_\theta$ that is created by stretching $Q_\theta$ a bit in the direction of $P$ (the formal definition is below).
- The norm of the gradient of the optimal critic, $\|\nabla_x f^*(x)\|$, on $\mathrm{supp}(Q'_\theta)$ should be equal to the norm of the maximal directional derivative in the support of $Q'_\theta$ (see Eq. 15 in the Appendix).

By enforcing these two points, we assure that $\nabla_x f^*(x)$ is well defined and points towards the real data $P$. Thus, the following definition emerges (see the proof of Lemma 7 in Appendix, Section A for details).

Figure 1. Comparison of the $\tau_P$ update rule given different optimal critics. Consider the simple example of the divergence $\tau_P$ from Definition 5 between Dirac measures, with update rule $-\frac{1}{2}\frac{d}{d\theta}\mathbb{E}_{\delta_\theta}[f]$ (the update rule is from Lemma 7 in Appendix, Section A). Recall that $\tau_P(\delta_0\|\delta_\theta; f) = -f(\theta) - \frac{(f(\theta))^2}{2\theta}$, and so $\frac{d}{d\theta}\tau_P(\delta_0\|\delta_\theta) = \frac{1}{2}$. Let $\theta_0 = 0.5$; our goal is to calculate $\frac{d}{d\theta}\tau_P(\delta_0\|\delta_\theta)|_{\theta=\theta_0}$ via our update rule. Since multiple critics are optimal for $\tau_P(\delta_0\|\delta_\theta)$, we explore how the choice of optimal critic affects the update. In Subfigure 1a, we chose the first order optimal critic $f^*_{\theta_0}(x) = x(-4x^2 + 4x - 2)$; then $\frac{d}{d\theta}\tau_P(\delta_0\|\delta_\theta)|_{\theta=\theta_0} = -\frac{1}{2}\frac{d}{d\theta}\mathbb{E}_{\delta_\theta}[f^*_{\theta_0}]|_{\theta=\theta_0}$ and the update rule is correct (see how the red, black and green lines all intersect in one point). In Subfigure 1b, the optimal critic is set to $f^*_{\vartheta_0}(x) = -2x^2$, which is not a first order critic, resulting in the update rule calculating an incorrect update.

Definition 6 (First Order Penalized Wasserstein Divergence (FOGAN)). Assume $X \subseteq \mathbb{R}^n$ and $P, Q \in \mathcal{P}(X)$ are probability measures over $X$. Set $\mathcal{F} = C^1(X)$, $\lambda, \mu > 0$ and
$$\tau_F(P\|Q; f) := \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x' \sim Q}[f(x')] - \lambda\, \mathbb{E}_{x \sim P,\, x' \sim Q}\!\left[\frac{(f(x) - f(x'))^2}{\|x - x'\|}\right] - \mu\, \mathbb{E}_{x \sim P,\, x' \sim Q}\big[\,\text{first order penalty of } f \text{ at } (x, x')\,\big],$$
where the first order penalty (see Eq. 15 and the proof of Lemma 7 in the Appendix, Section A) matches the gradient of $f$ at generated points $x'$ to the finite differences $f(\tilde{x}) - f(x')$ towards real data points $\tilde{x} \sim P$, weighted by $\mathbb{E}_{\tilde{x} \sim P}[1 / \|\tilde{x} - x'\|]$. Define the First Order Penalized Wasserstein Divergence as $\tau_F(P\|Q) = \sup_{f \in \mathcal{F}} \tau_F(P\|Q; f)$.

This divergence is updated by picking an optimal critic $f^* \in \mathrm{OC}_{\tau_P}(P, Q_{\theta_0})$ and taking a step in the direction of $\nabla_\theta \mathbb{E}_{Q_\theta}[f^*]|_{\theta_0}$.

In order to define a GAN from the First Order Penalized Wasserstein Divergence, we must define a slight modification of the generated distribution $Q_\theta$ to obtain $Q'_\theta$. Similar to the WGAN-GP setting, samples from $Q'_\theta$ are obtained by $x' - \alpha(x' - x)$ where $x \sim P$ and $x' \sim Q_\theta$. The difference is that $\alpha \sim U([0, \varepsilon])$, with $\varepsilon$ chosen small, making $Q_\theta$ and $Q'_\theta$ quite similar. Therefore updates to $\theta$ that reduce $\tau_F(P\|Q'_\theta)$ also reduce $\tau_F(P\|Q_\theta)$. Conveniently, as is shown in Lemma 5 in the Appendix, Section A, any optimal critic for the First Order Penalized Wasserstein Divergence is also an optimal critic for the Penalized Wasserstein Divergence.

The key advantage of the First Order Penalized Wasserstein Divergence is that for any $P$, $Q_\theta$ fulfilling Assumptions 1 and 2, $\tau_F(P\|Q'_\theta)$ with its corresponding update rule $\nabla_\theta \mathbb{E}_{Q'_\theta}[f^*]$ on the slightly modified probability distribution $Q'_\theta$ fulfills Requirements 3 and 4.

Theorem 2.
Assume $X \subseteq \mathbb{R}^n$, and $P, Q_\theta \in \mathcal{P}(X)$ are probability measures over $X$ fulfilling Assumptions 1 and 2, and $Q'_\theta$ is $Q_\theta$ modified using the method above. Then for every $\theta_0 \in \Theta$ there exists at least one optimal critic $f^* \in \mathrm{OC}_{\tau_F}(P, Q'_{\theta_0})$, and $\tau_F$ combined with the update direction $\nabla_\theta \mathbb{E}_{Q'_\theta}[f^*]|_{\theta_0}$ fulfills Requirements 1 to 4. If $P$, $Q'_\theta$ are such that for all $x \in \mathrm{supp}(P)$, $x' \in \mathrm{supp}(Q'_\theta)$ it holds that $f^*(x) - f^*(x') = c\,\|x - x'\|$ for some constant $c$, then equality holds in Eq. 5.

Proof. See Appendix, Section A.

Note that adding a gradient penalty, other than being a necessary step for the WGAN-GP (Gulrajani et al., 2017), DRAGAN (Kodali et al., 2017) and Consensus Optimization GAN (Mescheder et al., 2017), has also been shown empirically to improve the performance of the original GAN method (Eq. 2), see (Fedus et al., 2017). In addition, using stricter assumptions on the critic, (Nagarajan & Kolter, 2017) provides a theoretical justification for the use of a gradient penalty in GAN learning. The analysis of Theorem 2 in the Appendix, Section A, provides a theoretical understanding of why, in the Penalized Wasserstein GAN setting, adding a gradient penalty causes $\nabla_\theta \mathbb{E}_{Q'_\theta}[f^*]$ to be an update rule that points in the direction of steepest descent, and may provide a path for other GAN methods to make similar assurances.

7. Experimental Results

7.1. Image Generation

We begin by testing the FOGAN on the CelebA image generation task (Liu et al., 2015), training a generative model with the DCGAN architecture (Radford et al., 2016) and obtaining Fréchet Inception Distance (FID) scores (Heusel et al., 2017) competitive with state of the art methods without doing a tuning parameter search. Similarly, we show competitive results on LSUN (Yu et al., 2015) and CIFAR-10 (Krizhevsky & Hinton, 2009). See Table 2, Appendix B.1 and the released code (https://github.com/zalandoresearch/first_order_gan).

Table 2. Comparison of different GAN methods for image and text generation. We measure performance with respect to the FID on the image datasets and the JSD between n-grams for text generation.

Task       BEGAN   DCGAN   Coulomb   WGAN-GP       FOGAN
CelebA     28.5    12.5    9.3       4.2           6.0
LSUN       112     57.5    31.2      9.5           11.4
CIFAR-10   -       -       27.3      24.8          27.4
4-gram     -       -       -         .220 ± .006   .226 ± .006
6-gram     -       -       -         .573 ± .009   .556 ± .004

7.2. One Billion Word

Finally, we use the First Order Penalized Wasserstein Divergence to train a character level generative language model on the One Billion Word Benchmark (Chelba et al., 2013). In this setting, a 1D CNN deterministically transforms a latent vector into a $32 \times C$ matrix, where $C$ is the number of possible characters. A softmax nonlinearity is applied to this output, and given to the critic. Real data is the one-hot encoding of 32 character texts sampled from the true data. We conjecture this is an especially difficult task for GANs, since data in the target distribution lies in just a few corners of the $32 \cdot C$ dimensional unit hypercube. As the generator is updated, it must push mass from one corner to another, passing through the interior of the hypercube far from any real data. Methods other than the Coulomb GAN (Unterthiner et al., 2018), WGAN-GP (Gulrajani et al., 2017; Heusel et al., 2017) and the Sobolev GAN (Mroueh et al., 2018) have not been shown to be successful at this task.

We use the same setup as in both (Gulrajani et al., 2017; Heusel et al., 2017) with two differences. First, we train to minimize our divergence from Definition 6 with parameters $\lambda = 0.1$ and $\mu = 1.0$ instead of the WGAN-GP divergence.
Second, we use batch normalization in the generator, both for training our FOGAN method and the benchmark WGAN-GP; we do this because batch normalization improved performance and stability of both models.

As in (Gulrajani et al., 2017; Heusel et al., 2017), we use the Jensen-Shannon divergence (JSD) between n-grams from the model and the real world distribution as an evaluation metric. The JSD is estimated by sampling a finite number of 32 character vectors, and comparing the distributions of the n-grams from said samples and true data. This estimation is biased; smaller samples result in larger JSD estimations. A Bayes limit results from this bias; even when samples are drawn from real world data and compared with real world data, small sample sizes result in large JSD estimations. In order to detect performance differences when training with the FOGAN and WGAN-GP, a low Bayes limit is necessary. Thus, to compare the methods, we sampled 6400 32-character vectors, in contrast with the 640 vectors sampled in past works. Therefore, the JSD values in those papers are higher than the results here.

For our experiments we trained both models for 500,000 iterations in 5 independent runs, estimating the JSD between 6-grams of generated and real world data every 2000 training steps, see Figure 2. The results are even more impressive when aligned with wall-clock time. Since in WGAN-GP training an extra point between real and generated distributions must be sampled, it is slower than the FOGAN training; see Figure 2 and observe the significant (2σ) drop in estimated JSD.

Figure 2. Five training runs of both WGAN-GP and FOGAN (plots of the 6-gram JSD estimated on 100×64 samples), with the average of all runs plotted in bold and the 2σ error margins denoted by shaded regions. For easy visualization, we plot the moving average of the last three n-gram JSD estimations. The first two plots both show training with respect to the number of training iterations (in units of 2K mini-batches); the second plot starts at iteration 50. The last plot shows training with respect to wall-clock time in hours, starting after 6 hours of training.

Acknowledgements

This work was supported by Zalando SE with Research Agreement 01/2016.

References

Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.

Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein Generative Adversarial Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.

Bellemare, M. G., Danihelka, I., Dabney, W., Mohamed, S., Lakshminarayanan, B., Hoyer, S., and Munos, R. The Cramer distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.

Bergmann, U., Jetchev, N., and Vollgraf, R. Learning texture manifolds with the periodic spatial GAN. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.

Bishop, C. M., Svensén, M., and Williams, C. K. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, 1998.

Bińkowski, M., Sutherland, D. J., Arbel, M., and Gretton, A. Demystifying MMD GANs. In International Conference on Learning Representations (ICLR), 2018.
Chelba, C., Mikolov, T., Schuster, M., Ge, Q., Brants, T., Koehn, P., and Robinson, T. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.

Dziugaite, G. K., Roy, D. M., and Ghahramani, Z. Training generative neural networks via maximum mean discrepancy optimization. In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence (UAI), 2015.

Fedus, W., Rosca, M., Lakshminarayanan, B., Dai, A. M., Mohamed, S., and Goodfellow, I. Many paths to equilibrium: GANs do not need to decrease a divergence at every step. arXiv preprint arXiv:1710.08446, 2017.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems 27 (NIPS), 2014.

Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.

Jetchev, N., Bergmann, U., and Vollgraf, R. Texture synthesis with spatial generative adversarial networks. arXiv preprint arXiv:1611.08207, 2016.

Jetchev, N., Bergmann, U., and Seward, C. GANosaic: Mosaic creation with generative texture manifolds. arXiv preprint arXiv:1712.00269, 2017.

Kodali, N., Abernethy, J., Hays, J., and Kira, Z. How to train your DRAGAN. arXiv preprint arXiv:1705.07215, 2017.

Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. 2009.

Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., and Shi, W. Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 105–114, July 2017. doi: 10.1109/CVPR.2017.19.

Li, Y., Swersky, K., and Zemel, R. Generative moment matching networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.

Liu, S., Bousquet, O., and Chaudhuri, K. Approximation and convergence properties of generative adversarial learning. In Advances in Neural Information Processing Systems 30 (NIPS), pp. 5545–5553, 2017.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.

Mescheder, L., Nowozin, S., and Geiger, A. The numerics of GANs. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.

Mescheder, L., Geiger, A., and Nowozin, S. Which training methods for GANs do actually converge? arXiv preprint arXiv:1801.04406v2, 2018.

Metz, L., Poole, B., Pfau, D., and Sohl-Dickstein, J. Unrolled generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.

Milgrom, P. and Segal, I. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):583–601, 2002.

Mroueh, Y., Li, C.-L., Sercu, T., Raj, A., and Cheng, Y. Sobolev GAN. In International Conference on Learning Representations (ICLR), 2018.

Nagarajan, V. and Kolter, J. Z. Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.
Nowozin, S., Cseke, B., and Tomioka, R. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems 29 (NIPS), 2016.

Petzka, H., Fischer, A., and Lukovnicov, D. On the regularization of Wasserstein GANs. In International Conference on Learning Representations (ICLR), 2018.

Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016.

Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29 (NIPS), pp. 2234–2242, 2016.

Schmidhuber, J. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.

Sriperumbudur, B. K., Gretton, A., Fukumizu, K., Schölkopf, B., and Lanckriet, G. R. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11(Apr):1517–1561, 2010.

Srivastava, A., Valkov, L., Russell, C., Gutmann, M., and Sutton, C. VEEGAN: Reducing mode collapse in GANs using implicit variational learning. In Advances in Neural Information Processing Systems 30 (NIPS), 2017.

Unterthiner, T., Nessler, B., Seward, C., Klambauer, G., Heusel, M., Ramsauer, H., and Hochreiter, S. Coulomb GANs: Provably optimal Nash equilibria via potential fields. In International Conference on Learning Representations (ICLR), 2018.

Yu, F., Zhang, Y., Song, S., Seff, A., and Xiao, J. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.