# SOBOLEV GAN
Published as a conference paper at ICLR 2018

Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj & Yu Cheng (equal contribution)
IBM Research AI; Carnegie Mellon University; Max Planck Institute for Intelligent Systems
{mroueh,chengyu}@us.ibm.com, chunlial@cs.cmu.edu, tom.sercu1@ibm.com, anant.raj@tuebingen.mpg.de

ABSTRACT

We propose a new Integral Probability Metric (IPM) between distributions: the Sobolev IPM. The Sobolev IPM compares the mean discrepancy of two distributions for functions (critic) restricted to a Sobolev ball defined with respect to a dominant measure µ. We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDF) of each coordinate on a leave-one-out basis. The dominant measure µ plays a crucial role as it defines the support on which conditional CDFs are compared. Sobolev IPM can be seen as an extension of the one-dimensional Cramér-von Mises statistic to high-dimensional distributions. We show how Sobolev IPM can be used to train Generative Adversarial Networks (GANs). We then exploit the intrinsic conditioning implied by Sobolev IPM in text generation. Finally, we show that a variant of Sobolev GAN achieves competitive results in semi-supervised learning on CIFAR-10, thanks to the smoothness enforced on the critic by Sobolev GAN, which relates to Laplacian regularization.

1 INTRODUCTION

In order to learn Generative Adversarial Networks (Goodfellow et al., 2014), it is now well established that the generator should mimic the distribution of real data, in the sense of a certain discrepancy measure. Discrepancies between distributions that measure the goodness of fit of the neural generator to the real data distribution have been the subject of many recent studies (Arjovsky & Bottou, 2017; Nowozin et al., 2016; Kaae Sønderby et al., 2017; Mao et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017; Mroueh et al., 2017; Mroueh & Sercu, 2017; Li et al., 2017), most of which focus on training stability. In terms of data modalities, most success has been achieved in plausible natural image generation after the introduction of Deep Convolutional Generative Adversarial Networks (DCGAN) (Radford et al., 2015). This success is due not only to advances in training generative adversarial networks in terms of loss functions (Arjovsky et al., 2017) and stable algorithms, but also to the representation power of convolutional neural networks in modeling images and in finding sufficient statistics that capture the continuous density function of natural images. When moving to neural generators of discrete sequences, GAN theory and practice are still not very well understood. Maximum likelihood pre-training or augmentation, in conjunction with reinforcement learning techniques, has been proposed in many recent works for training GANs for discrete sequence generation (Yu et al., 2016; Che et al., 2017; Hjelm et al., 2017; Rajeswar et al., 2017). Other methods include using the Gumbel-Softmax trick (Kusner & Hernández-Lobato, 2016) and the use of auto-encoders to adversarially generate discrete sequences from a continuous space (Zhao et al., 2017). End-to-end training of GANs for discrete sequence generation is still an open problem (Press et al., 2017).
Empirical successes of end-to-end training have been reported within the framework of WGAN-GP (Gulrajani et al., 2017), using a proxy for the Wasserstein distance via a pointwise gradient penalty on the critic.¹ Inspired by this success, we propose in this paper a new Integral Probability Metric (IPM) between distributions that we coin Sobolev IPM.

¹Code for semi-supervised learning experiments is available at https://github.com/tomsercu/SobolevGAN-SSL

Intuitively, an IPM (Müller, 1997) between two probability distributions looks for a witness function f, called the critic, that maximally discriminates between samples coming from the two distributions:
$$\sup_{f\in\mathcal F}\ \mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x).$$
Traditionally, the function f is defined over a function class $\mathcal F$ that is independent of the distributions at hand (Sriperumbudur et al., 2012). The Wasserstein-1 distance corresponds for instance to an IPM where the witness functions are defined over the space of Lipschitz functions; the MMD distance (Gretton et al., 2012) corresponds to witness functions defined over a ball in a Reproducing Kernel Hilbert Space (RKHS). We revisit in this paper the Fisher IPM defined in (Mroueh & Sercu, 2017), which extends the IPM definition to function classes defined with norms that depend on the distributions. Fisher IPM can be seen as restricting the critic to a Lebesgue ball defined with respect to a dominant measure µ. The Lebesgue norm is defined as follows:
$$\|f\|^2_{L_2(\mathcal X,\mu)}=\int_{\mathcal X}f^2(x)\,\mu(x)dx,$$
where µ is a dominant measure of P and Q. In this paper we extend the IPM framework to critics bounded in the Sobolev norm:
$$\|f\|^2_{W^{1,2}(\mathcal X,\mu)}=\int_{\mathcal X}\|\nabla_x f(x)\|_2^2\,\mu(x)dx.$$
In contrast to Fisher IPM, which compares joint probability density functions of all coordinates between two distributions, we will show that Sobolev IPM compares weighted (coordinate-wise) conditional Cumulative Distribution Functions for all coordinates on a leave-one-out basis. Matching conditional dependencies between coordinates is crucial for sequence modeling. Our analysis and empirical verification show that the modeling of the conditional dependencies can be built into the metric used to learn GANs, as in Sobolev IPM. For instance, this gives an advantage to Sobolev IPM over Fisher IPM in comparing sequences. Nevertheless, in sequence modeling, when we parametrize the critic and the generator with neural networks, we find an interesting tradeoff between the metric used, the architectures used to parametrize the critic and the generator, and the conditioning used in the generator. The burden of modeling the conditional long-term dependencies can be handled by the IPM loss function as in Sobolev IPM (more accurately, by the choice of the data-dependent function class of the critic), or by a simpler metric such as Fisher IPM together with a powerful critic architecture that models conditional long-term dependencies, such as LSTMs or GRUs, in conjunction with a curriculum conditioning of the generator as done in (Press et al., 2017). Highlighting those interesting tradeoffs between metrics, data-dependent function classes for the critic (Fisher or Sobolev), and architectures is crucial to advance sequence modeling and, more broadly, structured data generation using GANs. On the other hand, Sobolev norms have been widely used in manifold regularization in the so-called Laplacian framework for semi-supervised learning (SSL) (Belkin et al., 2006).
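To make the two constraints above concrete, the following minimal PyTorch sketch (module names, shapes, and the toy critic are illustrative assumptions, not code from the paper) computes Monte Carlo estimates of the two data-dependent quantities that define the Fisher (Lebesgue) ball and the Sobolev ball on a batch of samples drawn from µ.

```python
import torch

def fisher_and_sobolev_norms(critic, x_mu):
    """Monte Carlo estimates of the two data-dependent norms on a batch x_mu ~ mu.

    Fisher (Lebesgue) ball:  E_{x~mu} f(x)^2            <= 1
    Sobolev ball:            E_{x~mu} ||grad_x f(x)||^2 <= 1
    """
    x_mu = x_mu.clone().requires_grad_(True)
    f = critic(x_mu)                                   # critic values, shape (N, 1)
    lebesgue_sq = (f ** 2).mean()                      # estimate of E_mu f^2
    grads = torch.autograd.grad(f.sum(), x_mu, create_graph=True)[0]
    sobolev_sq = grads.view(grads.size(0), -1).pow(2).sum(dim=1).mean()  # E_mu ||grad f||^2
    return lebesgue_sq, sobolev_sq

# Illustrative usage with a toy critic (hypothetical architecture and input size):
critic = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
x_mu = torch.randn(128, 10)                            # a batch drawn from the dominant measure mu
leb, sob = fisher_and_sobolev_norms(critic, x_mu)
```

The only difference between the two balls is whether the second moment is taken of the critic values or of its input gradients; the rest of this paper is about the consequences of that choice.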
GANs have shown success in semi-supervised learning (Salimans et al., 2016; Dumoulin et al., 2017; Dai et al., 2017; Kumar et al., 2017). Nevertheless, many normalizations and additional tricks were needed. We show in this paper that a variant of Sobolev GAN achieves strong results in semi-supervised learning on CIFAR-10, without the need for any activation normalization in the critic.

The main contributions of this paper can be summarized as follows:

1. We overview in Section 2 different metrics between distributions used in the GAN literature. We then generalize Fisher IPM in Section 3 with a general dominant measure µ and show how it compares distributions based on their PDFs.
2. We introduce Sobolev IPM in Section 4 by restricting the critic of an IPM to a Sobolev ball defined with respect to a dominant measure µ. We then show that Sobolev IPM defines a discrepancy between weighted (coordinate-wise) conditional CDFs of distributions.
3. The intrinsic conditioning and the CDF matching make Sobolev IPM suitable for discrete sequence matching and explain the success of the gradient penalty in WGAN-GP and Sobolev GAN in discrete sequence generation.
4. We give in Section 5 an ALM (Augmented Lagrangian Multiplier) algorithm for training Sobolev GAN. Similar to Fisher GAN, this algorithm is stable and does not compromise the capacity of the critic.
5. We show in Appendix A that the critic of Sobolev IPM satisfies an elliptic Partial Differential Equation (PDE). We relate this diffusion to the Fokker-Planck equation and show the behavior of the gradient of the optimal Sobolev critic as a transportation plan between distributions.
6. We empirically study Sobolev GAN in character-level text generation (Section 6.1). We validate that the conditioning implied by Sobolev GAN is crucial for the success and stability of GAN in text generation. As a take-home message from this study, we see that text generation succeeds either by implicit conditioning, i.e. using Sobolev GAN (or WGAN-GP) together with convolutional critics and generators, or by explicit conditioning, i.e. using Fisher IPM together with a recurrent critic and generator and curriculum learning.
7. We finally show in Section 6.2 that a variant of Sobolev GAN achieves competitive semi-supervised learning results on CIFAR-10, thanks to the smoothness implied by the Sobolev regularizer.

2 OVERVIEW OF METRICS BETWEEN DISTRIBUTIONS

In this Section, we review different representations of probability distributions and metrics for comparing distributions that use those representations. Those metrics are at the core of GAN training. In what follows, we consider probability measures with positive, weakly differentiable probability density functions (PDF). Let $\mathbb P$ and $\mathbb Q$ be two probability measures with PDFs $P(x)$ and $Q(x)$ defined on $\mathcal X\subset\mathbb R^d$. Let $F_{\mathbb P}$ and $F_{\mathbb Q}$ be the Cumulative Distribution Functions (CDF) of $\mathbb P$ and $\mathbb Q$ respectively. For $x=(x_1,\dots,x_d)$, we have:
$$F_{\mathbb P}(x)=\int_{-\infty}^{x_1}\cdots\int_{-\infty}^{x_d}P(u_1,\dots,u_d)\,du_1\dots du_d.$$
The score function of a density is defined as $s_{\mathbb P}(x)=\nabla_x\log(P(x))\in\mathbb R^d$. In this work, we are interested in metrics between distributions that have a variational form and can be written as a supremum of mean discrepancies of functions defined on a specific function class.
This type of metric includes ϕ-divergences as well as Integral Probability Metrics (Sriperumbudur et al., 2009) and has the following form:
$$d_{\mathcal F}(\mathbb P,\mathbb Q)=\sup_{f\in\mathcal F}\ |\mathcal E(f;\mathbb P,\mathbb Q)|,$$
where $\mathcal F$ is a function class defined on $\mathcal X$ and $\mathcal E:\mathcal F\to\mathbb R$ is a mean discrepancy. The variational form given above leads in certain cases to closed-form expressions in terms of the PDFs $P,Q$, the CDFs $F_{\mathbb P},F_{\mathbb Q}$, or the score functions $s_{\mathbb P},s_{\mathbb Q}$. In Table 1, we give a comparison of different discrepancies and function spaces $\mathcal F$ used in the literature for GAN training, together with our proposed Sobolev IPM. We see from Table 1 that Sobolev IPM, compared to the Wasserstein distance, imposes a tractable smoothness constraint on the critic on points sampled from a distribution µ, rather than imposing a Lipschitz constraint on all points in the space $\mathcal X$. We also see that Sobolev IPM is the natural generalization of the Cramér-von Mises distance from one dimension to high dimensions. We note that the Energy Distance, a form of Maximum Mean Discrepancy for a special kernel, was used in (Bellemare et al., 2017b) as a generalization of the Cramér distance in GAN training, but it still needed a gradient penalty in its algorithmic counterpart, leading to a mis-specified distance between distributions. Finally, it is worth noting that, when comparing Fisher IPM and Sobolev IPM, we see that while Fisher IPM compares the joint PDFs of the distributions, Sobolev IPM compares weighted (coordinate-wise) conditional CDFs. As we will see later, this conditioning nature of the metric makes Sobolev IPM suitable for comparing sequences. Note that the Stein metric (Liu et al., 2016; Liu, 2017) uses the score function to match distributions. We will show later how Sobolev IPM relates to the Stein discrepancy (Appendix A).

Table 1: Comparison of different metrics between distributions used for GAN training. For each metric we give the discrepancy $\mathcal E(f;\mathbb P,\mathbb Q)$, the function class $\mathcal F$, and the closed form $d_{\mathcal F}(\mathbb P,\mathbb Q)$ when it is known; references are for papers using those metrics for GAN training.

- ϕ-Divergence (Goodfellow et al., 2014; Nowozin et al., 2016): $\mathcal E=\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}\phi^*(f(x))$, with $\phi^*$ the Fenchel conjugate; $\mathcal F=\{f:\mathcal X\to\mathbb R,\ f\in\mathrm{dom}\,\phi^*\}$; closed form $\mathbb E_{x\sim\mathbb Q}\big[\phi\big(\tfrac{P(x)}{Q(x)}\big)\big]$.
- Wasserstein-1 (Arjovsky et al., 2017; Gulrajani et al., 2017) and Sinkhorn divergence (Genevay et al., 2017): $\mathcal E=\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)$; $\mathcal F=\{f:\mathcal X\to\mathbb R,\ \|f\|_{\mathrm{Lip}}\le 1\}$; closed form $\inf_{\pi\in\Pi(\mathbb P,\mathbb Q)}\int_{\mathcal X\times\mathcal X}\|x-y\|_1\,d\pi(x,y)$.
- MMD (Li et al., 2017; Li et al., 2015; Dziugaite et al., 2015): $\mathcal E=\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)$; $\mathcal F=\{f:\mathcal X\to\mathbb R,\ \|f\|_{\mathcal H_k}\le 1\}$; closed form $\|\mathbb E_{x\sim\mathbb P}k_x-\mathbb E_{x\sim\mathbb Q}k_x\|_{\mathcal H_k}$.
- Stein Discrepancy (Wang & Liu, 2016): $\mathcal E=\mathbb E_{x\sim\mathbb Q}[T(\mathbb P)f(x)]$, with $T(\mathbb P)f(x)=\langle\nabla_x\log P(x),f(x)\rangle+\nabla_x\cdot f(x)$; $\mathcal F=\{f:\mathcal X\to\mathbb R^d,\ f\ \text{smooth with zero boundary condition}\}$; no closed form in general (a closed form exists in the kernelized case).
- Cramér (Bellemare et al., 2017a): $\mathcal E=\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)$; $\mathcal F=\{f:\mathcal X\to\mathbb R,\ \mathbb E_{x\sim\mathbb P}\big(\tfrac{df(x)}{dx}\big)^2\le 1,\ f\ \text{smooth with zero boundary condition},\ x\in\mathbb R\}$; closed form for $d=1$: $\sqrt{\mathbb E_{x\sim\mathbb P}\big(\tfrac{F_{\mathbb P}(x)-F_{\mathbb Q}(x)}{P(x)}\big)^2}$.
- µ-Fisher IPM (Mroueh & Sercu, 2017): $\mathcal E=\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)$; $\mathcal F=\{f:\mathcal X\to\mathbb R,\ f\in L_2(\mathcal X,\mu),\ \mathbb E_{x\sim\mu}f^2(x)\le 1\}$; closed form $\sqrt{\mathbb E_{x\sim\mu}\big(\tfrac{P(x)-Q(x)}{\mu(x)}\big)^2}$.
- µ-Sobolev IPM (this work): $\mathcal E=\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)$; $\mathcal F=\{f:\mathcal X\to\mathbb R,\ f\in W^{1,2}_0(\mathcal X,\mu),\ \mathbb E_{x\sim\mu}\|\nabla_x f(x)\|^2\le 1,\ \text{with zero boundary condition}\}$; closed form $\frac1d\sqrt{\mathbb E_{x\sim\mu}\sum_{i=1}^d\big(\tfrac{\varphi_i(\mathbb P)(x)-\varphi_i(\mathbb Q)(x)}{\mu(x)}\big)^2}$, where $\varphi_i(\mathbb P)(x)=P_{X^{-i}}(x^{-i})F_{\mathbb P[X_i|X^{-i}=x^{-i}]}(x_i)$ and $x^{-i}=(x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)$.

3 GENERALIZING FISHER IPM: PDF COMPARISON

Imposing data-independent constraints on the function class in the IPM framework, such as the Lipschitz constraint in the Wasserstein distance, is computationally challenging and intractable in the general case.
In this Section, we generalize the Fisher IPM introduced in (Mroueh & Sercu, 2017), where the function class is relaxed to a tractable, data-dependent constraint on the second-order moment of the critic; in other words, the critic is constrained to be in a Lebesgue ball.

Fisher IPM. Let $\mathcal X\subset\mathbb R^d$ and let $\mathcal P(\mathcal X)$ be the space of distributions defined on $\mathcal X$. Let $\mathbb P,\mathbb Q\in\mathcal P(\mathcal X)$, and let µ be a dominant measure of $\mathbb P$ and $\mathbb Q$, in the sense that $\mu(x)=0\implies P(x)=0$ and $Q(x)=0$. We assume µ to be also a distribution in $\mathcal P(\mathcal X)$, and assume $\mu(x)>0$ for all $x\in\mathcal X$. Let $L_2(\mathcal X,\mu)$ be the space of µ-measurable functions. For $f,g\in L_2(\mathcal X,\mu)$, we define the following dot product and its corresponding norm:
$$\langle f,g\rangle_{L_2(\mathcal X,\mu)}=\int_{\mathcal X}f(x)g(x)\mu(x)dx,\qquad\|f\|_{L_2(\mathcal X,\mu)}=\sqrt{\int_{\mathcal X}f^2(x)\mu(x)dx}.$$
Note that $L_2(\mathcal X,\mu)$ can be formally defined as follows: $L_2(\mathcal X,\mu)=\{f:\mathcal X\to\mathbb R\ \text{s.t.}\ \|f\|_{L_2(\mathcal X,\mu)}<\infty\}$. We define the unit Lebesgue ball as follows:
$$B_2(\mathcal X,\mu)=\{f\in L_2(\mathcal X,\mu),\ \|f\|_{L_2(\mathcal X,\mu)}\le 1\}.$$
Fisher IPM, defined in (Mroueh & Sercu, 2017), searches for the critic function in the Lebesgue ball $B_2(\mathcal X,\mu)$ that maximizes the mean discrepancy between $\mathbb P$ and $\mathbb Q$. Fisher GAN (Mroueh & Sercu, 2017) was originally formulated specifically for $\mu=\frac12(\mathbb P+\mathbb Q)$. We consider here a general µ as long as it dominates $\mathbb P$ and $\mathbb Q$. We define the generalized Fisher IPM as follows:
$$\mathcal F_\mu(\mathbb P,\mathbb Q)=\sup_{f\in B_2(\mathcal X,\mu)}\ \mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x).\qquad(1)$$
Note that $\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)=\big\langle f,\tfrac{P-Q}{\mu}\big\rangle_{L_2(\mathcal X,\mu)}$. Hence Fisher IPM can be written as follows:
$$\mathcal F_\mu(\mathbb P,\mathbb Q)=\sup_{f\in B_2(\mathcal X,\mu)}\ \Big\langle f,\frac{P-Q}{\mu}\Big\rangle_{L_2(\mathcal X,\mu)}.\qquad(2)$$
We have the following result:

Theorem 1 (Generalized Fisher IPM). The Fisher distance and the optimal critic are as follows:
1. The Fisher distance is given by $\mathcal F_\mu(\mathbb P,\mathbb Q)=\big\|\frac{P-Q}{\mu}\big\|_{L_2(\mathcal X,\mu)}$.
2. The optimal critic $f_\chi$ achieving the Fisher distance $\mathcal F_\mu(\mathbb P,\mathbb Q)$ is $f_\chi=\frac{1}{\mathcal F_\mu(\mathbb P,\mathbb Q)}\frac{P-Q}{\mu}$, µ-almost surely.

Proof of Theorem 1. From Equation (2), the optimal $f_\chi$ belongs to the intersection of the hyperplane that has normal $n=\frac{P-Q}{\mu}$ and the ball $B_2(\mathcal X,\mu)$, hence $f_\chi=\frac{n}{\|n\|_{L_2(\mathcal X,\mu)}}$. Hence $\mathcal F_\mu(\mathbb P,\mathbb Q)=\|n\|_{L_2(\mathcal X,\mu)}$.

We see from Theorem 1 the role of the dominant measure µ: the optimal critic is defined with respect to this measure, and the overall Fisher distance can be seen as an average weighted distance between probability density functions, where the average is taken on points sampled from µ. We give here some choices of µ:
1. For $\mu=\frac12(\mathbb P+\mathbb Q)$, we obtain the symmetric chi-squared distance as defined in (Mroueh & Sercu, 2017).
2. $\mu_{GP}$, the implicit distribution defined by the interpolation lines between $\mathbb P_r$ and $\mathbb Q_\theta$ as in (Gulrajani et al., 2017).
3. When µ does not dominate $\mathbb P$ and $\mathbb Q$, we obtain a non-symmetric divergence. For example, for $\mu=\mathbb P$, $\mathcal F^2_{\mathbb P}(\mathbb P,\mathbb Q)=\int_{\mathcal X}\frac{(P(x)-Q(x))^2}{P(x)}dx$. We see that for this particular choice we obtain the Pearson divergence.
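As a quick numerical illustration of Theorem 1 and of the role of µ, the following NumPy/SciPy sketch evaluates the closed form $\mathcal F_\mu(\mathbb P,\mathbb Q)=\|(P-Q)/\mu\|_{L_2(\mathcal X,\mu)}$ on a 1D grid for two Gaussians; the grid bounds, means and variances are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Grid-based evaluation of the generalized Fisher IPM closed form of Theorem 1:
#   F_mu(P, Q)^2 = integral (P(x) - Q(x))^2 / mu(x) dx
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
P = norm.pdf(x, loc=0.0, scale=1.0)
Q = norm.pdf(x, loc=2.0, scale=1.0)

def fisher_ipm(P, Q, mu, dx):
    return np.sqrt(np.sum((P - Q) ** 2 / mu) * dx)

chi2_sym = fisher_ipm(P, Q, 0.5 * (P + Q), dx)  # mu=(P+Q)/2: its square is the symmetric chi-squared
pearson  = fisher_ipm(P, Q, P, dx)              # mu=P: its square is the (non-symmetric) Pearson divergence
print(chi2_sym, pearson)
```

Changing only the array passed as `mu` switches between the symmetric and the Pearson-type divergences, which is exactly the role Theorem 1 assigns to the dominant measure.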
4 SOBOLEV IPM

In this Section, we introduce the Sobolev IPM. In a nutshell, the Sobolev IPM constrains the critic function to belong to a ball in the restricted Sobolev space; in other words, we constrain the norm of the gradient of the critic, $\nabla_x f(x)$. We will show that by moving from a Lebesgue constraint as in Fisher IPM to a Sobolev constraint as in Sobolev IPM, the metric changes from joint PDF matching to weighted (coordinate-wise) conditional CDF matching. The intrinsic conditioning built into the Sobolev IPM and the comparison of cumulative distributions make Sobolev IPM suitable for comparing discrete sequences.

4.1 DEFINITION AND EXPRESSION OF SOBOLEV IPM IN TERMS OF COORDINATE CONDITIONAL CDFS

We start by recalling some definitions on Sobolev spaces. We assume in the following that $\mathcal X$ is compact and consider functions in the Sobolev space $W^{1,2}(\mathcal X,\mu)$:
$$W^{1,2}(\mathcal X,\mu)=\Big\{f:\mathcal X\to\mathbb R,\ \int_{\mathcal X}\|\nabla_x f(x)\|^2\mu(x)dx<\infty\Big\}.$$
We restrict ourselves to functions in $W^{1,2}(\mathcal X,\mu)$ vanishing at the boundary, and denote this space $W^{1,2}_0(\mathcal X,\mu)$. Note that in this case
$$\|f\|_{W^{1,2}_0(\mathcal X,\mu)}=\sqrt{\int_{\mathcal X}\|\nabla_x f(x)\|^2\mu(x)dx}$$
defines a semi-norm. We can similarly define a dot product in $W^{1,2}_0(\mathcal X,\mu)$: for $f,g\in W^{1,2}_0(\mathcal X,\mu)$,
$$\langle f,g\rangle_{W^{1,2}_0(\mathcal X,\mu)}=\int_{\mathcal X}\langle\nabla_x f(x),\nabla_x g(x)\rangle_{\mathbb R^d}\,\mu(x)dx.$$
Hence we define the following Sobolev IPM, by restricting the critic of the mean discrepancy to the Sobolev unit ball:
$$\mathcal S_\mu(\mathbb P,\mathbb Q)=\sup_{f\in W^{1,2}_0,\ \|f\|_{W^{1,2}_0(\mathcal X,\mu)}\le 1}\Big\{\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)\Big\}.\qquad(3)$$
When compared to the Wasserstein distance, the Sobolev IPM given in Equation (3) uses a data-dependent gradient constraint (it depends on µ) rather than a data-independent Lipschitz constraint. Let $F_{\mathbb P}$ and $F_{\mathbb Q}$ be the cumulative distribution functions of $\mathbb P$ and $\mathbb Q$ respectively. We have:
$$P(x)=\frac{\partial^d}{\partial x_1\dots\partial x_d}F_{\mathbb P}(x),\qquad(4)$$
and we define $D^{-i}=\frac{\partial^{d-1}}{\partial x_1\dots\partial x_{i-1}\,\partial x_{i+1}\dots\partial x_d}$, for $i=1\dots d$. $D^{-i}$ computes the $(d-1)$-th order partial derivative excluding the variable $x_i$. Our main result is presented in Theorem 2. Additional theoretical results are given in Appendix A. All proofs are given in Appendix B.

Theorem 2 (Sobolev IPM). Assume that $F_{\mathbb P}$, $F_{\mathbb Q}$ and their $d$ derivatives exist and are continuous: $F_{\mathbb P},F_{\mathbb Q}\in\mathcal C^d(\mathcal X)$. Define the differential operator $D^-=(D^{-1},\dots,D^{-d})$. For $x=(x_1,\dots,x_{i-1},x_i,x_{i+1},\dots,x_d)$, let $x^{-i}=(x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)$. The Sobolev IPM given in Equation (3) has the following equivalent forms:
1. Sobolev IPM as comparison of high-order partial derivatives of CDFs. The Sobolev IPM has the following form:
$$\mathcal S_\mu(\mathbb P,\mathbb Q)=\frac1d\sqrt{\mathbb E_{x\sim\mu}\sum_{i=1}^d\Big(\frac{D^{-i}F_{\mathbb P}(x)-D^{-i}F_{\mathbb Q}(x)}{\mu(x)}\Big)^2}.$$
2. Sobolev IPM as comparison of weighted (coordinate-wise) conditional CDFs. The Sobolev IPM can be written in the following equivalent form:
$$\mathcal S^2_\mu(\mathbb P,\mathbb Q)=\frac{1}{d^2}\,\mathbb E_{x\sim\mu}\sum_{i=1}^d\Big(\frac{P_{X^{-i}}(x^{-i})F_{\mathbb P[X_i|X^{-i}=x^{-i}]}(x_i)-Q_{X^{-i}}(x^{-i})F_{\mathbb Q[X_i|X^{-i}=x^{-i}]}(x_i)}{\mu(x)}\Big)^2.\qquad(5)$$
3. The optimal critic $f^*$ satisfies the following identity:
$$\nabla_x f^*(x)=\frac{1}{d\,\mathcal S_\mu(\mathbb P,\mathbb Q)}\,\frac{D^-F_{\mathbb Q}(x)-D^-F_{\mathbb P}(x)}{\mu(x)},\quad\text{µ-almost surely}.\qquad(6)$$

Sobolev IPM Approximation. Learning in the whole Sobolev space $W^{1,2}_0$ is challenging, hence we need to restrict our function class to a hypothesis class $\mathcal H$, such as neural networks. We assume in the following that functions in $\mathcal H$ vanish on the boundary of $\mathcal X$, and restrict the optimization to the function space $\mathcal H$. $\mathcal H$ can be a Reproducing Kernel Hilbert Space as in the MMD case, or parametrized by a neural network. Define the Sobolev IPM approximation in $\mathcal H$:
$$\mathcal S_{\mathcal H,\mu}(\mathbb P,\mathbb Q)=\sup_{f\in\mathcal H,\ \|f\|_{W^{1,2}_0}\le 1}\Big\{\mathbb E_{x\sim\mathbb P}f(x)-\mathbb E_{x\sim\mathbb Q}f(x)\Big\}.\qquad(7)$$
The following Lemma shows that the Sobolev IPM approximation in $\mathcal H$ is proportional to the Sobolev IPM. The tightness of the approximation is governed by the tightness of the approximation of the optimal Sobolev critic $f^*$ in $\mathcal H$. This approximation is measured in the Sobolev sense, using the Sobolev dot product.

Lemma 1 (Sobolev IPM Approximation in a Hypothesis Class). Let $\mathcal H$ be a function space with functions vanishing at the boundary. For any $f\in\mathcal H$ and for $f^*$ the optimal critic in $W^{1,2}_0$, we have:
$$\mathcal S_{\mathcal H,\mu}(\mathbb P,\mathbb Q)=\mathcal S_\mu(\mathbb P,\mathbb Q)\ \sup_{f\in\mathcal H,\ \|f\|_{W^{1,2}_0(\mathcal X,\mu)}\le 1}\int_{\mathcal X}\langle\nabla_x f(x),\nabla_x f^*(x)\rangle_{\mathbb R^d}\,\mu(x)dx.$$
Note that this Lemma means that the Sobolev IPM is well approximated if the space $\mathcal H$ has enough representation power to express $\nabla_x f^*(x)$. This is parallel to the Fisher IPM approximation (Mroueh & Sercu, 2017), where it is shown that the Fisher IPM approximation error is proportional to the critic approximation in the Lebesgue sense. Having in mind that the gradient of the critic is the information that is passed on to the generator, we see that this convergence in the Sobolev sense to the optimal critic is an important property for GAN training.

Relation to Fokker-Planck Diffusion. We show in Appendix A that the optimal Sobolev critic is the solution of the following elliptic PDE (with zero boundary conditions):
$$\frac{P(x)-Q(x)}{\mathcal S_\mu(\mathbb P,\mathbb Q)}=-\,\mathrm{div}\big(\mu(x)\nabla_x f^*(x)\big).\qquad(8)$$
We further link the elliptic PDE given in Equation (8) to the Fokker-Planck diffusion. As we illustrate in Figure 2(b), the gradient of the critic defines a transportation plan for moving the distribution mass from Q to P.

Discussion of Theorem 2. We make the following remarks on Theorem 2:
1. From Theorem 2, we see that the Sobolev IPM compares d higher-order partial derivatives of the cumulative distributions $F_{\mathbb P}$ and $F_{\mathbb Q}$, while Fisher IPM compares the probability density functions.
2. The dominant measure µ plays a similar role as in Fisher IPM: in
$$\mathcal S^2_\mu(\mathbb P,\mathbb Q)=\frac{1}{d^2}\,\mathbb E_{x\sim\mu}\sum_{i=1}^d\Big(\frac{D^{-i}F_{\mathbb P}(x)-D^{-i}F_{\mathbb Q}(x)}{\mu(x)}\Big)^2,$$
the average distance is defined with respect to points sampled from µ.
3. Comparison of coordinate-wise conditional CDFs. We write in the following $x^{-i}=(x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)$. Note that we have:
$$D^{-i}F_{\mathbb P}(x)=\frac{\partial^{d-1}}{\partial x_1\dots\partial x_{i-1}\,\partial x_{i+1}\dots\partial x_d}\int_{-\infty}^{x_1}\cdots\int_{-\infty}^{x_d}P(u_1,\dots,u_d)\,du_1\dots du_d=\int_{-\infty}^{x_i}P(x_1,\dots,x_{i-1},u,x_{i+1},\dots,x_d)\,du$$
$$=P_{X^{-i}}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)\int_{-\infty}^{x_i}P_{[X_i|X^{-i}=x^{-i}]}(u|x_1,\dots,x_{i-1},x_{i+1},\dots,x_d)\,du\quad\text{(using Bayes rule)}$$
$$=P_{X^{-i}}(x^{-i})\,F_{\mathbb P[X_i|X^{-i}=x^{-i}]}(x_i).$$
Note that for each i, $D^{-i}F_{\mathbb P}(x)$ is the cumulative distribution of the variable $X_i$ given the other variables $X^{-i}=x^{-i}$, weighted by the density function of $X^{-i}$ at $x^{-i}$. This leads us to the form given in Equation (5). We see that the Sobolev IPM compares, for each dimension i, the conditional cumulative distribution of each variable given the other variables, weighted by their density function. We refer to this as comparison of coordinate-wise CDFs on a leave-one-out basis. From this we see that we are comparing CDFs, which are better behaved on discrete distributions. Moreover, the conditioning built into this metric will play a crucial role in comparing sequences, as conditioning is important in this context (see Section 6.1).

4.2 ILLUSTRATIVE EXAMPLES

Sobolev IPM / Cramér Distance and Wasserstein-1 in One Dimension. In one dimension, Sobolev IPM is the Cramér distance (for µ uniform on $\mathcal X$; we note this µ := 1). While Sobolev IPM in one dimension measures the discrepancy between CDFs, the one-dimensional Wasserstein-p distance measures the discrepancy between inverse CDFs:
$$\mathcal S^2_{\mu:=1}(\mathbb P,\mathbb Q)=\int_{\mathcal X}\big(F_{\mathbb P}(x)-F_{\mathbb Q}(x)\big)^2dx\quad\text{versus}\quad W_p^p(\mathbb P,\mathbb Q)=\int_0^1\big|F^{-1}_{\mathbb P}(u)-F^{-1}_{\mathbb Q}(u)\big|^p du.$$
Recall also that the Fisher IPM for uniform µ is given by:
$$\mathcal F^2_{\mu:=1}(\mathbb P,\mathbb Q)=\int_{\mathcal X}\big(P(x)-Q(x)\big)^2dx.$$
Consider for instance two point masses $\mathbb P=\delta_{a_1}$ and $\mathbb Q=\delta_{a_2}$ with $a_1,a_2\in\mathbb R$. The rationale behind using the Wasserstein distance for GAN training is that, since it is a weak metric, the Wasserstein distance provides some signal for far-apart distributions (Arjovsky et al., 2017). In this case, it is easy to see that $W_1^1(\mathbb P,\mathbb Q)=\mathcal S^2_{\mu:=1}(\mathbb P,\mathbb Q)=|a_1-a_2|$, while $\mathcal F^2_{\mu:=1}(\mathbb P,\mathbb Q)=2$.
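To make the contrast concrete, the following minimal NumPy/SciPy sketch (the grid and the smoothing bandwidth are illustrative assumptions) evaluates the two one-dimensional quantities above for two narrow Gaussians centered at $a_1$ and $a_2$: the CDF-based Cramér/Sobolev discrepancy keeps growing with the separation $|a_1-a_2|$, while the PDF-based Fisher discrepancy saturates once the supports stop overlapping.

```python
import numpy as np
from scipy.stats import norm

# Two narrow Gaussians ("smoothed point masses" at a1 and a2): compare the
# one-dimensional Sobolev IPM with uniform mu (the Cramer distance, a CDF
# discrepancy) to the Fisher IPM with uniform mu (a PDF discrepancy).
x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
sigma = 0.05

def cramer_sq(a1, a2):
    FP, FQ = norm.cdf(x, a1, sigma), norm.cdf(x, a2, sigma)
    return np.sum((FP - FQ) ** 2) * dx      # grows like |a1 - a2| once supports are disjoint

def fisher_sq(a1, a2):
    P, Q = norm.pdf(x, a1, sigma), norm.pdf(x, a2, sigma)
    return np.sum((P - Q) ** 2) * dx        # saturates: blind to how far apart the masses are

for gap in [0.5, 2.0, 8.0]:
    print(gap, cramer_sq(0.0, gap), fisher_sq(0.0, gap))
```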
As we see from this simple example, CDF comparison is more suitable than PDF comparison for distributions on discrete spaces. See Figure 1 for a further discussion of this effect in the GAN context.

Figure 1: (a) Smoothed discrete densities: PDF versus CDF of smoothed discrete densities with non-overlapping supports. (b) Smoothed discrete and continuous densities: PDF versus CDF of a smoothed discrete density and a continuous density with non-overlapping supports. In the GAN context, for example in text generation, we have to match a (smoothed) discrete real distribution and a continuous generator. In this case, the CDF matching enabled by Sobolev IPM gives a non-zero discrepancy between a (smoothed) discrete and a continuous density even if the densities have disjoint supports. This ensures non-vanishing gradients of the critic.

Sobolev IPM between two 2D Gaussians. We consider $\mathbb P$ and $\mathbb Q$ to be two-dimensional Gaussians with means $m_1$ and $m_2$ and covariances $\Sigma_1$ and $\Sigma_2$. Let $(x,y)$ be the coordinates in 2D. We note $F_{\mathbb P}$ and $F_{\mathbb Q}$ the CDFs of $\mathbb P$ and $\mathbb Q$ respectively. We consider in this example $\mu=\frac{\mathbb P+\mathbb Q}{2}$. We know from Theorem 2 that the gradient of the optimal Sobolev critic is proportional to the following vector field:
$$\nabla f^*(x,y)\ \propto\ \frac{1}{\mu(x,y)}\begin{pmatrix}\partial_y\big(F_{\mathbb Q}(x,y)-F_{\mathbb P}(x,y)\big)\\ \partial_x\big(F_{\mathbb Q}(x,y)-F_{\mathbb P}(x,y)\big)\end{pmatrix}.\qquad(9)$$
In Figure 2 we consider $m_1=[1,0]$, $\Sigma_1=\begin{pmatrix}1.9&0.8\\0.8&1.3\end{pmatrix}$ and $m_2=[1,2]$, $\Sigma_2=\begin{pmatrix}1.9&0.8\\0.8&1.3\end{pmatrix}$. In Figure 2(a) we plot the numerical solution of the PDE satisfied by the optimal Sobolev critic given in Equation (8), using the MATLAB solver for elliptic PDEs (more accurately, we solve $-\mathrm{div}(\mu(x)\nabla_x f(x))=P(x)-Q(x)$, hence we obtain the solution of Equation (8) up to the normalization constant $\frac{1}{\mathcal S_\mu(\mathbb P,\mathbb Q)}$). We numerically solve the PDE on a rectangle with zero boundary conditions. We see that the optimal Sobolev critic separates the two distributions well. In Figure 2(b) we then numerically compute the gradient of the optimal Sobolev critic on a 2D grid as given in Equation (9) (using numerical evaluation of the CDF and finite differences for the evaluation of the partial derivatives). We plot in Figure 2(b) the density functions of $\mathbb P$ and $\mathbb Q$ as well as the vector field of the gradient of the optimal Sobolev critic. As discussed in Section A.1, we see that the gradient of the critic (with respect to the input) defines, on the support of $\mu=\frac{\mathbb P+\mathbb Q}{2}$, a transportation plan for moving the distribution mass from $\mathbb Q$ to $\mathbb P$.

5 SOBOLEV GAN

Now we turn to the problem of learning GANs with the Sobolev IPM. Given the real distribution $\mathbb P_r\in\mathcal P(\mathcal X)$, our goal is to learn a generator $g_\theta:\mathcal Z\subset\mathbb R^{n_z}\to\mathcal X$ such that, for $z\sim p_z$, the distribution of $g_\theta(z)$ is close to the real data distribution $\mathbb P_r$, where $p_z$ is a fixed distribution on $\mathcal Z$ (for instance $z\sim\mathcal N(0,I_{n_z})$). We write $\mathbb Q_\theta$ for the fake distribution of $g_\theta(z),\ z\sim p_z$. Consider $\{x_i,i=1\dots N\}\sim\mathbb P_r$, $\{z_i,i=1\dots N\}\sim\mathcal N(0,I_{n_z})$, and $\{\tilde x_i,i=1\dots N\}\sim\mu$. We consider these choices for µ:
1. $\mu=\frac{\mathbb P_r+\mathbb Q_\theta}{2}$, i.e. $\tilde x\sim\mathbb P_r$ or $\tilde x=g_\theta(z),\ z\sim p_z$, each with probability $\frac12$.
2. $\mu_{GP}$, the implicit distribution defined by the interpolation lines between $\mathbb P_r$ and $\mathbb Q_\theta$ as in (Gulrajani et al., 2017), i.e. $\tilde x=ux+(1-u)y$ with $x\sim\mathbb P_r$, $y=g_\theta(z),\ z\sim p_z$, and $u\sim\mathrm{Unif}[0,1]$.
Sobolev GAN can be written as follows:
$$\min_{g_\theta}\ \sup_{f_p,\ \frac1N\sum_{i=1}^N\|\nabla_x f_p(\tilde x_i)\|^2=1}\ \hat{\mathcal E}(f_p,g_\theta)=\frac1N\sum_{i=1}^N f_p(x_i)-\frac1N\sum_{i=1}^N f_p(g_\theta(z_i)).$$
For any choice of the parametric function class $\mathcal H_p$, we denote the constraint by $\hat\Omega_S(f_p,g_\theta)=\frac1N\sum_{i=1}^N\|\nabla_x f_p(\tilde x_i)\|^2$.
For example, if $\mu=\frac{\mathbb P_r+\mathbb Q_\theta}{2}$, then $\hat\Omega_S(f_p,g_\theta)=\frac{1}{2N}\sum_{i=1}^N\|\nabla_x f_p(x_i)\|^2+\frac{1}{2N}\sum_{i=1}^N\|\nabla_x f_p(g_\theta(z_i))\|^2$. Note that, since the optimal theoretical critic is achieved on the sphere, we impose a sphere constraint rather than a ball constraint.

Figure 2: Numerical solution of the PDE satisfied by the optimal Sobolev critic and the transportation plan induced by the gradient of the Sobolev critic. (a) Numerical solution of the PDE satisfied by the optimal Sobolev critic. (b) Optimal Sobolev transport vector field $\nabla_x f^*(x)$ (arrows are the vector field $\nabla_x f^*(x)$ evaluated on the 2D grid; the magnitude of the arrows was rescaled for visualization). The gradient of the critic (with respect to the input) defines, on the support of $\mu=\frac{\mathbb P+\mathbb Q}{2}$, a transportation plan for moving the distribution mass from Q to P. For a theoretical analysis of this transportation plan and its relation to Fokker-Planck diffusion, the reader is invited to check Appendix A.

Similar to (Mroueh & Sercu, 2017), we define the Augmented Lagrangian corresponding to the Sobolev GAN objective and constraint:
$$\mathcal L_S(p,\theta,\lambda)=\hat{\mathcal E}(f_p,g_\theta)+\lambda\big(1-\hat\Omega_S(f_p,g_\theta)\big)-\frac{\rho}{2}\big(\hat\Omega_S(f_p,g_\theta)-1\big)^2,\qquad(10)$$
where λ is the Lagrange multiplier and ρ > 0 is the quadratic penalty weight. We alternate between optimizing the critic and the generator. We impose the constraint when training the critic only. Given θ, we solve $\max_p\min_\lambda\mathcal L_S(p,\theta,\lambda)$ for training the critic. Then, given the critic parameters p, we optimize the generator weights θ to minimize the objective $\min_\theta\hat{\mathcal E}(f_p,g_\theta)$. See Algorithm 1.

Algorithm 1 Sobolev GAN
Input: ρ penalty weight, η learning rate, $n_c$ number of iterations for training the critic, N batch size
Initialize p, θ, λ = 0
repeat
  for j = 1 to $n_c$ do
    Sample a minibatch $x_i,\ i=1\dots N,\ x_i\sim\mathbb P_r$
    Sample a minibatch $z_i,\ i=1\dots N,\ z_i\sim p_z$
    $(g_p,g_\lambda)\leftarrow(\nabla_p\mathcal L_S,\nabla_\lambda\mathcal L_S)(p,\theta,\lambda)$
    $p\leftarrow p+\eta\,\mathrm{ADAM}(p,g_p)$
    $\lambda\leftarrow\lambda-\rho\,g_\lambda$   {SGD rule on λ with learning rate ρ}
  end for
  Sample $z_i,\ i=1\dots N,\ z_i\sim p_z$
  $d_\theta\leftarrow\nabla_\theta\hat{\mathcal E}(f_p,g_\theta)=-\nabla_\theta\frac1N\sum_{i=1}^N f_p(g_\theta(z_i))$
  $\theta\leftarrow\theta-\eta\,\mathrm{ADAM}(\theta,d_\theta)$
until θ converges

Remark 1. Note that in Algorithm 1 we obtain a biased estimate, since we use the same samples for the cost function and the constraint, but the incurred bias can be shown to be small and to vanish as the number of samples increases, as shown and justified in (Shivaswamy & Jebara, 2010).

Relation to WGAN-GP. WGAN-GP can be written as follows:
$$\min_{g_\theta}\ \sup_{f_p,\ \|\nabla_x f_p(\tilde x_i)\|=1,\ \tilde x_i\sim\mu_{GP}}\ \hat{\mathcal E}(f_p,g_\theta)=\frac1N\sum_{i=1}^N f_p(x_i)-\frac1N\sum_{i=1}^N f_p(g_\theta(z_i)).$$
The main difference between WGAN-GP and our setting is that WGAN-GP enforces pointwise constraints on points drawn from $\mu=\mu_{GP}$ via a pointwise quadratic penalty ($\hat{\mathcal E}(f_p,g_\theta)-\lambda\sum_{i=1}^N(1-\|\nabla_x f_p(\tilde x_i)\|)^2$), while we enforce that constraint on average as a Sobolev norm, allowing us the coordinate-weighted conditional CDF interpretation of the IPM.
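Before moving to applications, here is a minimal PyTorch sketch of the critic (ALM) and generator updates of Algorithm 1, assuming $\mu=\frac{\mathbb P_r+\mathbb Q_\theta}{2}$; the function names, optimizer construction, ADAM hyperparameters, and network architectures are illustrative assumptions and not taken from the paper.

```python
import torch

def sobolev_gradient_norm(critic, x_tilde):
    """Omega_S: average squared gradient norm of the critic on points x_tilde ~ mu."""
    x_tilde = x_tilde.clone().requires_grad_(True)
    f = critic(x_tilde)
    g = torch.autograd.grad(f.sum(), x_tilde, create_graph=True)[0]
    return g.view(g.size(0), -1).pow(2).sum(dim=1).mean()

def critic_step(critic, generator, opt_critic, x_real, z, lam, rho):
    """One ALM critic update of Algorithm 1 with mu = (P_r + Q_theta)/2."""
    x_fake = generator(z).detach()
    E_hat = critic(x_real).mean() - critic(x_fake).mean()
    omega = 0.5 * (sobolev_gradient_norm(critic, x_real)
                   + sobolev_gradient_norm(critic, x_fake))
    L_S = E_hat + lam * (1.0 - omega) - 0.5 * rho * (omega - 1.0) ** 2
    opt_critic.zero_grad()
    (-L_S).backward()                            # ascend L_S in the critic parameters
    opt_critic.step()
    lam = lam - rho * (1.0 - omega.item())       # SGD rule on lambda with learning rate rho
    return lam

def generator_step(critic, generator, opt_gen, z):
    """Generator update: descend E_hat, i.e. maximize the critic on fake samples."""
    loss = -critic(generator(z)).mean()
    opt_gen.zero_grad()
    loss.backward()
    opt_gen.step()
```

The constraint term is built with `create_graph=True` so that it is differentiated through when updating the critic parameters, and λ follows the plain SGD rule of Algorithm 1 rather than being absorbed into a fixed penalty.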
6 APPLICATIONS OF SOBOLEV GAN

Sobolev IPM has two important properties. The first stems from the conditioning built into the metric through the weighted conditional CDF interpretation. The second stems from the diffusion properties that the critic of Sobolev IPM satisfies (Appendix A), which have theoretical and practical ties to the Laplacian regularizer and diffusion on manifolds used in semi-supervised learning (Belkin et al., 2006). In this Section, we exploit those two important properties in two applications of Sobolev GAN: text generation and semi-supervised learning.

First, in text generation, which can be seen as discrete sequence generation, Sobolev GAN (and WGAN-GP) enable training GANs without the need for explicit brute-force conditioning. We attribute this to the built-in conditioning in Sobolev IPM (for the sequence aspect) and to the CDF matching (for the discrete aspect). Second, using GANs in semi-supervised learning is a promising avenue for learning from unlabeled data. We show that a variant of Sobolev GAN can achieve strong SSL results on the CIFAR-10 dataset, without the need for any form of activation normalization in the networks or any extra ad hoc tricks.

6.1 TEXT GENERATION WITH SOBOLEV GAN

In this Section, we present an empirical study of Sobolev GAN in character-level text generation. Our empirical study of end-to-end training of character-level GANs for text generation is articulated along four dimensions (loss, critic, generator, µ): (1) the loss used (GP: WGAN-GP (Gulrajani et al., 2017), S: Sobolev, or F: Fisher); (2) the architecture of the critic (ResNet or RNN); (3) the architecture of the generator (ResNet, RNN, or RNN with curriculum learning); (4) the sampling distribution µ in the constraint.

Text Generation Experiments. We train a character-level GAN on the Google Billion Word dataset and follow the same experimental setup used in (Gulrajani et al., 2017). The generated sequence length is 32 and the evaluation is based on the Jensen-Shannon divergence on empirical 4-gram probabilities (JS-4) of validation data and generated data. JS-4 may not be an ideal evaluation criterion, but it is a reasonable metric for current character-level GAN results, which are still far from generating meaningful sentences.

Annealed smoothing of the discrete $\mathbb P_r$ in the constraint µ. Since the generator distribution is always defined on a continuous space, we can replace the discrete real distribution $\mathbb P_r$ with a smoothed version (Gaussian kernel smoothing) $\mathbb P_r*\mathcal N(0,\sigma^2 I_d)$. This corresponds to the following sampling for $\mathbb P_r$: $x+\xi$, with $x\sim\mathbb P_r$ and $\xi\sim\mathcal N(0,\sigma^2 I_d)$. Note that we only inject noise into the real distribution, with the goal of smoothing the support of the discrete distribution, as opposed to instance noise on both real and fake samples to stabilize training, as introduced in (Kaae Sønderby et al., 2017; Arjovsky & Bottou, 2017). As is common in optimization by continuation (Mobahi & Fisher III, 2015), we also anneal the noise level σ on a linear schedule as training progresses.

Sobolev GAN versus WGAN-GP with ResNets. In this setting, we compare (WGAN-GP, G=ResNet, D=ResNet, $\mu=\mu_{GP}$) to (Sobolev, G=ResNet, D=ResNet, µ), where µ is one of: (1) $\mu_{GP}$, (2) the noise-smoothed $\mu_s(\sigma)=\frac{\mathbb P_r*\mathcal N(0,\sigma^2 I_d)+\mathbb Q_\theta}{2}$, or (3) noise-smoothed with annealing $\mu_s^a(\sigma_0)$, with $\sigma_0$ the initial noise level. We use the same ResNet architectures with 1D convolutions for the critic and the generator as in (Gulrajani et al., 2017) (4 resnet blocks with a hidden layer size of 512). In order to implement the noise smoothing, we transform the data into one-hot vectors. Each one-hot vector x is transformed into a probability vector p, with 0.9 replacing the one and $0.1/(\mathrm{dict\_size}-1)$ replacing the zeros. We then sample ϵ from a Gaussian distribution $\mathcal N(0,\sigma^2)$, and
use softmax to normalize $\log p+\epsilon$ (a minimal sketch of this smoothing procedure is given at the end of this comparison). We use Algorithm 1 for Sobolev GAN and fix the learning rate to $10^{-4}$ and ρ to $10^{-5}$. The noise level σ is annealed following a linear schedule starting from an initial noise level $\sigma_0$ (at iteration i, $\sigma_i=\sigma_0(1-\frac{i}{\mathrm{Maxiter}})$, Maxiter = 30K). For WGAN-GP we used the open-source implementation with the penalty λ = 10, as in (Gulrajani et al., 2017). Results are given in Figure 3(a) for the JS-4 evaluation of both WGAN-GP and Sobolev GAN for $\mu=\mu_{GP}$. In Figure 3(b) we show the JS-4 evaluation of Sobolev GAN with the annealed noise smoothing $\mu_s^a(\sigma_0)$, for various values of the initial noise level $\sigma_0$. We see that training succeeds in both cases. Sobolev GAN achieves slightly better results than WGAN-GP for the annealing that starts with a high noise level $\sigma_0=1.5$. We note that without smoothing and annealing, i.e. using $\mu=\frac{\mathbb P_r+\mathbb Q_\theta}{2}$, Sobolev GAN lags behind. Annealed smoothing of $\mathbb P_r$ helps training, as the real distribution slowly goes from a continuous distribution to a discrete distribution. See Appendix C (Figure 6) for a comparison between annealed and non-annealed smoothing. We also give in Appendix C a comparison of WGAN-GP and Sobolev GAN for a ResNet generator architecture and an RNN critic; the RNN has degraded performance due to optimization difficulties.

Figure 3: Results of Sobolev GAN for various dominating measures µ, with ResNets as architectures of the critic and the generator (JS-4 versus $g_\theta$ iterations). (a) Comparing Sobolev with $\mu_{GP}$ and WGAN-GP: the JS-4 values are 0.3363 and 0.3302 respectively. (b) Comparing Sobolev with different dominant measures µ, namely $\mu_{GP}$, $\frac{\mathbb P_r+\mathbb Q_\theta}{2}$, $\mu_s^a(\sigma_0=1.0)$, and $\mu_s^a(\sigma_0=1.5)$, against WGAN-GP: the JS-4 of $\mu_s^a(\sigma_0=1.5)$ is 0.3268.
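Here is a minimal NumPy sketch of the annealed one-hot smoothing described above; the function names, the integer-token input format, and the default Maxiter value mirroring the schedule above are illustrative assumptions.

```python
import numpy as np

def smooth_one_hot(tokens, dict_size, sigma):
    """Gaussian smoothing of one-hot characters.

    Each one-hot vector becomes a probability vector with 0.9 on the true symbol and
    0.1/(dict_size - 1) elsewhere; Gaussian noise is added to log p and re-normalized
    with a softmax. `tokens` is an integer array of shape (batch, seq_len).
    """
    p = np.full(tokens.shape + (dict_size,), 0.1 / (dict_size - 1))
    np.put_along_axis(p, tokens[..., None], 0.9, axis=-1)
    logits = np.log(p) + sigma * np.random.randn(*p.shape)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def annealed_sigma(iteration, sigma0, max_iter=30000):
    """Linear annealing schedule sigma_i = sigma_0 * (1 - i / Maxiter)."""
    return sigma0 * max(0.0, 1.0 - iteration / max_iter)
```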
Fisher GAN Curriculum Conditioning versus Sobolev GAN: Explicit versus Implicit Conditioning. We analyze how Fisher GAN behaves under different architectures of generators and critics. We first fix the generator to be a ResNet. We study 3 different critic architectures: ResNet, GRU (we follow the experimental setup of (Press et al., 2017)), and hybrid ResNet+GRU (Reed et al., 2016). We notice that the RNN is unstable: we need to clip the gradient values of the critics to $[-0.5,0.5]$, and the gradient of the Lagrange multiplier $\lambda_F$ to $[-10^4,10^4]$. We fix $\rho_F=10^{-7}$ and we use $\mu=\mu_{GP}$. We search the value of the learning rate in $[10^{-5},10^{-4}]$. We see that for $\mu=\mu_{GP}$ and G=ResNet, for various critic architectures, Fisher GAN fails at the task of text generation (Figure 4 a-c). Nevertheless, when using RNN critics (Fig. 4 b, c), a marginal improvement happens over the fully collapsed state obtained with a ResNet critic (Fig. 4 a). We hypothesize that RNN critics enable some conditioning and factoring of the distribution, which is lacking in Fisher IPM. Finally, Figure 4(d) shows the result of training with a recurrent generator and critic. We follow (Press et al., 2017) in terms of GRU architecture, but differ by using Fisher GAN rather than WGAN-GP. We use $\mu=\frac{\mathbb P_r+\mathbb Q_\theta}{2}$, i.e. without annealed noise smoothing. We train (F, D=RNN, G=RNN, $\frac{\mathbb P_r+\mathbb Q_\theta}{2}$) using curriculum conditioning of the generator for all lengths ℓ, as done in (Press et al., 2017): the generator is conditioned on $32-\ell$ characters and predicts the ℓ remaining characters. We increment ℓ from 1 to 32 on a regular schedule (every 15k updates). JS-4 is only computed when ℓ > 4. We see in Figure 4 that, under curriculum conditioning with recurrent critics and generators, the training of Fisher GAN succeeds and reaches levels similar to Sobolev GAN (and WGAN-GP).

Note that the need for this explicit brute-force conditioning for Fisher GAN highlights the implicit conditioning induced by Sobolev GAN via the gradient regularizer, without the need for curriculum conditioning.

Figure 4: Fisher GAN (JS-4 versus $g_\theta$ iterations) with different architectures for critics: a: (F, D=ResNet, G=ResNet, $\mu_{GP}$), b: (F, D=RNN, G=ResNet, $\mu_{GP}$), c: (F, D=ResNet+RNN, G=ResNet, $\mu_{GP}$), d: (F, D=RNN, G=RNN+curriculum, $\frac{\mathbb P_r+\mathbb Q_\theta}{2}$). (a-c) We see that for $\mu=\mu_{GP}$ and G=ResNet, for various critic architectures, Fisher GAN fails at the task of text generation. We notice small improvements for RNN critics (b-c) due to the conditioning and factoring of the distribution. (d) Fisher GAN with a recurrent generator and critic, trained with curriculum conditioning for increasing lengths ℓ (increments indicated by gridlines). In this curriculum conditioning setup, with recurrent critics and generators, the training of Fisher GAN succeeds and reaches levels similar to Sobolev GAN (and WGAN-GP). It is important to note that by doing this explicit curriculum conditioning for Fisher GAN, we highlight the implicit conditioning induced by Sobolev GAN via the gradient regularizer.

6.2 SEMI-SUPERVISED LEARNING WITH SOBOLEV GAN

A proper and promising framework for evaluating GANs consists of using them as a regularizer in the semi-supervised learning setting (Salimans et al., 2016; Dumoulin et al., 2017; Kumar et al., 2017). As mentioned before, the Sobolev norm as a regularizer in the Sobolev IPM draws connections with the Laplacian regularization in manifold learning (Belkin et al., 2006). In the Laplacian framework of semi-supervised learning, the classifier satisfies a smoothness constraint imposed by controlling its Sobolev norm $\int_{\mathcal X}\|\nabla_x f(x)\|^2\mu^2(x)dx$ (Alaoui et al., 2016). In this Section, we present a variant of Sobolev GAN that achieves competitive performance in semi-supervised learning on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009), without using any internal activation normalization in the critic, such as batch normalization (BN) (Ioffe & Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), or weight normalization (Salimans & Kingma, 2016).

In this setting, a convolutional neural network $\Phi_\omega:\mathcal X\to\mathbb R^m$ is shared between the cross-entropy (CE) training of a K-class classifier ($S\in\mathbb R^{K\times m}$) and the critic of the GAN (see Figure 5). We have the following training equations for the (critic + classifier) and the generator:
$$\text{Critic + Classifier:}\quad\max_{S,\Phi_\omega,f}\ \mathcal L_D=\mathcal L^{\mathrm{alm}}_{\mathrm{GAN}}(f,g_\theta)-\lambda_{CE}\sum_{(x,y)\in\text{lab}}\mathrm{CE}(p(y|x),y)\qquad(11)$$
$$\text{Generator:}\quad\max_\theta\ \mathcal L_G=-\hat{\mathcal E}(f,g_\theta)\qquad(12)$$
where the main IPM objective with N samples is $\hat{\mathcal E}(f,g_\theta)=\frac1N\big[\sum_{x\sim\text{unl}}f(x)-\sum_{z\sim p_z}f(g_\theta(z))\big]$. Following (Mroueh & Sercu, 2017), we use the following K+1 parametrization for the critic (see Figure 5):
$$f(x)=\underbrace{\sum_{y=1}^{K}p(y|x)\,\langle S_y,\Phi_\omega(x)\rangle}_{f_+(x):\ \text{real critic}}\ -\ \underbrace{\langle v,\Phi_\omega(x)\rangle}_{f_-(x):\ \text{fake critic}}$$
Note that $p(y|x)=\mathrm{Softmax}(\langle S,\Phi_\omega(x)\rangle)_y$ appears both in the critic formulation and in the cross-entropy term in Equation (11). Intuitively, this critic uses the K class directions $S_y$ of the classifier to define the real direction, which competes with another, (K+1)-th direction v that indicates fake samples. This parametrization adapts the idea of (Salimans et al., 2016), which was formulated specifically for the classic KL/JSD-based GANs, to IPM-based GANs.
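A minimal PyTorch sketch of this K+1 critic is given below; the module and parameter names, the initialization scale, and the returned tuple are illustrative assumptions (the shared CNN $\Phi_\omega$ is passed in as `feature_net`).

```python
import torch
import torch.nn.functional as F

class KPlusOneCritic(torch.nn.Module):
    """K+1 critic of Figure 5: f(x) = sum_y p(y|x) <S_y, Phi(x)>  -  <v, Phi(x)>."""
    def __init__(self, feature_net, feature_dim, num_classes):
        super().__init__()
        self.phi = feature_net                                  # shared CNN Phi_omega
        self.S = torch.nn.Parameter(0.02 * torch.randn(num_classes, feature_dim))
        self.v = torch.nn.Parameter(0.02 * torch.randn(feature_dim))

    def forward(self, x):
        h = self.phi(x)                         # Phi_omega(x), shape (N, m)
        logits = h @ self.S.t()                 # <S_y, Phi(x)> for all classes y
        p = F.softmax(logits, dim=1)            # p(y|x), also used for the CE term
        f_plus = (p * logits).sum(dim=1)        # real critic f_+(x)
        f_minus = h @ self.v                    # fake critic f_-(x)
        return f_plus - f_minus, logits, f_minus
```

Returning the logits and $f_-$ separately is convenient for the two constraint schemes described next: the logits feed the cross-entropy term, while $f_-$ is the part on which the Sobolev constraint is imposed in the Fisher+Sobolev variant.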
We saw consistently better results with the K+1 formulation over the regular formulation, where the classification layer S does not interact with the critic direction v. We also note that when applying a gradient-penalty-based constraint (either WGAN-GP or Sobolev) on the full critic $f=f_+-f_-$, it is impossible for the network to fit even the small labeled training set (underfitting), causing bad SSL performance. This leads us to the formulation below, where we apply the Sobolev constraint only on $f_-$. Throughout this Section we fix $\mu=\frac{\mathbb P_r+\mathbb Q_\theta}{2}$.

We propose the following two schemes for constraining the K+1 critic $f(x)=f_+(x)-f_-(x)$:

1) Fisher constraint on the critic: We restrict the critic to the set $\big\{f=f_+-f_-,\ \hat\Omega_F(f,g_\theta)=1\big\}$, where
$$\hat\Omega_F(f,g_\theta)=\frac{1}{2N}\Big[\sum_{x\sim\text{unl}}f^2(x)+\sum_{z\sim p_z}f^2(g_\theta(z))\Big].$$
This constraint translates to the following ALM objective in Equation (11):
$$\mathcal L^{\mathrm{alm}}_{\mathrm{GAN}}(f,g_\theta)=\hat{\mathcal E}(f,g_\theta)+\lambda_F\big(1-\hat\Omega_F(f,g_\theta)\big)-\frac{\rho_F}{2}\big(\hat\Omega_F(f,g_\theta)-1\big)^2,$$
where the Fisher constraint ensures the stability of training through an implicit whitened mean matching (Mroueh & Sercu, 2017).

2) Fisher+Sobolev constraint: We impose two constraints on the critic, Fisher on f and Sobolev on $f_-$:
$$\big\{f=f_+-f_-,\ \hat\Omega_F(f,g_\theta)=1\ \text{and}\ \hat\Omega_S(f_-,g_\theta)=1\big\},\quad\text{where}\quad\hat\Omega_S(f_-,g_\theta)=\frac{1}{2N}\Big[\sum_{x\sim\text{unl}}\|\nabla_x f_-(x)\|^2+\sum_{z\sim p_z}\|\nabla_x f_-(g_\theta(z))\|^2\Big].$$
This constraint translates to the following ALM in Equation (11):
$$\mathcal L^{\mathrm{alm}}_{\mathrm{GAN}}(f,g_\theta)=\hat{\mathcal E}(f,g_\theta)+\lambda_F\big(1-\hat\Omega_F(f,g_\theta)\big)+\lambda_S\big(1-\hat\Omega_S(f_-,g_\theta)\big)-\frac{\rho_F}{2}\big(\hat\Omega_F(f,g_\theta)-1\big)^2-\frac{\rho_S}{2}\big(\hat\Omega_S(f_-,g_\theta)-1\big)^2.$$
Note that the Fisher constraint on f ensures the stability of training, and the Sobolev constraint on the fake critic $f_-$ enforces smoothness of the fake critic and thus of the shared CNN $\Phi_\omega(x)$. This is related to the classic Laplacian regularization in semi-supervised learning (Belkin et al., 2006).

Table 2 shows the results of SSL on CIFAR-10 comparing the two proposed formulations. Similar to the standard procedure in other GAN papers, we do hyperparameter and model selection on the validation set. We present baselines with a similar model architecture and leave out results with significantly larger convnets. G and D architectures and hyperparameters are in Appendix D. $\Phi_\omega$ is similar to (Salimans et al., 2016; Dumoulin et al., 2017; Mroueh & Sercu, 2017) in architecture, but note that we do not use any batch, layer, or weight normalization, yet obtain strong, competitive accuracies. We hypothesize that we do not need any normalization in the critic because of the implicit whitening of the feature maps introduced by the Fisher and Sobolev constraints, as explained in (Mroueh & Sercu, 2017).

Figure 5: K+1 parametrization of the critic for semi-supervised learning: $p(y|x)=\mathrm{Softmax}(\langle S,\Phi_\omega(x)\rangle)_y$, $f_+(x)=\sum_{y=1}^K p(y|x)\langle S_y,\Phi_\omega(x)\rangle$ (real critic), $f_-(x)=\langle v,\Phi_\omega(x)\rangle$ (fake critic), GAN critic $f(x)=f_+(x)-f_-(x)$.

Table 2: CIFAR-10 error rates for a varying number of labeled samples in the training set. Mean and standard deviation computed over 5 runs. We only use the K+1 formulation of the critic. Note that we achieve strong SSL performance without any additional tricks, even though the critic does not have any batch, layer, or weight normalization. Baselines marked with * use either additional models like PixelCNN, or data augmentation (translations and flips), or a much larger model, any of which gives an advantage over our plain, simple training method.
The WGAN-GP row is the result we achieved in our experimental setup under the same conditions but without the K+1 critic (see Appendix D), since (Gulrajani et al., 2017) does not report SSL results.

Number of labeled examples: 1000, 2000, 4000, 8000. Entries are misclassification rates.

| Model | 1000 | 2000 | 4000 | 8000 |
|---|---|---|---|---|
| CatGAN (Springenberg, 2015) | | | 19.58 | |
| FM (Salimans et al., 2016) | 21.83 ± 2.01 | 19.61 ± 2.09 | 18.63 ± 2.32 | 17.72 ± 1.82 |
| ALI (Dumoulin et al., 2017) | 19.98 ± 0.3 | 19.09 ± 0.15 | 17.99 ± 0.54 | 17.05 ± 0.50 |
| Tangents Reg (Kumar et al., 2017) | 20.06 ± 0.5 | | 16.78 ± 0.6 | |
| Π-model (Laine & Aila, 2016) * | | | 16.55 ± 0.29 | |
| VAT (Miyato et al., 2017) | | | 14.87 | |
| Bad GAN (Dai et al., 2017) * | | | 14.41 ± 0.30 | |
| VAT+EntMin+Large (Miyato et al., 2017) * | | | 13.15 | |
| Sajjadi (Sajjadi et al., 2016) * | | | 11.29 | |
| WGAN-GP (Gulrajani et al., 2017) | 44.85 ± 0.28 | 37.62 ± 0.56 | 32.66 ± 0.48 | 30.38 ± 0.22 |
| Fisher, layer norm (Mroueh & Sercu, 2017) | 19.74 ± 0.21 | 17.87 ± 0.38 | 16.13 ± 0.53 | 14.81 ± 0.16 |
| Fisher, no norm (Mroueh & Sercu, 2017) | 21.49 ± 0.18 | 19.20 ± 0.46 | 17.30 ± 0.30 | 15.57 ± 0.33 |
| Sobolev + Fisher, no norm (this work) | 20.14 ± 0.21 | 17.38 ± 0.10 | 15.77 ± 0.19 | 14.20 ± 0.08 |

7 CONCLUSION

We introduced the Sobolev IPM and showed that it amounts to a comparison between weighted (coordinate-wise) CDFs. We presented an ALM algorithm for training Sobolev GAN. The intrinsic conditioning implied by the Sobolev IPM explains the success of gradient regularization in Sobolev GAN and WGAN-GP on discrete sequence data, and particularly in text generation. We highlighted the important tradeoffs between the implicit conditioning introduced by the gradient regularizer in Sobolev IPM, and the explicit conditioning of Fisher IPM via recurrent critics and generators in conjunction with curriculum conditioning. Both approaches succeed in text generation. We showed that Sobolev GAN achieves competitive semi-supervised learning results without the need for any normalization, thanks to the smoothness induced by the gradient regularizer. We think the Sobolev IPM point of view will open the door to designing new regularizers that induce different types of conditioning for general structured/discrete/graph data beyond sequences.

REFERENCES

Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, and Michael I. Jordan. Asymptotic behavior of ℓp-based Laplacian regularization in semi-supervised learning. CoRR, abs/1603.00564, 2016.
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. ICML, 2017.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 2006.
Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. CoRR, abs/1705.10743, 2017a.
Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. arXiv:1705.10743, 2017b.
Tong Che, Yanran Li, Ruixiang Zhang, Devon R. Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. Maximum-likelihood augmented discrete generative adversarial networks. arXiv:1702.07983, 2017.
Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In ICML, 2016.
Zihang Dai, Zhilin Yang, Fan Yang, William W. Cohen, and Ruslan Salakhutdinov. Good semi-supervised learning that requires a bad GAN. arXiv:1705.09783, 2017.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. ICLR, 2017.
Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015.
I. Ekeland and T. Turnbull. Infinite-dimensional Optimization and Convexity. The University of Chicago Press, 1983.
A. Genevay, G. Peyré, and M. Cuturi. Learning generative models with Sinkhorn divergences. Preprint 1706.00292, arXiv, 2017. URL https://arxiv.org/abs/1706.00292.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Jackson Gorham and Lester W. Mackey. Measuring sample quality with Stein's method. In NIPS, pp. 226-234, 2015.
Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. JMLR, 2012.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv:1704.00028, 2017.
R. Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. arXiv:1702.08431, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proc. ICML, 2015.
Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. ICLR, 2017.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, 2009.
Abhishek Kumar, Prasanna Sattigeri, and P. Thomas Fletcher. Improved semi-supervised learning with GANs using manifold invariances. NIPS, 2017.
Matt J. Kusner and José Miguel Hernández-Lobato. GANs for sequences of discrete elements with the Gumbel-Softmax distribution. arXiv:1611.04051, 2016.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv:1610.02242, 2016.
Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. NIPS, abs/1705.08584, 2017.
Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In ICML, 2015.
Qiang Liu. Stein variational descent as a gradient flow. NIPS, 2017.
Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems 29, 2016.
Qiang Liu, Jason D. Lee, and Michael I. Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016.
Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. arXiv:1611.04076, ICCV, 2017.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv:1704.03976, 2017.
Hossein Mobahi and John W. Fisher III. A theoretical analysis of optimization by Gaussian continuation. In Proc. of 29th Conf. Artificial Intelligence (AAAI'15), 2015.
Youssef Mroueh and Tom Sercu. Fisher GAN. arXiv:1705.09675, NIPS, 2017.
Youssef Mroueh, Tom Sercu, and Vaibhava Goel. McGan: Mean and covariance feature matching GAN. arXiv:1702.08398, ICML, 2017.
Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 1997.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
Chris J. Oates, Mark Girolami, and Nicolas Chopin. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2017.
Ofir Press, Amir Bar, Ben Bogin, Jonathan Berant, and Lior Wolf. Language generation with recurrent generative adversarial networks without pre-training. arXiv:1706.01399, 2017.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
Sai Rajeswar, Sandeep Subramanian, Francis Dutil, Christopher Pal, and Aaron Courville. Adversarial generation of natural language. arXiv:1705.10929, 2017.
Scott E. Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. In Advances in Neural Information Processing Systems, pp. 217-225, 2016.
Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems, pp. 1163-1171, 2016.
Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901-909, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. NIPS, 2016.
Pannagadatta K. Shivaswamy and Tony Jebara. Maximum relative margin and data-dependent regularization. Journal of Machine Learning Research, 11:747-788, 2010. doi: 10.1145/1756006.1756031. URL http://doi.acm.org/10.1145/1756006.1756031.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On integral probability metrics, φ-divergences and binary classification. 2009.
Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 2012.
Dilin Wang and Qiang Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. CoRR, abs/1611.01722, 2016.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473, 2016.
Junbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. Adversarially regularized autoencoders for generating discrete structures. CoRR, 2017.

A THEORY: APPROXIMATION AND TRANSPORT INTERPRETATION

In this Section we present the theoretical properties of the Sobolev IPM and how it relates to distribution transport theory and other known metrics between distributions, notably the Stein distance.
A.1 DISTRIBUTION TRANSPORT PERSPECTIVE ON SOBOLEV IPM In this Section, we characterize the optimal critic of the Sobolev IPM as a solution of a non linear PDE. The solution of the variational problem of the Sobolev IPM satisfies a non linear PDE that can be derived using standard tools from calculus of variations (Ekeland & Turnbull, 1983; Alaoui et al., 2016). Theorem 3 (PDE satisfied by the Sobolev Critic). The optimal critic of Sobolev IPM f satisfies the following PDE: f (x) + x log µ(x), xf (x) + P(x) Q(x) Sµ(P, Q)µ(x) = 0. (13) Define the Stein Operator: T(µ) g(x) = 1 2 x log(µ(x)), g(x) +div( g(x)) . Hence we have the following Transport Equation of P to Q: Q(x) = P(x) + 2Sµ(P, Q)µ(x)T(µ) xf (x). Recall the definition of Stein Discrepancy : S(Q, µ) = sup g |Ex Q [T(µ) g(x)]| , g : X Rd. Theorem 4 (Sobolev and Stein Discrepanices). The following inequality holds true: Ex Q 2 S(Q, µ) | {z } Stein Good fitness of the model Q w.r.t to µ Sµ(P, Q) | {z } Sobolev Distance Consider for example µ = P, and sequence Qn. If the Sobolev distance goes SP(P, Qn) 0, the ratio rn(x) = Qn(x) P(x) converges in expectation (w.r.t to Q) to 1. The speed of the convergence is given by the Stein Discrepancy S(Qn, P). Relation to Fokker-Planck Diffusion Equation and Particles dynamics. Note that PDE satisifed by the Sobolev critic given in Equation (13) can be equivalently written: P Q Sµ(P, Q) = div(µ(x) xf (x)), (15) written in this form, we draw a connection with the Fokker-Planck Equation for the evolution of a density function qt that is the density of particles Xt Rd evolving with a drift (a velocity field) V (x, t) : X [0, [ Rd: d Xt = V (Xt, t)dt, where the density of X0 is given by q0(x) = Q(x), The Fokker-Planck Equation states that the evolution of the particles density qt satisfies: dqt dt (x) = div(qt(x)V (x, t)) (16) Comparing Equation (15) and Equation (16), we identify then the gradient of Sobolev critic as a drift. This suggests that one can define Sobolev descent as the evolution of particles along the gradient flow: d Xt = xf t (Xt)dt, where the density of X0 is given by q0(x) = Q(x), where f t is the Sobolev critic between qt and P. One can show that the limit distribution of the particles is P. The analysis of Sobolev descent and its relation to Stein Descent (Liu & Wang, 2016; Liu, 2017) is beyond the scope of this paper and will be studied in a separate work. Hence we see that the gradient of the Sobolev critic defines a transportation plan to move particles whose distribution is Q to particles whose distribution is P (See Figure 2). This highlights the role of the gradient of the critic in the context of GAN training in term of transporting the distribution of the generator to the real distribution. Published as a conference paper at ICLR 2018 Proof of Theorem 2. Let FP and FQ, be the cumulative distribution functions of P and Q respectively. We have: x1 . . . xd FP(x), (17) We note D = d x1... xd , and D i = d 1 x1... xi 1 xi+1... xd , for i = 1 . . . d. D i computes the d 1 partial derivative excluding the variable i. In the following we assume that FP, and FQ and its d derivatives exist and are continuous meaning that FP and FQ Cd(X ). The objective function in Equation (3) can be written as follows: Ex Pf(x) Ex Qf(x) = ˆ X f(x)D FP(x) FQ(x) dx xi D i(FP(x) FQ(x))dx (for any i, since FP and FQ Cd(X )) f xi D i(FP(x) FQ(x))dx (f vanishes at the boundary in W 1,2 0 (X , µ) ) Let D = (D 1, . . . 
Proof of Theorem 2. Let $F_P$ and $F_Q$ be the cumulative distribution functions of $P$ and $Q$, respectively. We have
$$P(x) = \frac{\partial^d}{\partial x_1 \dots \partial x_d} F_P(x). \qquad (17)$$
We write $D = \frac{\partial^d}{\partial x_1 \dots \partial x_d}$ and $D^{-i} = \frac{\partial^{d-1}}{\partial x_1 \dots \partial x_{i-1}\,\partial x_{i+1} \dots \partial x_d}$ for $i = 1, \dots, d$; $D^{-i}$ is the $(d-1)$-th order partial derivative excluding the variable $x_i$. In the following we assume that $F_P$, $F_Q$ and their derivatives up to order $d$ exist and are continuous, i.e., $F_P, F_Q \in C^d(\mathcal{X})$. The objective function in Equation (3) can be written as follows:
$$\mathbb{E}_{x\sim P} f(x) - \mathbb{E}_{x\sim Q} f(x) = \int_{\mathcal{X}} f(x)\, D\big(F_P(x)-F_Q(x)\big)\,dx = \int_{\mathcal{X}} f(x)\,\frac{\partial}{\partial x_i} D^{-i}\big(F_P(x)-F_Q(x)\big)\,dx \quad \text{(for any } i\text{, since } F_P, F_Q \in C^d(\mathcal{X}))$$
$$= -\int_{\mathcal{X}} \frac{\partial f}{\partial x_i}(x)\, D^{-i}\big(F_P(x)-F_Q(x)\big)\,dx \quad (f \text{ vanishes at the boundary in } W^{1,2}_0(\mathcal{X},\mu)).$$
Let $D^{-} = (D^{-1}, \dots, D^{-d})$. Averaging the last expression over $i = 1, \dots, d$, it follows that
$$\mathbb{E}_{x\sim P} f(x) - \mathbb{E}_{x\sim Q} f(x) = \frac{1}{d}\sum_{i=1}^{d}\int_{\mathcal{X}} \frac{\partial f}{\partial x_i}(x)\, D^{-i}\big(F_Q(x)-F_P(x)\big)\,dx = \frac{1}{d}\int_{\mathcal{X}} \big\langle \nabla_x f(x),\, D^{-}\big(F_Q(x)-F_P(x)\big)\big\rangle\,dx. \qquad (18)$$
Let us define $L^2(\mathcal{X},\mu)^d$, the space of measurable functions from $\mathcal{X}$ to $\mathbb{R}^d$ that are square integrable with respect to $\mu$. For $g,h \in L^2(\mathcal{X},\mu)^d$ the dot product and the norm are given by
$$\langle g,h\rangle_{L^2(\mathcal{X},\mu)^d} = \int_{\mathcal{X}} \langle g(x),h(x)\rangle_{\mathbb{R}^d}\,\mu(x)\,dx, \qquad \|g\|_{L^2(\mathcal{X},\mu)^d} = \sqrt{\int_{\mathcal{X}} \|g(x)\|^2_{\mathbb{R}^d}\,\mu(x)\,dx}.$$
We can write the objective in Equation (18) in terms of the dot product in $L^2(\mathcal{X},\mu)^d$:
$$\mathbb{E}_{x\sim P} f(x) - \mathbb{E}_{x\sim Q} f(x) = \frac{1}{d}\,\Big\langle \nabla_x f,\ \frac{D^{-}(F_Q-F_P)}{\mu}\Big\rangle_{L^2(\mathcal{X},\mu)^d}. \qquad (19)$$
On the other hand, the constraint in Equation (3) can be written in terms of the norm in $L^2(\mathcal{X},\mu)^d$:
$$\|f\|_{W^{1,2}_0(\mathcal{X},\mu)} = \|\nabla_x f\|_{L^2(\mathcal{X},\mu)^d}. \qquad (20)$$
Replacing the objective and the constraint given in Equations (19) and (20) in Equation (3), we obtain:
$$\mathcal{S}_\mu(P,Q) = \frac{1}{d}\sup_{f,\ \|\nabla_x f\|_{L^2(\mathcal{X},\mu)^d}\le 1}\Big\langle \nabla_x f,\ \frac{D^{-}(F_Q-F_P)}{\mu}\Big\rangle_{L^2(\mathcal{X},\mu)^d} = \frac{1}{d}\sup_{g\in L^2(\mathcal{X},\mu)^d,\ \|g\|_{L^2(\mathcal{X},\mu)^d}\le 1}\Big\langle g,\ \frac{D^{-}(F_Q-F_P)}{\mu}\Big\rangle_{L^2(\mathcal{X},\mu)^d}.$$
By definition of $\|\cdot\|_{L^2(\mathcal{X},\mu)^d}$, the supremum on the right-hand side is attained at
$$g^*(x) = \frac{D^{-}F_Q(x)-D^{-}F_P(x)}{\mu(x)}\cdot\frac{1}{\big\|\frac{D^{-}(F_Q-F_P)}{\mu}\big\|_{L^2(\mathcal{X},\mu)^d}},$$
so that
$$\mathcal{S}_\mu(P,Q) = \frac{1}{d}\,\Big\|\frac{D^{-}(F_Q-F_P)}{\mu}\Big\|_{L^2(\mathcal{X},\mu)^d} = \frac{1}{d}\sqrt{\int_{\mathcal{X}}\sum_{i=1}^{d}\Big(\frac{D^{-i}F_Q(x)-D^{-i}F_P(x)}{\mu(x)}\Big)^2\,\mu(x)\,dx}.$$
Hence we also find that the optimal critic $f^*$ satisfies
$$\nabla_x f^*(x) = \frac{D^{-}F_Q(x)-D^{-}F_P(x)}{\mu(x)}\cdot\frac{1}{\big\|\frac{D^{-}(F_Q-F_P)}{\mu}\big\|_{L^2(\mathcal{X},\mu)^d}}.$$
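As a quick sanity check of this closed form: in one dimension $D^{-}$ reduces to the identity, so the Sobolev IPM becomes $\mathcal{S}_\mu(P,Q) = \|(F_Q - F_P)/\mu\|_{L^2(\mathcal{X},\mu)}$. The small numerical sketch below (an illustration, not the paper's code) evaluates this quantity on a grid for two example univariate Gaussians, with the dominant measure chosen as $\mu = (P+Q)/2$; the grid range, distributions, and choice of $\mu$ are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

# Grid-based evaluation of S_mu(P, Q) = || (F_Q - F_P) / mu ||_{L2(mu)} in 1-D.
x = np.linspace(-8.0, 8.0, 8001)                           # integration grid
dx = x[1] - x[0]
P, Q = norm(loc=0.0, scale=1.0), norm(loc=1.0, scale=1.0)  # two example Gaussians
mu = 0.5 * (P.pdf(x) + Q.pdf(x))                           # dominant measure mu = (P + Q) / 2
cdf_gap = Q.cdf(x) - P.cdf(x)                              # F_Q - F_P
# integrand ((F_Q - F_P) / mu)^2 * mu simplifies to (F_Q - F_P)^2 / mu
sobolev_ipm = np.sqrt(np.sum(cdf_gap ** 2 / mu) * dx)
print(f"1-D Sobolev IPM estimate: {sobolev_ipm:.4f}")
```

The same grid construction extends in principle to the $d$-dimensional formula, at the cost of estimating the coordinate-wise derivatives $D^{-i}F$.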
Proof of Lemma 1. For any $f \in \mathcal{H}$ with $\|f\|_{W^{1,2}_0} \le 1$, using Equation (18) and the expression of $\nabla_x f^*$ from Theorem 2:
$$\mathbb{E}_{x\sim P} f(x) - \mathbb{E}_{x\sim Q} f(x) = \frac{1}{d}\int_{\mathcal{X}}\big\langle \nabla_x f(x),\, D^{-}\big(F_Q(x)-F_P(x)\big)\big\rangle\,dx = \mathcal{S}_\mu(P,Q)\int_{\mathcal{X}}\Big\langle \nabla_x f(x),\ \frac{D^{-}\big(F_Q(x)-F_P(x)\big)}{d\,\mathcal{S}_\mu(P,Q)\,\mu(x)}\Big\rangle\,\mu(x)\,dx$$
$$= \mathcal{S}_\mu(P,Q)\int_{\mathcal{X}}\big\langle \nabla_x f(x),\,\nabla_x f^*(x)\big\rangle\,\mu(x)\,dx = \mathcal{S}_\mu(P,Q)\,\langle f, f^*\rangle_{W^{1,2}_0}.$$
Hence we have
$$\sup_{f\in\mathcal{H},\ \|f\|_{W^{1,2}_0}\le 1}\ \mathbb{E}_{x\sim P} f(x) - \mathbb{E}_{x\sim Q} f(x) = \mathcal{S}_\mu(P,Q)\ \sup_{f\in\mathcal{H},\ \|f\|_{W^{1,2}_0}\le 1}\ \langle f, f^*\rangle_{W^{1,2}_0},$$
and it follows that
$$\mathcal{S}_{\mathcal{H}}(P,Q) = \mathcal{S}_\mu(P,Q)\ \sup_{f\in\mathcal{H},\ \|f\|_{W^{1,2}_0}\le 1}\ \langle f, f^*\rangle_{W^{1,2}_0}.$$
We conclude that the Sobolev IPM can be approximated in an arbitrary function space $\mathcal{H}$ as long as $\mathcal{H}$ has enough capacity to approximate the optimal critic. Interestingly, the approximation error is now measured in the Sobolev semi-norm, whereas for the Fisher IPM it was measured in the Lebesgue norm. Approximation in the Sobolev semi-norm is stronger than in the Lebesgue norm, as given by the Poincare inequality ($\|f\|_{L^2} \le C\,\|f\|_{W^{1,2}_0}$): if the error goes to zero in the Sobolev sense it also goes to zero in the Lebesgue sense, but the converse is not true.

Proof of Theorem 3. The proof follows arguments similar to those used in the analysis of Laplacian regularization in semi-supervised learning by (Alaoui et al., 2016). Recall that
$$\mathcal{S}_\mu(P,Q) = \sup_{f\in W^{1,2}_0}\ \big\{\mathbb{E}_{x\sim P}[f(x)] - \mathbb{E}_{x\sim Q}[f(x)]\big\} \quad \text{s.t.} \quad \mathbb{E}_{x\sim\mu}\,\|\nabla_x f(x)\|^2_2 \le 1. \qquad (21)$$
Note that this problem is convex in $f$ (Ekeland & Turnbull, 1983). Writing the Lagrangian for Equation (21), and denoting $\mu_1(x) = P(x) - Q(x)$, we get:
$$\mathcal{L}(f,\lambda) = \mathbb{E}_{x\sim P}[f(x)] - \mathbb{E}_{x\sim Q}[f(x)] + \lambda\Big(1 - \mathbb{E}_{x\sim\mu}\,\|\nabla_x f(x)\|^2_2\Big) = \int_{\mathcal{X}} f(x)\,\mu_1(x)\,dx + \lambda\Big(1 - \int_{\mathcal{X}}\|\nabla_x f(x)\|^2_2\,\mu(x)\,dx\Big).$$
To find the optimal $f$, we apply the KKT conditions. From the calculus of variations, for a test function $h$:
$$\mathcal{L}(f+\epsilon h,\lambda) = \int_{\mathcal{X}}\big(f(x)+\epsilon h(x)\big)\,\mu_1(x)\,dx + \lambda\Big(1 - \int_{\mathcal{X}}\big(\|\nabla_x f(x)\|^2_2 + 2\epsilon\,\langle\nabla_x f(x),\nabla_x h(x)\rangle + O(\epsilon^2)\big)\,\mu(x)\,dx\Big)$$
$$= \mathcal{L}(f,\lambda) + \epsilon\Big[\int_{\mathcal{X}} h(x)\,\mu_1(x)\,dx - 2\lambda\int_{\mathcal{X}}\langle\nabla_x f(x),\nabla_x h(x)\rangle\,\mu(x)\,dx\Big] + O(\epsilon^2).$$
Now we integrate by parts and set $h$ to be zero at the boundary, as in (Alaoui et al., 2016):
$$\int_{\mathcal{X}}\langle\nabla_x f(x),\nabla_x h(x)\rangle\,\mu(x)\,dx = \int_{\mathcal{X}}\langle\mu(x)\,\nabla_x f(x),\nabla_x h(x)\rangle\,dx = \oint_{\partial\mathcal{X}} h(x)\,\mu(x)\,\nabla_x f(x)\cdot n(x)\,dS(x) - \int_{\mathcal{X}}\mathrm{div}\big(\mu(x)\,\nabla_x f(x)\big)\,h(x)\,dx = -\int_{\mathcal{X}}\mathrm{div}\big(\mu(x)\,\nabla_x f(x)\big)\,h(x)\,dx.$$
Hence
$$\mathcal{L}(f+\epsilon h,\lambda) = \mathcal{L}(f,\lambda) + \epsilon\int_{\mathcal{X}}\Big[\mu_1(x) + 2\lambda\,\mathrm{div}\big(\mu(x)\,\nabla_x f(x)\big)\Big]\,h(x)\,dx + O(\epsilon^2),$$
so the functional derivative of $\mathcal{L}(f,\lambda)$, evaluated against any test function $h$ vanishing on the boundary, is
$$\lim_{\epsilon\to 0}\frac{\mathcal{L}(f+\epsilon h,\lambda)-\mathcal{L}(f,\lambda)}{\epsilon} = \int_{\mathcal{X}}\Big[\mu_1(x) + 2\lambda\,\mathrm{div}\big(\mu(x)\,\nabla_x f(x)\big)\Big]\,h(x)\,dx.$$
Setting this derivative to zero and absorbing the constant factor 2 into the multiplier, the first-order optimality conditions for $(f^*,\lambda^*)$ read:
$$\mu_1(x) + \lambda^*\,\mathrm{div}\big(\mu(x)\,\nabla_x f^*(x)\big) = 0, \qquad (22)$$
$$\int_{\mathcal{X}}\|\nabla_x f^*(x)\|^2\,\mu(x)\,dx = 1. \qquad (23)$$
Note that (see for example (Alaoui et al., 2016)):
$$\mathrm{div}\big(\mu(x)\,\nabla_x f^*(x)\big) = \mu(x)\,\nabla^2 f^*(x) + \langle\nabla_x\mu(x),\nabla_x f^*(x)\rangle,$$
since $\mathrm{div}\big(\nabla_x f^*(x)\big) = \nabla^2 f^*(x)$. Hence, from Equation (22),
$$\mu_1(x) + \lambda^*\,\mu(x)\,\nabla^2 f^*(x) + \lambda^*\,\langle\nabla_x\mu(x),\nabla_x f^*(x)\rangle = 0,$$
and dividing by $\lambda^*\mu(x)$ we obtain that $(f^*,\lambda^*)$ satisfies
$$\nabla^2 f^*(x) + \langle\nabla_x\log\mu(x),\nabla_x f^*(x)\rangle + \frac{P(x)-Q(x)}{\lambda^*\,\mu(x)} = 0, \qquad (25)$$
$$\int_{\mathcal{X}}\|\nabla_x f^*(x)\|^2\,\mu(x)\,dx = 1. \qquad (26)$$
Let us verify that the optimal critic found in the geometric characterization of the Sobolev IPM (Theorem 2), which satisfies
$$\partial_i f^*(x) = \frac{\partial f^*(x)}{\partial x_i} = \frac{D^{-i}F_Q(x)-D^{-i}F_P(x)}{\lambda^*\,d\,\mu(x)}, \quad i\in[d], \qquad (27)$$
indeed satisfies this PDE. From Equation (27), we compute $\frac{\partial^2 f^*(x)}{\partial x_i^2}$ for all $i$, using the quotient rule and $\partial_i\big(D^{-i}F_Q(x)-D^{-i}F_P(x)\big) = Q(x)-P(x)$:
$$\frac{\partial^2 f^*(x)}{\partial x_i^2} = \frac{1}{\lambda^* d}\ \frac{\big(Q(x)-P(x)\big)\,\mu(x) - \big(D^{-i}F_Q(x)-D^{-i}F_P(x)\big)\,\partial_i\mu(x)}{\mu^2(x)} = \frac{Q(x)-P(x)}{\lambda^* d\,\mu(x)} - \frac{\partial_i\mu(x)}{\mu(x)}\,\partial_i f^*(x),$$
that is,
$$\frac{\partial^2 f^*(x)}{\partial x_i^2} + \frac{\partial_i\mu(x)}{\mu(x)}\,\partial_i f^*(x) + \frac{P(x)-Q(x)}{\lambda^*\,d\,\mu(x)} = 0. \qquad (28)$$
Summing Equation (28) over $i\in[d]$, we recover Equation (25). As a result, the solution $f^*$ of the partial differential equation given in Equation (25) satisfies
$$\frac{\partial f^*(x)}{\partial x_i} = \frac{D^{-i}F_Q(x)-D^{-i}F_P(x)}{\lambda^*\,d\,\mu(x)}, \quad i\in[d].$$
Using the constraint (26) we obtain the value of $\lambda^*$:
$$\int_{\mathcal{X}}\|\nabla_x f^*(x)\|^2\,\mu(x)\,dx = 1 \ \Longrightarrow\ \lambda^* = \frac{1}{d}\sqrt{\int_{\mathcal{X}}\sum_{i=1}^{d}\Big(\frac{D^{-i}F_Q(x)-D^{-i}F_P(x)}{\mu(x)}\Big)^2\,\mu(x)\,dx} = \mathcal{S}_\mu(P,Q).$$

Proof of Theorem 4. Define the Stein operator (Oates et al., 2017):
$$\mathcal{T}(\mu)[\nabla_x f(x)] = \frac{1}{2}\Big[\langle\nabla_x f(x),\nabla_x\log\mu(x)\rangle + \mathrm{div}\big(\nabla_x f(x)\big)\Big] = \frac{1}{2}\Big[\langle\nabla_x f(x),\nabla_x\log\mu(x)\rangle + \nabla^2 f(x)\Big].$$
This operator was later used in defining the Stein discrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Chwialkowski et al., 2016; Liu, 2017). Recall that Barbour's generator theory provides a way of constructing such operators that produce mean-zero functions under $\mu$; it is easy to verify that
$$\mathbb{E}_{x\sim\mu}\,\mathcal{T}(\mu)\,\nabla_x f(x) = 0.$$
Recall that this operator arises from the overdamped Langevin diffusion, defined by the stochastic differential equation
$$dx_t = \tfrac{1}{2}\,\nabla_x\log\mu(x_t)\,dt + dW_t,$$
where $(W_t)_{t\ge 0}$ is a Wiener process. This is related to plug-and-play networks for generating samples from a known distribution by simulating this stochastic differential equation.
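As an aside, the overdamped Langevin diffusion above can be simulated with a standard Euler-Maruyama discretization when the score $\nabla_x \log \mu$ is available. The sketch below is an illustration under that assumption; the score function `grad_log_mu`, the step size, and the standard-Gaussian example are hypothetical choices, not part of the paper.

```python
import torch

def langevin_step(x, grad_log_mu, dt=1e-2):
    """One Euler-Maruyama step of dx_t = 0.5 * grad_x log mu(x_t) dt + dW_t."""
    noise = torch.randn_like(x) * dt ** 0.5       # dW_t increment has std sqrt(dt)
    return x + 0.5 * dt * grad_log_mu(x) + noise

# Example: approximate samples from a standard Gaussian, whose score is -x.
x = torch.randn(1024, 2) * 3.0                    # arbitrary initialization
for _ in range(2000):
    x = langevin_step(x, lambda z: -z)
```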
From Theorem 3, it is easy to see that the PDE satisfied by the Sobolev critic $(f^*, \lambda^* = \mathcal{S}_\mu(P,Q))$ can be written in terms of the Stein operator as follows:
$$\mathcal{T}(\mu)[\nabla_x f^*](x) = \frac{1}{2\lambda^*}\ \frac{Q(x)-P(x)}{\mu(x)}.$$
Taking absolute values and the expectation with respect to $Q$:
$$\big|\mathbb{E}_{x\sim Q}[\mathcal{T}(\mu)\,\nabla_x f^*(x)]\big| = \frac{1}{2\,\mathcal{S}_\mu(P,Q)}\,\Big|\mathbb{E}_{x\sim Q}\Big[\frac{Q(x)-P(x)}{\mu(x)}\Big]\Big|.$$
Recall the definition of the Stein discrepancy:
$$\mathcal{S}(Q,\mu) = \sup_{g\in L^2(\mathcal{X},\mu)^d,\ \|g\|_{L^2(\mathcal{X},\mu)^d}\le 1}\ \big|\mathbb{E}_{x\sim Q}[\mathcal{T}(\mu)\,g(x)]\big|.$$
Since $\|\nabla_x f^*\|_{L^2(\mathcal{X},\mu)^d} = 1$, the Sobolev IPM critic satisfies
$$\big|\mathbb{E}_{x\sim Q}[\mathcal{T}(\mu)\,\nabla_x f^*(x)]\big| \le \mathcal{S}(Q,\mu).$$
Hence we have the following inequality:
$$\frac{1}{2\,\mathcal{S}_\mu(P,Q)}\,\Big|\mathbb{E}_{x\sim Q}\Big[\frac{Q(x)-P(x)}{\mu(x)}\Big]\Big| \le \mathcal{S}(Q,\mu),$$
which is equivalent to
$$\Big|\mathbb{E}_{x\sim Q}\Big[\frac{Q(x)-P(x)}{\mu(x)}\Big]\Big| \le 2\,\underbrace{\mathcal{S}(Q,\mu)}_{\text{Stein goodness of fit of the model } Q \text{ w.r.t. } \mu}\ \underbrace{\mathcal{S}_\mu(P,Q)}_{\text{Sobolev distance}}.$$
Similarly we obtain:
$$\Big|\mathbb{E}_{x\sim P}\Big[\frac{Q(x)-P(x)}{\mu(x)}\Big]\Big| \le 2\,\underbrace{\mathcal{S}(P,\mu)}_{\text{Stein goodness of fit of } \mu \text{ w.r.t. } P}\ \underbrace{\mathcal{S}_\mu(P,Q)}_{\text{Sobolev distance}}.$$
For instance, consider $\mu = P$; we then have
$$\Big|\mathbb{E}_{x\sim Q}\Big[\frac{Q(x)}{P(x)}\Big] - 1\Big| \le 2\,\mathcal{S}(Q,P)\,\mathcal{S}_P(P,Q).$$
Note that the left-hand side of this inequality is not the total variation distance. Hence, for a sequence $Q_n$, if the Sobolev distance $\mathcal{S}_P(P,Q_n) \to 0$, the ratio $r_n(x) = Q_n(x)/P(x)$ converges in expectation (with respect to $Q_n$) to 1, and the speed of convergence is given by the Stein discrepancy $\mathcal{S}(Q_n,P)$. One important observation is that convergence of the PDF ratio is weaker than convergence of the conditional CDFs, as measured by the Sobolev distance, and than the goodness of fit of the score function, as measured by the Stein discrepancy.

C TEXT EXPERIMENTS: ADDITIONAL PLOTS

Comparison of annealed versus non-annealed smoothing of $P_r$ in Sobolev GAN.

Figure 6: Comparison of annealed versus non-annealed smoothing of $P_r$ in Sobolev GAN, shown against $g_\theta$ iterations for (S, D=res, G=res) with $\mu_s(\sigma)$ and annealed $\mu^a_s(\sigma_0)$ for $\sigma, \sigma_0 \in \{0.1, 1.0, 1.5\}$. We see that annealed smoothing outperforms the non-annealed smoothing experiments.

Sobolev GAN versus WGAN-GP with RNN. We fix the generator architecture to ResNets. The experiments using an RNN (GRU) as the critic architecture for WGAN-GP and Sobolev GAN are shown in Figure 7, where we used $\mu = \mu_{GP}$ in both cases. We only apply gradient clipping to stabilize training, without other tricks (a minimal sketch of this clipping step is shown below). We observe that using an RNN degrades the performance. We think this is due to an optimization issue and to the difficulty of training an RNN under the GAN objective without any pre-training or conditioning.

Figure 7: Results of (a) WGAN-GP and (b) Sobolev GAN with RNNs, shown against $g_\theta$ iterations for the configurations (D=res, G=res, $\mu_{GP}$), (D=rnn, G=res, $\mu_{GP}$) and (D=rnn, G=rnn, $\mu_{GP}$).
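For concreteness, gradient clipping of a GRU critic can be applied between the backward pass and the optimizer step. The snippet below is a minimal sketch under assumed settings; the GRU sizes, learning rate, clipping threshold, and the placeholder loss are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

critic = nn.GRU(input_size=32, hidden_size=64, batch_first=True)   # illustrative GRU critic
optimizer = torch.optim.Adam(critic.parameters(), lr=1e-4)

x = torch.randn(8, 16, 32)            # (batch, sequence length, embedding dim), dummy input
output, _ = critic(x)
loss = output.mean()                  # placeholder standing in for the IPM critic loss

optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(critic.parameters(), max_norm=1.0)  # clip the global gradient norm
optimizer.step()
```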
Generated text samples:

That time out very came of their
But it Gaylen was strosd of the
The case had caurgr thing it las
Gropate evong hould exficioul pa
The See qust , so make starter S
Cauntsrs of oprnnd accused there
Compara Tizo is thene ano hastin
With Earaie Ïpptaring very woutd
When livht think not Braoph SPec
The phan teiled " Policy , tor e
Coydey GN11 ) -s pail is uniled
That 's conpect d larce antin-iu
But it 's familions a, IHican er
Nit was bad a year hitoloy hodat
And prenches gless fram Avers aa
If 's might , comp-rime at overg
Jeads years lead of gonguied to
Asong he into get his ressson '
Nou projecti y bated with te de
Cradfs Cuel sad out the Gutoor .

(WGAN-GP, D=res, G=res, $\mu_{GP}$)

The Loraia arnup to Nou ands in
Nany tecalliexpeace in that veel
" It not has allown ourn Ehough
This bastly , suphoriation almo
" The pasts of a nummers said Nh
A loved the Cam feal switht with
Apenole 's no. on walling any Cc
Furaymand chainting suppinally s
Larts Ginis , R-Ra tarkedment st
It what wowed night a chiols as
Overy shy really -- " Cyphil mad
She wore will be also a marged w
But Rere tained the sian hoy at
Hends to Won) ).--u22y this he
Felecter indoadoy is ne rtlayne
Pet isser juastivus also first i
But you had not of hiscered tnd
Thoir Taray taked an intervatter
She vagger conmisurespis herkied
His juestor foy not ar oreeoon t
Buile president up thepunsit an
In ealled osficers in a rould a
The mome of tot of not shodld ye
It 'snsacopprctionialso muss y ,

(Sobolev GAN, D=res, G=res, $\mu_{GP}$)

In reperted a "a Mametan 's Gegtn
Sime Vmone onerge recighed a an
Rechardty ) " Gush 's womes it l
Catious paice of an sepurying or
The vews dolerated badds to appe
Orgarda of to the cheek-ng nees
You all tnl torgave takely his e
" Ancrops than , Mumine of the s
" The Bontement is shouts will b
One if the rops of the Cutlent h
While paliless streaghal dustist
Everyial with a Ecenbers are car
It has an atton