# McGan: Mean and Covariance Feature Matching GAN

Youssef Mroueh* 1 2, Tom Sercu* 1 2, Vaibhava Goel 2

*Equal contribution. 1 AI Foundations, IBM T.J. Watson Research Center, NY, USA. 2 Watson Multimodal Algorithms and Engines Group, IBM T.J. Watson Research Center, NY, USA. Correspondence to: Youssef Mroueh. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).

Abstract

We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we call McGan. McGan minimizes a meaningful loss between distributions.

1. Introduction

Unsupervised learning of distributions is an important problem, in which we aim to learn underlying features that unveil the hidden structure in the data. The classic approach to learning distributions is to explicitly parametrize the data likelihood and fit this model by maximizing the likelihood of the real data. An alternative recent approach is to learn a generative model of the data without explicit parametrization of the likelihood. Variational Auto-Encoders (VAE) (Kingma & Welling, 2013) and Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) fall under this category.

We focus on the GAN approach. In a nutshell, GANs learn a generator of the data via a min-max game between the generator and a discriminator, which learns to distinguish between real and fake samples. In this work we focus on the objective function that is being minimized between the learned generator distribution Pθ and the real data distribution Pr. The original work of (Goodfellow et al., 2014) showed that in GAN this objective is the Jensen-Shannon divergence. (Nowozin et al., 2016) showed that other ϕ-divergences can be successfully used. The Maximum Mean Discrepancy (MMD) objective for GAN training was proposed in (Li et al., 2015; Dziugaite et al., 2015). As shown empirically in (Salimans et al., 2016), one can train the GAN discriminator using the objective of (Goodfellow et al., 2014) while training the generator using mean feature matching. An energy based objective for GANs was also developed recently (Zhao et al., 2017). Finally, closely related to our paper, the recent Wasserstein GAN (WGAN) of (Arjovsky et al., 2017) proposed to use the Earth Mover (EM) distance as an objective for training GANs. Furthermore, (Arjovsky et al., 2017) show that the EM objective has many advantages, as the loss correlates with the quality of the generated samples and the mode dropping problem is reduced in WGAN.

In this paper, inspired by the MMD distance and the kernel mean embedding of distributions (Muandet et al., 2016), we propose to embed distributions in a finite dimensional feature space and to match them based on their mean and covariance feature statistics. Incorporating first and second order statistics has a better chance to capture the various modes of the distribution.
While mean feature matching was empirically used in (Salimans et al., 2016), we show in this work that it is theoretically grounded: similarly to the EM distance in (Arjovsky et al., 2017), mean and covariance feature matching of two distributions can be written as a distance in the framework of Integral Probability Metrics (IPM) (Müller, 1997). To match the means we can use any ℓq norm, hence we refer to the mean matching IPM as IPMµ,q. For matching covariances we consider in this paper the Ky Fan norm, which can be computed cheaply without explicitly constructing the full covariance matrices, and we refer to the corresponding IPM as IPMΣ.

Our technical contributions can be summarized as follows:

a) We show in Section 3 that the ℓq mean feature matching IPMµ,q has two equivalent primal and dual formulations, and can be used as an objective for GAN training in both formulations.

b) We show in Section 3.3 that the parametrization used in Wasserstein GAN corresponds to ℓ1 mean feature matching GAN (IPMµ,1 GAN in our framework).

c) We show in Section 4.2 that the covariance feature matching IPMΣ also admits primal and dual formulations, and can be used as an objective for GAN training.

d) Similar to Wasserstein GAN, we show that mean feature matching and covariance matching GANs (McGan) are stable to train, have reduced mode dropping, and the IPM loss correlates with the quality of the generated samples.

2. Integral Probability Metrics

We define in this Section IPMs as a distance between distributions. Intuitively, each IPM finds a critic f (Arjovsky et al., 2017) which maximally discriminates between the distributions.

2.1. IPM Definition

Consider a compact space X in R^d. Let F be a set of measurable, bounded, real valued functions on X. Let P(X) be the set of measurable probability distributions on X. Given two probability distributions P, Q ∈ P(X), the Integral Probability Metric (IPM) indexed by the function space F is defined as follows (Müller, 1997):

$$d_{\mathcal{F}}(\mathbb{P},\mathbb{Q}) = \sup_{f \in \mathcal{F}} \Big| \mathbb{E}_{x \sim \mathbb{P}} f(x) - \mathbb{E}_{x \sim \mathbb{Q}} f(x) \Big|.$$

In this paper we are interested in symmetric function spaces F, i.e. f ∈ F implies −f ∈ F, hence we can write the IPM in that case without the absolute value:

$$d_{\mathcal{F}}(\mathbb{P},\mathbb{Q}) = \sup_{f \in \mathcal{F}} \Big\{ \mathbb{E}_{x \sim \mathbb{P}} f(x) - \mathbb{E}_{x \sim \mathbb{Q}} f(x) \Big\}. \quad (1)$$

It is easy to see that d_F defines a pseudo-metric over P(X): d_F is non-negative, symmetric, and satisfies the triangle inequality. (A pseudo-metric means that d_F(P, P) = 0, but d_F(P, Q) = 0 does not necessarily imply P = Q.) By choosing F appropriately (Sriperumbudur et al., 2012; 2009), various distances between probability measures can be defined. In the next subsection, following (Arjovsky et al., 2017; Li et al., 2015; Dziugaite et al., 2015), we show how to use IPMs to learn generative models of distributions; we then specify a special set of functions F that makes the learning tractable.

2.2. Learning Generative Models with IPM

In order to learn a generative model of a distribution Pr ∈ P(X), we learn a function gθ : Z ⊂ R^{n_z} → X such that for z ∼ pz, the distribution of gθ(z) is close to the real data distribution Pr, where pz is a fixed distribution on Z (for instance z ∼ N(0, I_{n_z})). Let Pθ be the distribution of gθ(z), z ∼ pz. Using an IPM indexed by a function class F we shall therefore solve the following problem:

$$\min_{g_\theta} d_{\mathcal{F}}(\mathbb{P}_r, \mathbb{P}_\theta) \quad (2)$$

Hence this amounts to solving the following min-max problem:

$$\min_{g_\theta} \sup_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim \mathbb{P}_r} f(x) - \mathbb{E}_{z \sim p_z} f(g_\theta(z))$$
Given samples {xi, i = 1 . . . N} from Pr and samples {zj, j = 1 . . . M} from pz, we shall solve the following empirical problem:

$$\min_{g_\theta} \sup_{f \in \mathcal{F}} \frac{1}{N}\sum_{i=1}^{N} f(x_i) - \frac{1}{M}\sum_{j=1}^{M} f(g_\theta(z_j)),$$

in the following we consider for simplicity M = N.

3. Mean Feature Matching GAN

In this Section we introduce a class of functions F of the form ⟨v, Φω(x)⟩, where v ∈ R^m and Φω : X → R^m is a non-linear feature map (typically parametrized by a neural network). We show in this Section that the IPM defined by this function class corresponds to the distance between the means of the distributions in the Φω space.

3.1. IPMµ,q: Mean Matching IPM

More formally, consider the following function space:

$$\mathcal{F}_{v,\omega,p} = \big\{ f(x) = \langle v, \Phi_\omega(x) \rangle \;\big|\; v \in \mathbb{R}^m, \|v\|_p \le 1, \; \Phi_\omega : \mathcal{X} \to \mathbb{R}^m, \; \omega \in \Omega \big\},$$

where ‖·‖p is the ℓp norm. Fv,ω,p is the space of bounded linear functions defined in the non-linear feature space induced by the parametric feature map Φω. Φω is typically a multi-layer neural network. The parameter space Ω is chosen so that the function space F is bounded. Note that for a given ω, Fv,ω,p is a finite dimensional Hilbert space.

We recall here simple definitions on dual norms that will be necessary for the analysis in this Section. Let p, q ∈ [1, ∞] be such that 1/p + 1/q = 1. By duality of norms we have ‖x‖q = max_{v, ‖v‖p ≤ 1} ⟨v, x⟩, and the Hölder inequality gives |⟨x, y⟩| ≤ ‖x‖p ‖y‖q. From the Hölder inequality we obtain the following bound:

$$|f(x)| = |\langle v, \Phi_\omega(x) \rangle| \le \|v\|_p \, \|\Phi_\omega(x)\|_q \le \|\Phi_\omega(x)\|_q.$$

To ensure that f is bounded, it is enough to consider Ω such that ‖Φω(x)‖q ≤ B, ∀x ∈ X. Given that the space X is bounded, it is sufficient to control the norm of the weights and biases of the neural network Φω by regularizing the ℓ∞ norms (clamping) or ℓ2 norms (weight decay) to ensure the boundedness of Fv,ω,p.

Figure 1. Motivating example on synthetic data in 2D, showing how different components in covariance matching can target different regions of the input space. (a) IPMµ,2: level sets of f(x) = ⟨v*, Φω(x)⟩, with v* = (µω(P) − µω(Q)) / ‖µω(P) − µω(Q)‖2. (b) IPMΣ: level sets of f(x) = Σ_{j=1}^{k} ⟨uj, Φω(x)⟩⟨vj, Φω(x)⟩ for k = 3, with uj, vj the left and right singular vectors of Σω(P) − Σω(Q). (c)-(e) Level sets of the individual components ⟨uj, Φω(x)⟩⟨vj, Φω(x)⟩, j = 1, 2, 3. Mean matching (a) is not able to capture the two modes of the bimodal real distribution P and assigns higher values to one of the modes. Covariance matching (b) is composed of the sum of three components (c)+(d)+(e), corresponding to the top three critic directions. Interestingly, the first direction (c) focuses on the fake data Q, the second direction (d) focuses on the real data, while the third direction (e) is mode selective. This suggests that using covariance matching would help reduce mode dropping in GAN. In this toy example Φω is a fixed random Fourier feature map (Rahimi & Recht, 2008) of a Gaussian kernel (i.e. a finite dimensional approximation).

Now that we have ensured the boundedness of Fv,ω,p, we look at its corresponding IPM:

$$\begin{aligned} d_{\mathcal{F}_{v,\omega,p}}(\mathbb{P},\mathbb{Q}) &= \sup_{f \in \mathcal{F}_{v,\omega,p}} \mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x) \\ &= \max_{\omega \in \Omega} \; \max_{v, \|v\|_p \le 1} \Big\langle v, \; \mathbb{E}_{x\sim\mathbb{P}} \Phi_\omega(x) - \mathbb{E}_{x\sim\mathbb{Q}} \Phi_\omega(x) \Big\rangle \\ &= \max_{\omega \in \Omega} \; \big\| \mu_\omega(\mathbb{P}) - \mu_\omega(\mathbb{Q}) \big\|_q, \end{aligned}$$

where we used the linearity of the function class and of the expectation in the first equality, and the definition of the dual norm ‖·‖q in the last equality, together with our definition of the mean feature embedding of a distribution P ∈ P(X):

$$\mu_\omega(\mathbb{P}) = \mathbb{E}_{x \sim \mathbb{P}} \big[ \Phi_\omega(x) \big] \in \mathbb{R}^m.$$

We see that the IPM indexed by Fv,ω,p corresponds to the Maximum Mean feature Discrepancy between the two distributions, where the maximum is taken over the parameter set Ω and the discrepancy is measured in the ℓq sense between the mean feature embeddings of P and Q. In other words, this IPM is equal to the worst case ℓq distance between mean feature embeddings of distributions. We refer in what follows to d_{Fv,ω,p} as IPMµ,q.
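To make the last expression concrete, the following is a minimal sketch (ours, not from the paper's released code) of the empirical ℓq distance between mean feature embeddings, assuming `phi_real` and `phi_fake` are mini-batches of features Φω(xi) and Φω(gθ(zj)) already computed by the feature network; PyTorch is used for illustration.

```python
import torch

def ipm_mu_q_hat(phi_real: torch.Tensor, phi_fake: torch.Tensor, q: float = 2) -> torch.Tensor:
    # Empirical estimate of IPM_{mu,q} for a fixed feature map Phi_w:
    #   || (1/N) sum_i Phi_w(x_i) - (1/N) sum_j Phi_w(g_theta(z_j)) ||_q
    # phi_real, phi_fake: (N, m) batches of critic features.
    mean_diff = phi_real.mean(dim=0) - phi_fake.mean(dim=0)
    return torch.norm(mean_diff, p=q)
```

Maximizing this quantity over ω (by backpropagating through the features) and minimizing it over θ recovers the adversarial game formalized in Section 3.2 below.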
3.2. Mean Feature Matching GAN

We turn now to the problem of learning generative models with IPMµ,q. Setting F to Fv,ω,p in Equation (2) yields the following min-max problem for learning generative models:

$$\min_{g_\theta} \max_{\omega \in \Omega} \max_{v, \|v\|_p \le 1} \mathcal{L}_\mu(v, \omega, \theta), \quad (3)$$

where

$$\mathcal{L}_\mu(v, \omega, \theta) = \Big\langle v, \; \mathbb{E}_{x \sim \mathbb{P}_r} \Phi_\omega(x) - \mathbb{E}_{z \sim p_z} \Phi_\omega(g_\theta(z)) \Big\rangle,$$

or equivalently, using the dual norm:

$$\min_{g_\theta} \max_{\omega \in \Omega} \big\| \mu_\omega(\mathbb{P}_r) - \mu_\omega(\mathbb{P}_\theta) \big\|_q, \quad (4)$$

where µω(Pθ) = E_{z∼pz} Φω(gθ(z)). We refer to formulations (3) and (4) as the primal and dual formulation respectively.

The dual formulation in Equation (4) has a simple interpretation as an adversarial learning game: while the feature space Φω tries to map the mean feature embeddings of the real distribution Pr and the fake distribution Pθ far apart (maximize the ℓq distance between the mean embeddings), the generator gθ tries to bring them close to one another. Hence we refer to this IPM as the mean matching IPM.

We devise empirical estimates of both formulations in Equations (3) and (4), given samples {xi, i = 1 . . . N} from Pr and {zi, i = 1 . . . N} from pz. The primal formulation (3) is more amenable to stochastic gradient descent since the expectation operation appears in a linear way in the cost function of Equation (3), while it is non-linear in the cost function of the dual formulation (4) (inside the norm). We give here the empirical estimate of the primal formulation, via the empirical estimate L̂µ(v, ω, θ) of the primal cost function:

$$(P_\mu): \quad \min_{g_\theta} \max_{\omega \in \Omega, \, v, \|v\|_p \le 1} \Big\langle v, \; \frac{1}{N}\sum_{i=1}^{N} \Phi_\omega(x_i) - \frac{1}{N}\sum_{i=1}^{N} \Phi_\omega(g_\theta(z_i)) \Big\rangle$$

An empirical estimate of the dual formulation can also be given as follows:

$$(D_\mu): \quad \min_{g_\theta} \max_{\omega \in \Omega} \Big\| \frac{1}{N}\sum_{i=1}^{N} \Phi_\omega(x_i) - \frac{1}{N}\sum_{i=1}^{N} \Phi_\omega(g_\theta(z_i)) \Big\|_q$$

In what follows we refer to the problems given in (Pµ) and (Dµ) as ℓq Mean Feature Matching GAN. Note that while (Pµ) does not need real samples for optimizing the generator, (Dµ) does need samples from both real and fake. Furthermore, we will need a large minibatch of real data in order to get a good estimate of the expectation. This makes the primal formulation more appealing computationally.
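As an illustration of the primal form (Pµ), here is a hedged sketch (ours) of the critic-side quantities: the empirical objective L̂µ and the projection of v onto the unit ℓp ball for the two cases, p = 2 and p = ∞, that are spelled out in Section 5. Variable names are assumptions for the example.

```python
import torch

def lmu_hat(v: torch.Tensor, phi_real: torch.Tensor, phi_fake: torch.Tensor) -> torch.Tensor:
    # Empirical primal objective (P_mu):
    #   <v, (1/N) sum_i Phi_w(x_i) - (1/N) sum_i Phi_w(g_theta(z_i))>
    return torch.dot(v, phi_real.mean(dim=0) - phi_fake.mean(dim=0))

def project_v(v: torch.Tensor, p: float = 2) -> torch.Tensor:
    # Projection of v back onto the unit l_p ball after a gradient ascent step.
    # p = 2: v <- min(1, 1/||v||_2) v ;  p = inf: pointwise clipping to [-1, 1].
    if p == 2:
        nv = v.norm(p=2).item()
        return v if nv <= 1.0 else v / nv
    return v.clamp(-1.0, 1.0)
```

The critic ascends L̂µ over (v, ω) under these constraints, while the generator descends it over θ, which only involves the fake term of L̂µ.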
3.3. Related Work

We show in this Section that several previous works on GANs can be written within the ℓq mean feature matching IPM (IPMµ,q) minimization framework:

a) Wasserstein GAN (WGAN): (Arjovsky et al., 2017) recently introduced Wasserstein GAN. While the main motivation of that paper is to consider the IPM indexed by Lipschitz functions on X, we show that the particular parametrization considered in (Arjovsky et al., 2017) corresponds to a mean feature matching IPM. Indeed, (Arjovsky et al., 2017) consider the function set parametrized by a convolutional neural network with a linear output layer and weight clipping. Written in our notation, the last linear layer corresponds to v, and the convolutional neural network below corresponds to Φω. Since v and ω are simultaneously clamped, this corresponds to restricting v to be in the ℓ∞ unit ball, and to defining in Ω constraints on the ℓ∞ norms of ω. In other words, (Arjovsky et al., 2017) consider functions in Fv,ω,p with p = ∞. Setting p = ∞ in Equation (3) and q = 1 in Equation (4), we see that in WGAN we are minimizing d_{Fv,ω,∞}, which corresponds to ℓ1 mean feature matching GAN.

b) MMD GAN: Let H be a Reproducing Kernel Hilbert Space (RKHS) with reproducing kernel k. For any valid PSD kernel k there exists an infinite dimensional feature map Φ : X → H such that k(x, y) = ⟨Φ(x), Φ(y)⟩_H. For an RKHS, Φ is usually noted k(x, ·) and satisfies the reproducing property: f(x) = ⟨f, Φ(x)⟩_H for all f ∈ H. Setting F = {f : ‖f‖_H ≤ 1} in Equation (1), the IPM d_F has a simple expression:

$$d_{\mathcal{F}}(\mathbb{P},\mathbb{Q}) = \sup_{f, \|f\|_{\mathcal{H}} \le 1} \Big\langle f, \; \mathbb{E}_{x\sim\mathbb{P}} \Phi(x) - \mathbb{E}_{x\sim\mathbb{Q}} \Phi(x) \Big\rangle = \big\| \mu(\mathbb{P}) - \mu(\mathbb{Q}) \big\|_{\mathcal{H}}, \quad (5)$$

where µ(P) = E_{x∼P} Φ(x) ∈ H is the so-called kernel mean embedding (Muandet et al., 2016). d_F in this case is the so-called Maximum Mean Discrepancy (MMD) (Gretton et al., 2012). Using the reproducing property, MMD has a closed form in terms of the kernel k. Note that IPMµ,2 is a special case of MMD when the feature map is finite dimensional, with the main difference that the feature map is fixed in the case of MMD and learned in the case of IPMµ,2 (see the sketch after this list). (Li et al., 2015; Dziugaite et al., 2015) showed that GANs can be learned using MMD with a fixed Gaussian kernel.

c) Improved GAN: Building on the pioneering work of (Goodfellow et al., 2014), (Salimans et al., 2016) suggested to learn the discriminator with the binary cross entropy criterion of GAN while learning the generator with ℓ2 mean feature matching. The main difference of our IPMµ,2 GAN is that both discriminator and generator are learned using the mean feature matching criterion, with additional constraints on Φω.
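To make the connection between MMD and finite dimensional mean matching concrete, here is a small illustrative sketch (ours, not part of the paper) of the ℓ2 mean matching distance computed with a fixed random Fourier feature approximation of a Gaussian kernel, the same kind of fixed Φω used in the toy example of Figure 1. Function and variable names are ours.

```python
import math
import torch

def random_fourier_features(x: torch.Tensor, W: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Finite-dimensional Gaussian-kernel feature map (Rahimi & Recht, 2008):
    #   Phi(x) = sqrt(2/m) * cos(x W^T + b)
    return math.sqrt(2.0 / W.shape[0]) * torch.cos(x @ W.t() + b)

def mmd_rff(x_real: torch.Tensor, x_fake: torch.Tensor, m: int = 128, sigma: float = 1.0) -> torch.Tensor:
    # With a *fixed* feature map, the l2 mean-matching distance below is a
    # finite-dimensional approximation of MMD with a Gaussian kernel of bandwidth sigma.
    d = x_real.shape[1]
    W = torch.randn(m, d) / sigma
    b = 2.0 * math.pi * torch.rand(m)
    phi_r = random_fourier_features(x_real, W, b)
    phi_f = random_fourier_features(x_fake, W, b)
    return torch.norm(phi_r.mean(dim=0) - phi_f.mean(dim=0), p=2)
```

In IPMµ,2 the frozen map above is replaced by a learned network Φω, which is the adversarial ingredient that the fixed-kernel MMD GANs of (Li et al., 2015; Dziugaite et al., 2015) lack.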
4. Covariance Feature Matching GAN

4.1. IPMΣ: Covariance Matching IPM

As follows from our discussion of the mean matching IPM, comparing two distributions amounts to comparing a first order statistic, the mean of their feature embeddings. Here we ask how to incorporate second order statistics, i.e. covariance information of the feature embeddings. In this Section we provide a function space F such that the IPM in Equation (1) captures second order information. Intuitively, a distribution of points represented in a feature space can be approximately captured by its mean and its covariance. Commonly in unsupervised learning, this covariance is approximated by its first k principal components (PCA directions), which capture the directions of maximal variance in the data. Similarly, the metric we define in this Section will find k directions that maximize the discrimination between the two covariances. Adding second order information enriches the discrimination power of the feature space (see Figure 1).

This intuition motivates the following function space of bilinear functions in Φω:

$$\mathcal{F}_{U,V,\omega} = \Big\{ f(x) = \sum_{j=1}^{k} \langle u_j, \Phi_\omega(x) \rangle \langle v_j, \Phi_\omega(x) \rangle \;\Big|\; \{u_j\}, \{v_j\} \subset \mathbb{R}^m \text{ orthonormal}, \; j = 1 \ldots k, \; \omega \in \Omega \Big\}.$$

Note that the set FU,V,ω is symmetric and hence the IPM indexed by this set (Equation (1)) is well defined. It is easy to see that FU,V,ω can be written as:

$$\mathcal{F}_{U,V,\omega} = \big\{ f(x) = \langle U^\top \Phi_\omega(x), V^\top \Phi_\omega(x) \rangle \;\big|\; U, V \in \mathbb{R}^{m \times k}, \; U^\top U = I_k, \; V^\top V = I_k, \; \omega \in \Omega \big\},$$

where the parameter set Ω is such that the function space remains bounded. Let

$$\Sigma_\omega(\mathbb{P}) = \mathbb{E}_{x \sim \mathbb{P}} \, \Phi_\omega(x) \Phi_\omega(x)^\top$$

be the uncentered feature covariance embedding of P. It is easy to see that E_{x∼P} f(x) can be written in terms of U, V:

$$\mathbb{E}_{x \sim \mathbb{P}} f(x) = \mathbb{E}_{x \sim \mathbb{P}} \langle U^\top \Phi_\omega(x), V^\top \Phi_\omega(x) \rangle = \mathrm{Trace}\big( U^\top \Sigma_\omega(\mathbb{P}) V \big).$$

For a matrix A ∈ R^{m×m}, we denote by σj(A), j = 1 . . . m, the singular values of A in descending order. The 1-Schatten norm, or nuclear norm, is defined as the sum of the singular values, ‖A‖_* = Σ_{j=1}^{m} σj(A). We denote by [A]_k the k-th rank approximation of A, and write O_{m,k} = {M ∈ R^{m×k} | M^⊤M = I_k}. Consider the IPM induced by this function set. Let P, Q ∈ P(X); we have:

$$\begin{aligned} d_{\mathcal{F}_{U,V,\omega}}(\mathbb{P},\mathbb{Q}) &= \sup_{f \in \mathcal{F}_{U,V,\omega}} \mathbb{E}_{x\sim\mathbb{P}} f(x) - \mathbb{E}_{x\sim\mathbb{Q}} f(x) \\ &= \max_{\omega \in \Omega} \; \max_{U, V \in O_{m,k}} \mathrm{Trace}\big( U^\top (\Sigma_\omega(\mathbb{P}) - \Sigma_\omega(\mathbb{Q})) V \big) \\ &= \max_{\omega \in \Omega} \; \sum_{j=1}^{k} \sigma_j\big( \Sigma_\omega(\mathbb{P}) - \Sigma_\omega(\mathbb{Q}) \big) \\ &= \max_{\omega \in \Omega} \; \big\| [\Sigma_\omega(\mathbb{P}) - \Sigma_\omega(\mathbb{Q})]_k \big\|_*, \end{aligned}$$

where we used the variational definition of singular values and the definition of the nuclear norm. Note that U, V are the left and right singular vectors of Σω(P) − Σω(Q). Hence d_{FU,V,ω} measures the worst case distance between the covariance feature embeddings of the two distributions, where this distance is measured with the Ky Fan k-norm (nuclear norm of the truncated covariance difference). Hence we call this IPM the covariance matching IPM, IPMΣ.

4.2. Covariance Matching GAN

Turning now to the problem of learning a generative model gθ of Pr ∈ P(X) using IPMΣ, we shall solve:

$$\min_{g_\theta} d_{\mathcal{F}_{U,V,\omega}}(\mathbb{P}_r, \mathbb{P}_\theta),$$

which has the following primal formulation:

$$\min_{g_\theta} \max_{\omega \in \Omega, \, U, V \in O_{m,k}} \mathcal{L}_\sigma(U, V, \omega, \theta), \quad (6)$$

where

$$\mathcal{L}_\sigma(U, V, \omega, \theta) = \mathbb{E}_{x \sim \mathbb{P}_r} \langle U^\top \Phi_\omega(x), V^\top \Phi_\omega(x) \rangle - \mathbb{E}_{z \sim p_z} \langle U^\top \Phi_\omega(g_\theta(z)), V^\top \Phi_\omega(g_\theta(z)) \rangle,$$

or equivalently the following dual formulation:

$$\min_{g_\theta} \max_{\omega \in \Omega} \big\| [\Sigma_\omega(\mathbb{P}_r) - \Sigma_\omega(\mathbb{P}_\theta)]_k \big\|_*, \quad (7)$$

where Σω(Pθ) = E_{z∼pz} Φω(gθ(z)) Φω(gθ(z))^⊤.

The dual formulation in Equation (7) shows that learning generative models with IPMΣ consists in an adversarial game between the feature map and the generator: while the feature map tries to maximize the distance between the feature covariance embeddings of the distributions, the generator tries to minimize this distance. Hence we call learning with IPMΣ covariance matching GAN.

We give here an empirical estimate of the primal formulation in Equation (6), which is amenable to stochastic gradient descent; the dual requires nuclear norm minimization and is more involved. Given {xi, xi ∼ Pr} and {zj, zj ∼ pz}, the covariance matching GAN can be written as follows:

$$\min_{g_\theta} \max_{\omega \in \Omega, \, U, V \in O_{m,k}} \hat{\mathcal{L}}_\sigma(U, V, \omega, \theta), \quad (8)$$

where

$$\hat{\mathcal{L}}_\sigma(U, V, \omega, \theta) = \frac{1}{N} \sum_{i=1}^{N} \langle U^\top \Phi_\omega(x_i), V^\top \Phi_\omega(x_i) \rangle - \frac{1}{N} \sum_{j=1}^{N} \langle U^\top \Phi_\omega(g_\theta(z_j)), V^\top \Phi_\omega(g_\theta(z_j)) \rangle.$$

4.3. Mean and Covariance Matching GAN

In order to match first and second order statistics we propose the following simple extension:

$$\min_{g_\theta} \max_{\omega \in \Omega, \; v, \|v\|_p \le 1, \; U, V \in O_{m,k}} \mathcal{L}_\mu(v, \omega, \theta) + \mathcal{L}_\sigma(U, V, \omega, \theta),$$

which has a simple dual adversarial game interpretation:

$$\min_{g_\theta} \max_{\omega \in \Omega} \big\| \mu_\omega(\mathbb{P}_r) - \mu_\omega(\mathbb{P}_\theta) \big\|_q + \big\| [\Sigma_\omega(\mathbb{P}_r) - \Sigma_\omega(\mathbb{P}_\theta)]_k \big\|_*,$$

where the discriminator finds a feature space that discriminates between the means and covariances of real and fake, and the generator tries to match the real statistics. We can also give empirical estimates of the primal formulation similar to the expressions given above.
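For concreteness, the following is a minimal sketch (ours, not from the paper) of the empirical quantities behind IPMΣ, assuming `phi_real` and `phi_fake` are (N, m) feature batches: the truncated nuclear norm of the empirical covariance difference (the dual form of Equation (7), shown only for intuition) and the primal estimate L̂σ of Equation (8) that is actually optimized.

```python
import torch

def sigma_hat(phi: torch.Tensor) -> torch.Tensor:
    # Empirical uncentered feature covariance: (1/N) sum_i Phi(x_i) Phi(x_i)^T
    return phi.t() @ phi / phi.shape[0]

def ipm_sigma_dual(phi_real: torch.Tensor, phi_fake: torch.Tensor, k: int) -> torch.Tensor:
    # Ky Fan k-norm of the covariance difference: sum of the top-k singular values
    # of Sigma_w(P_r) - Sigma_w(P_theta), cf. Equation (7).
    diff = sigma_hat(phi_real) - sigma_hat(phi_fake)
    s = torch.linalg.svdvals(diff)   # singular values in descending order
    return s[:k].sum()

def l_sigma_hat(U: torch.Tensor, V: torch.Tensor,
                phi_real: torch.Tensor, phi_fake: torch.Tensor) -> torch.Tensor:
    # Empirical primal objective L_sigma_hat(U, V, w, theta), Equation (8):
    # (1/N) sum_i <U^T Phi(x_i), V^T Phi(x_i)> - (1/N) sum_j <U^T Phi(g(z_j)), V^T Phi(g(z_j))>
    def term(phi: torch.Tensor) -> torch.Tensor:
        return ((phi @ U) * (phi @ V)).sum(dim=1).mean()
    return term(phi_real) - term(phi_fake)
```

In training, only `l_sigma_hat` is backpropagated; the SVD-based dual expression is what the primal maximization over U, V ∈ O_{m,k} implicitly computes.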
5. Algorithms

We present in this Section our algorithms for mean and covariance feature matching GAN (McGan) with IPMµ,q and IPMΣ.

Mean Matching GAN. Primal Pµ: We give in Algorithm 1 an algorithm for solving the primal IPMµ,q GAN (Pµ). Algorithm 1 is adapted from (Arjovsky et al., 2017) and corresponds to their algorithm for p = ∞. The main difference is that we allow projection of v on different ℓp balls, and we maintain the clipping of ω to ensure boundedness of Φω. For example, for p = 2, proj_{Bℓ2}(v) = min(1, 1/‖v‖2) v. For p = ∞ we obtain the same clipping as in (Arjovsky et al., 2017): proj_{Bℓ∞}(v) = clip(v, −c, c) for c = 1.

Dual Dµ: We give in Algorithm 2 an algorithm for solving the dual formulation IPMµ,q GAN (Dµ). As mentioned earlier, we need samples from both real and fake for training both the generator and the critic feature space.

Covariance Matching GAN. Primal PΣ: We give in Algorithm 3 an algorithm for solving the primal of IPMΣ GAN (Equation (8)). The algorithm performs a stochastic gradient ascent on (ω, U, V) and a descent on θ. We maintain clipping on ω to ensure boundedness of Φω, and perform a QR retraction on the Stiefel manifold O_{m,k} (Absil et al., 2007), maintaining orthonormality of U and V (see the sketch after Algorithm 3).

Algorithm 1 Mean Matching GAN – Primal (Pµ)
Input: p to define the ball of v, η learning rate, nc number of iterations for training the critic, c clipping or weight decay parameter, N batch size
Initialize v, ω, θ
repeat
  for j = 1 to nc do
    Sample a minibatch xi, i = 1 . . . N, xi ∼ Pr
    Sample a minibatch zi, i = 1 . . . N, zi ∼ pz
    (gv, gω) ← (∇v L̂µ(v, ω, θ), ∇ω L̂µ(v, ω, θ))
    (v, ω) ← (v, ω) + η RMSProp((v, ω), (gv, gω))
    v ← proj_{Bℓp}(v)   {Project v on the ℓp ball, Bℓp = {x, ‖x‖p ≤ 1}}
    ω ← clip(ω, −c, c)   {Ensure Φω is bounded}
  end for
  Sample zi, i = 1 . . . N, zi ∼ pz
  dθ ← −∇θ ⟨v, (1/N) Σ_{i=1}^{N} Φω(gθ(zi))⟩
  θ ← θ − η RMSProp(θ, dθ)
until θ converges

Algorithm 2 Mean Matching GAN – Dual (Dµ)
Input: q the matching ℓq norm, η learning rate, nc number of iterations for training the critic, c clipping or weight decay parameter, N batch size
Initialize ω, θ
repeat
  for j = 1 to nc do
    Sample a minibatch xi, i = 1 . . . N, xi ∼ Pr
    Sample a minibatch zi, i = 1 . . . N, zi ∼ pz
    ∆ω,θ ← (1/N) Σ_{i=1}^{N} Φω(xi) − (1/N) Σ_{i=1}^{N} Φω(gθ(zi))
    gω ← ∇ω ‖∆ω,θ‖q
    ω ← ω + η RMSProp(ω, gω)
    ω ← clip(ω, −c, c)   {Ensure Φω is bounded}
  end for
  Sample zi, i = 1 . . . N, zi ∼ pz
  Sample xi, i = 1 . . . M, xi ∼ Pr (M > N)
  ∆ω,θ ← (1/M) Σ_{i=1}^{M} Φω(xi) − (1/N) Σ_{i=1}^{N} Φω(gθ(zi))
  dθ ← ∇θ ‖∆ω,θ‖q
  θ ← θ − η RMSProp(θ, dθ)
until θ converges

Algorithm 3 Covariance Matching GAN – Primal (PΣ)
Input: k the number of components, η learning rate, nc number of iterations for training the critic, c clipping or weight decay parameter, N batch size
Initialize U, V, ω, θ
repeat
  for j = 1 to nc do
    Sample a minibatch xi, i = 1 . . . N, xi ∼ Pr
    Sample a minibatch zi, i = 1 . . . N, zi ∼ pz
    G ← (∇U L̂σ, ∇V L̂σ, ∇ω L̂σ)(U, V, ω, θ)
    (U, V, ω) ← (U, V, ω) + η RMSProp((U, V, ω), G)
    {Project U and V on the Stiefel manifold O_{m,k}}
    Qu, Ru ← QR(U); su ← sign(diag(Ru))
    Qv, Rv ← QR(V); sv ← sign(diag(Rv))
    U ← Qu Diag(su)
    V ← Qv Diag(sv)
    ω ← clip(ω, −c, c)   {Ensure Φω is bounded}
  end for
  Sample zi, i = 1 . . . N, zi ∼ pz
  dθ ← −∇θ (1/N) Σ_{j=1}^{N} ⟨U^⊤ Φω(gθ(zj)), V^⊤ Φω(gθ(zj))⟩
  θ ← θ − η RMSProp(θ, dθ)
until θ converges
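The following is a small PyTorch sketch (our illustration, with assumed variable names, and with the feature network `phi` assumed to be an `nn.Module`) of the projection steps that appear in Algorithm 3: pointwise clipping of ω and the sign-corrected QR retraction of U and V onto the Stiefel manifold O_{m,k}.

```python
import torch

def stiefel_retraction(M: torch.Tensor) -> torch.Tensor:
    # QR retraction onto O_{m,k}: orthonormalize the columns of M and fix the
    # sign ambiguity of the factorization using the signs of diag(R), as in Algorithm 3.
    Q, R = torch.linalg.qr(M)            # reduced QR: Q is (m, k), R is (k, k)
    s = torch.sign(torch.diagonal(R))
    return Q * s                         # column-wise sign correction, i.e. Q @ diag(s)

def project_critic(phi: torch.nn.Module, U: torch.Tensor, V: torch.Tensor, c: float = 0.01):
    # Projection step after each critic update: clamp the weights of Phi_w to [-c, c]
    # (boundedness of the function class) and retract U, V back onto the Stiefel manifold.
    with torch.no_grad():
        for p in phi.parameters():
            p.clamp_(-c, c)
    return stiefel_retraction(U), stiefel_retraction(V)
```

The default c = 0.01 here matches the clipping range used in the experiments of Section 6; it is a sketch of the projection step, not the released training code.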
6. Experiments

We train McGan for image generation with both Mean Matching and Covariance Matching objectives. We show generated images on the Labeled Faces in the Wild (lfw) (Huang et al., 2007), LSUN bedrooms (Yu et al., 2015), and cifar-10 (Krizhevsky & Hinton, 2009) datasets.

It is well-established that evaluating generative models is hard (Theis et al., 2016). Many GAN papers rely on a combination of samples for quality evaluation, supplemented by a number of heuristic quantitative measures. We will mostly focus on training stability by showing plots of the loss function, and will provide generated samples to claim comparable sample quality between methods, but we will avoid claiming better sample quality. These samples are all generated at random and are not cherry-picked.

The design of gθ and Φω follows DCGAN principles (Radford et al., 2015), with both gθ and Φω being convolutional networks with batch normalization (Ioffe & Szegedy, 2015) and ReLU activations. Φω has output size bs × F × 4 × 4. The inner product can then equivalently be implemented as conv(4x4, F->1) or flatten + Linear(4*4*F -> 1). We generate 64×64 images for lfw and LSUN and 32×32 images on cifar, and train with minibatches of size 64. We follow the experimental framework and implementation of (Arjovsky et al., 2017), where we ensure the boundedness of Φω by clipping the weights pointwise to the range [−0.01, 0.01].

Primal versus dual form of mean matching. To illustrate the validity of both the primal and dual formulations, we trained mean matching GANs in both the primal and the dual form, see respectively Algorithm 1 and Algorithm 2. Samples are shown in Figure 2. Note that optimizing the dual form is less efficient and only feasible for mean matching, not for covariance matching. The primal formulation of IPMµ,1 GAN corresponds to clipping v, i.e. the original WGAN, while for IPMµ,2 we divide v by its ℓ2 norm if it becomes larger than 1. In the dual, for q = 2 we noticed little difference between maximizing the ℓ2 norm or its square. We observed that the default learning rate from WGAN (5e-5) is optimal for both the primal and dual formulations.

Figure 3 shows the loss (i.e. the IPM estimate) dropping steadily for both the primal and dual formulation, independently of the choice of the ℓq norm. We also observed that during the whole training process, samples generated from the same noise vector across iterations remain similar in nature (face identity, bedroom style), while details and background evolve. This qualitative observation indicates valuable stability of the training process.

For the dual formulation (Algorithm 2), we confirmed the hypothesis that we need a good estimate of µω(Pr) in order to compute the gradient of the generator θ: we needed to increase the minibatch size of real data threefold to 3 × 64.

Covariance GAN. We now experimentally investigate the IPM defined by covariance matching. For this section and the following, we use only the primal formulation, i.e. with explicit uj and vj orthonormal (Algorithm 3). Figures 4 and 5 show samples and loss from lfw and LSUN training respectively. We use Algorithm 3 with k = 16 components. We obtain samples of comparable quality to the mean matching formulations (Figure 2), and we found training to be stable independently of hyperparameters like the number of components k, varied between 4 and 64.

Figure 2. Samples generated with primal (left) and dual (right) formulation, in ℓ1 (top) and ℓ2 (bottom) norm. (A) lfw. (B) LSUN.

Figure 3. Plot of the loss of Pµ,1 (i.e. WGAN), Pµ,2, Dµ,1, Dµ,2 during training on lfw, as a function of the number of updates to gθ. Similar to the observation in (Arjovsky et al., 2017), training is stable and the loss is a useful metric of progress, across the different formulations.

Figure 4. lfw samples generated with covariance matching and plot of the loss function (IPM estimate) L̂σ(U, V, ω, θ) as a function of gθ updates.

Figure 5. LSUN samples generated with covariance matching and plot of the loss function (IPM estimate) L̂σ(U, V, ω, θ) as a function of gθ updates.

Covariance GAN with labels and conditioning. Finally, we conduct experiments on the cifar-10 dataset, where we leverage the additional label information by training a GAN with a conditional generator gθ(z, y), with label y ∈ [1, K] supplied as a one-hot vector concatenated with the noise z. Similar to InfoGAN (Chen et al., 2016) and AC-GAN (Odena et al., 2016), we add a new output layer S ∈ R^{K×m} and write the logits ⟨S, Φω(x)⟩ ∈ R^K.
We now optimize a combination of the IPM loss and the cross-entropy loss CE(x, y; S, Φω) = −log [Softmax(⟨S, Φω(x)⟩)_y]. The critic loss becomes

$$\mathcal{L}_D = \hat{\mathcal{L}}_\sigma - \lambda_D \, \frac{1}{N} \sum_{(x_i, y_i) \in \text{lab}} \mathrm{CE}(x_i, y_i; S, \Phi_\omega),$$

with hyper-parameter λD. We now sample three minibatches for each critic update: a labeled batch for the CE term, and for the IPM a real unlabeled batch and a generated batch. The generator loss (with hyper-parameter λG) becomes:

$$\mathcal{L}_G = \hat{\mathcal{L}}_\sigma + \lambda_G \, \frac{1}{N} \sum_{z_i \sim p_z, \, y_i \sim p_y} \mathrm{CE}(g_\theta(z_i, y_i), y_i; S, \Phi_\omega),$$

which still only requires a single minibatch to compute.
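A hedged sketch (ours) of these two combined losses, assuming the IPM term L̂σ and the class logits ⟨S, Φω(·)⟩ have already been computed; names are illustrative only.

```python
import torch
import torch.nn.functional as F

def critic_loss(L_sigma_hat: torch.Tensor, logits_lab: torch.Tensor,
                y_lab: torch.Tensor, lambda_D: float) -> torch.Tensor:
    # L_D = L_sigma_hat - lambda_D * mean CE(x_i, y_i; S, Phi_w) over the labeled batch,
    # where logits_lab = <S, Phi_w(x_i)> are the K class logits. The critic ascends L_D.
    return L_sigma_hat - lambda_D * F.cross_entropy(logits_lab, y_lab)

def generator_loss(L_sigma_hat: torch.Tensor, logits_gen: torch.Tensor,
                   y_gen: torch.Tensor, lambda_G: float) -> torch.Tensor:
    # L_G = L_sigma_hat + lambda_G * mean CE(g_theta(z_i, y_i), y_i; S, Phi_w),
    # with logits_gen computed on generated samples; the generator descends L_G.
    return L_sigma_hat + lambda_G * F.cross_entropy(logits_gen, y_gen)
```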
Figure 6. Cifar-10: class-conditioned generated samples. Within each column the random noise z is shared, while within each row the GAN is conditioned on the same class: from top to bottom, airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.

Table 1. Cifar-10: Inception scores (mean ± std) of our models and baselines.

| Model | Cond (+L) | Uncond (+L) | Uncond (-L) |
|---|---|---|---|
| L1+Sigma | 7.11 ± 0.04 | 6.93 ± 0.07 | 6.42 ± 0.09 |
| L2+Sigma | 7.27 ± 0.04 | 6.69 ± 0.08 | 6.35 ± 0.04 |
| Sigma | 7.29 ± 0.06 | 6.97 ± 0.10 | 6.73 ± 0.04 |
| WGAN | 3.24 ± 0.02 | 5.21 ± 0.07 | 6.39 ± 0.07 |

Baselines with a single reported score: BEGAN (Berthelot et al., 2017): 5.62; Impr. GAN "-LS" (Salimans et al., 2016): 6.83 ± 0.06; Impr. GAN Best (Salimans et al., 2016): 8.09 ± 0.07.

We confirm the improved stability and sample quality of objectives including covariance matching with Inception scores (Salimans et al., 2016) in Table 1. Samples corresponding to the best Inception score (Sigma) are given in Figure 6. Using the code released with WGAN (Arjovsky et al., 2017), these scores come from the DCGAN model with n_extra_layers=3 (deeper generator and discriminator). More samples, with combinations of mean and covariance matching, are in the appendix. Notice the rows corresponding to recognizable classes, while the noise z (shared within each column) clearly determines other elements of the visual style, like dominant color, across label conditioning.

7. Discussion

We noticed the influence of clipping on the capacity of the critic: a higher number of feature maps was needed to compensate for clipping. The question remains what alternatives to clipping of Φω can ensure boundedness. For example, we successfully used an ℓ2 penalty on the weights of Φω. Other directions are to explore geodesic distances between the covariances (Arsigny et al., 2006), and extensions of the IPM framework to the multimodal setting (Isola et al., 2017).

References

Absil, P.-A., Mahony, R., and Sepulchre, R. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2007.

Arjovsky, Martin, Chintala, Soumith, and Bottou, Léon. Wasserstein GAN. ICML, 2017.

Arsigny, Vincent, Fillard, Pierre, Pennec, Xavier, and Ayache, Nicholas. Log-Euclidean metrics for fast and simple calculus on diffusion tensors. In Magnetic Resonance in Medicine, 2006.

Berthelot, David, Schumm, Tom, and Metz, Luke. BEGAN: Boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.

Chen, Xi, Duan, Yan, Houthooft, Rein, Schulman, John, Sutskever, Ilya, and Abbeel, Pieter. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.

Dziugaite, Gintare Karolina, Roy, Daniel M., and Ghahramani, Zoubin. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015.

Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In NIPS, 2014.

Gretton, Arthur, Borgwardt, Karsten M., Rasch, Malte J., Schölkopf, Bernhard, and Smola, Alexander. A kernel two-sample test. JMLR, 2012.

Huang, Gary B., Ramesh, Manu, Berg, Tamara, and Learned-Miller, Erik. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, 2007.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proc. ICML, 2015.

Isola, Phillip, Zhu, Jun-Yan, Zhou, Tinghui, and Efros, Alexei A. Image-to-image translation with conditional adversarial networks. CVPR, 2017.

Kingma, Diederik P. and Welling, Max. Auto-encoding variational Bayes. NIPS, 2013.

Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Master's thesis, 2009.

Li, Yujia, Swersky, Kevin, and Zemel, Richard. Generative moment matching networks. In ICML, 2015.

Mirza, Mehdi and Osindero, Simon. Conditional generative adversarial nets. arXiv:1411.1784, 2014.

Muandet, Krikamol, Fukumizu, Kenji, Sriperumbudur, Bharath, and Schölkopf, Bernhard. Kernel mean embedding of distributions: A review and beyond. arXiv:1605.09522, 2016.

Müller, Alfred. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 1997.

Nowozin, Sebastian, Cseke, Botond, and Tomioka, Ryota. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.

Odena, Augustus, Olah, Christopher, and Shlens, Jonathon. Conditional image synthesis with auxiliary classifier GANs. arXiv:1610.09585, 2016.

Radford, Alec, Metz, Luke, and Chintala, Soumith. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.

Rahimi, Ali and Recht, Benjamin. Random features for large-scale kernel machines. In NIPS, 2008.

Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, Chen, Xi, and Chen, Xi. Improved techniques for training GANs. In NIPS, 2016.

Sriperumbudur, Bharath K., Fukumizu, Kenji, Gretton, Arthur, Schölkopf, Bernhard, and Lanckriet, Gert R. G. On integral probability metrics, φ-divergences and binary classification, 2009.

Sriperumbudur, Bharath K., Fukumizu, Kenji, Gretton, Arthur, Schölkopf, Bernhard, and Lanckriet, Gert R. G. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 2012.

Theis, Lucas, van den Oord, Aäron, and Bethge, Matthias. A note on the evaluation of generative models. ICLR, 2016.

Yu, Fisher, Zhang, Yinda, Song, Shuran, Seff, Ari, and Xiao, Jianxiong. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365, 2015.

Zhao, Junbo, Mathieu, Michael, and LeCun, Yann. Energy-based generative adversarial networks. ICLR, 2017.