# A Neural Tangent Kernel Perspective of GANs

Jean-Yves Franceschi* 1 2, Emmanuel de Bézenac* 3 2, Ibrahim Ayed* 2 4, Mickaël Chen 5, Sylvain Lamprier 2, Patrick Gallinari 2 1

*Equal contribution, listed in a randomly chosen order. 1 Criteo AI Lab, Paris, France. 2 Sorbonne Université, CNRS, ISIR, F-75005 Paris, France. 3 Seminar for Applied Mathematics, D-MATH, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland. 4 ThereSIS Lab, Thales, Palaiseau, France. 5 Valeo.ai, Paris, France. Correspondence to: Jean-Yves Franceschi, Emmanuel de Bézenac.

Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

Abstract

We propose a novel theoretical framework of analysis for Generative Adversarial Networks (GANs). We reveal a fundamental flaw of previous analyses which, by incorrectly modeling GANs' training scheme, are subject to ill-defined discriminator gradients. We overcome this issue, which impedes a principled study of GAN training, solving it within our framework by taking into account the discriminator's architecture. To this end, we leverage the theory of infinite-width neural networks for the discriminator via its Neural Tangent Kernel. We characterize the trained discriminator for a wide range of losses and establish general differentiability properties of the network. From this, we derive new insights about the convergence of the generated distribution, advancing our understanding of GANs' training dynamics. We empirically corroborate these results via an analysis toolkit based on our framework, unveiling intuitions that are consistent with GAN practice.

1. Introduction

Generative Adversarial Networks (GANs; Goodfellow et al., 2014) have become a canonical approach to generative modeling as they produce realistic samples for numerous data types, with a plethora of variants (Wang et al., 2021). These models are notoriously difficult to train and require extensive hyperparameter tuning (Brock et al., 2019; Karras et al., 2020; Liu et al., 2021). To alleviate these shortcomings, much effort has been put into better understanding their training process, resulting in a vast literature of theoretical analyses. Many study the various GAN models, found to optimize different losses like the Jensen-Shannon (JS) divergence (Goodfellow et al., 2014) and the earth mover's distance W1 (Arjovsky et al., 2017), to conclude about their comparative advantages. Yet, empirical evaluations (Lucic et al., 2018; Kurach et al., 2019) showed that they can yield approximately the same performance. This indicates that such theoretical works, with an exclusive focus on the GAN formulation, might not properly model practical settings.

Importantly, GANs are trained in practice with alternating gradient descent-ascent on the generator and discriminator, which the vast majority of analyses do not model. Yet, this makes GAN training deviate from its formulation in prior works as a min-max problem: the networks are fixed w.r.t. each other at each step in the former, while they depend on each other in the latter. Therefore, ignoring this ubiquitous procedure prevents those works from adequately explaining GANs' empirical behavior, as it leads to two crucial problems. Firstly, it alters the true implicitly optimized loss, which consequently differs from the widely adopted JS and W1.
Secondly, it compels accurate frameworks to take into account the discriminator parameterization as a neural network, with inductive biases influencing the generator's loss landscape, which most previous studies do not, or otherwise be subject to ill-defined discriminator gradients.

To solve these issues, we introduce the first framework of analysis for GANs modeling a wide range of discriminator architectures and GAN formulations, while encompassing alternating optimization. To this end, we leverage advances in deep learning theory driven by Neural Tangent Kernels (NTKs; Jacot et al., 2018) to model discriminator training. We develop theoretical results showing the relevance of our approach: we establish in our framework the differentiability of the discriminator, hence having well-defined gradients, by proving novel regularity results on its NTK. This more accurate formalization enables us to derive new knowledge about the generator. We formulate the dynamics of the generated distribution via the generator's NTK and link it to gradient flows on probability spaces, thereby helping us to discover its implicitly optimized loss. We deduce in particular that, for GANs under the Integral Probability Metric (IPM), the generated distribution minimizes its Maximum Mean Discrepancy (MMD), given by the discriminator's NTK, w.r.t. the target distribution. Moreover, we release an analysis toolkit based on our framework, GAN(TK)², which we use to empirically validate our analysis and gather new empirical insights: for example, we study the singular performance of the ReLU activation in GAN architectures.

2. Related Work

We introduce a framework advancing GAN knowledge, supported by prior and novel contributions in NTK theory.

Neural Tangent Kernels. NTKs were introduced by Jacot et al. (2018), who showed that a trained neural network in the infinite-width regime equates to a kernel method, thereby making its training dynamics tractable and amenable to theoretical study. This fundamental work has been followed by a thorough line of research generalizing and expanding its initial results (Arora et al., 2019; Bietti & Mairal, 2019; Lee et al., 2019; Liu et al., 2020; Sohl-Dickstein et al., 2020), developing means of computing NTKs (Novak et al., 2020; Yang, 2020), further analyzing these kernels (Fan & Wang, 2020; Bietti & Bach, 2021; Chen & Xu, 2021), studying and leveraging them in practice (Zhou et al., 2019; Arora et al., 2020; Lee et al., 2020; Littwin et al., 2020b; Tancik et al., 2020), and more broadly exploring infinite-width networks (Littwin et al., 2020a; Yang & Hu, 2021; Alemohammad et al., 2021). These prior works validate that NTKs can encapsulate the characteristics of neural network architectures, providing a solid theoretical basis to understand the effect of architecture on learning problems.

GAN theory. A first line of research, started by Goodfellow et al. (2014) and pursued by many others (Nowozin et al., 2016; Zhou et al., 2019; Sun et al., 2020), studies the loss minimized by the generator. Assuming that the discriminator is optimal and can take arbitrary values, different families of divergences can be recovered. However, as noted by Arjovsky & Bottou (2017), these divergences should be ill-suited to GAN training, contrary to empirical evidence. Our framework addresses this discrepancy, as it properly characterizes the generator's loss and gradient.
Another line of work analyzes the impact of the networks' architecture on the loss landscape of GANs. Some works, on the one hand, only study the solution of the usual min-max formulation of GANs, without considering their usual optimization via alternating gradient descent-ascent (Liu et al., 2017; Bai et al., 2019; Sun et al., 2020; Biau et al., 2021; Sahiner et al., 2022). Not only are these results obtained under restrictive assumptions (focusing on a single GAN model like WGAN, or on discriminators and generators limited to shallow, linear or random feature models), but overlooking alternating optimization hinders their ability to explain GANs' empirical behavior, as detailed in Section 3. Some studies, on the other hand, deal with the dynamics and convergence of the generated distribution in this setting. Nonetheless, as these dynamics are highly non-linear, this approach typically requires strong simplifying assumptions: Mescheder et al. (2017) assume the existence of Nash equilibria for the considered optimization problem; Mescheder et al. (2018) reduce the generated distribution to a single datapoint; Domingo-Enrich et al. (2020) apply their zero-sum games analysis to mean-field mixtures of generators and discriminators; Balaji et al. (2021) restrict generators and discriminators to be linear or shallow networks; Yang & E (2022) only work with random feature models as discriminators and a modified WGAN loss. In contrast to these works, our framework provides a more comprehensive optimization and architecture modeling, as we establish generally applicable results about the influence of the discriminator's architecture on the generator's dynamics.

GANs and NTKs. To the best of our knowledge, our contribution is the first to employ NTKs to comprehensively study GANs. Only Jacot et al. (2019) and Chu et al. (2020) have already studied GANs in the light of NTKs, but their studies had restrictive assumptions and limited scope. Jacot et al. (2019) explain, thanks to the generator's NTK, some GAN failure cases like generator collapse and identify normalization techniques to alleviate them, but without breaking down GANs' training dynamics. Chu et al. (2020) frame the generator's training dynamics for both GANs and variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014) as a Stein gradient flow under the generator's NTK, like in our Section 4.4, but under a strong assumption of generator injectivity which we do not require. Moreover, both works, focusing on the generator, fail to identify the consequences of the discriminator's parameterization on the generator's dynamics via alternating optimization, which, encompassed in our framework, yields in Sections 4 and 5 novel results challenging standard GAN knowledge. Besides the generator, we thoroughly investigate for the first time in the literature the discriminator and its effect on generator optimization via its NTK. To this end, we derive novel results in NTK theory. In particular, while other works studied the regularity of NTKs (Bietti & Mairal, 2019; Yang & Salman, 2019; Basri et al., 2020), ours is, as far as we know, the first to state general differentiability results for NTKs and infinite-width networks.
Furthermore, we discover the link between IPM optimization and the NTK MMD, independently of and concurrently with Cheng & Xie (2021), although in a different context: they use the NTK MMD for two-sample statistical testing, whereas we find that IPM GANs actually optimize this metric, thereby explaining the singular performance of NTKs within MMD gradient flows (Arbel et al., 2019).

3. Limits of Previous Studies

We present in this section the usual GAN formulation and illustrate the limitations of prior analyses. First, let us introduce some notations. Let Ω ⊆ ℝ^n be a closed convex set, P(Ω) the set of probability distributions over Ω, and L²(µ) the set of square-integrable functions from the support supp µ of µ to ℝ with respect to the measure µ, with scalar product ⟨·, ·⟩_{L²(µ)}. If Λ ⊆ Ω, we write L²(Λ) for L²(λ), with λ the Lebesgue measure on Λ.

3.1. Generative Adversarial Networks

GAN algorithms seek to produce samples from an unknown target distribution β ∈ P(Ω). To this end, a generator function g ∈ G: ℝ^d → Ω, parameterized by θ, is learned to map a latent variable z ∼ p_z to the space of target samples, such that the generated distribution α_g and β are indistinguishable for a discriminator f ∈ F parameterized by ϑ. The generator and the discriminator are trained in an adversarial manner as they are assigned conflicting objectives. Many GAN models consist in solving the following optimization problem, with a, b, c: ℝ → ℝ:

$$\min_{g}\ \Big\{\mathcal{C}_{f^*_{\alpha_g}}(\alpha_g) \triangleq \mathbb{E}_{x\sim\alpha_g}\big[c_{f^*_{\alpha_g}}(x)\big]\Big\}, \quad (1)$$

where c_f = c ∘ f, and f*_{α_g} is chosen to solve, or approximate, the following optimization problem:

$$\sup_{f\in\mathcal{F}}\ \Big\{\mathcal{L}_{\alpha_g}(f) \triangleq \mathbb{E}_{x\sim\alpha_g}\big[a_f(x)\big] - \mathbb{E}_{y\sim\beta}\big[b_f(y)\big]\Big\}. \quad (2)$$

For instance, Goodfellow et al. (2014) originally used a(x) = log(1 − σ(x)), b(x) = c(x) = −log σ(x), σ being the sigmoid function; in LSGAN (Mao et al., 2017), a(x) = −(x + 1)², b(x) = (x − 1)², c(x) = x²; and for Integral Probability Metrics (Müller, 1997), used e.g. by Arjovsky et al. (2017), a = b = c = id. Many more fall under this formulation (Nowozin et al., 2016; Lim & Ye, 2017). Equation (1) is then solved using gradient descent on the generator's parameters, with, at each step j ∈ ℕ:

$$\theta_{j+1} = \theta_j - \eta\, \mathbb{E}_{z\sim p_z}\!\Big[\partial_\theta g_{\theta_j}(z)^\top\, \nabla_x c_{f^*_{\alpha_{g_{\theta_j}}}}\!(x)\big|_{x=g_{\theta_j}(z)}\Big]. \quad (3)$$

This is obtained via the chain rule from the generator's loss C_{f*_{α_g}}(α_g) in Equation (1). However, we highlight that the gradient applied in Equation (3) differs from ∇_θ C_{f*_{α_g}}(α_g): the terms taking into account the dependency of the optimal discriminator f*_{α_{g_θ}} on the generator's parameters are discarded. This is because the discriminator is, in practice, considered to be independent of the generator in the alternating optimization between the generator and the discriminator.

Since ∇_x c_{f*_α}(x) = ∇_x f*_α(x) · c′(f*_α(x)), and as highlighted e.g. by Goodfellow et al. (2014) and Arjovsky & Bottou (2017), the gradient of the discriminator plays a crucial role in the convergence of GANs. For example, if this vector field is null on the training data when α ≠ β, the generator's gradient is zero and convergence is impossible. For this reason, this paper is devoted to developing a better understanding of this gradient field and its consequences on generator optimization when the discriminator is a neural network. In order to characterize this gradient field, we must first study the discriminator itself.
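To make the (a, b, c) formulation above concrete, the following sketch implements Equations (1) and (2) for the three losses just listed. It is an illustration under the sign conventions used in this section, not part of GAN(TK)²; all function and dictionary names are ours.

```python
# Illustrative sketch of the (a, b, c) loss family of Equations (1)-(2);
# helper names are ours, not the GAN(TK)^2 API.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# name -> (a, b, c); the discriminator ascends L, the generator descends C.
LOSSES = {
    "vanilla": (lambda x: np.log(1.0 - sigmoid(x)),   # a
                lambda x: -np.log(sigmoid(x)),        # b
                lambda x: -np.log(sigmoid(x))),       # c
    "lsgan":   (lambda x: -(x + 1.0) ** 2,
                lambda x: (x - 1.0) ** 2,
                lambda x: x ** 2),
    "ipm":     (lambda x: x,
                lambda x: x,
                lambda x: x),
}

def discriminator_objective(f, fake, real, loss="ipm"):
    """L_{alpha_g}(f) = E_{x~alpha_g}[a_f(x)] - E_{y~beta}[b_f(y)]  (Eq. 2)."""
    a, b, _ = LOSSES[loss]
    return np.mean(a(f(fake))) - np.mean(b(f(real)))

def generator_objective(f, fake, loss="ipm"):
    """C_f(alpha_g) = E_{x~alpha_g}[c_f(x)]  (Eq. 1), for a fixed discriminator f."""
    _, _, c = LOSSES[loss]
    return np.mean(c(f(fake)))
```

In this form, swapping the GAN model only changes the triple (a, b, c), while the alternating optimization scheme of Equation (3) stays the same.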
3.2. Alternating Optimization and the Necessity of Modeling the Discriminator Parameterization

For each GAN formulation, it is customary to elucidate the true generator loss C(α_g, β) implemented by Equation (2), often assuming that F = L²(Ω), i.e. that the discriminator can take arbitrary values. Under this assumption, C would have the form of a Jensen-Shannon divergence in the original GAN and of a Pearson χ²-divergence in LSGAN, for instance. However, as pointed out by Arora et al. (2017), the discriminator is trained in practice with a finite number of samples: both fake and target distributions are finite mixtures of Diracs, which we respectively denote as α̂_g and β̂. Let γ̂_g = ½(α̂_g + β̂) be the distribution of training samples.

Assumption 1 (Finite training set). γ̂_g ∈ P(Ω) is a finite mixture of Diracs.

In this setting, the Jensen-Shannon and χ² divergences are constant since α̂_g and β̂ generally do not have the same support, which would imply that the generator could not be properly trained since it would receive null gradients. This is the theoretical reason given by Arjovsky & Bottou (2017) to introduce new losses and constraints for the discriminator, such as in WGAN (Arjovsky et al., 2017). However, this is inconsistent with empirical results showing that GANs could already be trained adequately even without the latter losses and constraints (Radford et al., 2016). This entails that widely accepted theoretical frameworks miss a central ingredient in their modeling of constraint-free GANs. Uncovering the missing pieces and understanding how they affect training is one of the aims of the current work.

In fact, in the alternating optimization setting of Equation (3), the constancy of L_{α̂_g}, or even of C_{f*_{α_g}}, does not imply that ∇_x c_{f*_{α_g}} in Equation (3) is zero on the training points. This stems from the gradient of Equation (3) ignoring the dependency of the optimal discriminator on the generator's parameters: while ∇_θ C_{f*_{α_g}}(α_g) might be null, the gradient of Equation (3) differs and may not be zero, thereby changing the actual loss C optimized by the generator. This fact is unaccounted for in many prior analyses, like the ones of Arjovsky et al. (2017) and Arora et al. (2017). We refer to Section 5.2 and Appendix B.2 for further discussion.

Furthermore, in the previous theoretical frameworks where the discriminator can take arbitrary values, this gradient field is not even defined for any loss L_{α̂_g}. Indeed, when the discriminator's loss L_{α̂_g}(f) is only computed on the empirical distribution γ̂_g (as in most GAN formulations), the discriminator optimization problem of Equation (2) never yields a unique optimal solution outside supp γ̂_g. This is illustrated by the following straightforward result.

Proposition 1 (Ill-posed problem in L²(Ω)). Suppose that F = L²(Ω) and supp γ̂_g ⊊ Ω. Then, for all f, h ∈ F coinciding over supp γ̂_g, L_{α̂_g}(f) = L_{α̂_g}(h), and Equation (2) has either no or infinitely many optimal solutions in F, all coinciding over supp γ̂_g.

In particular, the set of solutions, if non-empty, contains non-differentiable discriminators as well as discriminators with null or non-informative gradients. This signifies that the loss alone does not impose any constraint on the values that f*_{α̂_g} takes outside supp γ̂_g, and more particularly on its gradients. Thus, this underspecification of the discriminator over Ω makes the gradient of the optimal discriminator in standard GAN analyses ill-defined. Therefore, an analysis beyond the loss function is necessary to precisely determine the learning problem and true loss C of the generator implicitly defined by the discriminator under alternating optimization.
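The following toy computation illustrates Proposition 1 under an assumed IPM setup of our own: a smooth discriminator f and a wiggly h that coincide on every training sample receive exactly the same empirical loss, so the loss alone cannot distinguish between their very different gradients.

```python
# Toy illustration of Proposition 1 (assumed 1D setup, IPM loss).
import numpy as np

rng = np.random.default_rng(0)
fake = rng.normal(-2.0, 0.1, size=(64, 1))    # samples of hat alpha_g
real = rng.normal(+2.0, 0.1, size=(64, 1))    # samples of hat beta
train = np.vstack([fake, real])               # supp(hat gamma_g)

f = lambda x: x                               # a smooth candidate discriminator

def perturbation(x):
    # Vanishes exactly on every training point, oscillates wildly elsewhere.
    dist_to_train = np.min(np.abs(x - train.T), axis=1, keepdims=True)
    return np.sin(50.0 * x) * dist_to_train

h = lambda x: f(x) + 100.0 * perturbation(x)  # coincides with f on supp(hat gamma_g)

ipm_loss = lambda d: np.mean(d(fake)) - np.mean(d(real))   # Eq. (2) with a = b = id
print(ipm_loss(f), ipm_loss(h))               # identical empirical losses
```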
4. NTK Analysis of GANs

To tackle the aforementioned issues, we notice that, in practice, the inner optimization problem of Equation (2) is not solved exactly. Instead, using alternating optimization, a proxy neural discriminator is trained using several steps of gradient ascent for each generator update (Goodfellow, 2016). For a learning rate ε and a fixed generator g, this results in the optimization procedure, from i = 0 to N:

$$\vartheta^g_{i+1} = \vartheta^g_i + \varepsilon\, \nabla_\vartheta \mathcal{L}_{\hat\alpha_g}\big(f_{\vartheta^g_i}\big), \qquad f^*_{\hat\alpha_g} = f_{\vartheta^g_N}. \quad (4)$$

This training of the discriminator as a neural network solves the gradient indeterminacy of the previous section, but makes a theoretical analysis of its impact unattainable. We propose to facilitate it thanks to the theory of NTKs. We develop our framework modeling the discriminator using its NTK in Section 4.1. We confirm in Sections 4.2 and 4.3 that it is consistent by proving that the discriminator gradient is well-defined. We then leverage this accurate framework to analyze the dynamics of the generated distribution under alternating optimization via the generator's NTK in Section 4.4. We notably frame these dynamics as a gradient flow of the true generator loss C, which we deduce to be non-increasing during training.

4.1. Modeling Inductive Biases of the Discriminator in the Infinite-Width Limit

We study the continuous-time version of Equation (4):

$$\partial_t \vartheta^g_t = \nabla_\vartheta \mathcal{L}_{\hat\alpha_g}\big(f_{\vartheta^g_t}\big), \quad (5)$$

which we consider in the infinite-width limit of the discriminator, making its analysis more tractable. In the limit where the width of the hidden layers of f_t ≜ f_{ϑ^g_t} tends to infinity, Jacot et al. (2018) showed that its so-called NTK k_{ϑ^g_t} remains constant during a gradient ascent such as Equation (5), i.e. there is a limiting kernel k such that:

$$\forall \tau \in \mathbb{R}_+,\ \forall x, y \in \mathbb{R}^n,\ \forall t \in [0, \tau],\qquad k_{\vartheta^g_t}(x, y) \triangleq \nabla_\vartheta f_t(x)^\top \nabla_\vartheta f_t(y) = k(x, y). \quad (6)$$

In particular, k only depends on the architecture of f and the initialization distribution of its parameters. The constancy of the NTK of f_t during gradient descent holds for many standard architectures, typically without bottleneck and ending with a linear layer (Liu et al., 2020), which is the case of most standard discriminators in the setting of Equation (2). We discuss the applicability of this approximation in Appendix B.1. We more particularly highlight that, under the same conditions, the discriminator's NTK remains constant over the whole GAN optimization process of Equation (3), and not only under a fixed generator.

Assumption 2 (Kernel). k: Ω² → ℝ is a symmetric positive semi-definite kernel with k ∈ L²(Ω²).

The constancy of the NTK simplifies the dynamics of training in the functional space. In order to express these dynamics, we must first introduce some preliminary definitions.

Definition 1 (Functional gradient). Whenever a functional L: L²(µ) → ℝ has sufficient regularity, its gradient w.r.t. µ evaluated at f ∈ L²(µ) is defined in the usual way as the element ∇_µ L(f) ∈ L²(µ) such that, for all ψ ∈ L²(µ):

$$\lim_{\varepsilon\to 0}\ \frac{1}{\varepsilon}\Big(L(f+\varepsilon\psi) - L(f)\Big) = \big\langle \nabla_\mu L(f),\, \psi \big\rangle_{L^2(\mu)}. \quad (7)$$

Definition 2 (RKHS w.r.t. µ and kernel integral operator (Sriperumbudur et al., 2010)). If k follows Assumption 2 and µ ∈ P(Ω) is a finite mixture of Diracs, we define the Reproducing Kernel Hilbert Space (RKHS) H^µ_k of k with respect to µ, given by the Moore-Aronszajn theorem, as the linear span of the functions k(x, ·) for x ∈ supp µ. Its kernel integral operator from Mercer's theorem is defined as:

$$T_{k,\mu}\colon L^2(\mu) \to \mathcal{H}^\mu_k,\quad h \mapsto \int_x k(\cdot, x)\, h(x)\, \mathrm{d}\mu(x). \quad (8)$$

Note that T_{k,µ} generates H^µ_k, and elements of H^µ_k are functions defined over all Ω, as H^µ_k ⊂ L²(Ω).
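To make Definition 2 concrete, the sketch below computes the infinite-width NTK of a small fully connected discriminator with the JAX Neural Tangents library (on which GAN(TK)² is based) and applies the kernel integral operator T_{k,µ} of Equation (8) to an empirical measure. The architecture, widths and standard deviations are illustrative choices of ours.

```python
import jax.numpy as jnp
from neural_tangents import stax

# Infinite-width 3-layer ReLU discriminator with bias (cf. Assumption 6).
_, _, kernel_fn = stax.serial(
    stax.Dense(512, W_std=1.0, b_std=0.05), stax.Relu(),
    stax.Dense(512, W_std=1.0, b_std=0.05), stax.Relu(),
    stax.Dense(1, W_std=1.0, b_std=0.05),
)

def ntk(x1, x2):
    # Analytic limiting NTK k(x1, x2) of Eq. (6); shape (len(x1), len(x2)).
    return kernel_fn(x1, x2, 'ntk')

def integral_operator(k, support, h_values):
    """T_{k,mu} h: x -> (1/N) sum_i k(x, x_i) h(x_i), for mu = (1/N) sum_i delta_{x_i} (Eq. 8)."""
    return lambda x: k(x, support) @ h_values / support.shape[0]
```

The function returned by `integral_operator` is defined on all of Ω even though h is only given on supp µ, which is precisely the smoothing role that T_{k,γ̂_g} plays in the next results.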
The results of Jacot et al. (2018) imply that the infinite-width discriminator f_t trained by Equation (5) obeys the following differential equation in-between generator updates:

$$\partial_t f_t = T_{k,\hat\gamma_g}\Big(\nabla_{\hat\gamma_g} \mathcal{L}_{\hat\alpha_g}(f_t)\Big). \quad (9)$$

Within the alternating optimization of GANs at generator step j, f_0 would correspond to the previous discriminator state f*_{α̂_{g_{θ_j}}} ≜ f*_j, and f*_{j+1} = f_τ, with τ being the training time of the discriminator in-between generator updates.

In the following Sections 4.2 and 4.3, we rely on this differential equation to assess under mild assumptions that the proposed framework is sound w.r.t. the aforementioned gradient indeterminacy issues. We first prove that Equation (9) uniquely defines the discriminator for any initial condition. We then conclude by proving the differentiability of the resulting trained network. These results are not GAN-specific but generalize to networks trained under empirical losses like Equation (2), e.g. for classification and regression.

4.2. Existence, Uniqueness and Characterization of the Discriminator

The following is a positive result on the existence and uniqueness of the discriminator that also characterizes its general form, amenable to theoretical analysis. Presented in the context of a discrete distribution γ̂_g but generalizable to broader distributions, this result is proved in Appendix A.2.

Assumption 3 (Loss regularity). a and b from Equation (2) are differentiable with Lipschitz derivatives over ℝ.

Theorem 1 (Solution of gradient descent). Under Assumptions 1 to 3, Equation (9) with initial value f_0 ∈ L²(Ω) admits a unique solution f: ℝ_+ → L²(Ω). Moreover, the following holds for all t ∈ ℝ_+:

$$f_t = f_0 + \int_0^t T_{k,\hat\gamma_g}\Big(\nabla_{\hat\gamma_g} \mathcal{L}_{\hat\alpha_g}(f_s)\Big)\, \mathrm{d}s = f_0 + T_{k,\hat\gamma_g}\bigg(\int_0^t \nabla_{\hat\gamma_g} \mathcal{L}_{\hat\alpha_g}(f_s)\, \mathrm{d}s\bigg). \quad (10)$$

As, for any given training time t, there exists a unique f_t ∈ L²(Ω), defined over all of Ω and not only the training set, the aforementioned issue of Section 3.2 of determining the discriminator associated to γ̂_g is now resolved. It is now possible to study the discriminator in its general form thanks to Equation (10). It involves two terms: the previous discriminator state f_0 = f*_j, as well as the kernel operator of an integral. This integral is a function that is undefined outside supp γ̂_g, as by definition ∇_{γ̂_g} L_{α̂_g}(f_s) ∈ L²(γ̂_g). Fortunately, the kernel operator behaves like a smoothing operator, as it not only defines the function on all of Ω but embeds it in a highly structured space.

Corollary 1 (Training and RKHS). Under Assumptions 1 to 3, f_t − f_0 belongs to the RKHS H^{γ̂_g}_k for all t ∈ ℝ_+.

In our setting, this space is generated from the NTK k, which only depends on the discriminator architecture, and not on the loss function. This highlights the crucial role of the discriminator's implicit biases, and enables us to characterize its regularity for a given architecture.
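As a complement to Theorem 1, here is a discretization sketch of Equations (9) and (10) (our own explicit Euler scheme, with assumed helper names): since ∇_{γ̂_g}L_{α̂_g}(f_t) only depends on the values of f_t on supp γ̂_g, the discriminator can be evolved through Gram matrices of k on the training samples, and then extended to all of Ω by the kernel integral operator, consistently with Corollary 1.

```python
# Euler discretization of Eq. (9) with f_0 = 0, using
# d f_t / dt = E_{x~hat alpha_g}[k(., x) a'(f_t(x))] - E_{y~hat beta}[k(., y) b'(f_t(y))].
import jax.numpy as jnp

def train_discriminator(k, fake, real, a_prime, b_prime, steps=100, dt=0.01):
    """Integrates Eq. (9); returns f_t as a callable defined on all of Omega."""
    K_ff, K_fr = k(fake, fake), k(fake, real)
    K_rf, K_rr = k(real, fake), k(real, real)
    nf, nr = fake.shape[0], real.shape[0]
    cum_a = jnp.zeros(nf)      # time integral of a'(f_s) on the fake samples
    cum_b = jnp.zeros(nr)      # time integral of b'(f_s) on the real samples
    for _ in range(steps):
        # Values of f_t on supp(hat gamma_g), i.e. Eq. (10) restricted to the training set.
        f_fake = K_ff @ cum_a / nf - K_fr @ cum_b / nr
        f_real = K_rf @ cum_a / nf - K_rr @ cum_b / nr
        cum_a = cum_a + dt * a_prime(f_fake)
        cum_b = cum_b + dt * b_prime(f_real)
    # Eq. (10): f_t = f_0 + T_{k, hat gamma_g}(time-integrated functional gradient).
    return lambda x: k(x, fake) @ cum_a / nf - k(x, real) @ cum_b / nr

# IPM example (a = b = id, so a' = b' = 1), with `ntk` from the previous sketch:
# f_t = train_discriminator(ntk, fake_samples, real_samples,
#                           lambda v: jnp.ones_like(v), lambda v: jnp.ones_like(v))
```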
4.3. Differentiability of the Discriminator and its NTK

We study in this section the smoothness, i.e. infinite differentiability, of the discriminator, which we demonstrate in Appendix A.3. By Equation (10), it mostly relies on the differentiability of the kernel k, which is obtained by characterizing the regularity of the corresponding conjugate kernel (Lee et al., 2018). Therefore, we prove the differentiability of the NTKs of standard architectures, and then conclude about the differentiability of f_t.

Assumption 4 (Discriminator architecture). The discriminator is a standard architecture (fully connected, convolutional or residual). The activation can be any standard function: tanh, softplus, ReLU-like, sigmoid, Gaussian, etc.

Assumption 5 (Discriminator regularity). The activation function is smooth.

Assumption 6 (Discriminator bias). Linear layers have non-null bias terms.

We first prove the differentiability of the NTK.

Proposition 2 (Differentiability of k). Let k be the NTK of an infinite-width network from Assumption 4. For any y ∈ Ω, k(·, y) is smooth everywhere over Ω under Assumption 5, or almost everywhere if Assumption 6 holds instead.

From Proposition 2, NTKs satisfy Assumption 2. Using Corollary 1, we thus conclude on the differentiability of f_t.

Theorem 2 (Differentiability of f_t). Suppose that k is the NTK of an infinite-width network following Assumption 4. Then f_t is smooth everywhere over Ω under Assumption 5, or almost everywhere when Assumption 6 holds instead.

Remark 1 (Bias-free ReLU networks). ReLU networks with hidden layers and no bias are not differentiable at 0. However, by introducing non-zero bias, this non-differentiability at 0 disappears in the NTK and the infinite-width discriminator. This observation explains some experimental results in Section 6. Note that Bietti & Mairal (2019) state that the bias-free ReLU kernel is not Lipschitz even outside 0. However, we find this result to be incorrect. We further discuss this matter in Appendix B.3.

This result demonstrates that, for a wide range of GANs, e.g. vanilla GAN and LSGAN, the optimized discriminator indeed admits gradients, making the gradient flow given to the generator well-defined in our framework. This supports our motivation to bring the theory closer to the empirical evidence that many GAN models do work in practice, while their theoretical interpretation until now has been stating the opposite (Arjovsky & Bottou, 2017).

4.4. Dynamics of the Generated Distribution

By ensuring the existence of f*_{α̂_g}, the previous results allow us to study Equation (3). We consider it in continuous time like Equation (5), with training time ℓ as well as g_ℓ ≜ g_{θ_ℓ} and α_ℓ ≜ α_{g_ℓ}. NTKs enable us to describe the generated distribution's dynamics and uncover the true generator loss C in the following manner, as shown in Appendix A.4.

Proposition 3 (Dynamics of α_ℓ). Under Assumptions 4 and 5, Equation (3) is well-posed and yields in continuous time, with k_{g_ℓ} the NTK of the generator g_ℓ:

$$\partial_\ell g_\ell = -T_{k_{g_\ell},\,p_z}\Big(z \mapsto \nabla_x c_{f^*_{\hat\alpha_{g_\ell}}}\!(x)\big|_{x=g_\ell(z)}\Big). \quad (11)$$

Equivalently, the following continuity equation holds for the joint distribution α^z_ℓ of (z, g_ℓ(z)) under z ∼ p_z:

$$\partial_\ell \alpha^z_\ell = \operatorname{div}\bigg(\alpha^z_\ell\; T_{k_{g_\ell},\,p_z}\Big(z \mapsto \nabla_x c_{f^*_{\hat\alpha_{g_\ell}}}\!(x)\big|_{x=g_\ell(z)}\Big)\bigg), \quad (12)$$

where α_ℓ is the marginalization of α^z_ℓ over z ∼ p_z.

In its infinite-width limit, the generator's NTK is also constant: k_{g_ℓ} = k_g; let us study the latter proposition under this assumption. Suppose that there exists a functional C such that c_{f*_{α̂}} = ∇_{α̂} C(α̂). Standard results in gradient flow theory (see Ambrosio et al. (2008, Chapter 10) for a detailed exposition or Arbel et al. (2019, Appendix A.3) for a summary) state that c_{f*_{α̂}} is then the strong subdifferential of C(α̂) for the Wasserstein geometry. When k_g(z, z′) = δ_{z−z′} I_n, with δ a Dirac centered at 0, we have T_{k_g,p_z} = id. Then, from Equation (12), α^z_ℓ follows the Wasserstein gradient flow with C as potential. This implies that C(α̂_ℓ) is decreasing w.r.t. the generator's training time ℓ. In other words, the generator g is trained to minimize C(α̂_g). Hence, this result characterizes the implicit objective of the generator as the functional C satisfying c_{f*_{α̂}} = ∇_{α̂} C(α̂).
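The finite-sample sketch below (ours, with a scalar generator kernel for readability) makes the role of T_{k_g,p_z} in Equation (11) explicit: each generated sample moves along a k_g-weighted average of the discriminator gradients at all samples, and passing no kernel recovers the non-interacting case T_{k_g,p_z} = id used above and in Section 6.

```python
import jax
import jax.numpy as jnp

def generator_velocities(z, x, grad_c, k_g=None):
    """Discrete reading of Eq. (11): dx_i/dl = -(1/m) sum_j k_g(z_i, z_j) grad_x c_f(x_j)."""
    grads = jax.vmap(grad_c)(x)        # (m, n): gradient of c o f at each generated sample
    if k_g is None:                    # T_{k_g, p_z} = id: independent particle descent
        return -grads
    K = k_g(z, z)                      # (m, m) Gram matrix of the generator NTK over latents
    return -(K @ grads) / z.shape[0]

# Example usage with a hypothetical discriminator f and cost c:
# grad_c = jax.grad(lambda xi: c(f(xi)))
```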
In the general case, T_{k_g,p_z} introduces interactions between generated particles as a consequence of the neural parameterization of the generator. Then, Equation (12) amounts to following the same gradient flow as before, but in a Stein geometry (Duncan et al., 2019), determined by the generator's integral operator, instead of a Wasserstein geometry; this directly implies that in this case C(α̂_ℓ) also decreases during training. This geometrical understanding opens interesting perspectives for theoretical analysis: e.g., we see that GAN training in this regime generalizes Stein variational gradient descent (Liu & Wang, 2016), with the Kullback-Leibler minimization objective between generated and target distributions being replaced with C(α̂).

Improving our understanding of Equation (12) is fundamental in order to elucidate the open problem of the neural generator's convergence. Our study enables us to shed light on these dynamics and highlights the necessity of pursuing the study of GANs via NTKs to obtain a more comprehensive understanding of them, which is the purpose of the rest of this paper. In particular, the non-interacting case where T_{k_g,p_z} = id already yields particularly useful insights that we explore in Section 6. Moreover, we discuss in the following section standard GAN losses and determine the minimized functional C in these cases.

5. Study of Specific Losses

Armed with the previous framework, we derive in this section more fine-grained results about the optimized loss C for standard GAN models. Proofs are detailed in Appendix A.6.

5.1. The IPM as an NTK MMD Minimizer

We study the case of the IPM loss, with the following remarkable discriminator expression, from which we deduce the objective minimized by the generator.

Proposition 4 (IPM discriminator). Under Assumptions 1 and 2, the solutions of Equation (9) for a = b = id are f_t = f_0 + t f*_{α̂_g}, where f*_{α̂_g} is the unnormalized MMD witness function (Gretton et al., 2012) with kernel k, yielding:

$$f^*_{\hat\alpha_g} = \mathbb{E}_{x\sim\hat\alpha_g}\big[k(x,\cdot)\big] - \mathbb{E}_{y\sim\hat\beta}\big[k(y,\cdot)\big], \qquad \mathcal{L}_{\hat\alpha_g}(f_t) = \mathcal{L}_{\hat\alpha_g}(f_0) + t\,\mathrm{MMD}^2_k\big(\hat\alpha_g, \hat\beta\big). \quad (13)$$

The latter result signifies that the direction of the gradient given to the discriminator at each of its optimization steps is optimal within the RKHS of its NTK, stemming from the linearity of the IPM loss. The connection with the MMD is especially interesting as it has been thoroughly studied in the literature (Muandet et al., 2017). If k is characteristic, as discussed in Appendix B.5, then it defines a distance between distributions. Moreover, the statistical properties of the loss induced by the discriminator directly follow from those of the MMD: it is an unbiased estimator with a squared sample complexity that is independent of the dimension of the samples (Gretton et al., 2007).

Suppose that the discriminator is reinitialized at every step of the generator, with f_0 = 0 in Equation (9); this is possible with the initialization scheme of Zhang et al. (2020). Then, as c = id and from Proposition 4, c_{f*_{α̂}} = τ f*_{α̂_g}, where τ is the training time of the discriminator. The latter gradient constitutes the gradient flow of the squared MMD, as shown by Arbel et al. (2019), with convergence guarantees and discretization properties in the absence of a generator. This signifies that C(α̂) = τ MMD²_k(α̂_g, β̂) (see Section 4.4).
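Proposition 4 translates directly into code; the sketch below (our helper names, using the simple plug-in MMD estimator rather than the unbiased one mentioned above) builds the IPM discriminator f_t = t f*_{α̂_g} from any kernel, e.g. the NTK of the earlier sketches, together with the squared NTK MMD that the generator implicitly minimizes.

```python
import jax.numpy as jnp

def witness(k, fake, real):
    """Unnormalized MMD witness f*: x -> E_{x'~fake}[k(x', x)] - E_{y~real}[k(y, x)]."""
    return lambda x: k(x, fake).mean(axis=1) - k(x, real).mean(axis=1)

def ipm_discriminator(k, fake, real, t=1.0):
    """Eq. (13) with f_0 = 0: f_t = t * f*_{hat alpha_g}."""
    f_star = witness(k, fake, real)
    return lambda x: t * f_star(x)

def mmd2(k, fake, real):
    """MMD_k^2(hat alpha_g, hat beta): the functional C minimized by the generator.
    Plug-in estimator; the unbiased version would drop the diagonal terms."""
    return k(fake, fake).mean() - 2.0 * k(fake, real).mean() + k(real, real).mean()
```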
Therefore, in the IPM case, we discover via Proposition 4 that the generator is actually trained to minimize the MMD between the empirical generated and target distributions w.r.t. the NTK of the discriminator. This novel connection implies that prior MMD GAN convergence results, like the ones of Mroueh & Nguyen (2021) about the generator trained in such conditions, even though they were established without considering the discriminator's NTK, remarkably transfer to the general unconstrained IPM case. We further discuss our IPM results in the following remarks.

Remark 2 (IPM and WGAN). Along with a constraint on the set of functions, the IPM is involved in the earth mover's distance W1 (Villani, 2009), used in WGAN and StyleGAN (Karras et al., 2019) and close to the hinge loss of BigGAN (Brock et al., 2019); in the MMD, used in MMD GAN (Li et al., 2017); in the total variation; etc. In Proposition 4, we study the IPM with the sole constraint of having a neural discriminator. Our analysis implies that this suffices to ensure relevant gradients, given the aforementioned convergence results. This contradicts the recurring assertion that the Lipschitz constraint of WGAN (Arjovsky et al., 2017) is necessary to solve the gradient issues of prior approaches. Indeed, these issues originate from the analyses' inadequacy, as shown in this work. Hence, while WGAN tackles them by changing the loss and adding a constraint, we fundamentally address them with a refined framework. A WGAN analysis, left for future work, would require combining the neural discriminator and Lipschitz constraints.

Remark 3 (Instance smoothing). We show for IPMs that modeling the discriminator's architecture amounts to smoothing out the input distribution using the kernel integral operator T_{k,γ̂_g}, and can thus be seen as a generalization of the regularization technique for GANs called instance noise (Sønderby et al., 2017). This is discussed in Appendix B.4.

Remark 4 (Regularization by training time). Proposition 4 highlights the importance of the discriminator training time, which needs to be controlled to regularize its gradient magnitude. This corresponds to customary practices where the discriminator is trained for a small number of steps to avoid divergence issues, like in DCGAN (Radford et al., 2016). In the IPM case, we have, with ‖·‖_{H^γ̂_k} the RKHS semi-norm:

$$\|f_t\|^2_{\mathcal{H}^{\hat\gamma}_k} \leq \|f_0\|^2_{\mathcal{H}^{\hat\gamma}_k} + t^2\, \big\|f^*_{\hat\alpha_g}\big\|^2_{\mathcal{H}^{\hat\gamma}_k}, \quad (14)$$

with equality when f_0 = 0. This provides a simple criterion to control the discriminator norm by its training time. For example, assuming f_0 = 0, setting t = ‖f*_{α̂_g}‖⁻¹_{H^γ̂_k} recovers the MMD dual constraint of a unit-norm discriminator, i.e. ‖f_t‖_{H^γ̂_k} = 1, yielding L_{α̂_g}(f_t) = MMD_k(α̂_g, β̂).

5.2. LSGAN and New Divergences

Optimality of the discriminator can be proved when assuming that its loss function is well-behaved. Let us consider the case of LSGAN, for which Equation (9) can be solved by adapting the results of Jacot et al. (2018) for regression.

Proposition 5 (LSGAN discriminator). Under Assumptions 1 and 2, the solutions of Equation (9) for a = −(id + 1)² and b = (id − 1)² are defined for all t ∈ ℝ_+ as:

$$f_t = \exp\!\big(-4t\, T_{k,\hat\gamma_g}\big)(f_0 - \rho) + \rho, \qquad \rho = \frac{\mathrm{d}\big(\hat\beta - \hat\alpha_g\big)}{\mathrm{d}\big(\hat\beta + \hat\alpha_g\big)}. \quad (15)$$

In the previous result, ρ is the optimum of L_{α̂_g} over L²(γ̂_g). When k is positive definite over γ̂_g (see Appendix B.5), f_t tends to the optimum of L_{α̂_g} as its limit is ρ over supp γ̂_g. Nonetheless, unlike the discriminator with arbitrary values of Section 3.2, this limit is defined over all Ω thanks to the integral operator T_{k,γ̂_g}. It is also the solution to the minimum-norm interpolant problem in the RKHS (Jacot et al., 2018), therefore explaining why the discriminator does not overfit in scarce data regimes (see Section 6), and consequently has bounded gradients despite large training times.
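The closed form of Proposition 5 can also be evaluated numerically; the sketch below (our own restriction to the training set, using a matrix exponential) computes f_t on supp γ̂_g, where T_{k,γ̂_g} reduces to a Gram matrix weighted by the mixture γ̂_g = ½(α̂_g + β̂).

```python
import jax.numpy as jnp
from jax.scipy.linalg import expm

def lsgan_discriminator_on_train(k, fake, real, t=1.0, f0=None):
    """Eq. (15) evaluated on the training samples (fake then real)."""
    x = jnp.concatenate([fake, real], axis=0)
    nf, nr = fake.shape[0], real.shape[0]
    # Per-sample weights of hat gamma_g = (hat alpha_g + hat beta) / 2.
    w = jnp.concatenate([jnp.full(nf, 0.5 / nf), jnp.full(nr, 0.5 / nr)])
    T = k(x, x) * w[None, :]                # T_{k, hat gamma_g} restricted to supp(hat gamma_g)
    # rho = d(beta - alpha)/d(beta + alpha): -1 on fake samples, +1 on real ones.
    rho = jnp.concatenate([-jnp.ones(nf), jnp.ones(nr)])
    f0 = jnp.zeros_like(rho) if f0 is None else f0
    return expm(-4.0 * t * T) @ (f0 - rho) + rho
```

As t grows, the output converges to ρ, i.e. −1 on fake samples and +1 on real ones, at a rate governed by the spectrum of the weighted Gram matrix.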
We also prove a generalization of this optimality conclusion for concave bounded losses in Appendix A.5.

Following the discussion initiated in Section 3.2 and applying it to LSGAN using Proposition 5, similarly to the Jensen-Shannon case, the resulting generator loss on discrete training data is constant when the discriminator is optimal. However, the gradients received by the generator are not necessarily null, as e.g. in the empirical analysis of Section 6. This is because the learning problem of the generator induced by the discriminator makes the generator minimize another loss C, as explained in Section 4.4. This raises the question of determining C for LSGAN and other standard losses. Furthermore, the same problem arises in the case of incompletely trained discriminators f_t. Unlike the IPM case, for which the results of Arbel et al. (2019), who leveraged the theory of Ambrosio et al. (2008), led to a remarkable solution, this connection remains to be established for other adversarial losses. We leave this as future work.

6. Empirical Study

We present a selection of empirical results for different losses and architectures to show the relevance of our framework, with more insights in Appendix C, by evaluating its adequacy and practical implications on GAN convergence. All experiments are performed with the proposed Generative Adversarial Neural Tangent Kernel Toolkit, GAN(TK)², that we release at https://github.com/emited/gantk2 in the hope that the community leverages and expands it for principled GAN analyses. It is based on the JAX Neural Tangents library (Novak et al., 2020), and is convenient to evaluate architectures and losses based on different visualizations and analyses. For the sake of efficiency, and for these experiments only, we choose f_0 = 0 using the antisymmetrical initialization (Zhang et al., 2020). Indeed, in the analytical computations of the infinite-width regime, taking into account all previous discriminator states for each generator step is computationally infeasible. This choice also allows us to ignore residual gradients from the initialization, which introduce noise in the optimization process.

Figure 1. Values of c∘f for LSGAN and IPM, where f is a 3-layer ReLU MLP with bias and varying width (64, 128, 256, 512 and infinite), trained on the dataset represented by the real and fake markers, initialized at f_0 = 0. The infinite-width network is trained for a time τ = 1 and the finite-width networks using 10 gradient descent steps with learning rate ε = 0.1, to make training times correspond. The gradients ∇_x c_f are shown with white arrows on the two-dimensional plots for the fake distribution.

Adequacy for fixed distributions. We first study the case where generated and target distributions are fixed. In this setting, we qualitatively study the similarity between the finite- and infinite-width regimes of the discriminator. Figure 1 shows c∘f and its gradients on one- and two-dimensional data for LSGAN and IPM losses with a ReLU MLP with 3 hidden layers of varying widths.
We find the behavior of finite-width discriminators to be close to their infinite-width counterpart for standard widths, and it converges rapidly to the given limit as the width becomes larger. In the rest of this section, we focus on the study of the convergence of the generated distribution.

Experimental setting. We consider a target distribution sampled from 8 Gaussians evenly distributed on a centered sphere (cf. Figure 2), in a setup similar to that of Metz et al. (2017), Srivastava et al. (2017) and Arjovsky et al. (2017). We alleviate the complexity of the analysis by following Equation (12) with T_{k_{g_ℓ},p_z} = id, similarly to Mroueh et al. (2019) and Arbel et al. (2019), thereby modeling the generator's evolution by considering a finite number of samples, initially Gaussian. For IPM and LSGAN losses, we evaluate the convergence of the generated distributions for a discriminator with ReLU activations in the finite- and infinite-width regimes, either with or without bias. We also comparatively evaluate the advantages of this architecture by considering the case where the infinite-width loss is not given by an NTK, but by the popular Radial Basis Function (RBF) kernel, which is characteristic and presents attractive properties (Muandet et al., 2017). We refer to Figure 2 for qualitative results and to Table 1 in Appendix C for a numerical evaluation. Note that similar results for more datasets, including MNIST and CelebA, and architectures are available in Appendix C.

Figure 2. Generator and target samples for different methods; in the background, c∘f. Panels: Initialization; IPM, ReLU; IPM, ReLU, no bias; LSGAN, ReLU; each in the infinite- and finite-width regimes.

Adequacy. We observe correlated performances between the finite- and infinite-width regimes, ReLU networks being considerably better in the latter. Remarkably, for the infinite-width IPM, generated and target distributions perfectly match. This can be explained by the high capacity of infinite-width networks; it has already been shown that NTKs benefit from low-data regimes (Arora et al., 2020).

Impact of bias. The bias-free discriminator performs worse than its counterpart with bias, for both regimes and both losses. This is in line with the findings of e.g. Basri et al. (2020), and can be explained in our theoretical framework by comparing their NTKs. Indeed, the NTK of a bias-free ReLU network is not characteristic, whereas its counterpart with bias was proven to present powerful approximation properties (Ji et al., 2020). Furthermore, the results of Section 4.3 state that the ReLU NTK with bias is differentiable at 0, whereas its bias-free version is not, which can disrupt optimization based on its gradients: note in Figure 2 the abrupt streaks of the discriminator directed towards 0 and their consequences on convergence.

NTK vs. RBF. We observe the superiority of NTKs over the RBF kernel. This highlights that the gradients of a ReLU network with bias are particularly well adapted to GANs. Visualizations of these gradients in the infinite-width limit are available in Appendix C.4 and further corroborate these findings. More generally, we believe that the NTK of ReLU networks could be of particular interest for kernel methods requiring the computation of a spatial gradient, like Stein variational gradient descent (Liu & Wang, 2016).
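For completeness, here is a compact stand-alone version of the particle-descent setup above (our simplification, not the released GAN(TK)² scripts): an 8-Gaussians target, Gaussian-initialized particles, and updates following the infinite-width IPM discriminator's gradient with T_{k_{g_ℓ},p_z} = id, i.e. the NTK MMD flow of Section 5.1.

```python
import jax
import jax.numpy as jnp
from neural_tangents import stax

# Infinite-width ReLU discriminator with bias (illustrative widths).
_, _, kernel_fn = stax.serial(stax.Dense(128, b_std=0.1), stax.Relu(),
                              stax.Dense(128, b_std=0.1), stax.Relu(),
                              stax.Dense(1, b_std=0.1))
ntk = lambda x1, x2: kernel_fn(x1, x2, 'ntk')

key_real, key_idx, key_fake = jax.random.split(jax.random.PRNGKey(0), 3)
angles = 2.0 * jnp.pi * jnp.arange(8) / 8
centers = 5.0 * jnp.stack([jnp.cos(angles), jnp.sin(angles)], axis=1)
real = (centers[jax.random.randint(key_idx, (256,), 0, 8)]
        + 0.3 * jax.random.normal(key_real, (256, 2)))       # 8-Gaussians target
fake = jax.random.normal(key_fake, (256, 2))                  # initial generated particles

def c_f(x, fake, real, tau=1.0):
    # IPM with f_0 = 0 and c = id: c(f_tau(x)) = tau * (E_fake[k(x, .)] - E_real[k(x, .)]).
    x = x[None, :]
    return tau * (ntk(x, fake).mean() - ntk(x, real).mean())

grad_c = jax.grad(c_f)                                        # gradient w.r.t. x only
for _ in range(200):                                          # explicit Euler on Eq. (12)
    fake = fake - 0.5 * jax.vmap(lambda xi: grad_c(xi, fake, real))(fake)
```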
7. Conclusion

Leveraging the theory of infinite-width neural networks, we propose a framework of analysis for GANs explicitly modeling a large variety of discriminator architectures under the alternating optimization setting. We show that the proposed framework more accurately models GAN training compared to prior approaches by deriving properties of the trained discriminator. We demonstrate the analysis opportunities of the proposed modeling by studying the generated distribution, which we find to follow a gradient flow on probability spaces minimizing some functional that we characterize. We further study the latter for specific GAN losses and architectures, both theoretically and empirically, notably using our public GAN analysis toolkit. We believe that this work will serve as a basis for more elaborate analyses, thus leading to more principled, better GAN models.

Acknowledgements

We would like to thank all members of the MLIA team from the ISIR laboratory of Sorbonne Université for helpful discussions and comments. We acknowledge financial support from the DEEPNUM ANR project (ANR-21-CE23-0017-02), the ETH Foundations of Data Science, and the European Union's Horizon 2020 research and innovation programme under grant agreement 825619 (AI4EU). This work was granted access to the HPC resources of IDRIS under allocations 2020-AD011011360 and 2021-AD011011360R1 made by GENCI (Grand Equipement National de Calcul Intensif). Patrick Gallinari is additionally funded by the 2019 ANR AI Chairs program via the DL4CLIM project.

References

Adler, R. J. The Geometry of Random Fields. Society for Industrial and Applied Mathematics, December 1981.

Adler, R. J. An introduction to continuity, extrema, and related topics for general Gaussian processes. Lecture Notes-Monograph Series, 12:i-155, 1990.

Alemohammad, S., Wang, Z., Balestriero, R., and Baraniuk, R. G. The recurrent neural tangent kernel. In International Conference on Learning Representations, 2021.

Allen-Zhu, Z., Li, Y., and Song, Z. A convergence theory for deep learning via over-parameterization. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 242-252. PMLR, June 2019.

Ambrosio, L. and Crippa, G. Continuity equations and ODE flows with non-smooth velocity. Proceedings of the Royal Society of Edinburgh: Section A Mathematics, 144(6):1191-1244, 2014.

Ambrosio, L., Gigli, N., and Savaré, G. Gradient Flows. Birkhäuser Basel, Basel, Switzerland, 2008.

Arbel, M., Korba, A., Salim, A., and Gretton, A. Maximum mean discrepancy gradient flow. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 6484-6494. Curran Associates, Inc., 2019.

Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations, 2017.

Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein generative adversarial networks. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 214-223. PMLR, August 2017.

Arora, S., Ge, R., Liang, Y., Ma, T., and Zhang, Y. Generalization and equilibrium in generative adversarial nets (GANs). In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 224-232. PMLR, August 2017.
Arora, S., Du, S. S., Hu, W., Li, Z., Salakhutdinov, R., and Wang, R. On exact computation with an infinitely wide neural net. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 8141-8150. Curran Associates, Inc., 2019.

Arora, S., Du, S. S., Li, Z., Salakhutdinov, R., Wang, R., and Yu, D. Harnessing the power of infinitely wide deep nets on small-data tasks. In International Conference on Learning Representations, 2020.

Bai, Y., Ma, T., and Risteski, A. Approximability of discriminators implies diversity in GANs. In International Conference on Learning Representations, 2019.

Balaji, Y., Sajedi, M., Kalibhat, N. M., Ding, M., Stöger, D., Soltanolkotabi, M., and Feizi, S. Understanding overparameterization in generative adversarial networks. In International Conference on Learning Representations, 2021.

Basri, R., Galun, M., Geifman, A., Jacobs, D., Kasten, Y., and Kritchman, S. Frequency bias in neural networks for input of non-uniform density. In Daumé, III, H. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 685-694. PMLR, July 2020.

Biau, G., Sangnier, M., and Tanielian, U. Some theoretical insights into Wasserstein GANs. Journal of Machine Learning Research, 22(119):1-45, 2021.

Bietti, A. and Bach, F. Deep equals shallow for ReLU networks in kernel regimes. In International Conference on Learning Representations, 2021.

Bietti, A. and Mairal, J. On the inductive bias of neural tangent kernels. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 12893-12904. Curran Associates, Inc., 2019.

Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.

Chen, L. and Xu, S. Deep neural tangent kernel and Laplace kernel have the same RKHS. In International Conference on Learning Representations, 2021.

Cheng, X. and Xie, Y. Neural tangent kernel maximum mean discrepancy. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P. S., and Wortman Vaughan, J. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 6658-6670. Curran Associates, Inc., 2021.

Chu, C., Minami, K., and Fukumizu, K. The equivalence between Stein variational gradient descent and black-box variational inference. arXiv preprint arXiv:2004.01822, 2020.

Corless, R. M., Gonnet, G. H., Hare, D. E. G., Jeffrey, D. J., and Knuth, D. E. On the Lambert W function. Advances in Computational Mathematics, 5(1):329-359, December 1996.

Corless, R. M., Ding, H., Higham, N. J., and Jeffrey, D. J. The solution of S exp(S) = A is not always the Lambert W function of A. In Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, ISSAC '07, pp. 116-121, New York, NY, USA, 2007. Association for Computing Machinery.
Domingo-Enrich, C., Jelassi, S., Mensch, A., Rotskoff, G., and Bruna, J. A mean-field analysis of two-player zero-sum games. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 20215-20226. Curran Associates, Inc., 2020.

Duncan, A., Nüsken, N., and Szpruch, L. On the geometry of Stein variational gradient descent. arXiv preprint arXiv:1912.00894, 2019.

Fan, Z. and Wang, Z. Spectra of the conjugate kernel and neural tangent kernel for linear-width neural networks. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 7710-7721. Curran Associates, Inc., 2020.

Farkas, B. and Wegner, S.-A. Variations on Barbălat's lemma. The American Mathematical Monthly, 123(8):825-830, 2016.

Feydy, J., Séjourné, T., Vialard, F.-X., Amari, S.-i., Trouvé, A., and Peyré, G. Interpolating between optimal transport and MMD using Sinkhorn divergences. In Chaudhuri, K. and Sugiyama, M. (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 2681-2690. PMLR, April 2019.

Geiger, M., Spigler, S., Jacot, A., and Wyart, M. Disentangling feature and lazy training in deep neural networks. Journal of Statistical Mechanics: Theory and Experiment, 2020(11), November 2020.

Goodfellow, I. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems, volume 27, pp. 2672-2680. Curran Associates, Inc., 2014.

Gretton, A., Borgwardt, K. M., Rasch, M., Schölkopf, B., and Smola, A. A kernel method for the two-sample problem. In Schölkopf, B., Platt, J. C., and Hoffman, T. (eds.), Advances in Neural Information Processing Systems, volume 19, pp. 513-520. MIT Press, 2007.

Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723-773, 2012.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, June 2016.

Higham, N. J. Functions of Matrices: Theory and Computation. Society for Industrial and Applied Mathematics, 2008.

Hornik, K., Stinchcombe, M., and White, H. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.

Hron, J., Bahri, Y., Sohl-Dickstein, J., and Novak, R. Infinite attention: NNGP and NTK for deep attention networks. In Daumé, III, H. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 4376-4386. PMLR, July 2020.

Huang, K., Wang, Y., Tao, M., and Zhao, T. Why do deep residual networks generalize better than deep feedforward networks? A neural tangent kernel perspective. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 2698-2709. Curran Associates, Inc., 2020.

Iacono, R. and Boyd, J. P. New approximations to the principal real-valued branch of the Lambert W-function. Advances in Computational Mathematics, 43(6):1403-1436, 2017.
Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 8580-8589. Curran Associates, Inc., 2018.

Jacot, A., Gabriel, F., Ged, F., and Hongler, C. Order and chaos: NTK views on DNN normalization, checkerboard and boundary artifacts. arXiv preprint arXiv:1907.05715, 2019.

Jain, N., Olmo, A., Sengupta, S., Manikonda, L., and Kambhampati, S. Imperfect imaGANation: Implications of GANs exacerbating biases on facial data augmentation and Snapchat selfie lenses. arXiv preprint arXiv:2001.09528, 2020.

Ji, Z., Telgarsky, M., and Xian, R. Neural tangent kernels, transportation mappings, and universal approximation. In International Conference on Learning Representations, 2020.

Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4396-4405, June 2019.

Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. Analyzing and improving the image quality of StyleGAN. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8107-8116, June 2020.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.

Kurach, K., Lucic, M., Zhai, X., Michalski, M., and Gelly, S. A large-scale study on regularization and normalization in GANs. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 3581-3590. PMLR, June 2019.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.

Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.

Lee, J., Xiao, L., Schoenholz, S. S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 8572-8583. Curran Associates, Inc., 2019.

Lee, J., Schoenholz, S. S., Pennington, J., Adlam, B., Xiao, L., Novak, R., and Sohl-Dickstein, J. Finite versus infinite neural networks: an empirical study. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 15156-15172. Curran Associates, Inc., 2020.

Leipnik, R. B. and Pearce, C. E. M. The multivariate Faà di Bruno formula and multivariate Taylor expansions with explicit integral remainder term. The ANZIAM Journal, 48(3):327-341, 2007.

Leshno, M., Lin, V. Y., Pinkus, A., and Schocken, S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861-867, 1993.

Li, C.-L., Chang, W.-C., Cheng, Y., Yang, Y., and Póczos, B. MMD GAN: Towards deeper understanding of moment matching network. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30, pp. 2200-2210. Curran Associates, Inc., 2017.
Lim, J. H. and Ye, J. C. Geometric GAN. arXiv preprint arXiv:1705.02894, 2017.

Littwin, E., Galanti, T., Wolf, L., and Yang, G. On infinite-width hypernetworks. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 13226-13237. Curran Associates, Inc., 2020a.

Littwin, E., Myara, B., Sabah, S., Susskind, J., Zhai, S., and Golan, O. Collegial ensembles. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 18738-18748. Curran Associates, Inc., 2020b.

Liu, C., Zhu, L., and Belkin, M. On the linearity of large non-linear models: when and why the tangent kernel is constant. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 15954-15964. Curran Associates, Inc., 2020.

Liu, M.-Y., Huang, X., Yu, J., Wang, T.-C., and Mallya, A. Generative adversarial networks for image and video synthesis: Algorithms and applications. Proceedings of the IEEE, 109(5):839-862, 2021.

Liu, Q. and Wang, D. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 2378-2386. Curran Associates, Inc., 2016.

Liu, S., Bousquet, O., and Chaudhuri, K. Approximation and convergence properties of generative adversarial learning. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30, pp. 5551-5559. Curran Associates, Inc., 2017.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In IEEE International Conference on Computer Vision (ICCV), pp. 3730-3738, December 2015.

Lucic, M., Kurach, K., Michalski, M., Gelly, S., and Bousquet, O. Are GANs created equal? A large-scale study. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 698-707. Curran Associates, Inc., 2018.

Mao, X., Li, Q., Xie, H., Lau, R. Y. K., Wang, Z., and Paul Smolley, S. Least squares generative adversarial networks. In IEEE International Conference on Computer Vision (ICCV), pp. 2813-2821, October 2017.

Mescheder, L., Nowozin, S., and Geiger, A. The numerics of GANs. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30, pp. 1823-1833. Curran Associates, Inc., 2017.

Mescheder, L., Geiger, A., and Nowozin, S. Which training methods for GANs do actually converge? In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3481-3490. PMLR, July 2018.

Metz, L., Poole, B., Pfau, D., and Sohl-Dickstein, J. Unrolled generative adversarial networks. In International Conference on Learning Representations, 2017.
Metz, L., Poole, B., Pfau, D., and Sohl-Dickstein, J. Unrolled generative adversarial networks. In International Conference on Learning Representations, 2017.
Mroueh, Y. and Nguyen, T. On the convergence of gradient descent in GANs: MMD GAN as a gradient flow. In Banerjee, A. and Fukumizu, K. (eds.), Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pp. 1720-1728. PMLR, April 2021.
Mroueh, Y., Sercu, T., and Raj, A. Sobolev descent. In Chaudhuri, K. and Sugiyama, M. (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 2976-2985. PMLR, April 2019.
Muandet, K., Fukumizu, K., Sriperumbudur, B., and Schölkopf, B. Kernel mean embedding of distributions: A review and beyond. Foundations and Trends in Machine Learning, 10(1-2):1-141, 2017.
Müller, A. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, 1997.
Novak, R., Xiao, L., Hron, J., Lee, J., Alemi, A. A., Sohl-Dickstein, J., and Schoenholz, S. S. Neural Tangents: Fast and easy infinite neural networks in Python. In International Conference on Learning Representations, 2020. URL https://github.com/google/neural-tangents.
Nowozin, S., Cseke, B., and Tomioka, R. f-GAN: Training generative neural samplers using variational divergence minimization. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 271-279. Curran Associates, Inc., 2016.
Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Xing, E. P. and Jebara, T. (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 1278-1286, Beijing, China, June 2014. PMLR.
Sahiner, A., Ergen, T., Ozturkler, B., Bartan, B., Pauly, J. M., Mardani, M., and Pilanci, M. Hidden convexity of Wasserstein GANs: Interpretable generative models with closed-form solutions. In International Conference on Learning Representations, 2022.
Scheuerer, M. A Comparison of Models and Methods for Spatial Interpolation in Statistics and Numerical Analysis. PhD thesis, Georg-August-Universität Göttingen, October 2009. URL https://ediss.uni-goettingen.de/handle/11858/00-1735-0000-0006-B3D5-1.
Sohl-Dickstein, J., Novak, R., Schoenholz, S. S., and Lee, J. On the infinite width limit of neural networks with a standard parameterization. arXiv preprint arXiv:2001.07301, 2020.
Sriperumbudur, B. K., Gretton, A., Fukumizu, K., Schölkopf, B., and Lanckriet, G. R. G. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11(50):1517-1561, 2010.
Sriperumbudur, B. K., Fukumizu, K., and Lanckriet, G. R. G. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12(70):2389-2410, 2011.
Srivastava, A., Valkov, L., Russell, C., Gutmann, M. U., and Sutton, C. VEEGAN: Reducing mode collapse in GANs using implicit variational learning. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30, pp. 3310-3320. Curran Associates, Inc., 2017.
Steinwart, I. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67-93, November 2001.
Sun, R., Fang, T., and Schwing, A. Towards a better global loss landscape of GANs. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 10186-10198. Curran Associates, Inc., 2020.
Sønderby, C. K., Caballero, J., Theis, L., Shi, W., and Huszár, F. Amortised MAP inference for image super-resolution. In International Conference on Learning Representations, 2017.
Tancik, M., Srinivasan, P. P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J. T., and Ng, R. Fourier features let networks learn high frequency functions in low dimensional domains. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., and Lin, H.-T. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 7537-7547. Curran Associates, Inc., 2020.
Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., and Ortega-Garcia, J. DeepFakes and beyond: A survey of face manipulation and fake detection. Information Fusion, 64:131-148, 2020.
Villani, C. The Wasserstein distances, pp. 93-111. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, Heidelberg, Germany, 2009.
Wang, Z., She, Q., and Ward, T. E. Generative adversarial networks in computer vision: A survey and taxonomy. ACM Computing Surveys, 54(2), April 2021.
Yang, G. Tensor programs II: Neural tangent kernel for any architecture. arXiv preprint arXiv:2006.14548, 2020.
Yang, G. and Hu, E. J. Tensor programs IV: Feature learning in infinite-width neural networks. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11727-11737. PMLR, July 2021.
Yang, G. and Salman, H. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019.
Yang, H. and E, W. Generalization error of GAN from the discriminator's perspective. Research in the Mathematical Sciences, 9(8), 2022.
Zhang, Y., Xu, Z.-Q. J., Luo, T., and Ma, Z. A type of generalization error induced by initialization in deep neural networks. In Lu, J. and Ward, R. (eds.), Proceedings of The First Mathematical and Scientific Machine Learning Conference, volume 107 of Proceedings of Machine Learning Research, pp. 144-164, Princeton University, Princeton, NJ, USA, July 2020. PMLR.
Zhou, Z., Liang, J., Song, Y., Yu, L., Wang, H., Zhang, W., Yu, Y., and Zhang, Z. Lipschitz generative adversarial nets. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 7584-7593, Long Beach, California, USA, June 2019. PMLR.

Table of Contents
A Proofs of Theoretical Results and Additional Results
  A.1 Recall of Assumptions in the Paper
  A.2 On the Solutions of Equation (9)
  A.3 Differentiability of Infinite-Width Networks and their NTKs
  A.4 Dynamics of the Generated Distribution
  A.5 Optimality in Concave Setting
  A.6 Case Studies of Discriminator Dynamics
B Discussions and Remarks
  B.1 From Finite to Infinite-Width Networks
  B.2 Loss of the Generator and its Gradient
  B.3 Differentiability of the Bias-Free ReLU Kernel
  B.4 Integral Operator and Instance Noise
  B.5 Positive Definite NTKs
  B.6 Societal Impact
C GAN(TK)2 and Further Empirical Analyses
  C.1 Two-Dimensional Datasets
  C.2 ReLU vs. Sigmoid Activations
  C.3 Qualitative MNIST and CelebA Experiment
  C.4 Visualizing the Gradient Field Induced by the Discriminator
D Experimental Details
  D.1 GAN(TK)2 Specifications and Computing Resources
  D.2 Datasets
  D.3 Parameters

In the course of this appendix, for the sake of clarity, we drop the subscript $g$ from $\hat{\gamma}_g$, $\hat{\alpha}_g$ and other notations whenever the dependency on a fixed generator $g$ is clear and indicated in the main paper.

A. Proofs of Theoretical Results and Additional Results
We prove in this section all theoretical results mentioned in Sections 4 and 5. Appendix A.2 is devoted to the proof of Theorem 1, Appendix A.3 focuses on proving the differentiability results skimmed in Section 4.3, Appendix A.4 contains the demonstration of Proposition 3, and Appendices A.5 and A.6 develop the results presented in Section 5.
We will need in the course of these proofs the following standard definition: for any measurable function $T$ and measure $\mu$, $T_\sharp\mu$ denotes the push-forward measure, defined as $T_\sharp\mu(B) = \mu\left(T^{-1}(B)\right)$ for any measurable set $B$.

A.1. Recall of Assumptions in the Paper
Assumption 1 (Finite training set). $\hat{\gamma} \in \mathcal{P}(\Omega)$ is a finite mixture of Diracs.
Assumption 2 (Kernel). $k\colon \Omega^2 \to \mathbb{R}$ is a symmetric positive semi-definite kernel with $k \in L^2\left(\Omega^2\right)$.
Assumption 3 (Loss regularity). $a$ and $b$ from Equation (2) are differentiable with Lipschitz derivatives over $\mathbb{R}$.
Assumption 4 (Discriminator architecture). The discriminator is a standard architecture (fully connected, convolutional or residual). Any activation $\phi$ in the network satisfies the following properties: $\phi$ is smooth everywhere except on a finite set $D$, and for all $j \in \mathbb{N}$, there exist scalars $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$ such that:
$\forall x \in \mathbb{R} \setminus D, \quad \left|\phi^{(j)}(x)\right| \leq \lambda_1^{(j)} |x| + \lambda_2^{(j)}$, (16)
where $\phi^{(j)}$ is the $j$-th derivative of $\phi$.
Assumption 5 (Discriminator regularity). $D = \emptyset$, i.e. $\phi$ is smooth.
Assumption 6 (Discriminator bias). Linear layers have non-null bias terms. Moreover, for all x, y R such that x = y, the following holds: Eε N(0,1)φ(xε)2 = Eε N(0,1)φ(yε)2. (17) Remark 5 (Typical activations). Assumptions 4 to 6 cover multiple standard activation functions, including tanh, softplus, Re LU, leaky Re LU and sigmoid. A.2. On the Solutions of Equation (9) The methods used in this section are adaptations to our setting of standard methods of proof. In particular, they can be easily adapted to slightly different contexts, the main ingredient being the structure of the kernel integral operator. Moreover, it is also worth noting that, although we relied on Assumption 1 for ˆγ, the results are essentially unchanged if we take a compactly supported measure γ instead. We decompose the proof into several intermediate results. Theorem 3 and Proposition 6, stated and demonstrated in this section, correspond when combined to Theorem 1. Let us first prove the following two intermediate lemmas. Lemma 1. Let δT > 0 and FδT = C [0, δT], BL2(ˆγ)(f0, 1) endowed with the norm: u FδT , u = sup t [0,δT ] ut L2(ˆγ). (18) Then FδT is complete. Proof. Let (un)n be a Cauchy sequence in FδT . For a fixed t [0, δT]: n, m, un t um t L2(ˆγ) un um , (19) A Neural Tangent Kernel Perspective of GANs which shows that (un t )n is a Cauchy sequence in L2(ˆγ). L2(ˆγ) being complete, (un t )n converges to a u t L2(ˆγ). Moreover, for ε > 0, because (un) is Cauchy, we can choose N such that: n, m N, un um ε. (20) We thus have that: t, n, m N, un t um t L2(ˆγ) ε. (21) Then, by taking m to , by continuity of the L2(ˆγ) norm: t, n N, un t u t L2(ˆγ) ε, (22) which means that: n N, un u ε. (23) so that (un)n tends to u . Moreover, as: n, un t L2(ˆγ) 1, (24) we have that u t L2(ˆγ) 1. Finally, let us consider s, t [0, δT]. We have that: n, u t u s L2(ˆγ) u t un t L2(ˆγ) + un t un s L2(ˆγ) + u s un s L2(ˆγ). (25) The first and the third terms can then be taken as small as needed by definition of u by taking n high enough, while the second can be made to tend to 0 as t tends to s by continuity of un. This proves the continuity of u and shows that u FδT . Lemma 2. For any F L2(ˆγ), we have that F L2(ˆα) and F L2 ˆβ with: 2 F L2(ˆγ) and F L2( ˆβ) 2 F L2(ˆγ). (26) Proof. For any F L2(ˆγ), we have that F 2 L2(ˆγ) = 1 2 F 2 L2(ˆα) + 1 2 F 2 L2( ˆβ), (27) so that F L2(ˆα) and F L2 ˆβ with: F 2 L2(ˆα) = 2 F 2 L2(ˆγ) F 2 L2( ˆβ) 2 F 2 L2(ˆγ), F 2 L2( ˆβ) = 2 F L2(ˆγ) F L2(ˆα) 2 F 2 L2(ˆγ), (28) which allows us to conclude. From this, we can prove the existence and uniqueness of the initial value problem from Equation (9). Theorem 3 (Existence and Uniqueness). Under Assumptions 1 to 3, Equation (9) with initial value f0 admits a unique solution f : R+ L2(Ω). A Neural Tangent Kernel Perspective of GANs A few inequalities. We start this proof by proving a few inequalities. Let f, g L2(ˆγ). We have, by the Cauchy-Schwarz inequality, for all z Ω: Tk,ˆγ ˆγLˆα(f) Tk,ˆγ ˆγLˆα(g) (z) k(z, ) L2(ˆγ) ˆγLˆα(f) ˆγLˆα(g) L2(ˆγ). (29) Moreover, by definition: D ˆγLˆα(f) ˆγLˆα(g), h E L2(ˆγ) = Z a f a g h dˆα Z b f b g h dˆβ, (30) ˆγLˆα(f) ˆγLˆα(g) 2 L2(ˆγ) ˆγLˆα(f) ˆγLˆα(g) L2(ˆγ) a f a g L2(ˆα) + b f b g L2( ˆβ) and then, along with Lemma 2: ˆγLˆα(f) ˆγLˆα(g) L2(ˆγ) a f a g L2(ˆα) + b f b g L2( ˆβ) 2 a f a g L2(ˆγ) + b f b g L2(ˆγ) By Assumption 3, we know that a and b are Lipschitz with constants that we denote K1 and K2. 
We can then write for all x: a f(x) a g(x) K1 f(x) g(x) , b f(x) b g(x) K2 f(x) g(x) , (33) so that: a f a g L2(ˆγ) K1 f g L2(ˆγ), b f b g L2(ˆγ) K2 f g L2(ˆγ). (34) Finally, we can now write, for all z Ω: Tk,ˆγ ˆγLˆα(f) Tk,ˆγ ˆγLˆα(g) (z) 2(K1 + K2) f g L2(ˆγ) k(z, ) L2(ˆγ), (A) and then: Tk,ˆγ ˆγLˆα(f) Tk,ˆγ ˆγLˆα(g) L2(ˆγ) K f g L2(ˆγ), (B) 2(K1 + K2) q R k(z, ) 2 L2(ˆγ) dˆγ(z) is finite as a finite sum of finite terms from Assumptions 1 and 2. In particular, putting g = 0 and using the triangular inequality also gives us: Tk,ˆγ ˆγLˆα(f) L2(ˆγ) K f L2(ˆγ) + M, (B ) where M = Tk,ˆγ ˆγLˆα(0) L2(ˆγ). Existence and uniqueness in L2(ˆγ). We now adapt the standard fixed point proof to prove existence and uniqueness of a solution to the studied equation in L2(ˆγ). We consider the family of spaces FδT = C [0, δT], BL2(ˆγ)(f0, 1) . FδT is defined, for δT > 0, as the space of continuous functions from [0, δT] to the closed ball of radius 1 centered around f0 in L2(ˆγ) which we endow with the norm: u FδT , u = sup t [0,δT ] ut L2(ˆγ). (35) A Neural Tangent Kernel Perspective of GANs We now define the application Φ where Φ(u) is defined as, for any u FδT : Φ(u)t = f0 + Z t 0 Tk,ˆγ ˆγLˆα(us) ds. (36) We have, using Equation (B ): Φ(u)t f0 L2(ˆγ) Z t K us L2(ˆγ) + M ds (K + M)δT. (37) Thus, taking δT = 2(K + M) 1 makes Φ an application from FδT into itself. Moreover, we have: u, v FδT , Φ(u) Φ(v) 1 2 u v , (38) which means that Φ is a contraction of FδT . Lemma 1 and the Banach-Picard theorem then tell us that Φ has a unique fixed point in FδT . It is then obvious that such a fixed point is a solution of Equation (9) over [0, δT]. Let us now consider the maximal T > 0 such that a solution ft of Equation (9) is defined over [0, T). We have, using Equation (B ): t [0, T), ft L2(ˆγ) f0 L2(ˆγ) + Z t fs L2(ˆγ) + M ds, (39) which, using Grönwall s lemma, gives: t [0, T), ft L2(ˆγ) f0 L2(ˆγ)e KT + M e KT 1 . (40) Define gn = f T 1 n . We have, again using Equation (B ): m n, gn gm L2(ˆγ) Z T 1 n (K fs + M) ds 1 f0 L2(ˆγ)e KT + M e KT 1 , (41) which shows that (gn)n is a Cauchy sequence. L2(ˆγ) being complete, we can thus consider its limit g . Clearly, ft tends to g in L2(ˆγ). By considering the initial value problem associated with Equation (9) starting from g , we can thus extend the solution ft to [0, T + δT), thus contradicting the maximality of T, which proves that the solution can be extended to R+. Existence and uniqueness in L2(Ω). We now conclude the proof by extending the previous solution to L2(Ω). We keep the same notations as above and, in particular, f is the unique solution of Equation (9) with initial value f0. Let us define f as: t, x, ft(x) = f0(x) + Z t 0 Tk,ˆγ ˆγLˆα(fs) (x) ds, (42) where the r.h.s. only depends on f and is thus well-defined. By remarking that f is equal to f on supp ˆγ and that, for every s, ˆγLˆα fs = Tk,ˆγ = Tk,ˆγ ˆγLˆα(fs) , (43) we see that f is solution to Equation (9). Moreover, from Assumption 2, we know that, for any z Ω, R k(z, x)2 dΩ(x) is finite and, from Assumption 1, that k(z, ) 2 L2(ˆγ) is a finite sum of terms k(z, xi)2 which shows that R k(z, ) 2 L2(ˆγ) dΩ(z) is finite, again from Assumption 2. We can then say that fs L2(Ω) for any s by using the above with Equation (A) taken for g = 0. Finally, suppose h is a solution to Equation (9) with initial value f0. We know that h|supp ˆγ coincides with f and thus with f supp ˆγ in L2(ˆγ) as we already proved uniqueness in the latter space. 
Thus, we have that hs|supp ˆγ fs supp ˆγ A Neural Tangent Kernel Perspective of GANs for any s. Now, we have: Tk,ˆγ ˆγLˆα(hs) Tk,ˆγ ˆγLˆα hs|supp ˆγ Tk,ˆγ by Equation (A). This shows that t f h = 0 and, given that h0 = f0 = f0, we have h = f which concludes the proof. There only remains to prove for Theorem 1 the inversion between the integral over time and the integral operator. We first prove an intermediate lemma and then conclude with the proof of the inversion. Lemma 3. Under Assumptions 1 to 3, R T 0 a L2((fs) ˆα) + b L2((fs) ˆβ) ds is finite for any T > 0. Proof. Let T > 0. We have, by Assumption 3 and the triangular inequality: x, a f(x) K1 f(x) + M1, (45) where M1 = a (0) . We can then write, using Lemma 2 and the inequality from Equation (40): s T, a L2((fs) ˆα) K1 2 fs L2(ˆγ) + M1 K1 2 f0 L2(ˆγ)e KT + M e KT 1 + M1, (46) the latter being constant in s and thus integrable on [0, T]. We can then bound b L2((fs) ˆβ) similarly, which concludes the Proposition 6 (Integral inversion). Under Assumptions 1 to 3, the following integral inversion holds: ft = f0 + Z t 0 Tkf ,ˆγ ˆγLˆα, ˆβ(fs) ds = f0 + Tkf ,ˆγ 0 ˆγLˆα, ˆβ(fs) ds Proof. By definition, a straightforward computation gives, for any function h L2(ˆγ): D ˆγLˆα(f), h E L2(ˆγ) = d Lˆα(f)[h] = Z a fh dˆα Z b fh dˆβ. (48) We can then write: ˆγLˆα(ft) 2 L2(ˆγ) = D ˆγLˆα(ft), ˆγLˆα(ft) E L2(ˆγ) = Z a ft ˆγLˆα(ft) dˆα Z b ft ˆγLˆα(ft) dˆβ, (49) so that, with the Cauchy-Schwarz inequality and Lemma 2: ˆγLˆα(ft) 2 L2(ˆγ) Z a ft ˆγLˆα(ft) dˆα + Z b ft ˆγLˆα(ft) dˆβ a ft L2(ˆα) ˆγLˆα(ft) L2(ˆα) + b ft L2( ˆβ) ˆγLˆα(ft) L2( ˆβ) 2 ˆγLˆα(ft) L2(ˆγ) " a ft L2(ˆα) + b ft L2( ˆβ) A Neural Tangent Kernel Perspective of GANs which then gives us: ˆγLˆα(ft) L2(ˆγ) 2 a L2((ft) ˆα) + b L2((ft) ˆβ) By the Cauchy-Schwarz inequality and Equation (51), we then have for all z: Z t k(z, x) ˆγLˆα(fs)(x) dˆγ(x) ds Z t k(z, ) L2(ˆγ) ˆγLˆα(fs) L2(ˆγ) ds 2 k(z, ) L2(ˆγ) a L2((fs) ˆα) + b L2((fs) ˆβ) The latter being finite by Lemma 3, we can now use Fubini s theorem to conclude that: Z t 0 Tkf ,ˆγ ˆγLˆα(fs) ds = Z t x k( , x) ˆγLˆα(fs)(x) dˆγ(x) ds 0 ˆγLˆα(fs)(x) ds 0 ˆγLˆα(fs)(x) ds A.3. Differentiability of Infinite-Width Networks and their NTKs Given Theorem 1, establishing the desired differentiability of ft can be done by separately proving similar results on both ft f0 and f0. In both cases, this involves the differentiability of the following activation kernel Kφ(A) given another differentiable kernel A: Kφ(A): x, y 7 Ef GP(0,A) h φ f(x) φ f(y) i , (54) where GP(0, A) is a univariate centered Gaussian Process (GP) with covariance function A. Indeed, the kernel-transforming operator Kφ is central in the recursive computation of the neural network conjugate kernel sss which determines the NTK (involved in ft f0 Hˆγg k ) as well as the behavior of the network at initialization (which follows a GP with the conjugate kernel as covariance). Hence, our proof of Theorem 2 relies on the preservation of kernel smoothness through Kφ, proved in Appendix A.3.1, which ensures the smoothness of the conjugate kernel, the NTK and, in turn, of ft as addressed in Appendix A.3.2 which concludes the overall proof. Before developing these two main steps, we first need to state the following lemma showing the regularity of samples of a GP from the regularity of the corresponding kernel. Lemma 4 (GP regularity). Let A: Rn Rn R be a symmetric kernel. Let V an open set such that A is C on V V . Then the GP induced by the kernel A has a.s. C sample paths on V . Proof. 
Because A is C on V V , we know, from Theorem 2.2.2 of Adler (1981) for example, that the corresponding GP f is mean-square smooth on V . If we take α a k-th order multi-index, we also know, again from Adler (1981), that αf is also a GP with covariance kernel αA. As A is C , αA then is differentiable and αf has partial derivatives which are mean-square continuous. Then, by the Corollary 5.3.12 of Scheuerer (2009), we can say that αf has continuous sample paths a.s. which means that f Ck(V ). This proves the lemma. A.3.1. Kφ PRESERVES KERNEL DIFFERENTIABILITY Given the definition of Kφ(A) in Equation (54), we choose to prove its differentiability via the dominated convergence theorem and Leibniz integral rule. This requires to derive separate proofs depending on whether φ is smooth everywhere or almost everywhere. A Neural Tangent Kernel Perspective of GANs The former case allows us to apply strong GP regularity results leading to Kφ preserving kernel smoothness without additional hypothesis in Lemma 5. The latter case requires a careful decomposition of the expectation of Equation (54) via two-dimensional Gaussian sampling to circumvent the non-differentiability points of φ, yielding additional constraints on kernels A for Kφ to preserve their smoothness in Lemma 6; these constraints are typically verified in the case of neural networks with bias (cf. Appendix A.3.2). In any case, we emphasize that these differentiability constraints may not be tight and are only sufficient conditions ensuring the smoothness of Kφ(A). Lemma 5 (Kφ with smooth φ). Let A: Rn Rn R be a symmetric positive semi-definite kernel and φ: R R. We suppose that φ is an activation function following Assumptions 4 and 5; in particular, φ is smooth. Let y Rn and U be an open subset of Rn such that x 7 A(x, x) and x 7 A(x, y) are infinitely differentiable over U. Then, x 7 Kφ(A)(x, x) and x 7 Kφ(A)(x, y) are infinitely differentiable over U as well. Proof. In order to prove the smoothness results over the open set U, it suffices to prove them on any open bounded subset of U. Let then V U be an open bounded set. Without loss of generality, we can assume that its closure cl V is also included in U. We define B1 and B2 from Equation (54) as follows, for all x V : B1(x) Kφ(A)(x, y) = Ef GP(0,A) h φ f(x) φ f(y) i , B2(x) Kφ(A)(x, x) = Ef GP(0,A) h φ f(x) 2i . (55) In the previous expressions, Lemma 4 tells us that we can take f to be C over cl V with probability one. Hence, B1 and B2 are expectations of smooth functions over V . We seek to apply the dominated convergence theorem to prove that B1 and B2 are, in turn, smooth over V . To this end, we prove in the following the integrability of the derivatives of their integrands. Let α = (α1, . . . , αn) Nn. Using the usual notations for multi-indexed partial derivatives, via a multivariate Faà di Bruno formula (Leipnik & Pearce, 2007), we can write the derivatives α(ψ f) at x V for ψ φ, φ2 as a weighted sum of terms of the form: ψ(j) f(x) g1(x) g N(x), (56) where the gis are partial derivatives of f at x. As A is C over V , each of the gis is thus a GP with a C covariance function by Lemma 4. We can also write for all x V : ψ(j) f(x) g1(x) g N(x) sup z cl V ψ(j) f(z) g1(z) g N(z) sup z0 cl V ψ(j) f(z0) sup z1 cl V g1(z1) sup z N cl V g N(z N) . (57) For each i, because the covariance function of gi is smooth over the compact set cl V , its variance admits a maximum in cl V and we take σ2 i the double of its value. 
We then know from Adler (1990), that there is an Mi such that: m N, Ef GP(0,A) sup zi cl V M m i E|Yi|m, (58) where Yi is a Gaussian distribution which variance is σ2 i , the right-hand side thus being finite. We also have, by Assumption 4 from Appendix A.1, that: φ(j) f(z) 2 sup z cl V λ(j) 1 f(z) + λ(j) 2 2 , (59) which is shown to be integrable over f by the same arguments as for the gis. Moreover, the Faà di Bruno formula decomposes ψ(j) when ψ = φ2 as a weighted sum of terms of the form φ(l)φ(l ) with l, l N. Therefore, thanks to similar arguments, for any ψ φ, φ2 : ψ(j) f(z) 2 # A Neural Tangent Kernel Perspective of GANs Now, by using the Cauchy-Schwarz inequality, we have that: sup z0 cl V ψ(j) f(z0) sup z1 cl V g1(z1) sup z N cl V sup z0 cl V ψ(j) f(z0) 2 #v u u t E sup z1 cl V g1(z1) 2 sup z N cl V g N(z N) 2 # By iterated applications of the Cauchy-Schwarz inequality and using the previous arguments, we can then show that: sup z0 cl V ψ(j) f(z0) sup z1 cl V g1(z1) sup z N cl V g N(z N) (62) is integrable over f. Additionally, note that by the same arguments for the case of ψ = φ, a multiplication by φ f(y) preserves this integrability. We can then write for all x V , by a standard corollary of the dominated convergence theorem: αB1(x) = Ef GP(0,A) h α (φ f) xφ f(y) i , αB2(x) = Ef GP(0,A) which shows that B1 and B2 are C over V . This in turn means that B1 and B2 are C over U. Lemma 6 (Kφ with piecewise smooth φ). Let A: Rn Rn R be a symmetric positive semi-definite kernel and φ: R R. We suppose that φ is an activation function following Assumptions 4 and 6 (cf. Appendix A.1). Let us define the matrix Σx,y A as: Σx,y A A(x, x) A(x, y) A(x, y) A(y, y) Let y Rn and U be an open subset of Rn such that x 7 A(x, x) and x 7 A(x, y) are infinitely differentiable over U. Then, x 7 Kφ(A)(x, x) and x 7 Kφ(A)(x, y) are infinitely differentiable over U x U Σx,y A is invertible . Proof. Since det Σx,y A is smooth over U and U = x U det Σx,y A > 0 , U is an open subset of U. Hence, similarly to the proof of Lemma 5, it suffices to prove the smoothness of B1 and B2 defined in Equation (55) on any open bounded subset of U . Let then V Rn be an open bounded set such that cl V U . Note that det Σx,y A > 0 implies that A(x, x) > 0 and A(y, y) > 0. We will conduct in the following the proof that B1 is smooth over V . Like in the proof of Lemma 5, the smoothness of B2 follows the same reasoning with little adaptation; in particular, it relies on the fact that A(x, x) > 0 for all x U , making its square root smooth over cl V . Since the dominated convergence theorem cannot be directly applied from Equation (55) because of φ s potential nondifferentiability points D, let us decompose its expression for all x U : B1(x) = Ef GP(0,A) h φ f(x) φ f(y) i = E(z,z ) N((0,0),Σx,y A ) h φ(z)φ z i (65) = Ez N(0,A(y,y)) φ z E z N A(x,y) A(y,y) z ,A(x,x) A(x,y)2 = Ez N(0,A(y,y)) h φ z h z , x i , (67) where h is defined as: h z , x Z + φ(z) 1 σ(x) dz, µ(x) = A(x, y) A(y, y), σ(x) = det Σx,y A A(y, y) . (68) A Neural Tangent Kernel Perspective of GANs Now, if D = {c1, . . . , c L} with L N and c1 < < c L, the cls constitute the non-differentiability points of φ; we can then decompose the integration of φ in Equation (68) as a sum of L + 1 integrals with differentiable integrands, using c0 = and c L+1 = + : h(ε, x) = 1 φ(z) σ(x)e 1 Therefore, it remains to show the smoothness of all applications B1,l for l J0, LK defined as: B1,j(x) = Ez N(0,A(y,y)) The rest of this proof unfolds similarly to the one of Lemma 5. 
Indeed, the integrand of Equation (70) is smooth over cl V . There remains to show that all derivatives of this integrand are dominated by an integrable function of z and z . Consider the following integrand: ι z, z , x = φ z φ(z) By applying the multivariate Faà di Bruno formula and noticing that σ and µ are smooth over the closed set cl V , we know that the derivatives of ι z, z , x with respect to x for any derivation order are weighted sums of terms of the form: zkz k κ(x)φ z φ(z) where κ is a smooth function over cl V and k, k N. Moreover, because σ, µ and κ are smooth over the closed set cl V with positive values for σ, there are constants a1, a2 and a3 such that: zkz k κ(x)φ z φ(z) 2 zkz k φ z φ(z) a3e 1 which is integrable over z via Assumption 4 and Equation (16). Finally, let us notice that for some constants b1, b2 and b3: zkz k φ z φ(z) a3e 1 b1Ez N(b2z ,b3) zkz k φ z φ(z) , (74) which is also integrable with respect to Ez N(0,A(y,y)) by similar arguments (see also the integrability of Equation (58) in Lemma 5). This concludes the proof of integrability required to apply the dominated convergence theorem, allowing us to conclude about the smoothness of all B1,j and, in turn, of B1 over U . Remark 6 (Relaxed condition for smoothness). The invertibility condition of Lemma 6 is actually stronger than needed: it suffices to assume that the rank of Σx,y A remains constant in a neighborhood of x. A.3.2. DIFFERENTIABILITY OF CONJUGATE KERNELS, NTKS AND DISCRIMINATORS From the previous lemmas, we can then prove the results of Section 4.3. We start by demonstrating the smoothness of the conjugate kernel for dense networks, and conclude in consequence about the smoothness of the NTK and trained network. Lemma 7 (Differentiability of the conjugate kernel). Let kc be the conjugate kernel (Lee et al., 2018) of an infinite-width dense non-residual architecture such as in Assumption 4. For any y Rn, the following holds for A kc, Kφ (kc) : if Assumption 5 holds, then x 7 A(x, x) and x 7 A(x, y) are smooth everywhere over Rn; if Assumption 6 holds, then x 7 A(x, x) and x 7 A(x, y) are smooth over an open set whose complement has null Lebesgue measure. A Neural Tangent Kernel Perspective of GANs Proof. We define the following kernel: Cφ L(x, y) = Ef GP 0,Cφ L 1 h φ f(x) φ f(y) i + β2 = Kφ Cφ L 1 + β2, (75) with: Cφ 0 (x, y) = 1 nx y + β2. (76) We have that kc = Cφ L, with L being the number of hidden layers in the network. Therefore, Lemma 5 ensures the smoothness result under Assumption 5. Let us now consider Assumption 6 (cf. the detailed assumption in Appendix A.1); in particular, β > 0. We prove by induction over L in the following that: B1: x 7 Cφ L(x, y) is smooth over U = x Rn x = y ; B2: x 7 Cφ L(x, x) is smooth; for all x, x Rn with x = x , B2(x) = B2 x . The result is immediate for L = 0. We now suppose that it holds for some L N and prove that it also holds for L + 1 hidden layers. Let us express B2: B2(x) = Eε N(0,1) Cφ L(x, x) + β2 2# Using Lemma 6 and Remark 6, the fact that β > 0 and the induction hypothesis ensures that B2 is smooth. Moreover, Assumption 6, in particular Equation (17), allows us to assert that x = x implies B2(x) = B2 x . 
Finally, in order to apply Lemma 6 to prove the smoothness of B1 over U, there remains to show that the following matrix is invertible: Cφ L(x, x) + β2 Cφ L(x, y) + β2 Cφ L(x, y) + β2 Cφ L(y, y) + β2 Let us compute its determinant: det Σx,y β = Cφ L(x, x) + β2 Cφ L(y, y) + β2 Cφ L(x, y) + β2 2 = det Σx,y 0 + β2 Cφ L(x, x) + Cφ L(y, y) 2Cφ L(x, y) . (79) Cφ L is a symmetric positive semi-definite kernel, thus: det Σx,y β det Σx,y 0 = β2 1 1 Σx,y 0 Hence, if det Σx,y 0 > 0, then det Σx,y β > 0. Besides, if det Σx,y 0 = 0, then: det Σx,y β = β2 p B2(y) 2 > 0, (81) for all x U. This proves that B1 is indeed smooth over U, and concludes the induction. Note that U is indeed an open set whose complement in Rn has null Lebesgue measure. Overall, the result is thus proved for A = kc; a similar reasoning using the previous induction result also transfers the result to A = Kφ (kc). Proposition 2 (Differentiability of k). Let k be the NTK of an infinite-width architecture following Assumption 4. For any y Rn: A Neural Tangent Kernel Perspective of GANs if Assumption 5 holds, then k( , y) is smooth everywhere over Rn; if Assumption 6 holds, then k( , y) is smooth almost everywhere over Rn, in particular over an open set whose complement has null Lebesgue measure. Proof. According to the definitions of Jacot et al. (2018), Arora et al. (2019) and Huang et al. (2020), the smoothness of the kernel is guaranteed whenever the conjugate kernel kc and its transform Kφ (kc) are smooth; the result of Lemma 7 then applies. In the case of residual networks, there is a slight adaptation of the formula which does not change its regularity. Regarding convolutional networks, their conjugate kernels and NTKs involve finite combinations of such dense conjugate kernels and NTKs, thereby preserving their smoothness almost everywhere. Theorem 2 (Differentiability of ft). Let ft be a solution to Equation (9) under Assumptions 1 and 3 by Theorem 1, with k the NTK of an infinite-width neural network and f0 an initialization of the latter. Then, under Assumptions 4 and 5, ft is smooth everywhere. Under Assumptions 4 and 6, ft is smooth almost everywhere, in particular over an open set whose complement has null Lebesgue measure. Proof. From Theorem 1, we have: ft f0 = Tk,ˆγ 0 ˆγLˆα(fs) ds We observe that Tk,ˆγ(h) has, for any h L2(ˆγ), a regularity which only depends on the regularity of k( , y) for y supp ˆγ. Indeed, if k( , y) is smooth in a certain neighborhood V for every such y, we can bound αk( , y) over V for every y and any multi-index α and then use dominated convergence to prove that Tk,ˆγ(h)( ) is smooth over V . Therefore, the regularity of k( , y) transfers to ft f0. Given Proposition 2, there remains to prove the same result for f0. The theorem then follows from the fact that f0 has the same regularity as its conjugate kernel kc thanks to Lemma 4 because f0 is a sample from the GP with kernel kc. Lemma 7 shows the smoothness almost everywhere over an open set of applications x 7 kc(x, y); to apply Lemma 4 and concludes this proof, this result must be generalized to prove the smoothness of kc with respect to both its inputs. This can be done by generalizing the proofs of Lemmas 5 and 6 to show the smoothness of kernels with respect to both x and y, with the same arguments than for x alone. Remark 7. In the previous theorem, f0 is considered to be the initialization of the network. 
However, we highlight that, without loss of generality, this theorem encompasses the change of training distribution ˆγ during GAN training. Indeed, as explained in Section 4.1, f0 after j steps of generator training can actually be decomposed as, for some hk L2(ˆγk), k J1, j K: k=1 Tk,ˆγk(hk), (83) by taking into account the updates of the discriminators over the whole GAN optimization process. The proof of Theorem 2 can then be applied similarly in this case by showing the differentiability of f0 f 0 on the one hand and of f 0, being the initialization of the discriminator at the very beginning of GAN training, on the other hand. A.4. Dynamics of the Generated Distribution We derive in this proposition the differential equation governing the dynamics of the generated distribution. Proposition 3 (Dynamics of αℓ). Under Assumptions 4 and 5, Equation (3) is well-posed. Let us consider its continuoustime version with discriminators trained on discrete distributions as described above: θgℓ(z) x cf ˆ αgℓ(x) x=gℓ(z) This yields, with kgℓthe NTK of the generator gℓ: ℓgℓ= Tkgℓ,pz z 7 x cf ˆ αgℓ(x) x=gℓ(z) A Neural Tangent Kernel Perspective of GANs Equivalently, the following continuity equation holds for the joint distribution αz ℓ (id, gℓ) pz: αz ℓTkgℓ,pz z 7 x cf ˆ αgℓ(x) x=gℓ(z) Proof. Assumptions 4 and 5 ensure, via Proposition 2 and Theorem 2 that the trained discriminator is differentiable everywhere at all times, whatever the state of the generator. Therefore, Equation (3) is well-posed. By following Mroueh et al. (2019, Equation (5)) s reasoning on a similar equation, Equation (84) yields the following generator dynamics for all inputs z Rd: ℓgℓ(z) = Ez pz θℓgℓ(z) θℓgℓ z x cf ˆ αgℓ(x) x=gℓ(z ) We recognize the NTK kgℓof the generator as: kgℓ z, z θℓgℓ(z) θℓgℓ z . (88) From this, we obtain the dynamics of the generator: ℓgℓ= Tkgℓ,pz z 7 x cf ˆ αgℓ(x) x=gℓ(z) In other words, the transported particles z, gℓ(z) have trajectories Xℓwhich are solutions of the Ordinary Differential Equation (ODE): d Xℓ dℓ = 0, vℓ(Xℓ) , (90) vℓ= Tkgℓ,pz z 7 x cf ˆ αgℓ(x) x=gℓ(z) Then, because αz ℓ (id, gℓ) pz is the induced transported density, following Ambrosio & Crippa (2014), whenever the ODE above is well-defined and has unique solutions (which is necessarily the case for any trained g), αz ℓverifies the continuity equation with the velocity field vℓ: z 7 x cf ˆ αgℓ(x) x=gℓ(z) αz ℓTkgℓ,pz z 7 x cf ˆ αgℓ(x) x=gℓ(z) This yields the desired result. A.5. Optimality in Concave Setting We derive an optimality result for concave bounded loss functions of the discriminator and positive definite kernels. A.5.1. ASSUMPTIONS We first assume that the NTK is positive definite over the training dataset. Assumption 7 (Positive definite kernel). k is positive definite over ˆγ. A Neural Tangent Kernel Perspective of GANs This positive definiteness property equates for finite datasets to the invertibility of the mapping Tk,ˆγ supp ˆγ: L2(ˆγ) L2(ˆγ) h 7 Tk,ˆγ(h) supp ˆγ , (93) that can be seen as a multiplication by the invertible Gram matrix of k over ˆγ. We further discuss this hypothesis in Appendix B.5. We also assume the following properties on the discriminator loss function. Assumption 8 (Concave loss). Lˆα is concave and bounded from above, and its supremum is reached on a unique point y Moreover, we need for the sake of the proof a uniform continuity assumption on the solution to Equation (9). Assumption 9 (Solution continuity). t 7 ft|supp ˆγ is uniformly continuous over R+. 
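For intuition, Assumption 7 can be checked numerically on a given finite training set by inspecting the spectrum of the Gram matrix of $k$ over $\operatorname{supp} \hat{\gamma}$, since positive definiteness of the kernel over $\hat{\gamma}$ amounts to this matrix being invertible. The sketch below is ours and not part of GAN(TK)2; it uses a Gaussian RBF kernel as a stand-in for a discriminator NTK (any symmetric kernel can be plugged into the same code path) and reports the smallest eigenvalue of the Gram matrix, which must be strictly positive for $T_{k,\hat{\gamma}}\big|_{\operatorname{supp} \hat{\gamma}}$ to be invertible.

```python
# Minimal sketch (ours, not from the paper's code): checking Assumption 7 on a
# finite training set via the spectrum of the kernel Gram matrix. The RBF
# kernel below is only a stand-in for a discriminator NTK.
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 bandwidth^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
# supp(gamma_hat): union of generated and target samples (toy 2D points here).
fake = rng.normal(loc=(-1.0, 0.0), scale=0.3, size=(64, 2))
real = rng.normal(loc=(+1.0, 0.0), scale=0.3, size=(64, 2))
support = np.concatenate([fake, real], axis=0)

gram = rbf_kernel(support, support)
eigvals = np.linalg.eigvalsh(gram)  # symmetric matrix -> real spectrum
print(f"min eigenvalue: {eigvals.min():.3e}")  # > 0 <=> k positive definite over gamma_hat
```

In finite precision, kernels that are only positive semi-definite may return slightly negative eigenvalues; adding a small jitter to the diagonal is the usual remedy before inverting the Gram matrix.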
Note that these assumptions are verified in the case of LSGAN, which is the typical application of the optimality results that we prove in the following. A.5.2. OPTIMALITY RESULT Proposition 7 (Asymptotic optimality). Under Assumptions 1 to 3 and 7 to 9, ft converges pointwise when t , and: Lˆα(ft) t Lˆα(y ), f = f0 + Tk,ˆγ Tk,ˆγ 1 supp ˆγ y f0|supp ˆγ , f |supp ˆγ = y , (94) where we recall that: y = arg max y L2(ˆγ) Lˆα(y). (95) This result ensures that, for concave losses such as LSGAN, the optimum for Lˆα in L2(Ω) is reached for infinite training times by neural network training in the infinite-width regime when the NTK of the discriminator is positive definite. However, this also provides the expression of the optimal network outside supp ˆγ thanks to the smoothing of ˆγ. In order to prove this proposition, we need the following intermediate results: the first one about the functional gradient of Lˆα on the solution ft; the second one about a direct application of positive definite kernels showing that one can retrieve f Hˆγ k over all Ωfrom its restriction to supp ˆγ. Lemma 8. Under Assumptions 1 to 3 and 7 to 9, ˆγLˆα(ft) 0 when t . Since supp ˆγ is finite, this limit can be interpreted pointwise. Proof. Assumptions 1 to 3 ensure the existence and uniqueness of ft, by Theorem 1. t 7 ˆft ft|supp ˆγ and Lˆα being differentiable, t 7 Lˆα(ft) is differentiable, and: t Lˆα(ft) = D ˆγLˆα(ft), t ˆft E L2(ˆγ) = ˆγLˆα(ft), Tk,ˆγ ˆγLˆα(ft) L2(ˆγ) , (96) using Equation (9). This equates to: t Lˆα(ft) = Tk,ˆγ ˆγLˆα(ft) Hˆγ k 0, (97) where Hˆγ k is the semi-norm associated to the RKHS Hˆγ k. Note that this semi-norm is dependent on the restriction of its input to supp ˆγ only. Therefore, t 7 Lˆα(ft) is increasing. Since Lˆα is bounded from above, t 7 Lˆα(ft) admits a limit when t . We now aim at proving from the latter fact that t Lˆα(ft) 0 when t . We notice that 2 Hˆγ k is uniformly continuous over L2(ˆγ) since supp ˆγ is finite, ˆγLˆα is uniformly continuous over L2(ˆγ) since a and b are Lipschitz-continuous, A Neural Tangent Kernel Perspective of GANs Tk,ˆγ supp ˆγ is uniformly continuous as it amounts to a finite matrix multiplication, and Assumption 9 gives that t 7 ft|supp ˆγ is uniformly continuous over R+. Therefore, their composition t 7 t Lˆα(ft) (from Equation (97)) is uniformly continuous over R+. Using Barb alat s Lemma (Farkas & Wegner, 2016), we conclude that t Lˆα(ft) 0 when t . Furthermore, k is positive definite over ˆγ by Assumption 7, so Hˆγ k is actually a norm. Therefore, since supp ˆγ is finite, the following pointwise convergence holds: ˆγLˆα(ft) t 0. (98) Lemma 9 (Hˆγ k determined by supp ˆγ). Under Assumptions 1, 2 and 7, for all f Hˆγ k, the following holds: Tk,ˆγ 1 supp ˆγ f|supp ˆγ . (99) Proof. Since k is positive definite by Assumption 7, then Tk,ˆγ supp ˆγ from Equation (93) is invertible. Let f Hˆγ k. Then, by definition of the RKHS in Definition 2, there exists h L2(ˆγ) such that f = Tk,ˆγ(h). In particular, f|supp ˆγ = Tk,ˆγ supp ˆγ(h), hence h = Tk,ˆγ 1 supp ˆγ f|supp ˆγ . We can now prove the desired proposition. Proof of Proposition 7. Let us first show that ft converges to the optimum y in L2(ˆγ). By applying Lemma 8, we know that ˆγLˆα(ft) 0 when t . Given that the supremum of the differentiable concave function Lˆα: L2(ˆγ) R is achieved at a unique point y L2(ˆγ) with finite supp ˆγ, then the latter convergence result implies that ˆft ft|supp ˆγ converges pointwise to y when t . 
Given this convergence in L2(ˆγ), we can deduce convergence on the whole domain Ωby noticing that ft f0 Hˆγ k, from Corollary 1. Thus, using Lemma 9: ft f0 = Tk,ˆγ Tk,ˆγ 1 supp ˆγ (ft f0) supp ˆγ Again, since supp ˆγ is finite, and Tk,ˆγ 1 supp ˆγ can be expressed as a matrix multiplication, the fact that ft converges to y over supp ˆγ implies that: Tk,ˆγ 1 supp ˆγ (ft f0) supp ˆγ t Tk,ˆγ 1 supp ˆγ y f0|supp ˆγ . (101) Finally, using the definition of the integral operator in Definition 2, the latter convergence implies the following desired pointwise convergence: ft t f0 + Tk,ˆγ Tk,ˆγ 1 supp ˆγ y f0|supp ˆγ . (102) We showed at the beginning of this proof that ft converges to the optimum y in L2(ˆγ), so Lˆα(ft) Lˆα(y ) by continuity of Lˆα as claimed in the proposition. A.6. Case Studies of Discriminator Dynamics We study in the rest of this section the expression of the discriminators in the case of the IPM loss and LSGAN, as described in Section 5, and of the original GAN formulation. A Neural Tangent Kernel Perspective of GANs A.6.1. PRELIMINARIES We first need to introduce some definitions. The presented solutions to Equation (9) leverage a notion of functions of linear operators, similarly to functions of matrices (Higham, 2008). We define such functions in the simplified case of non-negative symmetric compact operators with a finite number of eigenvalues, such as Tk,ˆγ. Definition 3 (Linear operator). Let A: L2(ˆγ) L2(Ω) be a non-negative symmetric compact linear operator with a finite number of eigenvalues, for which the spectral theorem guarantees the existence of a countable orthonormal basis of eigenfunctions with non-negative eigenvalues. If ϕ: R+ R, we define ϕ(A) as the linear operator with the same eigenspaces as A, with their respective eigenvalues mapped by ϕ; in other words, if λ is an eigenvalue of A, then ϕ(A) admits the eigenvalue ϕ(λ) with the same eigenspace. In the case where A is a matrix, this amounts to diagonalizing A and transforming its diagonalization elementwise using ϕ. Note that Tk,ˆγ has a finite number of eigenvalues since it is generated by a finite linear combination of linear operators (see Definition 2). We also need to define the following Radon Nikodym derivatives with inputs in supp ˆγ: ρ = d ˆβ ˆα d ˆβ + ˆα , ρ1 = dˆα dˆγ , ρ2 = d ˆβ dˆγ , (103) knowing that 2(ρ2 ρ1), ρ1 + ρ2 = 2. (104) These functions help us to compute the functional gradient of Lˆα, as follows. Lemma 10 (Loss derivative). Under Assumption 3: ˆγLˆα(f) = ρ1a f ρ2b f = ρ1 a f ρ2 b f . (105) Proof. We have from Equation (2): Lˆα(f) = Ex ˆα af(x) Ey ˆβ bf(y) = ρ1, af L2(ˆγ) ρ2, bf L2(ˆγ), (106) hence by composition: ˆγLˆα(f) = ρ1 a f ρ2 b f = ρ1a f ρ2b f. (107) A.6.2. LSGAN Proposition 5 (LSGAN discriminator). Under Assumptions 1 and 2, the solutions of Equation (9) for a = (id + 1)2 and b = (id 1)2 are the functions defined for all t R+ as: ft = exp 4t Tk,ˆγ (f0 ρ) + ρ = f0 + ϕt Tk,ˆγ (f0 ρ), (108) where: ϕt: x 7 e 4tx 1. (109) Proof. Assumptions 1 and 2 are already assumed and Assumption 3 holds for the given a and b in LSGAN. Thus, Theorem 1 applies, and there exists a unique solution t 7 ft to Equation (9) over R+ in L2(Ω) for a given initial condition f0. Therefore, there remains to prove that, for a given initial condition f0, g: t 7 gt = f0 + ϕt Tk,ˆγ (f0 ρ) (110) A Neural Tangent Kernel Perspective of GANs is a solution to Equation (9) with g0 = f0 and gt L2(Ω) for all t R+. Let us first express the gradient of Lˆα. 
We have from Lemma 10, with af = (f + 1)2 and bf = (f 1)2: ˆγLˆα(f) = ρ1a f ρ2b f = 2ρ1(f + 1) 2ρ2(f 1) = 4ρ 4f. (111) So Equation (9) equates to: tft = 4Tk,ˆγ(ρ ft). (112) Now let us prove that gt is a solution to Equation (112). We have: tgt = 4 Tk,ˆγ exp 4t Tk,ˆγ (f0 ρ) = 4 Tk,ˆγ exp 4t Tk,ˆγ (f0 ρ). (113) Restricted to supp ˆγ, we can write from Equation (110): gt = f0 + exp 4t Tk,ˆγ supp ˆγ (f0 ρ), (114) and plugging this in Equation (113): tgt = 4Tk,ˆγ(gt ρ), (115) where we retrieve the differential equation of Equation (112). Therefore, gt is a solution to Equation (112). It is clear that g0 = f0. Moreover, Tk,ˆγ being decomposable in a finite orthonormal basis of elements of operators over L2(Ω), its exponential has values in L2(Ω) as well, making gt belong to L2(Ω) for all t. With this, the proof is complete. A.6.3. IPMS Proposition 4 (IPM discriminator). Under Assumptions 1 and 2, the solutions of Equation (9) for a = b = id are the functions of the form ft = f0 + tf ˆα, where f ˆα is the unnormalized MMD witness function, yielding: f ˆα = Ex ˆα k(x, ) Ey ˆβ k(y, ) , Lˆα(ft) = Lˆα(f0) + t MMD2 k ˆα, ˆβ . (116) Proof. Assumptions 1 and 2 are already assumed and Assumption 3 holds for the given a and b of the IPM loss. Thus, Theorem 1 applies, and there exists a unique solution t 7 ft to Equation (9) over R+ in L2(Ω) for a given initial condition f0. Therefore, in order to find the solution of Equation (9), there remains to prove that, for a given initial condition f0, g: t 7 gt = f0 + tf ˆα (117) is a solution to Equation (9) with g0 = f0 and gt L2(Ω) for all t R+. Let us first express the gradient of Lˆα. We have from Lemma 10, with af = bf = f: ˆγLˆα(f) = ρ1a f ρ2b f = 2ρ. (118) So Equation (9) equates to: tft = 2Tk,ˆγ(ρ) = 2 Z x k( , x)ρ(x) dˆγ(x) = Z x k( , x) dˆα(x) Z y k( , y) dˆβ(y), (119) by definition of ρ (see Equation (103)), yielding: tft = f ˆα. (120) Clearly, t 7 gt = f0 + tf ˆα is a solution of the latter equation, g0 = f0 and gt L2(Ω) given that supp ˆγ is finite and k L2 Ω2 by assumption. The set of solutions for the IPM loss is thus characterized. Finally, let us compute Lˆα(ft). By linearity of Lˆα for a = b = id: Lˆα(ft) = Lˆα(f0) + t Lˆα(f ˆα) = Lˆα(f0) + t Lˆα Tk,ˆγ( 2ρ) . (121) But, from Equation (106), Lˆα(f) = 2ρ, f L2(ˆγ), hence: Lˆα(ft) = Lˆα(f0) + t 2ρ, Tk,ˆγ( 2ρ) L2(ˆγ) = Lˆα(f0) + t Tk,ˆγ( 2ρ) 2 Hˆγ k. (122) By noticing that Tk,ˆγ( 2ρ) = f ˆα and that f ˆα Hˆγ k = MMDk ˆα, ˆβ since f ˆα is the unnormalized MMD witness function, the expression of Lˆα(ft) in the proposition is obtained. A Neural Tangent Kernel Perspective of GANs A.6.4. VANILLA GAN Unfortunately, finding the solutions to Equation (9) in the case of the original GAN formulation, i.e. a = log(1 σ) and b = log σ, remains to the extent of our knowledge an open problem. We provide in the rest of this section some leads that might prove useful for more advanced analyses. Let us first determine the expression of Equation (9) for vanilla GAN. Lemma 11. For a = log(1 σ) and b = log σ, Equation (9) equates to: tft = Tk,ˆγ ρ2 2σ(f) . (123) Proof. We have from Lemma 10, with af = bf = f: ˆγLˆα(f) = ρ1a f ρ2b f = ρ1 σ (f) 1 σ(f) + ρ2 σ (f) σ(f) . (124) By noticing that σ (f) = σ(f) 1 σ(f) , we obtain: ˆγLˆα(f) = ρ1a f ρ2b f = ρ1σ(f) + ρ2 1 σ(f) = ρ2 2σ(f). (125) By plugging the latter expression in Equation (9), the desired result is achieved. Note that Assumption 3 holds for these choices of a and b. 
Therefore, under Assumptions 1 and 2, there exists a unique solution to Equation (123) in R+ L2(Ω) with a given initialization f0. Let us first study Equation (123) in the simplified case of a one-dimensional ordinary differential equation. Proposition 8. Let r {0, 2} and λ R. The set of differentiable solutions over R to this ordinary differential equation: tyt = λ r 2σ(yt) (126) is the following set: y: t 7 (1 r) W e2λt+C 2λt C C R where W the is principal branch of the Lambert W function (Corless et al., 1996). Proof. The theorem of Cauchy-Lipschitz ensures that there exists a unique global solution to Equation (126) for a given initial condition y0 R. Therefore, we only need to show that all elements of S are solutions of Equation (126) and that they can cover any initial condition. Let us first prove that y: t 7 (1 r) W e2λt+C 2λt C is a solution of Equation (126). Let us express the derivative of y: 1 1 r tyt = 2λ e2λt+CW e2λt+C 1 . (128) W (z) = W (z) z(1+W (z)), so: 1 1 r tyt = 2λ 1 + W e2λt+C 1 = 2λ 1 + W e2λt+C . (129) Moreover, W(z) = ze W (z), and with r 1 {1, 1}: 1 1 r tyt = 2λ 1 + e2λt+Ce W(e2λt+C) = 2λ 1 + e(r 1)yt . (130) A Neural Tangent Kernel Perspective of GANs Finally, we notice that, since r {0, 2}: λ r 2σ(yt) = 2λ(1 r) 1 + e(r 1)yt . (131) Therefore: tyt = λ r 2σ(yt) (132) and yt is a solution to Equation (126). Since y0 = (1 r) W e C C and z 7 W(ez) z can be proven to be bijective over R, the elements of S can cover any initial condition. With this, the result is proved. Suppose that f0 = 0 in Equation (123) and that ρ2 has values in {0, 2} i.e. ˆα and ˆβ have disjoint supports (which is the typical case for distributions with finite support). From Proposition 8, a candidate solution would be: ft = ϕt(x)(ρ2 1) = ϕt(x)(ρ), (133) where: ϕt: x 7 W e2tx+1 2tx 1, (134) since the initial condition y0 = 0 gives the constant value C = 1 in Equation (127). Note that the Lambert W function of a symmetric linear operator is well-defined, all the more so as we choose the principal branch of the Lambert function in our case; see the work of Corless et al. (2007) for more details. Note also that the estimation of W(ez) is actually numerically stable using approximations from Iacono & Boyd (2017). However, Equation (133) cannot be a solution of Equation (123). Indeed, one can prove by following essentially the same reasoning as the proof of Proposition 8 that: tft = 2 Tk,ˆγ ψt Tk,ˆγ 1 (ρ2 1), (135) with: ψt: x 7 1 + W e2tx+1 > 0. (136) However, this does not allow us to obtain Equation (123) since in the latter the sigmoid is taken coordinate-wise, where the exponential in Equation (135) acts on matrices. Nonetheless, for t small enough, ft as defined in Equation (135) should approximate the solution of Equation (123), since sigmoid is approximately linear around 0 and ft 0 when t is small enough. We find in practice that for reasonable values of t, e.g. t 5, the approximate solution of Equation (135) is actually close to the numerical solution of Equation (123) obtained using an ODE solver. Thus, we provide here a candidate approximate expression for the discriminator in the setting of the original GAN formulation i.e., for binary classifiers. We leave for future work a more in-depth study of this case. B. Discussions and Remarks We develop in this section some remarks and explanations on the topics that are broached in the main paper. B.1. 
From Finite to Infinite-Width Networks
The constancy of the neural tangent kernel during training when the width of the network becomes increasingly large is broadly applicable. As summarized by Liu et al. (2020), typical neural networks built from the standard blocks of multilayer perceptrons and convolutional neural networks comply with this property, as long as they end with a linear layer and do not have any bottleneck; indeed, this constancy requires the minimum internal width to grow unbounded (Arora et al., 2019). This includes, for example, residual convolutional neural networks (He et al., 2016). The requirement of a final linear activation can be circumvented by transferring this activation into the loss function, as we did for the original GAN formulation in Section 3. This makes our framework encompass a wide range of discriminator architectures. Indeed, many building blocks of state-of-the-art discriminators can be studied in this infinite-width regime with a constant NTK, as highlighted by the exhaustiveness of the Neural Tangents library (Novak et al., 2020). Assumptions about the activation functions are mild and include many standard activations such as ReLU, sigmoid and tanh. Beyond fully connected linear layers and convolutions, NTK constancy also affects typical operations such as self-attention (Hron et al., 2020), layer normalization and batch normalization (Yang, 2020). This variety of networks affected by the constancy of the NTK supports the generality of our approach, as it includes powerful discriminator architectures such as BigGAN (Brock et al., 2019).
We highlight that the NTK of the discriminator remains constant throughout the whole GAN optimization process, and not only under a fixed generator. Indeed, if it remains constant in-between generator updates, then it also remains constant when the generator changes. This is because, for a finite training time, the constancy of the NTK solely depends on the network architecture and initialization, regardless of the training loss, which may change in the course of training without affecting the NTK.
There are nevertheless some limits to the NTK approximation: we are not aware of works studying the application of the infinite-width regime to some operations such as spectral normalization, and networks in the regime of a constant NTK cannot perform feature learning as they are equivalent to kernel methods (Geiger et al., 2020; Yang & Hu, 2021). However, this framework remains general and constitutes the most advanced attempt at theoretically modeling the discriminator's architecture in GANs.

B.2. Loss of the Generator and its Gradient
We highlight in this section the importance of taking into account alternating optimization and discriminator gradients in the optimization of the generator. Let us focus on an example similar to the one of Arjovsky et al. (2017, Example 1) and choose as $\beta$ a single Dirac centered at 0, and as $\alpha_g = \alpha_\theta$ a single Dirac centered at $x_\theta = \theta$ (the generator parameters being the coordinates of the generated point). For the sake of simplicity, let us study the case of LSGAN, since it is a recurring example in this work, but a similar reasoning can be done for other GAN instances. In the theoretical min-max formulation of GANs considered by Arjovsky et al.
(2017), the generator is trained to minimize the following quantity: Cf αθ (αθ) Ex αθ h cf αθ (x) i = f αθ(xθ)2, (137) f αθ = arg max f L2( 1 n Lαθ(f) Ex αθ af(x) Ey β bf(y) o = arg min f L2( 1 f αθ(xθ) + 1 2 + f αθ(0) 1 2 . (138) Consequently, f αθ(0) = 1 and f αθ(xθ) = 1 when xθ = 0, thus in this case: Cf αθ (αθ) = 1. (139) This constancy of the generator loss would make it impossible to be learned by gradient descent, as pointed out by Arjovsky et al. (2017). However, the setting does not correspond to the actual optimization process used in practice and represented by Equation (3). We do have θCf αθ (αθ) = 0 when xθ = 0, but the generator never uses this gradient in standard GAN optimization. Indeed, this gradient takes into account the dependency of the optimal discriminator f αθ in the generator parameters, since the optimal discriminator depends on the generated distribution. Yet, in practice and with few exceptions such as Unrolled GANs (Metz et al., 2017) and as done in Equation (3), this dependency is ignored when computing the gradient of the generator, because of the alternating optimization setting where the discriminator is trained in-between generator s updates. Therefore, despite being constant on the training data, this loss can yield non-zero gradients to the generator. However, this requires the gradient of f αθ to be defined, which is the issue addressed in Section 3.2. A Neural Tangent Kernel Perspective of GANs B.3. Differentiability of the Bias-Free Re LU Kernel Remark 1 contradicts the results of Bietti & Mairal (2019) on the regularity of the NTK of a bias-free Re LU MLP with one hidden layer, which can be expressed as follows (up to a constant scaling the matrix multiplication in linear layers): k(x, y) = x y κ x, y where: κ: [0, 1] R π u(π arccos u) + 1 1 u2 . (141) More particularly, Bietti & Mairal (2019, Proposition 3) claim that k( , y) is not Lipschitz around y for all y in the unit sphere. By following their proof, it amounts to prove that k( , y) is not Lipschitz around y for all y in any centered sphere. We highlight that this also contradicts empirical evidence, as we did observe the Lipschitzness of such NTK in practice using the Neural Tangents library (Novak et al., 2020). We believe that the mistake in the proof of Bietti & Mairal (2019) lies in the confusion between functions κ and k0: x, y 7 κ x,y x y , which have different geometries. Their proof relies on the fact that κ is indeed non-Lipschitz in the neighborhood of u = 1. However, this does not imply that k0 is not Lipschitz, or not derivable. We can prove that it is actually at least locally Lipschitz. Indeed, let us compute the following derivative for x = y Rn \ {0}: x = y x x x x, y x 2 y κ (u) = 1 x y κ (u), (142) where u = x,y x y and: π κ (u) = u 1 u2 + 2(π arccos u). (143) Note that κ (u) u 1 πu 2 1 u. Therefore: x y x 2y x, y x x y y 2 x, y y 2 x, y y x y 0, which proves that k0 is actually Lipschitz around points (y, y), as well as differentiable, and confirms our remark. B.4. Integral Operator and Instance Noise Instance noise (Sønderby et al., 2017) consists in adding random Gaussian noise to the input and target samples. This amounts to convolving the data distributions with a Gaussian density, which will have the effect of smoothing the discriminator. 
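As a quick numerical illustration of this smoothing, ahead of the formal IPM derivation below, the following sketch (ours, not taken from the GAN(TK)2 toolkit) checks that adding Gaussian noise to samples drawn from an empirical measure $\hat{\mu} = \frac{1}{n}\sum_i \delta_{x_i}$ is the same as sampling from the convolved measure, whose density is the Gaussian kernel density estimate $\frac{1}{n}\sum_i k(\cdot - x_i)$, the fact used in the derivation that follows.

```python
# Minimal sketch (ours): instance noise on an empirical measure equals sampling
# from the Gaussian-convolved measure, whose density is a kernel density estimate.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
samples = rng.uniform(-2.0, 2.0, size=8)  # empirical measure mu_hat (1D toy data)

def gaussian_density(z, mean, std):
    return np.exp(-0.5 * ((z - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

# Density of the convolved measure: average of Gaussians centred on the data points.
grid = np.linspace(-4.0, 4.0, 201)

# Instance noise: resample data points, add Gaussian noise, then histogram.
noisy = rng.choice(samples, size=200_000) + sigma * rng.standard_normal(200_000)
hist, edges = np.histogram(noisy, bins=grid, density=True)

# The histogram of noisy samples matches the convolved density up to Monte Carlo error.
centres = 0.5 * (edges[:-1] + edges[1:])
kde_at_centres = gaussian_density(centres[:, None], samples[None, :], sigma).mean(axis=1)
print(f"max abs deviation: {np.abs(hist - kde_at_centres).max():.3f}")
```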
B.4. Integral Operator and Instance Noise

Instance noise (Sønderby et al., 2017) consists in adding random Gaussian noise to the input and target samples. This amounts to convolving the data distributions with a Gaussian density, which has the effect of smoothing the discriminator. In the following, for the case of IPM losses, we link instance noise with our framework, showing that a smoothing of the data distributions already occurs via the NTK, stemming from the fact that the discriminator is a neural network trained with gradient descent. More specifically, it can be shown that if k is an RBF kernel, the optimal discriminators in both cases are the same. This is based on the fact that the density of the convolution of an empirical measure μ̂ = (1/n) ∑ᵢ δ_{xᵢ}, where δ_z is the Dirac distribution centered on z, with a Gaussian density k of associated RBF kernel k, can be written as:

k ∗ μ̂ = (1/n) ∑ᵢ k(xᵢ, ·).   (144)

Let us consider the following regularized discriminator optimization problem in L²(R), smoothed from L²(Ω) with instance noise, i.e. convolving α̂ and β̂ with k:

sup_{f∈L²(R)} { L_{k∗α̂}(f) − λ ‖f‖²_{L²} },  where  L_{k∗α̂}(f) ≜ E_{x∼k∗α̂}[f(x)] − E_{y∼k∗β̂}[f(y)].   (145)

The optimum f*_IN can be found by taking the gradient:

∇( L_{k∗α̂}(f*_IN) − λ ‖f*_IN‖² ) = 0  ⟹  f*_IN = (1/(2λ)) ( k ∗ α̂ − k ∗ β̂ ).   (146)

If we now study the resolution of the optimization problem in H^{γ̂}_k as in Section 5.1 with f₀ = 0, we find the following discriminator:

f_t = t ( E_{x∼α̂}[k(x, ·)] − E_{y∼β̂}[k(y, ·)] ) = t ( k ∗ α̂ − k ∗ β̂ ).   (147)

Therefore, we have that f*_IN ∝ f_t, i.e. instance noise and regularization by neural networks obtain the same smoothed solution. This analysis was done using the example of an RBF kernel, but it also holds for stationary kernels, i.e. such that k(x, y) = k(x − y), which can be used to convolve measures. We remind that this is relevant, given that NTKs are stationary over spheres (Jacot et al., 2018; Yang & Salman, 2019), around which data can be concentrated in high dimensions.

B.5. Positive Definite NTKs

Optimality results in the theory of NTKs usually rely on the assumption that the considered NTK k is positive definite over the training dataset γ̂ (Jacot et al., 2018; Zhang et al., 2020). This property offers several theoretical advantages. Indeed, it gives sufficient representational power to its RKHS to include the optimal solution over γ̂. Moreover, for finite datasets, this positive definiteness property equates to the invertibility of the mapping

T_{k,γ̂}|_{supp γ̂}: L²(γ̂) → L²(γ̂),  h ↦ T_{k,γ̂}(h)|_{supp γ̂},   (148)

which can be seen as a multiplication by the invertible Gram matrix of k over γ̂. From this, one can retrieve the expression of f ∈ H^{γ̂}_k from its restriction f|_{supp γ̂} to supp γ̂ in the following way:

f = T_{k,γ̂}( ( T_{k,γ̂}|_{supp γ̂} )⁻¹ ( f|_{supp γ̂} ) ),   (149)

as shown in Lemma 9. Finally, as shown by Jacot et al. (2018) and in Appendix A.5, this makes the discriminator loss function strictly increase during training.
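As a finite-dataset illustration of the role played by Equations (148) and (149), the following sketch extends a function from its values on the support of γ̂ to the whole space by inverting the Gram matrix of a positive definite kernel; the Gaussian kernel and the random data stand in for the NTK and the training set, and the uniform weighting of γ̂ is an assumption of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                     # supp(gamma_hat): 8 training points in R^2

def k(a, b):
    # a positive definite kernel standing in for the NTK (illustrative choice)
    return np.exp(-np.sum((a - b) ** 2) / 2.0)

G = np.array([[k(xi, xj) for xj in X] for xi in X])   # Gram matrix of k over supp(gamma_hat)
f_on_supp = rng.normal(size=8)                        # prescribed restriction f|supp(gamma_hat)

# Positive definiteness makes G invertible, so the coefficients h are uniquely determined.
h = np.linalg.solve(G, f_on_supp)

def f(z):
    # Extension of f outside supp(gamma_hat), mirroring Equation (149): f = sum_i h_i k(x_i, .)
    return sum(h_i * k(x_i, z) for h_i, x_i in zip(h, X))

# Sanity check: the extension matches the prescribed values on the training points.
print(np.allclose([f(x) for x in X], f_on_supp))      # True
```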
One may wonder whether this assumption is reasonable for NTKs. Jacot et al. (2018) proved that it indeed holds for NTKs of non-shallow MLPs with non-polynomial activations if the data is supported on the unit sphere, supported by the fact that the NTK is stationary over the unit sphere. Others, such as Fan & Wang (2020), have observed positive definiteness of the NTK subject to specific assumptions on the networks and data. We are not aware of more general results of this kind. However, one may conjecture that, at least for specific kinds of networks, NTKs are positive definite for any training data. Indeed, besides global convergence results (Allen-Zhu et al., 2019), prior work indicates that MLPs are universal approximators (Hornik et al., 1989; Leshno et al., 1993). This property can be linked in our context to universal kernels (Steinwart, 2001), which are guaranteed to be positive definite over any training data (Sriperumbudur et al., 2011). Universality is linked to the density of the kernel's RKHS in the space of continuous functions. In the case of NTKs, the previously cited approximation properties can be interpreted as signs of expressive RKHSs, and thus support the hypothesis of universal NTKs. Furthermore, beyond positive definiteness, universal kernels are also characteristic (Sriperumbudur et al., 2011), which is interesting when they are used to compute MMDs, as we do in Section 5.1. Note that for the standard case of ReLU MLPs, Ji et al. (2020) showed universal approximation results in the infinite-width regime, and works such as the one of Chen & Xu (2021) observed that their RKHS is close to the one of the Laplace kernel, which is positive definite.

Bias-free ReLU NTKs are not characteristic. As already noted by Leshno et al. (1993), the presence of bias is important when it comes to the representational power of MLPs. We can retrieve this observation in our framework. In the case of a shallow ReLU network with one hidden layer and without bias, Bietti & Mairal (2019) determine its associated NTK as follows (up to a constant scaling the matrix multiplication in linear layers):

k(x, y) = ‖x‖ ‖y‖ κ( ⟨x, y⟩ / (‖x‖ ‖y‖) ),   (150)

with in particular k(x, 0) = 0 for all x ∈ Ω; suppose that 0 ∈ Ω. This expression of the kernel implies that k is not positive definite for all datasets: take for example x = 0 and y ∈ Ω \ {0}; then the Gram matrix of k has a null row, hence k is not strictly positive definite over {x, y}. Another consequence is that k is not characteristic. Indeed, take the probability distributions μ = δ_{y/2} and ν = ½ (δ_x + δ_y), with δ_z being the Dirac distribution centered on z ∈ Ω, and where x = 0 and y ∈ Ω \ {0}. Then:

E_{z∼μ}[k(z, ·)] = k(y/2, ·) = ½ k(y, ·) = ½ ( k(y, ·) + k(x, ·) ) = E_{z∼ν}[k(z, ·)],   (151)

using the homogeneity of k in the norm of its first argument and the fact that k(x, ·) = 0; i.e., the kernel embeddings of μ and ν ≠ μ are identical, making k not characteristic by definition.
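The following sketch checks this degeneracy numerically: under the bias-free ReLU NTK of Equations (141) and (150), the squared MMD between μ = δ_{y/2} and ν = ½(δ₀ + δ_y) evaluates to zero; the specific point y is an arbitrary choice for the example.

```python
import numpy as np

def kappa(u):
    # Angular part of the bias-free one-hidden-layer ReLU NTK, Equation (141)
    u = np.clip(u, -1.0, 1.0)
    return (2 * u * (np.pi - np.arccos(u)) + np.sqrt(1 - u**2)) / np.pi

def k(x, y):
    # Bias-free ReLU NTK, Equation (150); in particular k(x, 0) = 0
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    if nx == 0.0 or ny == 0.0:
        return 0.0
    return nx * ny * kappa(x @ y / (nx * ny))

y = np.array([3.0, 4.0])
mu = [(y / 2, 1.0)]                  # mu = delta_{y/2}
nu = [(np.zeros(2), 0.5), (y, 0.5)]  # nu = (delta_0 + delta_y) / 2

def mmd_sq(p, q):
    # Squared MMD between two weighted discrete measures under the kernel k
    dot = lambda a, b: sum(wa * wb * k(za, zb) for za, wa in a for zb, wb in b)
    return dot(p, p) - 2 * dot(p, q) + dot(q, q)

print(mmd_sq(mu, nu))                # 0.0: two distinct measures share the same kernel embedding
```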
B.6. Societal Impact

As our work is mainly theoretical and does not deal with real-world data, it does not have a direct broader negative impact on society. However, the practical perspectives that it opens constitute an object of interrogation. Indeed, the development of performant generative models can be the source of harmful manipulation (Tolosana et al., 2020) and of the reproduction of existing biases in databases (Jain et al., 2020), especially as GANs are still misunderstood. While such negative effects should be considered, attempts such as ours at explaining generative models might also lead to ways to mitigate potential harms by paving the way for more principled GAN models.

C. GAN(TK)2 and Further Empirical Analyses

We present in this section additional experimental results that complement and explain some of the results already exposed in Section 6. All these experiments were conducted using the proposed general toolkit GAN(TK)2. We focus in this article on particular experiments for the sake of clarity and as an illustration of the analysis potential of our framework, but GAN(TK)2 is a general-purpose toolkit centered around the infinite-width limit of the discriminator and could be leveraged for an even more extensive empirical analysis. We specifically focus on the IPM and LSGAN losses for the discriminator, since they are the two losses for which we know the analytic behavior of the discriminator in the infinite-width limit, but other losses can be studied as well in GAN(TK)2. We leave a large-scale empirical study of our framework, which is out of the scope of this paper, for future work.

C.1. Two-Dimensional Datasets

We provide in Table 1 numerical results corresponding to the experiments described in Section 6 on the 8 Gaussians dataset. We present additional experimental results on two other two-dimensional problems, Density and AB; see, respectively, Figures 3 and 4. Numerical results are detailed in Tables 2 and 3. We globally retrieve the same conclusions that we developed in Section 6 on these datasets with more complex shapes.

C.2. ReLU vs. Sigmoid Activations

We additionally introduce a new baseline for the 8 Gaussians, Density and AB problems, where we replace the ReLU activation in the discriminator by a sigmoid-like activation σ̃, which we abbreviate to sigmoid in this experimental study for readability purposes. We choose σ̃ instead of the actual sigmoid σ for computational reasons, since σ̃, contrary to σ, allows for analytic computations of NTKs in the Neural Tangents library (Novak et al., 2020). σ̃ is defined in the latter using the error function erf, scaled in order to minimize a squared loss with respect to σ over [−5, 5], with the following expression (see also the sketch after Table 3):

σ̃: x ↦ ½ ( erf( x / 2.4020563531719796 ) + 1 ).

Figure 3. Generator (G) and target samples for different methods applied to the Density problem (panels: initialization, then IPM with ReLU, ReLU without bias and sigmoid, in the infinite-width and finite-width regimes). In the background, c_f.

Figure 4. Initial generator (G) and target samples for the AB problem.

Table 1. Sinkhorn divergence (Feydy et al., 2019; lower is better, similar to W2) averaged over three runs between the final generated distribution and the target dataset for the 8 Gaussians problem.
Loss         | RBF kernel           | ReLU                 | ReLU (no bias)       | Sigmoid
IPM (inf.)   | (2.60 ± 0.06) × 10⁻² | (9.40 ± 2.71) × 10⁻⁷ | (9.70 ± 1.88) × 10⁻² | (8.40 ± 0.02) × 10⁻²
IPM          | —                    | (1.21 ± 0.14) × 10⁻¹ | (1.20 ± 0.60) × 10⁰  | (7.40 ± 1.30) × 10⁻¹
LSGAN (inf.) | (4.21 ± 0.10) × 10⁻¹ | (7.56 ± 0.45) × 10⁻² | (1.27 ± 0.01) × 10¹  | (7.35 ± 0.11) × 10⁰
LSGAN        | —                    | (3.07 ± 0.68) × 10⁰  | (7.52 ± 0.01) × 10⁰  | (7.41 ± 0.54) × 10⁰

Table 2. Sinkhorn divergence averaged over three runs between the final generated distribution and the target dataset for the Density problem.
Loss         | RBF kernel           | ReLU                 | ReLU (no bias)       | Sigmoid
IPM (inf.)   | (2.37 ± 0.32) × 10⁻³ | (3.34 ± 0.49) × 10⁻⁹ | (7.34 ± 0.34) × 10⁻² | (6.25 ± 0.31) × 10⁻³
IPM          | —                    | (5.02 ± 1.19) × 10⁻³ | (9.25 ± 0.30) × 10⁻² | (3.06 ± 0.57) × 10⁻²
LSGAN (inf.) | (7.53 ± 0.59) × 10⁻³ | (1.49 ± 0.11) × 10⁻³ | (2.80 ± 0.03) × 10⁻¹ | (2.21 ± 0.01) × 10⁻¹
LSGAN        | —                    | (1.53 ± 1.08) × 10⁻² | (1.64 ± 0.19) × 10⁻¹ | (5.88 ± 0.80) × 10⁻²

Table 3. Sinkhorn divergence averaged over three runs between the final generated distribution and the target dataset for the AB problem.
Loss         | RBF kernel           | ReLU                 | ReLU (no bias)       | Sigmoid
IPM (inf.)   | (4.65 ± 0.82) × 10⁻³ | (2.64 ± 2.13) × 10⁻⁹ | (6.11 ± 0.19) × 10⁻³ | (5.69 ± 0.38) × 10⁻³
IPM          | —                    | (2.75 ± 0.20) × 10⁻³ | (3.65 ± 1.44) × 10⁻² | (1.25 ± 0.32) × 10⁻²
LSGAN (inf.) | (1.13 ± 0.05) × 10⁻² | (8.63 ± 2.24) × 10⁻³ | (1.02 ± 0.40) × 10⁻¹ | (1.40 ± 0.06) × 10⁻²
LSGAN        | —                    | (1.32 ± 1.30) × 10⁻¹ | (2.57 ± 0.73) × 10⁻² | (8.78 ± 2.23) × 10⁻²
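For reference, a minimal sketch comparing this sigmoid-like activation with the logistic sigmoid is given below; the affine form ½(erf(x/c) + 1) is our reading of the scaled erf described above and may differ in detail from the toolkit's exact definition.

```python
import jax.numpy as jnp
from jax.scipy.special import erf

C = 2.4020563531719796          # scaling constant reported above

def sigmoid(x):
    return 1.0 / (1.0 + jnp.exp(-x))

def sigma_tilde(x):
    # assumed form of the sigmoid-like activation: a rescaled erf fitted to sigmoid over [-5, 5]
    return 0.5 * (erf(x / C) + 1.0)

xs = jnp.linspace(-5.0, 5.0, 1001)
print(jnp.max(jnp.abs(sigmoid(xs) - sigma_tilde(xs))))   # maximum gap on [-5, 5], on the order of 1e-2
```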
(a) RBF kernel: blurry digits on MNIST, prohibitively noisy images on CelebA. (b) ReLU: sharp digits on MNIST, high-quality images on CelebA. (c) ReLU (no bias): mostly sharp digits with some artifacts and blurry images on MNIST, blurry and noisy images on CelebA.

Figure 5. Uncurated samples from the results of the descent of a set of 1024 particles over a subset of 1024 elements of MNIST and CelebA, starting from a standard Gaussian. Training is done using the IPM loss in the infinite-width kernel setting.

Results are given in Tables 1 to 3 and an illustration is available in Figure 3. We observe that the sigmoid baseline is consistently outperformed by the RBF kernel and the ReLU activation (with bias) for all regimes and losses. This is in accordance with common experimental practice, where internal sigmoid activations are found to be less effective than ReLU because of the activation saturation that they can induce. We provide a qualitative explanation of this underperformance of sigmoid via our framework in Appendix C.4.

C.3. Qualitative MNIST and CelebA Experiment

An experimental analysis of our framework on complex image datasets is out of the scope of our study; we leave it for future work. Nonetheless, we present an experiment on MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015) images in a similar setting as the experiments on two-dimensional point clouds of the previous sections. For each dataset, we make a point cloud α̂, initialized to a standard Gaussian, move towards a subset of the dataset following the gradients of the IPM loss in the infinite-width regime. Qualitative results are presented in Figure 5. We notice, similarly to the two-dimensional experiments, that the ReLU network with bias outperforms its bias-free counterpart and a standard RBF kernel in terms of sample quality. The difference between the RBF kernel and the ReLU NTK is even more flagrant in this complex high-dimensional setting, as the RBF kernel is unable to produce accurate samples.

C.4. Visualizing the Gradient Field Induced by the Discriminator

We raise in Sections 4.4 and 5 the open problem of studying the convergence of the generated distribution towards the target distribution with respect to the gradients of the discriminator. We aim in this section at qualitatively studying these gradients in a simplified case that could shed some light on the more general setting and explain some of our experimental results. These gradient fields can be plotted using the provided GAN(TK)2 toolkit.

C.4.1. SETTING

Since we study gradients of the discriminator expressed in Equation (10), we assume that f₀ = 0, for instance using the anti-symmetrical initialization of Zhang et al. (2020), in order to ignore residual gradients from the initialization. By Theorem 1, for any loss and any training time, the discriminator can be expressed as f_α̂ = T_{k,γ̂}(h₀) for some h₀ ∈ L²(γ̂). Thus, there exists h₁ ∈ L²(γ̂) such that:

f_α̂ = ∑_{x ∈ supp γ̂} h₁(x) k(x, ·).   (153)

Consequently,

∇f_α̂ = ∑_{x ∈ supp γ̂} h₁(x) ∇k(x, ·),   ∇c_{f_α̂} = ∑_{x ∈ supp γ̂} h₁(x) ∇k(x, ·) c′_{f_α̂}(·).   (154)

Dirac-GAN setting. The latter linear combination of gradients indicates that, by examining the gradients of c_{f_α̂} for pairs (x, y) ∈ supp α̂ × supp β̂, one could already develop potentially valid intuitions that can hold even when multiple points are considered. This is especially the case for the IPM loss, as h₀ and h₁ have a simple form: h₁(x) = 1 if x ∈ supp α̂ and h₁(y) = −1 if y ∈ supp β̂ (assuming points from α̂ and β̂ are uniformly weighted); moreover, note that c′_{f_α̂}(·) = 1.
Thus, we study here ∇c_{f_α̂} when α̂ and β̂ are each comprised of a single point, i.e. the setting of Dirac GAN (Mescheder et al., 2018), with α̂ = δ_x ≕ α̂_x and β̂ = δ_y.

Visualizing high-dimensional inputs. Unfortunately, the gradient field is difficult to visualize when the samples live in a high-dimensional space. Interestingly, the NTK k(x, y) for any architecture starting with a fully connected layer only depends on ‖x‖, ‖y‖ and ⟨x, y⟩ (Yang & Salman, 2019), and therefore all the information of ∇c_{f_α̂} is contained in Span{x, y}. From this, we show in Figures 6 and 7 the gradient field ∇c_{f_α̂} in the two-dimensional space Span{x, y} for different architectures and losses in the infinite-width regime described in Section 6 and in this section. Figure 6 corresponds to two-dimensional x, y ∈ R², and Figure 7 to high-dimensional x, y ∈ R⁵¹². Note that in the plots, the gradient field is symmetric w.r.t. the horizontal axis, and for this reason we have restricted the illustration to the case where the second coordinate is positive.

Figure 6. Gradient field −∇c_{f_α̂x}(x) received by a generated sample x ∈ R² (i.e. α̂ = α̂_x = δ_x) initialized to x₀, with respect to its coordinates in Span{x₀, y}, where y, marked by a star, is the target distribution (i.e. β̂ = δ_y), with ‖y‖ = 1. Arrows correspond to the movement of x in Span{x₀, y} following −∇c_{f_α̂x}(x), for different losses and networks (panels include, e.g., LSGAN with a bias-free ReLU network); scales are specific to each pair of loss and network. The ideal case is the convergence of x along this gradient field towards the target y. Note that in the chosen orthonormal coordinate system, without loss of generality, y has coordinates (1, 0); moreover, the gradient field is symmetric with respect to the horizontal axis.

Figure 7. Same plot as Figure 6 but with underlying points x, y ∈ R⁵¹².

Convergence of the gradient flow. In the last paragraph, we have seen that the gradient field in the Dirac-GAN setting lives in the two-dimensional Span{x, y}, independently of the dimensionality of x and y. This means that when training the generated distribution, as in Section 6, the position of the particle x always remains in this two-dimensional space, and hence (non-)convergence in this setting can be easily checked by studying this gradient field. This is what we do in the following, for different architectures and losses.
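To make the setting concrete, the following sketch evaluates such a Dirac-GAN gradient field on a grid of Span{x₀, y} for the IPM loss; an RBF kernel with unit bandwidth stands in for the discriminator's NTK (the toolkit computes the field for actual NTKs), and the grid is an arbitrary choice.

```python
import jax
import jax.numpy as jnp

def k(a, b, sigma=1.0):
    # RBF kernel standing in for the discriminator's NTK (illustrative choice)
    return jnp.exp(-jnp.sum((a - b) ** 2) / (2 * sigma**2))

y = jnp.array([1.0, 0.0])                      # target Dirac, ||y|| = 1

def f(z, x):
    # Infinite-width IPM discriminator trained on alpha_hat = delta_x and beta_hat = delta_y
    # with f0 = 0 (training time absorbed into the step size): f = k(x, .) - k(y, .)
    return k(x, z) - k(y, z)

def field(x):
    # Movement of the generated particle: -grad_z f(z, x) at z = x, with the discriminator
    # frozen w.r.t. x (alternating optimization ignores its dependency on the particle).
    return -jax.grad(f, argnums=0)(x, x)

grid = jnp.stack(jnp.meshgrid(jnp.linspace(-2.0, 2.0, 9),
                              jnp.linspace(0.0, 2.0, 5)), axis=-1).reshape(-1, 2)
vectors = jax.vmap(field)(grid)                # gradient field over Span{x0, y}
print(vectors.shape)                           # (45, 2)
```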
C.4.2. QUALITATIVE ANALYSIS OF THE GRADIENT FIELD

x is far from y. When generated outputs are far away from the target, it is essential that their gradient has a large enough magnitude in order to pull these points towards the target. The behavior of the gradients for distant points can be observed in the plots. For ReLU networks, for both losses, the gradients for distant points seem to be well behaved and large enough. Note that in the IPM case, the magnitude of the gradients is even larger when x is further away from y. This is not the case for the RBF kernel when the variance parameter is too small, as the magnitude of the gradient becomes prohibitively small. We highlight that we selected a large variance parameter in order to avoid such a behavior, but diminishing magnitudes can still be observed. Note that choosing an overly large variance may also have a negative impact on the points that are closer to the target.

x is close to y. A particularity of the NTK of ReLU discriminators with bias that arises from this study is that the gradients vanish more slowly when the generated x tends to the target y, compared to NTKs of ReLU without bias and sigmoid networks, and to the RBF kernel. We hypothesize that this is another distinguishing feature that helps the generated distribution converge more easily to the target distribution, especially when they are not far apart. On the contrary, this gradient vanishes more rapidly for NTKs of ReLU without bias and sigmoid networks, compared to the RBF kernel. This can explain the worse performance of such NTKs compared to the RBF kernel in our experiments (see Tables 1 to 3). Note that this phenomenon is even more pronounced in high-dimensional spaces such as in Figure 7.

x is close to 0. Finally, we highlight gradient vanishing and instabilities around the origin for ReLU networks without bias. This is related to their differentiability issues at the origin exposed in Section 4.3, and to their lack of representational power discussed in Appendix B.5. This can also be retrieved on the larger scale experiments of Figures 2 and 3, where the origin is a source of instabilities in the descent.

Sigmoid network. It is also possible to evaluate the properties of the discriminator's gradient for architectures that are not used in practice, such as networks with the sigmoid activation. Figures 2 and 3 provide a clear explanation: as stated above, the magnitudes of the gradients become too small when x approaches y, and heavily depend on the direction from which x approaches y. Ideally, the induced gradient flow should be insensitive to the direction in order for the convergence to be reliable and robust, which seems to be the case for ReLU networks.

D. Experimental Details

We detail in this section the experimental parameters needed to reproduce our experiments.

D.1. GAN(TK)2 Specifications and Computing Resources

GAN(TK)2 is implemented in Python (tested on versions 3.8.1 and 3.9.2) and based on JAX (Bradbury et al., 2018) for tensor computations and Neural Tangents (Novak et al., 2020) for NTKs. We refer to the code released at https://github.com/emited/gantk2 for detailed specifications and instructions. All experiments presented in this paper were run on Nvidia GPUs (Nvidia Titan RTX with 24 GB of VRAM and CUDA 11.2, as well as Nvidia Titan V with 12 GB and Nvidia GeForce RTX 2080 Ti with 11 GB and CUDA 10.2). All two-dimensional experiments require only a few minutes of computation on a single GPU. Experiments on MNIST and CelebA were run using four GPUs simultaneously for parallel computations, for at most a couple of hours.

D.2. Datasets

8 Gaussians. The target distribution is composed of 8 Gaussians with their means evenly distributed on the centered sphere of radius 5, each with a standard deviation of 0.5. The input fake distribution is drawn at initialization from a standard normal distribution N(0, 1). We sample in our experiments 500 points from each distribution at each run to build α̂ and β̂.
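For concreteness, a minimal sketch of sampling this target distribution is given below; the uniform random assignment of modes and the seed are illustrative choices not specified above.

```python
import numpy as np

def sample_8_gaussians(n=500, radius=5.0, std=0.5, seed=0):
    # 8 Gaussian modes with means evenly spread on the centered circle of radius 5,
    # each with standard deviation 0.5.
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(8) / 8
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    modes = rng.integers(0, 8, size=n)                 # pick a mode uniformly for each sample
    return means[modes] + std * rng.normal(size=(n, 2))

beta_hat = sample_8_gaussians()                               # target samples
alpha_hat = np.random.default_rng(1).normal(size=(500, 2))    # initial fake samples, N(0, 1)
print(beta_hat.shape, alpha_hat.shape)                        # (500, 2) (500, 2)
```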
AB and Density. These two datasets are taken from the Geomloss library examples (Feydy et al., 2019)¹ and are distributed under the MIT license. To sample a point from a distribution based on these greyscale image files, we sample a pixel (considered to lie in [−1, 1]²) in the image from a distribution where each pixel's probability is proportional to the darkness of this pixel, and then apply a Gaussian noise centered at the chosen pixel coordinates with a standard deviation equal to the inverse of the image size. We sample in our experiments 500 points from each distribution at each run to build α̂ and β̂.

¹They can be downloaded at https://github.com/jeanfeydy/geomloss/tree/main/geomloss/examples/optimal_transport/data: AB corresponds to files A.png (source) and B.png (target), and Density corresponds to files density_a.png (source) and density_b.png (target).

MNIST and CelebA. We preprocess each MNIST image (LeCun et al., 1998) by extending it from 28 × 28 frames to 32 × 32 frames (by padding it with black pixels). CelebA images (Liu et al., 2015) are downsampled from a size of 178 × 218 to 32 × 39 and then center-cropped to 32 × 32. For both datasets, we normalize pixels in the [−1, 1] range. For our experiments, we consider a subset of 1024 elements of each dataset, which are randomly sampled for each run.

D.3. Parameters

Sinkhorn divergence. The Sinkhorn divergence is computed using the Geomloss library (Feydy et al., 2019), with a blur parameter of 0.001 and a scaling of 0.95, making it close to the Wasserstein W2 distance.

RBF kernel. The RBF kernel used in our experiments is the following:

k(x, y) = exp( −‖x − y‖² / (2n) ),

where n is the dimension of x and y, i.e. the dimension of the data.

Architecture. We used for the neural networks of our experiments the standard NTK parameterization (Jacot et al., 2018), with a scaling factor of 1 for matrix multiplications and, when bias is enabled, a multiplicative constant of 1 for biases (except for sigmoid, where this bias factor is lowered to 0.2 to avoid saturating the sigmoid, and for CelebA, where it is equal to 4). All considered networks are composed of 3 hidden layers and end with a linear layer. In the finite-width case, the width of these hidden layers is 128. We additionally use the antisymmetrical initialization (Zhang et al., 2020), except for the finite-width LSGAN loss.

Discriminator optimization. Discriminators in the finite-width regime are trained using full-batch gradient descent without momentum, with one step per update to the distributions and the following learning rates ε:
- for the IPM loss: ε = 0.01;
- for the IPM loss with reset and LSGAN: ε = 0.1.
In the infinite-width limit, we use the analytic expression derived in Section 5 with training time τ = 1 (except for MNIST and CelebA, where τ = 1000) and f₀ = 0 (through the initialization of Zhang et al. (2020)) to avoid the computational cost of accumulating the discriminators' analytic expressions across the generator's optimization steps.

Point cloud descent. The multiplicative constant η over the gradient applied to each datapoint for two-dimensional problems is chosen as follows:
- for the IPM loss in the infinite-width regime: η = 1000;
- for the IPM loss in the finite-width regime: η = 100;
- for the IPM loss in the finite-width regime with discriminator reset: η = 1000;
- for LSGAN in the infinite-width regime: η = 1000;
- for LSGAN in the finite-width regime: η = 1.
We multiply η by 1000 when using sigmoid activations, because of the low magnitude of the gradients it provides. We choose for MNIST η = 100.
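Putting these pieces together, the following sketch implements the basic point cloud descent loop under the IPM loss in the infinite-width regime; an RBF kernel again stands in for the discriminator's NTK, the step size and data are placeholders, and freezing the discriminator with stop_gradient mirrors the alternating scheme discussed in Appendix B.2.

```python
import jax
import jax.numpy as jnp

def k(a, b):
    # RBF kernel standing in for the discriminator's NTK (illustrative choice)
    return jnp.exp(-jnp.sum((a - b) ** 2) / 2.0)

def ipm_generator_loss(particles, target, tau=1.0):
    # Infinite-width IPM discriminator trained for time tau with f0 = 0:
    # f = tau * (k * alpha_hat - k * beta_hat); the generator loss is E_{alpha_hat}[f].
    anchors = jax.lax.stop_gradient(particles)   # the trained discriminator is frozen w.r.t. the particles
    def f(z):
        return tau * (jax.vmap(lambda a: k(a, z))(anchors).mean()
                      - jax.vmap(lambda b: k(b, z))(target).mean())
    return jax.vmap(f)(particles).mean()

def descent_step(particles, target, eta=1.0):
    # Each particle follows -eta times the gradient of the loss (an MMD-type flow).
    return particles - eta * jax.grad(ipm_generator_loss)(particles, target)

key_t, key_p = jax.random.PRNGKey(0), jax.random.PRNGKey(1)
target = 5.0 * jax.random.normal(key_t, (500, 2))    # placeholder target point cloud
particles = jax.random.normal(key_p, (500, 2))       # initial fake samples
for _ in range(100):
    particles = descent_step(particles, target)
```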
Training is performed for the following number of iterations:
- for 8 Gaussians: 20 000;
- for Density and AB: 10 000;
- for MNIST: 50 000.