Published as a conference paper at ICLR 2021

UNDERSTANDING OVERPARAMETERIZATION IN GENERATIVE ADVERSARIAL NETWORKS

Yogesh Balaji1, Mohammadmahdi Sajedi2, Neha Mukund Kalibhat1, Mucong Ding1, Dominik Stöger2, Mahdi Soltanolkotabi2, Soheil Feizi1
1 University of Maryland, College Park, MD
2 University of Southern California, Los Angeles, CA

ABSTRACT

A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involve training of overparameterized models where the number of parameters of the model exceeds a certain threshold. Indeed, most successful GANs used in practice are trained using overparameterized generator and discriminator networks, both in terms of depth and width. A large body of work in supervised learning has shown the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting and GANs in particular involve non-convex concave mini-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a 1-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators that obey certain assumptions (including deeper generators and random feature discriminators). Our theory utilizes and builds upon a novel connection with the convergence analysis of linear time-varying dynamical systems which may have broader implications for understanding the convergence behavior of GDA for non-convex concave problems involving overparameterized models. We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets. Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence behavior of GDA across the board.

1 INTRODUCTION

In recent years, we have witnessed tremendous progress in deep generative modeling, with some state-of-the-art models capable of generating photo-realistic images of objects and scenes (Brock et al., 2019; Karras et al., 2019; Clark et al., 2019). Three prominent classes of deep generative models include GANs (Goodfellow et al., 2014), VAEs (Kingma & Welling, 2014) and normalizing flows (Dinh et al., 2017). Of these, GANs remain a popular choice for data synthesis, especially in the image domain. GANs are based on a two-player min-max game between a generator network that generates samples from a distribution, and a critic (discriminator) network that discriminates the real distribution from the generated one. The networks are optimized using Gradient Descent/Ascent (GDA) to reach a saddle point of the min-max optimization problem.
First two authors contributed equally. Correspondence to yogesh@cs.umd.edu, sajedi@usc.edu

Figure 1: Overparameterization in GANs. We train DCGAN models by varying the size of the hidden dimension k (the larger the k, the more overparameterized the model; see Fig. 8 for details). Overparameterized GANs enjoy improved training and test FID scores (left panel), generate high-quality samples (middle panel) and have fast and stable convergence (right panel).

One of the key factors that has contributed to the successful training of GANs is model overparameterization, defined based on the model parameter count. By increasing the complexity of discriminator and generator networks, both in depth and width, recent papers show that GANs can achieve photo-realistic image and video synthesis (Brock et al., 2019; Clark et al., 2019; Karras et al., 2019). While these works empirically demonstrate some benefits of overparameterization, there is a lack of rigorous studies explaining this phenomenon. In this work, we attempt to provide a comprehensive understanding of the role of overparameterization in GANs, both theoretically and empirically. We note that while overparameterization is a key factor in training successful GANs, other factors such as generator and discriminator architectures, regularization functions and model hyperparameters have to be taken into account as well to improve the performance of GANs.

Recently, there has been a large body of work in supervised learning (e.g. regression or classification problems) studying the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions (Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019; Oymak & Soltanolkotabi, 2019; Zou & Gu, 2019; Oymak et al., 2019). A key observation in these works is that, under some conditions, overparameterized models experience lazy training (Chizat et al., 2019), where the optimal model parameters computed by GD remain close to a randomly initialized model. Thus, using a linear approximation of the model in the parameter space, one can show the global convergence of GD in such minimization problems. In contrast, training GANs often involves solving a non-convex concave min-max optimization problem that fundamentally differs from a single minimization problem of classification/regression. The key question is whether overparameterized GANs also experience lazy training in the sense that overparameterized generator and discriminator networks remain sufficiently close to their initializations. This may then lead to a general theory of global convergence of GDA for such overparameterized non-convex concave min-max problems.

In this paper, we first theoretically study the role of overparameterization for a GAN model with a one-hidden-layer generator and a linear discriminator. We study two optimization procedures to solve this problem: (i) a conventional GAN training procedure based on GDA, in which the generator and discriminator networks perform simultaneous gradient steps to optimize their respective models; (ii) GD on the generator's parameters against the optimal discriminator. The latter case corresponds to taking a sufficiently large number of gradient ascent steps on the discriminator's parameters for each GD step of the generator. In both cases, our results show that in an overparameterized regime, the GAN optimization converges to a global solution.
To the best of our knowledge, this is the first result showing the global convergence of GDA in such settings. While our results focus on one-hidden-layer generators and linear discriminators, our theory is based on analyzing a general class of min-max optimization problems which can be used to study a much broader class of generators and discriminators, potentially including deep generators and deep random feature-based discriminators. A key component of our analysis is a novel connection to exponential stability of non-symmetric time-varying dynamical systems in control theory, which may have broader implications for the theoretical analysis of GAN training. Ideas from control theory have also been used for understanding and improving training dynamics of GANs in (Xu et al., 2019; An et al., 2018).

Having analyzed overparameterized GANs for relatively simple models, we next provide a comprehensive empirical study of this problem for practical GANs such as DCGAN (Radford et al., 2016) and ResNet GAN (Gulrajani et al., 2017) trained on CIFAR-10 and Celeb-A datasets. For example, the benefit of overparameterization in training DCGANs on CIFAR-10 is illustrated in Figure 1. We have three key observations: (i) as the model becomes more overparameterized (e.g. using wider networks), the training FID scores, which measure the training error, decrease. This phenomenon has been observed in other studies as well (Brock et al., 2019). (ii) Overparameterization does not hurt the test FID scores (i.e. the generalization gap remains small). This improved test-time performance can also be seen qualitatively in the center panel of Figure 1, where overparameterized models produce samples of improved quality. (iii) Remarkably, overparameterized GANs, with a lot of parameters to optimize over, have significantly improved convergence behavior of GDA, both in terms of rate and stability, compared to small GAN models (see the right panel of Figure 1).

In summary, in this paper:
- We provide the first theoretical guarantee of simultaneous GDA's global convergence for an overparameterized GAN with a one-hidden-layer neural network generator and a linear discriminator (Theorem 2.1).
- By establishing connections with linear time-varying dynamical systems, we provide a theoretical framework to analyze simultaneous GDA's global convergence for a general overparameterized GAN (including deeper generators and random feature discriminators), under some general conditions (Theorems 2.3 and A.4).
- We provide a comprehensive empirical study of the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets. We observe that overparameterization improves GANs' training error, generalization error and sample quality, as well as the convergence rate and stability of GDA.

2 THEORETICAL RESULTS

2.1 PROBLEM FORMULATION

Given n data points of the form x_1, x_2, . . . , x_n ∈ R^m, the goal of GAN training is to find a generator that can mimic sampling from the same distribution as the training data. More specifically, the goal is to find a generator mapping G_θ(z) : R^d → R^m, parameterized by θ ∈ R^p, so that G_θ(z_1), G_θ(z_2), . . . , G_θ(z_n), with z_1, z_2, . . . , z_n generated i.i.d. according to N(0, I_d), has a similar empirical distribution to x_1, x_2, . . . , x_n.[1] To measure the discrepancy between the data points and the GAN outputs, one typically uses a discriminator mapping D_θ̃ : R^m → R parameterized by θ̃ ∈ R^p̃.
The overall training approach takes the form of the following min-max optimization problem, which minimizes the worst-case discrepancy detected by the discriminator:

min_θ max_θ̃  (1/n) ∑_{i=1}^n D_θ̃(x_i) − (1/n) ∑_{i=1}^n D_θ̃(G_θ(z_i)) + R(θ̃).   (1)

Here, R(θ̃) is a regularizer that typically ensures the discriminator is Lipschitz. This formulation mimics the popular Wasserstein GAN (Arjovsky et al., 2017) (or, IPM GAN) formulations. This optimization problem is typically solved by running Gradient Descent Ascent (GDA) on the minimization/maximization variables. The generator and discriminator mappings G and D used in practice are often deep neural networks. Thus, the min-max optimization problem above is highly nonlinear and non-convex concave.

Saddle point optimization is a classical and fundamental problem in game theory (Von Neumann & Morgenstern, 1953) and control (Gutman, 1979). However, most of the classical results apply to the convex-concave case (Arrow et al., 1958), while the saddle point optimization of GANs is often non-convex concave. If GDA converges to the global (local) saddle points, we say it is globally (locally) stable. For a general min-max optimization, however, GDA can be trapped in a loop or even diverge. Except in some special cases (e.g. (Feizi et al., 2018) for a quadratic GAN formulation or (Lei et al., 2019) for the under-parameterized setup where the generator is a one-layer network), GDA is not globally stable for GANs in general (Nagarajan & Kolter, 2017; Mescheder et al., 2018; Adolphs et al., 2019; Mescheder et al., 2017; Daskalakis et al., 2020). None of these works, however, study the role of model overparameterization in the global/local convergence (stability) of GDA. In particular, it has been empirically observed (as we also demonstrate in this paper) that when the generator/discriminator contain a large number of parameters (i.e. are sufficiently overparameterized), GDA does indeed find (near) globally optimal solutions. In this section we wish to demystify this phenomenon from a theoretical perspective.

[1] In general, the number of observed and generated samples can be different. However, in practical GAN implementations, batch sizes of observed and generated samples are usually the same. Thus, for simplicity, we make this assumption in our setup.

2.2 DEFINITION OF MODEL OVERPARAMETERIZATION

In this paper, we use overparameterization in the context of model parameter count. Informally speaking, overparameterized models have a large number of parameters; that is, we assume that the number of model parameters is sufficiently large. In the specific problem setups of Section 2, we precisely compute thresholds that the number of model parameters should exceed in order to observe nice convergence properties of GDA. Note that the definition of overparameterization based on model parameter count is related to, but distinct from, the complexity of the hypothesis class. For instance, in our empirical studies, when we say we overparameterize a neural network, we fix the number of layers in the neural network and increase the hidden dimensions. Our definition does not include the case where the number of layers also increases, which forms a different hypothesis class.
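As a concrete reading of this parameter-count definition, the short sketch below counts the trainable parameters of the one-hidden-layer generator studied in the next subsection and compares the width k against the k ≥ C·m·d⁴·log(d)³ requirement that appears in Theorem 2.1. The dimensions and the choice C = 1 are placeholders for illustration, since the theorem specifies C only as an unspecified absolute constant.

```python
import math

def generator_param_count(d: int, k: int, m: int, v_fixed: bool = True) -> int:
    """Parameters of the one-hidden-layer generator G(z) = V ReLU(W z):
    W has k*d entries; V has m*k entries but is frozen in the setting of Section 2.3."""
    return k * d if v_fixed else k * d + m * k

def width_threshold(d: int, m: int, C: float = 1.0) -> float:
    """The k >= C * m * d^4 * log(d)^3 requirement of Theorem 2.1.
    C = 1 and the natural log are placeholders, not values from the theorem."""
    return C * m * d ** 4 * math.log(d) ** 3

d, m = 5, 10  # latent and output dimensions (illustrative)
for k in [16, 256, 4096, 65536]:
    print(f"k = {k:6d}: trainable parameters = {generator_param_count(d, k, m):8d}, "
          f"k >= C*m*d^4*log(d)^3 (C=1)? {k >= width_threshold(d, m)}")
```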
2.3 RESULTS FOR ONE-HIDDEN LAYER GENERATORS AND RANDOM DISCRIMINATORS

In this section, we discuss our main results on the convergence of gradient-based algorithms when training GANs in the overparameterized regime. We focus on the case where the generator takes the form of a single-hidden-layer ReLU network with d inputs, k hidden units, and m outputs. Specifically, G(z) = V ReLU(W z), with W ∈ R^{k×d} and V ∈ R^{m×k} denoting the input-to-hidden and hidden-to-output weights. We also consider a linear discriminator of the form D(x) = d^T x with an ℓ2 regularizer on the weights, i.e. R(d) = ‖d‖²_{ℓ2}/2. The overall min-max optimization problem (equation 1) takes the form

min_{W ∈ R^{k×d}} max_{d ∈ R^m}  L(W, d) := ⟨ d, (1/n) ∑_{i=1}^n (x_i − V ReLU(W z_i)) ⟩ − ‖d‖²_{ℓ2}/2.   (2)

Note that we initialize V at random and keep it fixed throughout training. The common approach to solving the above optimization problem is to run a Gradient Descent Ascent (GDA) algorithm. At iteration t, GDA takes the form

d_{t+1} = d_t + µ ∇_d L(W_t, d_t),
W_{t+1} = W_t − η ∇_W L(W_t, d_t).   (3)

Next, we establish the global convergence of GDA for an overparameterized model. Note that a global saddle point (W*, d*) is defined by L(W*, d) ≤ L(W*, d*) ≤ L(W, d*) for all feasible W and d. If these inequalities hold in a local neighborhood, (W*, d*) is called a local saddle point.

Theorem 2.1 Let x_1, x_2, . . . , x_n ∈ R^m be n training data points with mean x̄ := (1/n) ∑_{i=1}^n x_i. Consider the GAN model with a linear discriminator of the form D(x) = d^T x parameterized by d ∈ R^m and a one-hidden-layer neural network generator of the form G(z) = V φ(W z) parameterized by W ∈ R^{k×d}, with V ∈ R^{m×k} a fixed matrix generated at random with i.i.d. N(0, σ_v²) entries. Also assume the inputs to the generator {z_i}_{i=1}^n are generated i.i.d. according to N(0, σ_z² I_d). Furthermore, assume the generator weights at initialization W_0 ∈ R^{k×d} are generated i.i.d. according to N(0, σ_w²), and that the standard deviations above obey σ_v σ_w σ_z ≥ ‖x̄‖_{ℓ2} / (m d^{5/2} (log d)^{3/2}). Then, as long as

k ≥ C m d^4 log(d)^3,

with C a fixed constant, running the GDA updates of equation 3 starting from the random W_0 above and d_0 = 0,[2] with step sizes obeying 0 < µ ≤ 1 and η = η̄ µ / (324 k ((d + (n−1)/π)/n) σ_v² σ_z²), with η̄ ≤ 1, satisfies

‖ (1/n) ∑_{i=1}^n V ReLU(W_τ z_i) − x̄ ‖_{ℓ2} ≤ 5 (1 − 10^{-5} η̄ µ)^τ ‖ (1/n) ∑_{i=1}^n V ReLU(W_0 z_i) − x̄ ‖_{ℓ2}.   (4)

This holds with probability at least 1 − (n + 5) e^{−m/1500} − 5k e^{−c_1 n} − (2k + 2) e^{−d/216} − n e^{−c_2 m d^3 log(d)^2}, where c_1, c_2 are fixed numerical constants.

[2] The zero initialization of d is merely done for simplicity. A similar result can be derived for an arbitrary initialization of the discriminator's parameters with minor modifications. See Theorem 2.3 for such a result.

To better understand the implications of the above theorem, note that the objective of equation 2 can be simplified by solving the inner maximization in closed form, so that the min-max problem in equation 2 is equivalent to the following single minimization problem:

min_W  L(W) := (1/2) ‖ (1/n) ∑_{i=1}^n V ReLU(W z_i) − x̄ ‖²_{ℓ2},   (5)

which has a global optimum of zero. As a result, equation 4 in Theorem 2.1 guarantees that running simultaneous GDA updates achieves this global optimum. This holds as long as the generator network is sufficiently overparameterized in the sense that the number of hidden nodes is polynomially large in its output dimension m and input dimension d. Interestingly, the rate of convergence guaranteed by this result is geometric, guaranteeing fast GDA convergence to the global optimum. To the best of our knowledge, this is the first result that establishes the global convergence of simultaneous GDA for an overparameterized GAN model. While the result proved above shows the global convergence of GDA for a GAN with a one-hidden-layer generator and a linear discriminator, for a general GAN model local saddle points may not even exist and GDA may converge to approximate local saddle points (Berard et al., 2020; Farnia & Ozdaglar, 2020).
For a general min-max problem, Daskalakis et al. (2020) have recently shown that approximate local saddle points exist under some general conditions on the Lipschitzness of the objective function. Understanding GDA dynamics for a general GAN remains an important open problem. Our result in Theorem 2.1 is a first and important step towards that goal.

We acknowledge that the GAN formulation of equation 2 is much simpler than the GANs used in practice. Specifically, since the discriminator is linear, this GAN can be viewed as a moment-matching GAN (Li et al., 2017), pushing the first moments of the data and generated distributions towards each other. Alternatively, this GAN formulation can be viewed as one instance of the Sliced Wasserstein GAN (Deshpande et al., 2018). Although the maximization over the discriminator's parameters is concave, the minimization over the generator's parameters is still non-convex due to the use of a neural-net generator. Thus, the overall optimization problem is a non-trivial non-convex concave min-max problem. From that perspective, our result in Theorem 2.1 partially explains the role of model overparameterization in GDA's convergence for GANs.

Given the closed-form objective in equation 5, one may wonder what would happen if we run gradient descent on this minimization objective directly, that is, running gradient descent updates of the form W_{τ+1} = W_τ − η ∇L(W_τ) with L(W) given by equation 5. This is equivalent to GDA where, instead of running one gradient ascent iteration per maximization step, we run infinitely many. Interestingly, in some successful GAN implementations (Gulrajani et al., 2017), several discriminator updates are often run per generator update. This is the subject of the next result.

Figure 2: Convergence plots of a GAN model with a linear discriminator and a one-hidden-layer generator as the hidden dimension (k) increases. Panels: (a) discriminator trained to optimality, (b) Gradient Descent Ascent, (c) d_iter steps of discriminator update per generator iteration. Final MSE is the MSE loss between the true data mean and the mean of the generated distribution. Over-parameterized models show improved convergence.

Theorem 2.2 Consider the setup of Theorem 2.1. Then, as long as k ≥ C m d^4 log(d)^3, with C a fixed numerical constant, running GD updates of the form W_{τ+1} = W_τ − η ∇L(W_τ) on the loss given in equation 5 with step size η = 2 η̄ / (243 k ((d + (n−1)/π)/n) σ_v² σ_z²), with η̄ ≤ 1, satisfies

‖ (1/n) ∑_{i=1}^n V ReLU(W_τ z_i) − x̄ ‖_{ℓ2} ≤ (1 − 4·10^{-6} η̄)^τ ‖ (1/n) ∑_{i=1}^n V ReLU(W_0 z_i) − x̄ ‖_{ℓ2}.   (6)

This holds with probability at least 1 − (n + 5) e^{−m/1500} − 5k e^{−c_1 n} − (2k + 2) e^{−d/216} − n e^{−c_2 m d^3 log(d)^2}, with c_1, c_2 fixed numerical constants.

This theorem states that if we solve the max part of equation 2 in closed form and run GD on the loss function of equation 5 with enough overparameterization, the loss decreases at a geometric rate to zero. This result again holds when the model is sufficiently overparameterized. The proof of Theorem 2.2 relies on a result from (Oymak & Soltanolkotabi, 2020), which was developed in the framework of supervised learning. Also note that the amount of overparameterization required in Theorems 2.1 and 2.2 is the same.
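The updates of equations 3 and 5 can be made concrete with a minimal NumPy sketch. The dimensions, synthetic data, and step-size choices below are illustrative assumptions rather than the constants of Theorems 2.1 and 2.2; the step sizes are simply tied to the extreme singular values of the Jacobian at initialization, in the spirit of the analysis in Appendix A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d_in, k = 200, 10, 5, 2000          # samples, data dim, latent dim, hidden width (k large)
X = rng.normal(loc=2.0, size=(n, m))      # toy "real" data x_1, ..., x_n (illustrative)
x_bar = X.mean(axis=0)                    # the data mean targeted by the linear discriminator
Z = rng.normal(size=(n, d_in))            # latent inputs z_i ~ N(0, I)
V = rng.normal(size=(m, k))               # fixed hidden-to-output weights (not trained)
W = rng.normal(size=(k, d_in))            # trainable input-to-hidden weights W_0
disc = np.zeros(m)                        # discriminator parameters d, with d_0 = 0

relu = lambda a: np.maximum(a, 0.0)
f = lambda W: (V @ relu(W @ Z.T)).mean(axis=1)      # f(W) = (1/n) sum_i V ReLU(W z_i)

def jacobian(W):
    """Jacobian of f with respect to the row-major flattening of W (shape m x k*d_in)."""
    act = (W @ Z.T > 0).astype(float)               # phi'(w_l^T z_i)
    return sum(np.kron(V * act[:, i], Z[i][None, :]) for i in range(n)) / n

# Tie the step sizes to the extreme singular values of the Jacobian at initialization,
# mimicking the roles of beta = sigma_max(J_0) and alpha = sigma_min(J_0) in the analysis.
svals = np.linalg.svd(jacobian(W), compute_uv=False)
beta2 = svals[0] ** 2
mu = 0.5
eta = mu / (8.0 * beta2)                  # comfortably inside the mu >= 4 * eta * beta^2 regime

W_gd = W.copy()                           # a second copy for the Theorem 2.2 style variant
for t in range(400):
    # --- simultaneous GDA on equation 2 (setting of Theorem 2.1) ---
    r = f(W) - x_bar                                   # residual of the generated mean
    act = (W @ Z.T > 0).astype(float)
    disc += mu * (-r - disc)                           # ascent step on d: grad_d L = -r - d
    # descent step on W: grad_W L = -(1/n) sum_i diag(phi'(W z_i)) V^T d z_i^T
    W += eta * (act * (V.T @ disc)[:, None]) @ Z / n
    # --- GD on the closed-form loss of equation 5 (setting of Theorem 2.2) ---
    r_gd = f(W_gd) - x_bar
    act_gd = (W_gd @ Z.T > 0).astype(float)
    W_gd -= (0.5 / beta2) * (act_gd * (V.T @ r_gd)[:, None]) @ Z / n
    if t % 100 == 0:
        print(f"t={t:4d}  GDA residual={np.linalg.norm(r):.3e}  GD residual={np.linalg.norm(r_gd):.3e}")

print("final GDA residual:", np.linalg.norm(f(W) - x_bar))
print("final GD  residual:", np.linalg.norm(f(W_gd) - x_bar))
```

In such a toy run, with k·d far exceeding m, both residuals are expected to decrease geometrically, mirroring the qualitative behavior guaranteed by the two theorems.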
2.4 CAN THE ANALYSIS BE EXTENDED TO MORE GENERAL GANS?

In the previous section, we focused on the implications of our results for a one-hidden-layer generator and a linear discriminator. However, as will become clear in the proofs, our theoretical results are based on analyzing the convergence behavior of GDA on a more general min-max problem of the form

min_{θ ∈ R^p} max_{d ∈ R^m}  h(θ, d) := ⟨ d, f(θ) − y ⟩ − ‖d‖²_{ℓ2}/2,   (7)

where f : R^p → R^m denotes a general nonlinear mapping.

Theorem 2.3 (Informal version of Theorem A.4) Consider a general nonlinear mapping f : R^p → R^m with the singular values of its Jacobian mapping around initialization obeying certain assumptions (most notably σ_min(J(θ_0)) ≥ α). Then, running GDA iterations of the form

d_{t+1} = d_t + µ ∇_d h(θ_t, d_t),
θ_{t+1} = θ_t − η ∇_θ h(θ_t, d_t),   (8)

with sufficiently small step sizes η and µ obeys

‖f(θ_t) − y‖_{ℓ2} ≤ γ (1 − ηα²)^t sqrt( ‖f(θ_0) − y‖²_{ℓ2} + ‖d_0‖²_{ℓ2} ).

Note that, similar to the previous sections, one can solve the maximization problem in equation 7 in closed form, so that equation 7 is equivalent to the following minimization problem

min_{θ ∈ R^p}  L(θ) := (1/2) ‖f(θ) − y‖²_{ℓ2},   (9)

with global optimum equal to zero. Theorem 2.3 ensures that GDA converges with a fast geometric rate to this global optimum. This holds as soon as the model f(θ) is sufficiently overparameterized, which is quantitatively captured via the minimum singular value assumption on the Jacobian at initialization (σ_min(J(θ_0)) ≥ α, which can only hold when m ≤ p). This general result can thus be used to provide theoretical guarantees for a much more general class of generators and discriminators. To be more specific, consider a deep GAN model where the generator G_θ is a deep neural network with parameters θ and the discriminator is a deep random feature model of the form D_d(x) = d^T ψ(x), parameterized by d, with ψ a deep neural network with random weights. Then the min-max training optimization problem of equation 1 with a regularizer R(d) = ‖d‖²_{ℓ2}/2 is a special instance of equation 7 with

f(θ) := (1/n) ∑_{i=1}^n ψ(G_θ(z_i))   and   y := (1/n) ∑_{i=1}^n ψ(x_i).

Therefore, the above result can in principle be used to rigorously analyze the global convergence of GDA for an overparameterized GAN problem with a deep generator and a deep random feature discriminator model. However, characterizing the precise amount of overparameterization required for such a result to hold requires a precise analysis of the minimum singular value of the Jacobian of f(θ) at initialization, as well as the other singular-value-related conditions stated in Theorem A.4. We defer such a precise analysis to future works.
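A minimal numerical check of the mechanism behind Theorem 2.3 (and Lemma A.2 in the appendix) is sketched below: for a random stand-in Jacobian, the block transition matrix that governs the linearized GDA dynamics has spectral radius at most 1 − ηα², which is the source of the geometric rate. The dimensions and the Gaussian Jacobian are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 20, 400                                   # residual dimension and parameter count (p >> m)
J0 = rng.normal(size=(m, p)) / np.sqrt(p)        # a stand-in Jacobian at initialization
svals = np.linalg.svd(J0, compute_uv=False)
alpha, beta = svals[-1], svals[0]                # sigma_min(J_0) and sigma_max(J_0)

mu = 0.5
eta = mu / (4.0 * beta ** 2)                     # boundary of the mu >= 4 * eta * beta^2 regime

# Transition matrix of the linearized state equation z_{t+1} = A z_t with z = [r; d].
A = np.block([
    [np.eye(m), -eta * (J0 @ J0.T)],
    [mu * np.eye(m), (1.0 - mu) * np.eye(m)],
])
rho = np.max(np.abs(np.linalg.eigvals(A)))
print(f"rho(A) = {rho:.4f}  <=  1 - eta*alpha^2 = {1 - eta * alpha ** 2:.4f}")

# Iterating the state equation drives the linearized residual to zero geometrically,
# which is the origin of the gamma * (1 - eta*alpha^2)^t rate in Theorem 2.3.
z = np.concatenate([rng.normal(size=m), np.zeros(m)])    # [r_0; d_0] with d_0 = 0
for _ in range(200):
    z = A @ z
print("linearized residual norm after 200 steps:", np.linalg.norm(z[:m]))
```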
Figure 3: MLP overparameterization on MNIST.

Numerical Validations: Next, we numerically study the convergence of the GAN model considered in Theorems 2.1 and 2.2, where the discriminator is a linear network while the generator is a one-hidden-layer neural net. In our experiments, we generate the x_i's from an m-dimensional Gaussian distribution with mean µ and an identity covariance matrix. The mean vector µ is randomly generated. We train two variants of GAN models using (1) GDA (as considered in Thm 2.1) and (2) GD on the generator while solving the discriminator to optimality (as considered in Thm 2.2). In Fig. 2, we plot the converged loss values of GAN models trained using both techniques (1) and (2) as the hidden dimension k of the generator is varied. The MSE loss between the true data mean and the data mean of generated samples is used as our evaluation metric. As this MSE loss approaches 0, the model converges to the global saddle point. We observe that overparameterized GAN models show improved convergence behavior compared to narrower models. Additionally, the MSE loss converges to 0 for larger values of k, which shows that with sufficient overparameterization, GDA converges to a global saddle point.

3 EXPERIMENTS

In this section, we demonstrate the benefits of overparameterization in large GAN models. In particular, we train GANs on two benchmark datasets: CIFAR-10 (32×32 resolution) and Celeb-A (64×64 resolution). We use two commonly used GAN architectures: DCGAN and ResNet-based GAN. For both of these architectures, we train several models, each with a different number of filters in each layer, denoted by k. For simplicity, we refer to k as the hidden dimension. Appendix Fig. 8 illustrates the architectures used in our experiments. Networks with large k are more overparameterized. We use the same value of k for both generator and discriminator networks. This is in line with the design choice made in most recent GAN models (Radford et al., 2016; Brock et al., 2019), where the sizes of the generator and discriminator models are kept roughly the same. We train each model until convergence and evaluate the performance of the converged models using FID scores. FID scores measure the Fréchet distance between the feature distributions of real and generated data (Heusel et al., 2017). A small FID score indicates high-quality synthesized samples. Each experiment is conducted for 5 runs, and the mean and variance of FID scores are reported.

Figure 4: Overparameterization results: we plot the train and test FID scores (lower is better) of DCGAN and ResNet GAN as the hidden dimension k is varied over {8, 16, 32, 64, 128, 256}. Panels: (a) DCGAN, CIFAR-10; (b) DCGAN, Celeb-A; (d) ResNet, CIFAR-10; (e) ResNet, Celeb-A. Results on CIFAR-10 and Celeb-A are shown in the left and right panels, respectively. Overparameterization gives better FID scores.

Figure 5: DCGAN training results: we plot the FID scores across training iterations (up to 250K) of DCGAN on (a) CIFAR-10 and (b) Celeb-A for different values of the hidden dimension k ∈ {8, 16, 32, 64, 128, 256}. Remarkably, we observe that over-parameterization improves the rate of convergence of GDA and its stability in training.

Overparameterization yields better generative models: In Fig. 4, we show the plot of FID scores as the hidden dimension (k) is varied for DCGAN and ResNet GAN models. We observe a clear trend where the FID scores are high (i.e. poor) for small values of k, while they improve as models become more overparameterized. Also, the FID scores saturate beyond k = 64 for DCGAN models, and k = 128 for ResNet GAN models. Interestingly, these are the standard values used in the existing model architectures (Radford et al., 2016; Gulrajani et al., 2017). This trend is also consistent for MLP GANs trained on the MNIST dataset (Fig. 3). We note, however, that the FID score of MLP GANs increases marginally as k increases from 1024 to 2048. This is potentially due to an increased generalization gap in this regime, which offsets the potential benefits of over-parameterization.
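The width sweep used in these experiments can be sketched with a generic DCGAN-style generator in which a single hidden dimension k scales the number of filters in every layer. The specific channel multipliers, latent size, and output resolution below are illustrative assumptions; the architectures actually used are shown in Appendix Fig. 8.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """A generic DCGAN-style generator whose per-layer filter counts all scale with a
    single width multiplier k (an illustration of the protocol, not the exact Fig. 8 model)."""
    def __init__(self, k: int, latent_dim: int = 100, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 4 * k, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(4 * k), nn.ReLU(True),
            nn.ConvTranspose2d(4 * k, 2 * k, 4, 2, 1, bias=False),       # 4x4 -> 8x8
            nn.BatchNorm2d(2 * k), nn.ReLU(True),
            nn.ConvTranspose2d(2 * k, k, 4, 2, 1, bias=False),           # 8x8 -> 16x16
            nn.BatchNorm2d(k), nn.ReLU(True),
            nn.ConvTranspose2d(k, channels, 4, 2, 1, bias=False),        # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

for k in [8, 16, 32, 64, 128, 256]:
    n_params = sum(p.numel() for p in Generator(k).parameters())
    print(f"k = {k:4d}  generator parameters = {n_params:,}")
```

Because the depth is fixed and only the per-layer widths scale with k, this sweep matches the parameter-count notion of overparameterization in Section 2.2, with the parameter count growing roughly quadratically in k.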
Overparameterization leads to improved convergence of GDA: In Fig. 5, we show the plot of FID scores over training iterations for different values of k. We observe that models with larger values of k converge faster and demonstrate a more stable behavior. This agrees with our theoretical results that overparameterized models have a fast rate of convergence.

Figure 6: Generalization in GANs: we plot the NND scores as the hidden dimension k is varied for DCGAN (shown in (a)) and ResNet (shown in (b)) models.

Generalization gap in GANs: To study the generalization gap, we compute the FID scores by using (1) the training set of real data, which we call FID train, and (2) a held-out validation set of real data, which we call FID test. In Fig. 4, plots of FID train (in blue) and FID test (in green) are shown as the hidden dimension k is varied. We observe that FID test values are consistently higher than the FID train values. Their gap does not increase with increasing overparameterization. However, as explained in (Gulrajani et al., 2019), the FID score has the issue of assigning low values to memorized samples. To alleviate this issue, (Gulrajani et al., 2019; Arora et al., 2017) proposed Neural Net Divergence (NND) to measure generalization in GANs. In Fig. 6, we plot NND scores by varying the hidden dimension in DCGAN and ResNet GAN trained on the CIFAR-10 dataset. We observe that increasing the value of k decreases the NND score. Interestingly, the NND score of memorized samples is higher than that of most of the GAN models. This indicates that overparameterized models are not memorizing training samples and produce better generative models.

4 CONCLUSION

In this paper, we perform a systematic study of the importance of overparameterization in training GANs. We first analyze a GAN model with a one-hidden-layer generator and a linear discriminator optimized using Gradient Descent Ascent (GDA). Under this setup, we prove that with sufficient overparameterization, GDA converges to a global saddle point. Additionally, our results demonstrate that overparameterized models have a fast rate of convergence. We then validate our theoretical findings through extensive experiments on DCGAN and ResNet models trained on CIFAR-10 and Celeb-A datasets. We observe that overparameterized models perform well both in terms of the rate of convergence and the quality of generated samples.

5 ACKNOWLEDGEMENT

M. Sajedi would like to thank Sarah Dean for introducing (Rugh, 1996). This project was supported in part by NSF CAREER AWARD 1942230, HR00112090132, HR001119S0026, NIST 60NANB20D134 and Simons Fellowship on Foundations of Deep Learning. M. Soltanolkotabi is supported by the Packard Fellowship in Science and Engineering, a Sloan Research Fellowship in Mathematics, an NSF-CAREER under award 1846369, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) under award FA9550-18-1-0078, DARPA Learning with Less Labels (LwLL) and FastNICs programs, and NSF-CIF awards 1813877 and 2008443.

REFERENCES

Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, and Thomas Hofmann. Local saddle point optimization: A curvature exploitation approach. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 486-495, 2019.

Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization.
In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, June 9-15, 2019, volume 97 of Proceedings of Machine Learning Research, pp. 242 252, 2019. Wangpeng An, Haoqian Wang, Qingyun Sun, Jun Xu, Qionghai Dai, and Lei Zhang. A pid controller approach for stochastic optimization of deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8522 8531, 2018. Martin Arjovsky, Soumith Chintala, and L eon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, Australia, Aug 6-11, 2017, Proceedings of Machine Learning Research, pp. 214 223, 2017. Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). ar Xiv preprint ar Xiv:1703.00573, 2017. Kenneth J Arrow, Leonid Hurwicz, and Hirofumi Uzawa. Studies in linear and non-linear programming. Cambridge Univ. Press, 1958. Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, and Simon Lacoste Julien. A closer look at the optimization landscapes of generative adversarial networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJe Vn CEKw H. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019. Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. In Advances in Neural Information Processing Systems, pp. 2937 2947, 2019. Aidan Clark, Jeff Donahue, and Karen Simonyan. Adversarial video generation on complex datasets. ar Xiv preprint ar Xiv:1907.06571, 2019. URL https://arxiv.org/abs/1907.06571. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained min-max optimization. ar Xiv preprint ar Xiv:2009.09623, 2020. URL https://arxiv.org/abs/2009.09623. Ishan Deshpande, Ziyu Zhang, and Alexander G Schwing. Generative modeling using the sliced wasserstein distance. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3483 3491, 2018. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, Conference Track Proceedings, 2019. Farzan Farnia and Asuman Ozdaglar. Gans may have no nash equilibria. ar Xiv preprint ar Xiv:2002.09124, 2020. Soheil Feizi, Farzan Farnia, Tony Ginart, and David Tse. Understanding GANs: the LQG setting. ar Xiv preprint ar Xiv:1710.10793, 2018. URL https://arxiv.org/abs/1710.10793. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672 2680, 2014. 
Published as a conference paper at ICLR 2021 Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 5767 5777, 2017. Ishaan Gulrajani, Colin Raffel, and Luke Metz. Towards GAN benchmarks which require generalization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Hkx KH2Ac Fm. Shaul Gutman. Uncertain dynamical systems a lyapunov min-max approach. IEEE Transactions on Automatic Control, 24(3):437 443, 1979. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems 30, pp. 6626 6637, 2017. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 4401 4410. Computer Vision Foundation / IEEE, 2019. Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann Le Cun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Michel Ledoux. The concentration of measure phenomenon. Number 89 in Mathematical Surveys and Monographs. American Mathematical Soc., 2001. Qi Lei, Jason D Lee, Alexandros G Dimakis, and Constantinos Daskalakis. Sgd learns one-layer networks in wgans. ar Xiv preprint ar Xiv:1910.07030, 2019. URL https://arxiv.org/abs/1910.07030. Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Poczos. Mmd gan: Towards deeper understanding of moment matching network. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 2203 2213, 2017. Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of GANs. In Advances in Neural Information Processing Systems 30, pp. 1823 1833, 2017. Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? ar Xiv preprint ar Xiv:1801.04406, 2018. URL https://arxiv.org/abs/1801.04406. Vaishnavh Nagarajan and J Zico Kolter. Gradient descent GAN optimization is locally stable. In Advances in Neural Information Processing Systems 30, pp. 5591 5600, 2017. Samet Oymak and Mahdi Soltanolkotabi. Overparameterized nonlinear learning: Gradient descent takes the shortest path? In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, California, USA, June 9-15, 2019, Proceedings of Machine Learning Research, pp. 4951 4960, 2019. Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. IEEE Journal on Selected Areas in Information Theory, 2020. Samet Oymak, Zalan Fabian, Mingchen Li, and Mahdi Soltanolkotabi. Generalization guarantees for neural networks via harnessing the low-rank structure of the jacobian. ar Xiv preprint ar Xiv:1906.05392, 2019. URL https://arxiv.org/abs/1906.05392. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. 
In Yoshua Bengio and Yann Le Cun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. Published as a conference paper at ICLR 2021 Wilson J Rugh. Linear system theory. Prentice-Hall, Inc., 1996. Mahdi Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. IEEE Transactions on Information Theory, 65 (4):2374 2400, 2019. Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 65(2):742 769, 2018. Dave Van Veen, Ajil Jalal, Mahdi Soltanolkotabi, Eric Price, Sriram Vishwanath, and Alexandros G Dimakis. Compressed sensing with deep image prior and learned regularization. ar Xiv preprint ar Xiv:1806.06438, 2018. URL https://arxiv.org/abs/1806.06438. John Von Neumann and Oskar Morgenstern. Theory of games and economic behavior. Princeton university press, 1953. Kun Xu, Chongxuan Li, Huanshu Wei, Jun Zhu, and Bo Zhang. Understanding and stabilizing gans training dynamics with control theory. ar Xiv preprint ar Xiv:1909.13188, 2019. Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural networks. In Advances in Neural Information Processing Systems 32, pp. 2055 2064, 2019. Published as a conference paper at ICLR 2021 In this section, we prove Theorems 2.1 and 2.2. First, we provide some notations we use throughout the remainder of the paper in Section A.1. Before proving these specialized results for one hidden layer generators and linear discriminators (Theorems 2.1 and 2.2), we state and prove a more general result (formal version of Theorem 2.3) on the convergence of GDA on a general class of min-max problems in Section A.3. Then we state a few preliminary calculations in Section A.4. Next, we state some key lemmas in Section A.5 and defer their proofs to Appendix B. Finally, we prove Theorems 2.1 and 2.2 in Sections A.6 and A.7, respectively. A.1 NOTATION We will use C, c, c1, etc. to denote positive absolute constants, whose value may change throughout the paper and from line to line. We use φ (z) = Re LU (z) = max (0, z) and its (generalized) derivative φ (z) = 1{z 0} with 1 being the indicator function. σmin (X) and σmax (X) = X denote the minimum and maximum singular values of matrix X. For two arbitrary matrices A and B, A B denotes their kronecker product. The spectral radius of a matrix A Cn n is defined as ρ (A) = max {|λ1|, . . . , |λn|}, where λi s are the eigenvalues of A. Throughout the proof we shall assume φ := Re LU to avoid unnecessarily long expressions. A.2 PROOF SKETCH OF THE MAIN RESULTS In this section, we provide a brief overview of our proofs. We focus on the main result in this manuscript, which is about the convergence of GDA (Theorem 2.1). To do this we study the converge of GDA on the more general min-max problem of the form (see Theorem A.4 for a formal statement) min θ Rn max d Rmh(θ, d) := d, f (θ) y d 2 ℓ2 2 . (10) In this case the GDA iterates take the form dt+1 = (1 µ) dt + µ (f (θt) y) θt+1 = θt ηJ T (θt) dt . (11) Our proof for global convergence of GDA on this min-max loss consists of the following steps. 
Step 1: Recasting the GDA updates as a linear time-varying system In the first step we carry out a series of algebraic manipulations to recast the GDA updates (equation 11) into the following form rt+1 dt+1 where rt = f (θt) y denotes the residuum and At denotes a properly defined transition matrix. Step 2: Approximation by a linear time-invariant system Next, to analyze the behavior of the time-varying dynamical system above we approximate it by the following time-invariant linear dynamical system ert+1 edt+1 = I ηJ T (θ0) J (θ0) µI (1 µ) I where θ0 denotes the initialization. The validity of this approximation is ensured by our assumptions on the Jacobian of the function f, which, among others, guarantee that it does not change too much in a sufficiently large neighborhood around the initialization and that the smallest singular value of J (θ0) is bounded from below. Step 3: Analysis of time-invariant linear dynamical system To analyze the time-invariant dynamical system above, we utilize and refine intricate arguments Published as a conference paper at ICLR 2021 from the control theory literature involving the spectral radius of the fixed transition matrix above to obtain Step 4: Completing the proof via a perturbation argument In the last step of our proof we show that the two sequences rt dt and ert edt will remain close to each other. This is based on a novel perturbation argument. The latter combined with Step 3 allows us to conclude which finishes the global convergence of GDA on equation 10 and hence the proof of Theorem A.4. In order to deduce Theorem 2.1 from Theorem A.4, we need to check that the Jacobian at the initialization is bounded from below at the origin and that it does not change too quickly in a large enough neighborhood. In order to prove that we will leverage recent ideas from the deep learning theory literature revolving around the neural tangent kernel. This allows us to guarantee that this conditions are indeed met, if the neural network is sufficiently wide and the initialization is chosen large enough. The second main result of this manuscript, Theorem 2.2, can be deduced more directly from recent results on overparameterized learning (see Oymak & Soltanolkotabi (2020)). Hence, we have deferred its proof to Section A.7. A.3 ANALYSIS OF GDA: A CONTROL THEORY PERSPECTIVE In this section we will focus on solving a general min-max optimization problem of the form min θ Rn max d Rmh(θ, d) := d, f (θ) y d 2 ℓ2 2 , (12) where f : Rn Rm is a general nonlinear mapping. In particular, we focus on analyzing the convergence behavior of Gradient Descent/Ascent (GDA) on the above loss, starting from initial estimates θ0 and d0. In this case the GDA updates take the following form dt+1 = (1 µ) dt + µ (f (θt) y) θt+1 = θt ηJ T (θt) dt . (13) We note that solving the inner maximization problem in equation 12 would yield min θ Rn 1 2 f (θ) y 2 ℓ2 . (14) In this section, our goal is to show that when running the GDA updates of equation 13, the norm of the residual vector defined as rt := f (θt) y goes to zero and hence we reach a global optimum of equation 14 (and in turn equation 12). Our proof will build on ideas from control theory and dynamical systems literature. For that, we are first going to rewrite the equations 13 in a more convenient way. We define the average Jacobian along the path connecting two points x,y Rn as J (y, x) = Z 1 0 J (x + α (y x)) dα, where J (θ) Rm n is the Jacobian associated with the nonlinear mapping f. 
Next, from the fundamental theorem of calculus it follows that rt+1 = f (θt+1) y = f θt ηJ T t dt y = f (θt) ηJt+1,t J T t dt y = rt ηJt+1,t J T t dt, (15) Published as a conference paper at ICLR 2021 where we used the shorthands Jt := J (θt) and Jt+1,t := J (θt+1, θt) for exposition purposes. Next, we combine the updates rt and dt into a state vector of the form zt := rt dt this notation the relationship between the state vectors from one iteration to the next takes the form zt+1 = I ηJt+1,t J T t µI (1 µ) I | {z } =:At zt, t 0, (16) which resembles a time-varying linear dynamical system with transition matrix At. Now note that to show convergence of rt to zero it suffices to show convergence of zt to zero. To do this we utilize the following notion of uniform exponential stability, which will be crucial in analyzing the solutions of equation 16. (See Rugh (1996) for a comprehensive overview on stability notions in discrete state equations.) Definition 1 A linear state equation of the form zt+1 = Atzt is called uniformly exponentially stable if for every t 0 we have zt ℓ2 γλt z0 ℓ2, where γ 1 is a finite constant and 0 λ < 1. Using the above definition to show the convergence of the state vector zt to zero at a geometric rate it suffices to show the state equation 16 is exponentially stable.3 For that, we are first going to analyze a state equation which results from linearizing the nonlinear function f (θ) around the initialization θ0. In the next step, we are going to show that the behavior of these two problems are similar, provided we stay close to initialization (which we are also going to prove). Specifically, we consider the linearized problem min eθ Rn max e d Rmhlin eθ, ed := D ed, f (θ0) + J0 eθ θ0 y E ℓ2 2 . (17) We first analyze GDA on this linearized problem starting from the same initialization as the original problem, i.e. eθ0 = θ0 and ed0 = d0. The gradient descent update for eθt takes the form eθt+1 = eθt ηJ T 0 edt, (18) and the gradient ascent update for edt takes the form edt+1 = edt + µ f (θ0) + J0 eθt θ0 y edt = (1 µ) edt + µert, (19) where we used the linear residual defined as ert = f (θ0)+J0 eθt θ0 y. Moreover, the residual from one iterate to the next can be written as follows ert+1 = f (θ0) + J0 eθt+1 θ0 y = f (θ0) + J0 eθt ηJ T 0 edt θ0 y = ert ηJ0J T 0 edt. Again, we define a new vector ezt = ert edt R2m and by putting together equations 19 and 20 we ezt+1 = I ηJ0J T 0 µI (1 µ) I ezt = Aezt, t 0, (21) 3We note that technically the dynamical system equation 16 is not linear. However, we still use exponential stability with some abuse of notation to refer to the property that zt ℓ2 γλt z0 ℓ2 holds. As we will see in the forth-coming paragraphs, our formal analysis is via a novel perturbation analysis of a linear dynamical system and therefore keeping this terminology is justified. Published as a conference paper at ICLR 2021 which is of the form of a linear time-invariant state equation. As a first step in our proof, we are going to show that the linearized state equations are uniformly exponentially stable. First, recall the following well-known lemma, which characterizes uniform exponential stability in terms of the eigenvalues of A. Lemma A.1 (Rugh, 1996, Theorem 22.11) A linear state equation of the form ezt+1 = Aezt with A a fixed matrix is uniformly exponentially stable if and only if all eigenvalues of A have magnitudes strictly less than one, i.e. ρ (A) < 1. 
In this case, it holds for all t 0 and all z that Atz γρ (A)t z , where γ 1 is an absolute constant, which only depends on A. In the next lemma, we prove that under suitable assumptions on J0 and the step sizes µ and η the state equations 21 indeed fulfill this condition. Lemma A.2 Assume that α σmin (J0) σmax (J0) β and consider the matrix A = I ηJ0J T 0 µI (1 µ) I . Suppose that µ η 4β2. Then it holds that ρ (A) 1 ηα2. Proof Suppose that λ is an eigenvalue of A. Hence, there is an eigenvector [x, y]T = 0 such that I ηJ0J T 0 µI (1 µ) I holds. By a direct calculation we observe that this yields the equation ηJ0J T 0 x = In particular, x must be an eigenvector of J0J T 0 . Denoting the corresponding eigenvalue with s, we obtain the identity (1 λ)2 µ (1 λ) + ηs = 0. Hence, we must have Note that the square root is indeed well-defined, since 4 µηs µηβ2 µηs 0, where in the first inequality we used the assumption µ η 4β2 and in the second line we used that s β2, which is a consequence of our assumption on the singular values of J0. Hence, it follows by the reverse triangle inequality that where the second inequality is valid as µ 2 ηs 0 is implied by µ 2 2ηβ2 > ηs. In the last inequality we used the fact that α2 s, which is a consequence of our assumption on the singular values of J0. By rearranging terms, we obtain that |λ| < 1 ηα2. Since λ was an arbitrary eigenvalue of A, the result follows. Since the last lemma shows that under suitable conditions it holds that ρ (A) < 1, Lemma A.3 yields uniform exponential stability of our state equations. However, this will not be sufficient for our purposes. The reason is that Lemma A.3 does not specify the constant γ and that in order to deal with the time-varying dynamical system we will need a precise estimate. The next lemma shows that for the state equations 21 we have, under suitable assumptions, γ 5. Published as a conference paper at ICLR 2021 Lemma A.3 Consider the linear, time invariant system of equations ezt+1 = I ηJ0J T 0 µI (1 µ) I ezt = Aezt, t 0. Furthermore, assume that α σmin (J0) σmax (J0) β and suppose that the condition µ is satisfied. Then there is a constant γ 5 such that for all t 0 it holds that ezt ℓ2 γ 1 ηα2 t ez0 ℓ2 . Proof Denote the SVD decomposition of J0 by W ΣV T and note that I ηJ0J T 0 µI (1 µ) I I ηΣΣT µI (1 µ) I W T 0 0 W T . This means we can write I ηJ0J T 0 µI (1 µ) I C1 0 ... 0 Cm P T W T 0 0 W T where P is a permutation matrix and the matrices Ci are of the form Ci = 1 ησ2 i µ (1 µ) 1 i m, where the σi s denote the singular values of J0. Using this decomposition we can deduce ezt ℓ2 = Atez0 ℓ2 At ez0 ℓ2 = max 1 i m Ct i ez0 ℓ2 . Now suppose that Vi Di V 1 i is the eigenvalue decomposition of Ci, where the columns of Vi contain the eigenvectors and Di is a diagonal matrix consisting of the eigenvalues. (Note that it follows from our assumptions on µ and η that the matrix Ci is diagonalizable.) We have Ct i = Vi Dt i V 1 i Vi Dt i V 1 i = κi ρ (Ci)t , where we defined κi := Vi V 1 i . From Lemma A.2 we know that the assumption µ results in ρ (A) 1 ηα2. Therefore, defining γ := max 1 i mκi and noting ρ (A) = max 1 i mρ (Ci), we obtain that ezt ℓ2 max 1 i m Ct i ez0 ℓ2 γ 1 ηα2 t ez0 ℓ2 . In order to finish the proof we need to show that γ 5. 
For that, note that calculating the eigenvectors of Ci directly reveals that we can represent this matrix as 1 4 ησ2 i µ 2 1 4 ησ2 i µ 2 1 1 Since Vi = q λmax Vi V T i and V 1 i = q λmin Vi V T i , we calculate Vi V T i , which yields " 1 2 ησ2 i µ 1 1 2 Published as a conference paper at ICLR 2021 This representation allows us to directly calculate the two eigenvalues of Vi V T i , which shows that λmax Vi V T i λmin Vi V T i v u u u u u t 3 2 ησ2 i µ + r 1 + 2 ησ2 i µ 2 + 4 3 2 ησ2 i µ r 1 + 2 ησ2 i µ 2 + 4 = 3 2 ησ2 i µ + r 1 + 2 ησ2 i µ 2 + 4 1 4 ησ2 i µ 1 4 ησ2 i µ < 5, where the last inequality holds because of ησ2 i µ ηβ2 8. Since γ = max 1 i mκi, this finishes the Now that we have shown that the linearized iterates converge to the global optima we turn our attention to showing that the nonlinear iterates 16 are close to its linear counterpart 21. For that, we make the following assumptions. Assumption 1: The singular values of the Jacobian at initialization are bounded from below σmin (J (θ0)) α for some positive constants α and β. Assumption 2: In a neighborhood with radius R around the initialization, the Jacobian mapping associated with f obeys J (θ) β for all θ BR (θ0), where BR (θ0) := {θ Rp : θ θ0 ℓ2 R}. Assumption 3: In a neighborhood with radius R around the initialization, the spectral norm of the Jacobian varies no more than ϵ in the sense that J (θ) J (θ0) ϵ for all θ BR (θ0). With these assumptions in place, we are ready to state the main theorem. Theorem A.4 Consider the GDA updates for the min-max optimization problem 12 = dt + µ dh(θt, dt) θt η θh(θt, dt) and consider the GDA updates of the linearized problem 21 edt+1 eθt+1 = edt + µ dhlin(eθt, edt) eθt η θhlin(eθt, edt) Set zt := rt dt and ezt := ert edt , where rt := f (θt) y and ert = f (θ0) + J0 eθt θ0 y denote the residuals. Assume that the step sizes of the gradient descent ascent updates satisfy µ η 8β2 as well as 0 < Published as a conference paper at ICLR 2021 µ 1. Moreover, assume that the assumptions 1-3 hold for the Jacobian J (θ) of f (θ) around the initialization θ0 Rn with parameters α, β, ϵ, and J 0 0 0 J 0 ℓ2 + 18ϵβ2γ2 α4 z0 ℓ2 , (24) which satisfy 4γβϵ α2. Here, 1 γ 5 is a constant, which only depends on µ, η, and J0. By J 0 we denote the pseudo-inverse of the Jacobian at initialization J0. Then, assuming the same initialization θ0 = eθ0, d0 = ed0 (and, hence, z0 = ez0), the following holds for all iterations t 0. zt ℓ2 converges to 0 with a geometric rate, i.e. zt ℓ2 γ 1 ηα2 t z0 ℓ2 . (25) The trajectories of zt and ezt stay close to each other and converge to the same limit, i.e. zt ezt ℓ2 2ηγ2βϵ t 1 ηα2 4γ2βϵ e 15 ln 16 15 α2 z0 ℓ2 . (26) The parameters of the original and linearized problems stay close to each other, i.e. eθt θt ℓ2 9ϵβ2γ2 α4 z0 ℓ2 , (27) The parameters of the original problem stay close to the initialization, i.e. Theorem A.4 will be the main ingredient in the proof of Theorem 2.1. However, as discussed in Section 2.4 we believe that this meta theorem can be used to deal with a much richer class of generators and discriminators. A.3.1 PROOF OF THEOREM A.4 We will prove the statements in the theorem by induction. The base case for τ = 0 is trivial. Now assume that the equations equation 25 to equation 28 hold for τ = 0, . . . , t 1. Our goal is to show that they hold for iteration t as well. Part I: First, we are going to show that θt BR (θ0). 
Note that by the triangle inequality and the induction assumption we have that θt θ0 ℓ2 θt θt 1 ℓ2 + θt 1 θ0 ℓ2 θt θt 1 ℓ2 + R Hence, in order to prove the claim it remains to show that θt θt 1 ℓ2 R 2 . For that, we compute 1 η θt θt 1 ℓ2 = J T (θt 1) dt 1 ℓ2 J T (θt 1) edt 1 ℓ2 + J T (θt 1) dt 1 edt 1 ℓ2 J T 0 edt 1 ℓ2 + J (θt 1) J0 edt 1 ℓ2 + J T (θt 1) dt 1 edt 1 ℓ2 (i) γ J T 0 0 0 J T 0 ℓ2 + ϵ γ z0 ℓ2 + 4β2ϵγ2 15 α2 z0 ℓ2 J 0 0 0 J 0 ℓ2 + 3β2ϵγ2 Published as a conference paper at ICLR 2021 where γ 5 is a constant. Let us verify the last two inequalities. Inequality (ii) holds because 1 γ, 1 β2 J T 0 0 0 J T 0 V ΣT W T 0 0 V ΣT W T i=1 σ2 i wi, r0 2 + wi, d0 2 wi, r0 2 + wi, d0 2 = β2 J 0 0 0 J 0 Also (i) follows from assumptions 1-3, dt 1 edt 1 ℓ2 zt 1 ezt 1 ℓ2 together with induc- tion assumption equation 26, edt 1 ℓ2 ezt 1 ℓ2 z0 ℓ2, and J T 0 edt 1 ℓ2 J T 0 ert 1 J T 0 edt 1 I ηJ T 0 J0 µI (1 µ) I J T 0 ert 2 J T 0 edt 2 I ηJ T 0 J0 µI (1 µ) I t 1 J T 0 er0 J T 0 ed0 ℓ2 γ 1 ηα2 t 1 J T 0 0 0 J T 0 where in the last inequality we applied Lemma A.3. Finally, by using η 1 8β2 we arrive at θt θt 1 ℓ2 γηβ2 J 0 0 0 J 0 ℓ2 + 3ηβ2ϵγ2 J 0 0 0 J 0 where the last line is directly due to inequality (24), γ 5, and α β. Hence, we have established θt BR (θ0). Part II: In Lemma A.3 we showed that the time invariant system of state equations ezt+1 = Aezt is uniformly exponentially stable, i.e. ezt ℓ2 goes down to zero exponentially fast. Now by using the assumption that the Jacobian remains close to the Jacobian at the initialization J0, we aim to show the exponential stability of the time variant system of the state equations 16. For that, we compute zt = At 1zt 1 = I ηJt,t 1J T t 1 µI (1 µ) I = I ηJ0J T 0 µI (1 µ) I zt 1 + η J0J T 0 Jt,t 1J T t 1 dt 1 0 =: Azt 1 + t 1. Published as a conference paper at ICLR 2021 Now set λ := 1 ηα2. By induction, we obtain the relation zt = Atz0 +Pt 1 i=0 At 1 i i. Hence, i=0 At 1 i i At 1 i i ℓ2 γλt z0 ℓ2 + i=0 γλt 1 i η J0J T 0 Ji+1,i J T i di ℓ2 γλt z0 ℓ2 + i=0 ηγλt 1 i (2βϵ) zi ℓ2 . (31) The second inequality holds because of Lemma A.3. The last inequality holds because by combining our assumptions 1 to 3 with θt BR (θ0) and the induction assumption 28 for 0 i t 1, we have that J0J T 0 Ji+1,i J T i = J0J T 0 J0J T i + J0J T i Ji+1,i J T i J0 J0 Ji + J0 Ji+1,i Ji β J0 Ji + β J0 Ji+1,i 2βϵ. (32) In order to deal with inequality 31, we will rely on the following lemma. Lemma A.5 (Rugh, 1996, Lemma 24.5) Consider two real sequences p (t) and φ (t), where p (t) 0 for all t 0 and ψ, if t = 0 i=0 p (i) φ (i) , if t 1 where η and ψ are constants with η 0. Then for all t 1 we have i=0 (1 + η p (i)) . Now we define φt = zt ℓ2 λt and rewrite inequality 31 as Hence, Lemma A.5 yields that Published as a conference paper at ICLR 2021 where (i) follows from 4γβϵ α2 and (ii) holds by inserting λ = 1 ηα2. Inserting the definition of φ0 and φt we obtain that zt ℓ2 γ 1 ηα2 This completes the proof of Part II. Part III: In this part, our aim is to show that the error vector et := zt ezt obeys inequality 26. First, note that et = zt ezt ( ) = (Azt 1 + t 1) Aezt 1 = Aet 1 + t 1, where in ( ) we used the same notation as in Part II for t 1. Using a recursive argument as well as e0 = 0 we obtain that i=0 At 1 i i i=0 ηγ 1 ηα2 t 1 i i ℓ2 i=0 ηγ 1 ηα2 t 1 i J0J T 0 Ji+1,i J T i di ℓ2 i=0 2ηβϵγ 1 ηα2 t 1 i zi ℓ2 . The first inequality follows from the triangle inequality and Lemma A.3. Inequality (i) follows from the definition of i. Inequality (ii) follows from inequality 32. 
Setting c := 2ηβϵ we continue i=0 cγ 1 ηα2 t i 1 zi ℓ2 i=0 cγ2 1 ηα2 t i 1 1 ηα2 i=0 cγ2 1 ηα2 = 2ηγ2βϵ t 1 ηα2 t 1 z0 ℓ2 . Here (iii) holds because of our induction hypothesis 25 and (iv) follows simply from 1 ηα2 1 ηα2 2 . This shows the first part of equation 26 for iteration t. Finally, to derive the second part of equation 26 we observe that for all t 0 and 0 < x 1 16 we have t (1 x)t 1 1 e(15 ln 16 15)x. Since ηα2 16β2 1 16 we can use this estimate, which yields et ℓ2 2ηγ2βϵ t 1 ηα2 4γ2βϵ e 15 ln 16 15 α2 z0 ℓ2 . Hence, we have shown equation 26. Published as a conference paper at ICLR 2021 Part IV: In this part, we aim to show that the parameters of the original and linearized problems are close. For that, we compute that θt eθt ℓ2 = i=0 θh (θi, di) θhlin (θi, di) i=0 J T (θi) di J T 0 edi J T (θi) J T 0 edi ℓ2 + J T (θi) di edi ℓ2 i=0 ϵ ezi ℓ2 + β 1 ηα2 i z0 ℓ2 + 2ηγ2β2ϵ i=0 i 1 ηα2 i 1 z0 ℓ2 . Here (i) follows from assumptions 2 and 3, and (ii) holds because of Lemma A.3 and our induction hypothesis 26. Hence, using the formula Pt i=0 ixi = x(1+txt+1 (t+1)xt) (x 1)2 we obtain that θt eθt ℓ2 γϵ z0 ℓ2 ηα2 + 2ηβ2γ 1 t 1 ηα2 2 t 1 + (t 1) 1 ηα2 ηα2 + 2ηβ2γ 1 ηα2 (iii) γϵ z0 ℓ2 ηα4 z0 ℓ2 , where (iii) holds due to 1 γ and 1 β2 α2 . Hence, we have established inequality 27 for iteration t. Part V: In this part, we are going to prove equation 28 for iteration t. First, it follows from the triangle inequality that θt θ0 ℓ2 eθt θ0 ℓ2 + θt eθt ℓ2 eθt θ0 ℓ2 + 9ϵβ2γ2 Published as a conference paper at ICLR 2021 where in the second inequality we have used Part IV. Now we bound eθt θ0 ℓ2 from above as follows eθt θ0 ℓ2 = η i=0 J T 0 edi J T 0 edi ℓ2 J T 0 0 0 J T 0 = ηγ 1 1 ηα2 t J T 0 0 0 J T 0 ℓ2 (ii) γ β2 J 0 0 0 J 0 where (i) holds by equation 30 and (ii) holds by equation 29. Hence, it follows from the definition of R (equation 24) that θt θ0 ℓ2 γ β2 J 0 0 0 J 0 ℓ2 + 9ϵβ2γ2 This completes the proof. A.4 PRELIMINARIES FOR PROOFS OF RESULTS WITH ONE-HIDDEN LAYER GENERATOR AND LINEAR DISCRIMINATOR In this section, we gather some preliminary results that will be useful in proving the main results i.e. Theorems 2.1 and 2.2. We begin by noting that Theorem 2.1 is an instance of Theorem A.4 with f (W ) = 1 n Pn i=1 V φ (W zi). We thus begin this section by noting that f (W ) can be rewritten as follows 1 n Pn i=1 φ w T 1 zi . . . 1 n Pn i=1 φ w T k zi Furthermore, the Jacobian of this mapping f (W ) takes the form i=1 (V diag (φ (W zi))) z T i . (33) To characterize the spectral properties of this Jacobian it will be convenient to write down the expression for J (W )J (W )T which has a compact form J (W )J (W )T (i) = 1 (V diag (φ (W zi))) z T i diag (φ (W zj)) V T zj (ii) = 1 n2 V diag (φ (W zi)) diag (φ (W zj)) V T z T i zj n2 V diag ℓ=1,...,k i=1 ziφ w T ℓzi n2 V D2 V T , Published as a conference paper at ICLR 2021 where D is a diagonal matrix with entries i=1 ziφ (w T ℓzi) ℓ2 = ZT φ (Zwℓ) ℓ2 , (34) and Z Rn d contains the zi s in its rows. Note that we used simple properties of kronecker product in (i) and (ii), namely (A B)T = AT BT and (A B)(C D) = (AC) (BD). The next lemma establishes concentration of the diagonal entries of matrix D2 around their mean, which will be used in the future lemmas regarding the spectrum of the Jacobian mapping. The proof is deferred to Appendix B.1. Lemma A.6 Suppose w Rd is a fixed vector, z1, z2, , zn Rd are distributed as N(0, σ2 z Id) and constitute the rows of Z Rn d. 
Then for any 0 δ 3 2 the random variable D = ZT φ (Zw) ℓ2 satisfies (1 δ) E D2 D2 (1 + δ) E D2 with probability at least 1 2 e nδ2 54 + e c1nδ where c1 is a positive constant. Moreover we have E D2 = σ2 z Furthermore, using the above equation we have E J (W )J (W )T = σ2 z d + n 1 A.5 LEMMAS REGARDING THE INITIAL MISFIT AND THE SPECTRUM OF THE JACOBIAN In this section, we state some lemmas regarding the spectrum of the Jacobian mapping and the initial misfit, and defer their proofs to Appendix B. First, we state a result on the minimum singular value of the Jacobian mapping at initialization. Lemma A.7 (Minimum singular value of the Jacobian at initialization) Consider our GAN model with a linear discriminator and the generator being a one hidden layer neural network of the form z V φ(W z), where we have n independent data points z1, z2, , zn Rd distributed as N(0, σ2 z Id) and aggregated as the rows of a matrix Z Rn d, and V Rm k has i.i.d N(0, σ2 v) entries. We also assume that W0 Rk d has i.i.d N(0, σ2 w) entries and all entries of W0, V , and Z are independent. Then the Jacobian matrix at the initialization point obeys σmin (J (W0)) q (1 δ)2 k (1 + δ)2 m (1 + η) (1 + δ) σvσz π 2n , 0 δ 3 with probability at least 1 3e η2m 54 + e c1nδ , where c1 is a positive constant. Next lemma helps us bound the spectral norm of the Jacobian at initialization, which will be used later to derive upper bounds on Jacobian at every point near initialization. Lemma A.8 (spectral norm of the Jacobian at initialization) Following the setup of previous lemma, the operator norm of the Jacobian matrix at initialization point W0 Rk d satisfies J (W0) (1 + δ) σvσz π 2n , 0 δ 3 with probability at least 1 e m 54 + e c1nδ , with c1 a positive constant. The next lemma is adapted from Van Veen et al. (2018) and allows us to bound the variations in the Jacobian matrix around initialization. Published as a conference paper at ICLR 2021 Lemma A.9 (single-sample Jacobian perturbation) Let V Rm k be a matrix with i.i.d. N 0, σ2 v entries, W Rk d, and define the Jacobian mapping J (W ; z) = (V diag (φ (W z))) z T . Then, by taking W0 to be a random matrix with i.i.d. N 0, σ2 w entries, we have J (W ; z) J (W0; z) σv z ℓ2 v u u u u t6 2k R for all W Rk d obeying W W0 R with probability at least 1 e m Our final key lemma bounds the initial misfit f(W0) y := 1 n Pn i=1 V φ (W0zi) x. Lemma A.10 (Initial misfit) Consider our GAN model with a linear discriminator and the generator being a one hidden layer neural network of the form z V φ(W z), where we have n independent data points z1, z2, , zn Rd distributed as N(0, σ2 z Id) and aggregated as the rows of a matrix Z Rn d, and V Rm k has i.i.d N(0, σ2 v) entries. We also assume that the initial W0 Rk d has i.i.d N(0, σ2 w) entries. Then the following event 1 n i=1 V φ (W0zi) x ℓ2 (1 + δ) 1 kdm + x ℓ2 , 0 δ 3 holds with probability at least 1 k e c2n(δ/27)2 + e (δ/9)2m 2 + e (δ/3)2kd 2 , with c2 a fixed constant. A.6 PROOF OF THEOREM 2.1 In this section, we prove Theorem 2.1 by using our general meta Theorem A.4. To do this we need to check that Assumptions 1-3 are satisfied with high probability. Specifically, in our case the parameter θ is the matrix W and the non-linear mapping f is given by f (W ) = 1 n Pn i=1 V φ (W zi). We note that in our result d0 = 0 and thus z0 ℓ2 = r0 ℓ2, which simplifies our analysis. 
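As a quick sanity check of the compact form J(W)J(W)^T = (1/n²) V D² V^T used throughout the spectral arguments, the NumPy sketch below assembles the Jacobian of f(W) = (1/n) Σᵢ V φ(W zᵢ) entry by entry (with φ = ReLU) and compares it with the closed form; the dimensions and random seed are arbitrary and serve only as an illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, m = 50, 6, 8, 4                      # arbitrary small sizes for the check
Z = rng.standard_normal((n, d))               # rows are the latent samples z_i
V = rng.standard_normal((m, k))               # fixed output layer
W = rng.standard_normal((k, d))               # hidden-layer weights (rows w_l)

relu_prime = lambda x: (x > 0).astype(float)  # phi'(x) for phi = ReLU

# Entry-wise Jacobian of f(W) = (1/n) sum_i V phi(W z_i) with respect to vec(W):
#   d f_a / d W_{l j} = (1/n) sum_i V[a, l] * phi'(w_l^T z_i) * z_i[j].
J = np.zeros((m, k * d))
for l in range(k):
    act = relu_prime(Z @ W[l])                # phi'(w_l^T z_i), i = 1..n
    for j in range(d):
        J[:, l * d + j] = V[:, l] * (act @ Z[:, j]) / n

# Compact form: J J^T = (1/n^2) V D^2 V^T with D_ll = || Z^T phi'(Z w_l) ||_2.
D2 = np.array([np.linalg.norm(Z.T @ relu_prime(Z @ W[l])) ** 2 for l in range(k)])
print(np.allclose(J @ J.T, (V * D2) @ V.T / n ** 2))   # True (up to floating point)
```

The check confirms the Kronecker-product algebra above: the spectrum of J(W)J(W)^T is controlled entirely by V and the diagonal matrix D, which is what the subsequent lemmas exploit.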
To prove Assumption 1 note that by setting δ = 1 2 and η = 1 3 in Lemma A.7, we have σmin (J (W0)) σvσz This holds with probability at least 1 3e m 72 4k e c n 2k e d 216 , concluding the proof of Assumption 1. Next, by setting δ = 1 2 in Lemma A.8 we have J (W0) ζ := 3 with probability at least 1 e m 2 2k e c n k e d 216 . Now to bound spectral norm of Jacobian at W where W W0 R (the value of R is defined in the proof of assumption 3 below), we use triangle inequality to get J (W ) J (W0) + J (W ) J (W0) . This last inequality together with assumption 3, which we will prove below, yields J (W ) J (W0) + ϵ J (W0) + α2 4γβ J (W0) + J (W0) 2 Published as a conference paper at ICLR 2021 Therefore by choosing β = 2ζ we arrive at J (W ) J (W0) + J (W0) 2 = J (W0) + J (W0) 2 J (W0) + J (W0) 2 8 J (W0) 2 J (W0) 2ζ = β, establishing that assumption 2 holds with with probability at least 1 e m 2 2k e c n k e d 216 . Finally to show that Assumption 3 holds, we use the single-sample Jacobian perturbation result of Lemma A.9 combined with the triangle inequality to conclude that J (W ) J (W0) = i=1 J (W ; zi) J (W0; zi) i=1 J (W ; zi) J (W0; zi) v u u u u t6 2k R (i) σv Z F n v u u u u t6 2k R v u u u u t6 2k R where (i) holds by Cauchy Schwarz inequality, and (ii) holds because for a Gaussian matrix Z Rn d with N(0, σ2 z) entries the following holds nd P Z 2 F 3 2σ2 znd 1 e nd Published as a conference paper at ICLR 2021 Now we set ϵ = α2 4γβ and show that Assumption 3 holds with this choice of ϵ and with radius e R, whose value will be defined later in the proof. First, note that = σ2 vσ2 z 1 k 9 2 m 2 d+ n 1 (i) σvσz 1 8 where (i) holds by assuming k C m with C being a large positive constant. Combining the last inequality with equation 35, we observe that a sufficient condition for assumption 3 to hold is v u u u u t6 2k R which is equivalent to v u u u u t6 2k R Now the first term in the L.H.S. is upper bounded by 1 k if k (210000)2 md, and for the second term we need v u u u u t6 2k R which by defining x = 3d 2R σw 3 is equivalent to x 1 2 1050002 . This last inequality holds for x c log d with c < 1 a sufficiently small positive constant, which translates into So far we have shown that Assumption 3 holds with ϵ = α2 4γβ and with radius e R defined as e R := (d log d) 3 2 , and we conclude that it holds for any radius R less than e R as well. Now we work with Published as a conference paper at ICLR 2021 the definition of R in equation 24 to show that R e R: J 0 0 0 J 0 ℓ2 + 9ϵβ2γ2 α3 r0 ℓ2 + 9 α2 (iii) C 1 σvσz k d m + x ℓ2 where (i) holds because J 0 1 α and 4γβϵ = α2, (ii) holds as 1 β α and as we substitute γ = 5 from Lemma A.3, and (iii) follows from k C m and using δ = 1 3 in Lemma A.10. Now a sufficient condition for equation 36 to hold is that k d m + x ℓ2 which is equivalent to 2 3σvσwσz (d log d) k d m + (d log d) 3 2 x ℓ2 c kσvσwσz. Finally, this inequality is satisfied by assuming k C md4 log (d)3 and setting σvσwσz x ℓ2 md 5 2 log d 3 2 . This shows that assumption 3 holds with probability at least 1 ne m ne c md3 log(d)2 k e c n e m 1500 e kd 162 , concluding the proof of Theorem 2.1. A.7 PROOF OF THEOREM 2.2 Consider a nonlinear least-squares optimization problem of the form min θ Rp L(θ) := 1 2 f (θ) y 2 ℓ2 with f : Rp 7 Rm and y Rm. Suppose the Jacobian mapping associated with f satisfies the following three assumptions. Assumption 1 We assume σmin (J (θ0)) 2α for a fixed point θ0 Rp. Assumption 2 Let be a norm dominated by the Frobenius norm i.e. θ θ F holds for all θ Rp. 
Fix a point θ0 and a number R > 0. For any θ satisfying θ θ0 R, we have J (θ) J (θ0) α Assumption 3 We assume for all θ Rp obeying θ θ0 R, we have J (θ) β. With these assumptions in place we are now ready to state the following result from Oymak & Soltanolkotabi (2020): Theorem A.11 Given θ0 Rp, suppose assumptions 1, 2, and 3 hold with R = 3 f (θ0) y ℓ2 Published as a conference paper at ICLR 2021 Then, using a learning rate η 1 3β2 , all gradient descent updates obey f (θτ) y ℓ2 1 ηα2 τ f (θ0) y ℓ2 . (38) We are going to apply this theorem in our case where the parameter is W , the nonlinear mapping is f (W ) = 1 n Pn i=1 V φ (W zi) with φ = Re LU, and the norm set to the operator norm. Similar to previous part, by using Lemma A.7 we conclude that with probability at least 1 3e m 72 4k e c n 2k e d 216 , assumption 1 is satisfied with Next we show that assumption 2 is valid for α as defined in the previous line and for radius e R defined later. First we note that α where the inequality holds by assuming K C m for a sufficiently large positive constant C. Now by using equation 35 assumption 2 holds if v u u u u t6 2k R which is equivalent to v u u u u t6 2k R The first term in the L.H.S. of the inequality above is upper bounded by 1 k if k C md. For upper bounding the second term it is sufficient to show that v u u u u t6 2k R which by defining x = 3d 2R σw 3 is equivalent to x log d x C. Now this last inequality holds if we have x c log(d) for a sufficiently small constant c, which by rearranging terms amounts to showing that R c σw (d log(d)) 3 2 . Hence up to this point, we have shown that assumption 2 holds with radius e R := c σw (d log(d)) 3 2 , and this implies that it holds for all values of R less than e R. Therefore, we work with the definition of R in equation 37 to show that R e R as follows: R = 3 f (θ0) y ℓ2 k d m + x ℓ2 = 2 2σvσwσz k m d + 3 x ℓ2 where in (i) we used Lemma A.10 with δ = 1 3. Hence for showing R e R it suffices to show that k m d + 3 x ℓ2 (d log (d)) Published as a conference paper at ICLR 2021 which by assuming k C m simplifies to σvσwσz (d log (d)) k m d + (d log (d)) 3 2 x ℓ2 C k σvσwσz. Now this last inequality holds if k C md4 log (d)3 and by setting σvσwσz x ℓ2 md 5 2 log d 3 2 . Therefore Assumption 2 holds for radius R defined in equation 37 with probability at least 1 ne m 2 ne c md3 log (d)2 k e c n e m 1500 e kd Finally to show assumption 3 holds, we note that for all W satisfying W W0 R, where the value of R is defined in equation 37, it holds that J (W ) J (W0) + J (W ) J (W0) J (W0) + σmin (J (W0)) where the last inequality holds by using lemma A.8, hence establishing that assumption 3 holds with with probability at least 1 e m 2 2k e c n k e d 216 , finishing the proof of Theorem 2.2. B PROOFS OF THE AUXILIARY LEMMAS In this section, we first provide a proof of Lemma A.6 and next go over the proofs of the key lemmas stated in Section A.5. B.1 PROOF OF LEMMA A.6 Recall that J (W )J (W )T = 1 V diag(φ (W zi))diag(φ (W zj)V T (z T i zj) n2 V diag ℓ=1,...,k i=1 ziφ (w T ℓzi) n2 V D2 V T , where D is a diagonal matrix with entries i=1 ziφ (w T ℓzi) ℓ2 = ZT φ (Zwℓ) ℓ2 . (39) The matrix Z Rn d contains the zi s in its rows. In order to proceed we are going to analyze the entries of the diagonal matrix D2. We observe that ZT φ (Zw) 2 ℓ2 = (I ww T ||w||2 )ZT φ (Zw) ℓ2 | {z } A ||w||2 ZT φ (Zw) ℓ2 | {z } B First, we compute the expectation of A. 
We observe that i=1 (I ww T ||w||2 )ziφ (w T zi) Published as a conference paper at ICLR 2021 Conditioned on w, I ww T w 2 zi is distributed as N 0, σ2 z I ww T w 2 and w T zi has distribu- tion N 0, σ2 z w 2 . Moreover, these two random variables are independent, because w is in the null space of I ww T ||w||2 . This observation yields i=1 (I ww T ||w||2 )ziφ (w T zi) j=1 E (I ww T ||w||2 )zi, (I ww T φ (w T zi)φ (w T zj) E φ (w T zi) 2 1 2(d 1)σ2 z = n 2 (d 1)σ2 z. Next we show that A is concentrated around its mean. Because I ww T ||w||2 zi is independent from w T zi, we use z i as an independent copy of zi. Hence we can write A = (I ww T ||w||2 )ZT φ (Zw) ||w||2 )ZT φ (Z w) i=1 (I ww T ||w||2 )ziφ w T z i where gi = I ww T w 2 zi N 0, σ2 z I ww T w 2 and ui = φ w T z i bern( 1 these are all independent from each other. Note that Pn i=1 giui 2 ℓ2 has the same distribution as g 2 ℓ2 u 2 ℓ2, where g N 0, σ2 z I ww T w 2 and u is a vector with entries ui. Note that for the norm of u, the event n 2 (1 δ) u 2 ℓ2 n holds with probability at least 1 2e nδ2 2 . Recall that for g N(0, σ2Id) and 0 < δ 1 P g 2 ℓ2 (1 + δ)E g 2 ℓ2 P g 2 ℓ2 (1 δ)E g 2 ℓ2 By applying the union bound and noting that E g 2 ℓ2 = (d 1) σ2 z, for 0 < δ 3 2, we obtain that the event | g 2 ℓ2 u 2 ℓ2 n 2 (d 1)σ2 z| δ n 2 (d 1)σ2 z 4Here, bern( 1 2) means that the random variable takes values 0 and 1 each with probability 1/2. Published as a conference paper at ICLR 2021 holds with probability at least 1 2e nδ2 54 . In order to analyze B, we first note that ||w||2 ZT φ (Zw) ||w||ZT φ (Zw) ||w||, φ (Zw) = g, φ (||w||g) 2 i=1 gi 1(gi 0) i=1 Re LU(gi) where gi = z T i w w N 0, σ2 z . It follows that i=1 Re LU(gi) i=1 E Re LU 2(gi) + X i =j E Re LU(gi)Re LU(gj) which results in E D2 ℓℓ = E (A) + E (B) = σ2 z Next, in order to show that B concentrates around its mean, we note that Re LU (gi) is a sub Gaussian random variable with ψ2 norm Cσz, where C is a fixed constant. Therefore X = Pn i=1 Re LU (gi) is sub-Gaussian with ψ2 norm C nσz. By the sub-exponential tail bound for X2 E(X2) we obtain P |B E (B)| t 2e c t nσ2z . Finally by putting these results together and using union bounds we have P D2 ℓℓ E D2 ℓℓ δE D2 ℓℓ 2e nδ2 18 + 2e dδ2 54 + 2e c1nδ, 0 δ 3 finishing the proof of Lemma A.6. B.2 PROOF OF LEMMA A.7 Our main tool for bounding the minimum singular value of the Jacobian mapping will be the following lemma from Soltanolkotabi (2019): Lemma B.1 Let d Rk be a fixed vector with nonzero entries and D = diag (d) . Also, let A Rk m have i.i.d. N (0, 1) entries and T Rm. Define bk (d) = E Dg ℓ2 , where g N (0, Ik) . Also define σ (T ) := max v T v ℓ2 . Then for all u T we have DAu ℓ2 bk (d) u ℓ2 d ℓ ω (T ) + η with probability at least 1 6e 8 d 2 ℓ σ2(T ) . Published as a conference paper at ICLR 2021 In order to apply this lemma, we set the elements of d to be Dℓℓas in equation 39 and choose T = Sm 1 and A = V T Rk m with N(0, σ2 v) entries. It follows that bk (d) = E Dg ℓ2 = Var Dg ℓ2 , We are going to use the fact that for a B-Lipschitz function φ and normal random variable g N(0, 1), based on the Poincare inequality (Ledoux, 2001) we have Var(φ (g)) B2. By noting that for a diagonal matrix D Dx ℓ2 Dy ℓ2 Dx Dy ℓ2 d ℓ x y ℓ2 , we get Var( Dg ℓ2) d 2 ℓ2 d 2 ℓ . This combined with ω Sm 1 m and Lemma B.1 yields that the event σmin (V D) σv d 2 ℓ2 d 2 ℓ d ℓ m η (41) holds with probability at least 1 3e 8 d 2 ℓ . 
Next, using the concentration bound for D2 ℓℓ, which we obtained in Section B.1, we bound d 2 ℓ2 and d ℓ , where we have set di = Dii for 1 i k. For 0 δ 3 2 we compute that P max 1 i kdi (1 + δ) q E [d2 i ] = P i=1 d2 i (1 + δ)2 E d2 i ! k P d2 i (1 + δ)2 E d2 i k P d2 i (1 + δ) E d2 i k e nδ2 54 + e c1nδ , (42) as well as P d ℓ2 (1 δ) E [d2 i ] P i=1 d2 i (1 δ)2 E d2 i ! k P d2 i (1 δ)2 E d2 i k P d2 i (1 δ)E d2 i k e nδ2 54 + e c1nδ . Finally by replacing η with η d ℓ m in equation 41, combined with equation 42 and equation 43, for a random W0 with i.i.d. N 0, σ2 w entries we have: σmin (J (W0)) = 1 nσmin (V D) (1 δ)2 k (1 + δ)2 m (1 + η) (1 + δ) q (1 δ)2 k (1 + δ)2 m (1 + η) (1 + δ) σvσz π 2n , 0 δ 3 with probability at least 1 3e η2m 54 + e c1nδ . This completes the proof of Lemma A.7. Published as a conference paper at ICLR 2021 B.3 PROOF OF LEMMA A.8 Recall that J (W )J (W )T = 1 n2 V diag ℓ=1,...,k i=1 ziφ (w T ℓzi) n2 V D2 V T , which implies that For matrix V Rm k with i.i.d N(0, σ2 v) the event holds with probability at least 1 e m 2 . Regarding matrix D by repeating equation 42 the following event D = max 1 i k Dii (1 + δ) q E[D2 ii] = (1 + δ) σz holds with probability at least 1 k e nδ2 54 + e c1nδ . Putting these together it yields that the event J (W0) (1 + δ) σvσz π 2n , 0 δ 3 holds with probability at least 1 e m 54 + e c1nδ , finishing the proof of Lemma A.8. B.4 PROOF OF LEMMA A.10 First, note that if W has i.i.d. N 0, σ2 w entries and V , W , Z are all independent, then f (W ) ℓ2 = 1 n V φ W ZT 1n 1 ℓ2 has the same distribution as v ℓ2 a ℓ2, where v N 0, σ2 v Im and a = 1 nφ W ZT 1 has independent sub-Gaussian entries, so its ℓ2-norm is concentrated. Note that conditioned on W , ai = 1 n Pn j=1 Re LU z T j wi is sub-Gaussian with ai ψ2 = C wi ℓ2σz n , and it is concentrated around Eai = 1 2π wi ℓ2 σz. This gives P {ai (1 + δ)Eai} 1 e c δ2(Eai)2 ai 2 ψ2 = 1 e cnδ2, which implies that P a2 i (1 + 3δ)(Eai)2 P a2 i (1 + δ)2(Eai)2 1 e cnδ2, 0 δ 1. Due to the union bound we get that a 2 ℓ2 (1 + δ) i=1 (Eai)2 ) i=1 a2 i (1 + δ) (Eai)2 ) i=1 P a2 i (1 + δ) (Eai)2 k e cn(δ/3)2, 0 δ 3. By substituting Pk i=1(Eai)2 = 1 2πσ2 z W 2 F this shows P a ℓ2 (1 + δ) 1 P a 2 ℓ2 (1 + δ) 1 2π σ2 z W 2 F 1 k e cn(δ/3)2, 0 δ 3. Published as a conference paper at ICLR 2021 We also have the following result for v N(0, σ2 v Im) P v ℓ2 (1 + δ) σv m 1 e δ2m By combining the above results we obtain P a ℓ2 v ℓ2 (1 + δ) 1 2π σvσz m W F P a ℓ2 v ℓ2 (1 + δ/3)2 1 2π σvσz m W F 1 k e cn(δ/9)2 e (δ/3)2m Furthermore, we can bound W F by the tail inequality P n W F (1 + δ) σw kd o 1 e δ2kd Hence, by combining the last two results we have that P a ℓ2 v ℓ2 (1 + δ) 1 k d m P a ℓ2 v ℓ2 (1 + δ/3)2 1 1 k e cn(δ/27)2 e (δ/9)2m 2 e (δ/3)2kd Therefore, due to the triangle inequality the event f(W0) x ℓ2 (1 + δ) 1 k d m + x ℓ2 , 0 δ 3 holds with probability at least 1 k e c2n(δ/27)2 e (δ/9)2m 2 e (δ/3)2kd 2 for some positive constant c2, completing the proof of Lemma A.10. C ADDITIONAL EXPERIMENTS Effect of single component overparameterization: In Section 3 of the main paper, we performed experiments in the setting where the size of generator and discriminator are held roughly the same (both discriminator and generator uses the same value of k). In this part, we analyze singlecomponent overparameterization where we study the effect of overparameterization when one of the components (generator / discriminator) has varying k, while the other component uses the standard value of k (64 for DCGAN and 128 for Resnet GAN). 
The FID variation under single-component overparameterization is shown in Fig. 7. We observe trends similar to the previous case: increasing overparameterization leads to improved FID scores. Interestingly, increasing the value of k beyond the default value used in the other component leads to a slight drop in performance. Hence, choosing comparable sizes for the discriminator and generator models is recommended.

D EXPERIMENTAL DETAILS

The model architectures we use in the experiments are shown in Figure 8. In both DCGAN and Resnet-based GANs, the parameter k controls the number of convolutional filters in each layer; the larger the value of k, the more overparameterized the models are.

Optimization: Both DCGAN and Resnet-based GAN models are optimized using commonly used hyper-parameters: Adam with learning rate 0.0001 and betas (0.5, 0.999), a gradient penalty of 10, and 5 critic iterations per generator iteration for both DCGAN and Resnet-based GAN models. Models are trained for 300,000 iterations with a batch size of 64. A minimal sketch of a training loop consistent with these settings is given at the end of this appendix.

E NEAREST NEIGHBOR VISUALIZATION

In this section, we visualize the nearest neighbors of samples generated by GAN models trained with different levels of overparameterization. More specifically, we train DCGAN models with k = 8 and k = 128, synthesize random samples from the trained models, and query their nearest neighbors in the training set. The resulting samples are shown in Figure 10. We observe that overparameterized models generate samples with high diversity.

[Figure 7 plots train and test FID against the hidden dimension k ∈ {16, 32, 64, 128, 256} for (a) discriminator overparameterization and (b) generator overparameterization.]
Figure 7: Single Component Overparameterization Results: We plot FID scores of the Resnet GAN as the hidden dimension of one of the components is varied, while the hidden dimension of the other component is held fixed. Even in this case, overparameterization improves model convergence.

Figure 8: Architectures used in the over-parameterization experiments. The number of out-channels in the convolutional layers is indicated in red. The parameter k controls the width of the architectures; the larger the k, the more over-parameterized the models are.

[Figure 9 plots convergence against the hidden dimension k for (a) the discriminator trained to optimality, (b) Gradient Descent Ascent, and (c) diter ∈ {2, 5, 10} steps of discriminator update per generator iteration.]
Figure 9: Convergence plot of a GAN model trained on the Two-Moons dataset, with a linear discriminator and a 1-hidden-layer generator, as the hidden dimension (k) increases. Over-parameterized models show improved convergence.

Figure 10: Nearest neighbor visualization. We visualize the nearest-neighbor samples in the training set for generations from a DCGAN model trained on the CIFAR-10 dataset. The left panel shows a DCGAN trained with k = 8, while the right one shows the one with k = 128. We observe that overparameterized models generate samples with high diversity.
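The following is the training-loop sketch referenced in Appendix D. It is not the authors' code: it assumes a WGAN-GP style objective (suggested by the gradient-penalty and critic-iteration settings), and `G`, `D`, and the data `loader` are hypothetical placeholders standing in for the DCGAN/Resnet architectures of Figure 8 and the CIFAR-10 / Celeb-A pipelines.

```python
# Hypothetical sketch only: WGAN-GP style loop matching the settings in Appendix D
# (Adam lr=1e-4, betas=(0.5, 0.999), gradient penalty 10, 5 critic steps, 300k iterations).
import torch

def gradient_penalty(D, real, fake):
    # Penalize the critic's gradient norm on random interpolates (standard WGAN-GP term).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def cycle(loader):
    while True:                                   # iterate over the dataset indefinitely
        for batch in loader:
            yield batch

def train(G, D, loader, device, z_dim=128, n_iters=300_000, d_iters=5, gp_weight=10.0):
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))
    data = cycle(loader)                          # loader assumed to yield (images, labels) batches of size 64
    for _ in range(n_iters):
        for _ in range(d_iters):                  # critic updates per generator update
            real = next(data)[0].to(device)
            z = torch.randn(real.size(0), z_dim, device=device)
            fake = G(z).detach()
            loss_d = D(fake).mean() - D(real).mean() + gp_weight * gradient_penalty(D, real, fake)
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        z = torch.randn(real.size(0), z_dim, device=device)
        loss_g = -D(G(z)).mean()                  # generator ascends the critic score
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this sketch the overparameterization level enters only through the (hypothetical) constructors of `G` and `D`, whose widths are set by the hidden dimension k; that is the single knob varied across the experiments in Section 3 and Appendix C.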