Minimax Estimation of Neural Net Distance

Kaiyi Ji, Department of ECE, The Ohio State University, Columbus, OH 43210, ji.367@osu.edu
Yingbin Liang, Department of ECE, The Ohio State University, Columbus, OH 43210, liang.889@osu.edu

Abstract

An important class of distance metrics proposed for training generative adversarial networks (GANs) is the integral probability metric (IPM), in which the neural net distance captures the practical GAN training via two neural networks. This paper investigates the minimax estimation problem of the neural net distance based on samples drawn from the distributions. We develop the first known minimax lower bound on the estimation error of the neural net distance, and an upper bound tighter than an existing bound on the estimation error of the empirical neural net distance. Our lower and upper bounds match not only in the order of the sample size but also in terms of the norm of the parameter matrices of neural networks, which justifies the empirical neural net distance as a good approximation of the true neural net distance for training GANs in practice.

1 Introduction

Generative adversarial networks (GANs), first introduced by [9], have become an important technique for learning generative models from complicated real-life data. Training GANs is performed via a minimax optimization, with the maximum and minimum respectively taken over a class of discriminators and a class of generators, where both discriminators and generators are modeled by neural networks. Given that the discriminator class is sufficiently large, [9] interpreted the GAN training as finding a generator such that the generated distribution $\nu$ is as close as possible to the target true distribution $\mu$, measured by the Jensen-Shannon distance $d_{JS}(\mu, \nu)$, as shown below:
$$\min_{\nu \in \mathcal{D}_G} d_{JS}(\mu, \nu). \tag{1}$$
Inspired by such an idea, a large body of GAN models were then proposed based on various distance metrics between a pair of distributions, in order to improve the training stability and performance, e.g., [2, 3, 13, 15]. Among them, the integral probability metric (IPM) [19] arises as an important class of distance metrics for training GANs, which takes the following form:
$$d_{\mathcal{F}}(\mu, \nu) = \sup_{f \in \mathcal{F}} \big|\mathbb{E}_{x \sim \mu} f(x) - \mathbb{E}_{x \sim \nu} f(x)\big|. \tag{2}$$
In particular, different choices of the function class $\mathcal{F}$ in (2) result in different distance metrics. For example, if $\mathcal{F}$ represents the set of all 1-Lipschitz functions, then $d_{\mathcal{F}}(\mu, \nu)$ corresponds to the Wasserstein-1 distance, which is used in Wasserstein-GAN (WGAN) [2]. If $\mathcal{F}$ represents a unit ball in a reproducing kernel Hilbert space (RKHS), then $d_{\mathcal{F}}(\mu, \nu)$ corresponds to the maximum mean discrepancy (MMD) distance, which is used in MMD-GAN [7, 13].

Practical GAN training naturally motivates taking $\mathcal{F}$ in (2) to be a set $\mathcal{F}_{nn}$ of neural networks, which results in the so-called neural net distance $d_{\mathcal{F}_{nn}}(\mu, \nu)$ introduced and studied in [3, 28]. For computational feasibility, in practice $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ is typically adopted as an approximation (i.e., an estimator) of the true neural net distance $d_{\mathcal{F}_{nn}}(\mu, \nu)$ for practical GAN training, where $\hat\mu_n$ and $\hat\nu_m$ are the empirical distributions corresponding to $\mu$ and $\nu$, respectively, based on $n$ samples drawn from $\mu$ and $m$ samples drawn from $\nu$. Thus, one important question one can ask here is how well $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ approximates $d_{\mathcal{F}_{nn}}(\mu, \nu)$.
If they are close, then training GANs to achieve a small $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ also implies a small $d_{\mathcal{F}_{nn}}(\mu, \nu)$, i.e., the generated distribution $\nu$ is guaranteed to be close to the true distribution $\mu$. To answer this question, [3] derived an upper bound on the quantity $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$, and showed that $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ converges to $d_{\mathcal{F}_{nn}}(\mu, \nu)$ at a rate of $O(n^{-1/2} + m^{-1/2})$. However, the following two important questions are still left open: (a) Is the convergence rate $O(n^{-1/2} + m^{-1/2})$ optimal? We certainly want to be assured that the empirical objective $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ used in practice does not fall short in the first place. (b) The dependence of the upper bound in [3] on the neural networks is characterized by the total number of network parameters, which can be quite loose in view of recent work, e.g., [20, 27]. Thus, the goal of this paper is to address issue (a) by developing a lower bound on the minimax estimation error of $d_{\mathcal{F}_{nn}}(\mu, \nu)$ (see Section 2.2 for the precise formulation) and to address issue (b) by developing a tighter upper bound than that of [3].

In fact, the above problem can be viewed as a distance estimation problem, i.e., estimating the neural net distance $d_{\mathcal{F}_{nn}}(\mu, \nu)$ based on samples drawn i.i.d. from $\mu$ and $\nu$, respectively. The empirical distance $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ serves as its plug-in estimator (i.e., substituting the true distributions by their empirical versions). We are interested in exploring the optimality of the convergence of such a plug-in estimator not only in terms of the sample size but also in terms of the parameters of the neural networks. We further note that the neural net distance can be used in a variety of other applications such as the support measure machine [18] and anomaly detection [29], and hence the performance guarantees we establish here can be of interest in those domains.

1.1 Our Contribution

In this paper, we investigate the minimax estimation of the neural net distance $d_{\mathcal{F}_{nn}}(\mu, \nu)$, where the major challenge in the analysis lies in dealing with complicated neural network functions. This paper establishes a tighter upper bound on the convergence rate of the empirical estimator than the existing one in [3], and develops a lower bound that matches our upper bound not only in the order of the sample size but also in terms of the norm of the parameter matrices of the neural networks. Our specific contributions are summarized as follows:

- In Section 3.1, we provide the first known lower bound on the minimax estimation error of $d_{\mathcal{F}_{nn}}(\mu, \nu)$ based on finite samples, which takes the form $c_l \max\{n^{-1/2}, m^{-1/2}\}$, where the constant $c_l$ depends only on the parameters of the neural networks. Such a lower bound further specializes to $b_l \prod_{i=1}^{d} M(i) \max\{n^{-1/2}, m^{-1/2}\}$ for ReLU networks, where $b_l$ is a constant, $d$ is the depth of the neural networks, and $M(i)$ can be either the Frobenius norm constraint or the $\ell_{1,\infty}$ norm constraint of the parameter matrix $W_i$ in layer $i$. Our proof exploits Le Cam's method with the technical development of a lower bound on the difference between two neural networks.

- In Section 3.2, we develop an upper bound on the estimation error of $d_{\mathcal{F}_{nn}}(\mu, \nu)$ by $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$, which takes the form $c_u(n^{-1/2} + m^{-1/2})$, where the constant $c_u$ depends only on the parameters of the neural networks.
Such an upper bound further specializes to $b_u \sqrt{d+h}\,\prod_{i=1}^{d} M(i)\,(n^{-1/2} + m^{-1/2})$ for ReLU networks, where $b_u$ is a constant, $h$ is the dimension of the support of $\mu$ and $\nu$, and $\sqrt{d+h}$ can be replaced by $\sqrt{d}$ or $\sqrt{d+\log h}$ depending on the distribution class and the norm of the weight matrices. Our proof includes the following two major technical developments, presented in Section 3.4.

A new concentration inequality: In order to develop an upper bound for the unbounded-support sub-Gaussian class, the standard McDiarmid inequality under the bounded difference condition is not applicable. We thus first generalize a McDiarmid-type inequality [11] for unbounded functions of scalar sub-Gaussian variables to functions of sub-Gaussian vectors, which can be of independent interest for other applications. Such a development requires substantial machinery. We then apply this concentration inequality to upper-bound the estimation error of the neural net distance in terms of the Rademacher complexity.

Upper bound on Rademacher complexity: Though existing Rademacher complexity bounds [8, 22] for neural networks can be used for input data with bounded support, direct application of those bounds to unbounded sub-Gaussian input data yields order-level loose bounds. Thus, we develop a tighter bound on the Rademacher complexity that exploits the sub-Gaussianity of the input variables. Such a bound is also tighter than the existing bound of the same type in [23]. The details of the comparison are provided after Theorem 7.

- In Section 3.3, comparison of the lower and upper bounds indicates that the empirical neural net distance (i.e., the plug-in estimator) achieves the optimal minimax estimation rate in terms of $n^{-1/2} + m^{-1/2}$. Furthermore, for ReLU networks, the two bounds also match in terms of $\prod_{i=1}^{d} M(i)$, indicating that both $\prod_{i=1}^{d} \|W_i\|_F$ and $\prod_{i=1}^{d} \|W_i\|_{1,\infty}$ are key quantities that capture the estimation accuracy. Such a result is consistent with the observations made in [20] for the generalization error of training deep neural networks. We note that there is still a gap of $\sqrt{d}$ between the bounds, which requires future efforts to address.

1.2 Related Work

Estimation of IPMs. [25] studied the empirical estimation of several IPMs including the Wasserstein distance, MMD and the Dudley metric, and established the convergence rate for their empirical estimators. A recent paper [26] established that the empirical estimator of MMD achieves the minimax optimal convergence rate. [3] introduced the neural net distance, which also belongs to the IPM class, and established the convergence rate of its empirical estimator. This paper establishes a tighter upper bound for such a distance metric, as well as a lower bound that matches our upper bound in the order of the sample sizes and the norm of the parameter matrices.

Generalization error of GANs. In this paper, we focus on the minimax estimation error of the neural net distance, and hence the quantity $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$ is of our interest, on which our bound is tighter than the earlier study in [3]. Such a quantity relates to but is different from the following generalization error recently studied in [14, 28] for training GANs. [28] studied the generalization error $d_{\mathcal{F}}(\mu, \hat\nu^*) - \inf_{\nu \in \mathcal{D}_G} d_{\mathcal{F}}(\mu, \nu)$, where $\hat\nu^*$ is the minimizer of $d_{\mathcal{F}}(\hat\mu_n, \nu)$ and $\mathcal{F}$ is taken as a class $\mathcal{F}_{nn}$ of neural networks. [14] studied the same type of generalization error but took $\mathcal{F}$ as a Sobolev space, and characterized how the smoothness of the Sobolev space helps the GAN training.

Rademacher complexity of neural networks.
Part of our analysis of the minimax estimation error of the neural net distance requires upper-bounding the average Rademacher complexity of neural networks. Although various bounds on the Rademacher complexity of neural networks, e.g., [1, 4, 8, 21, 22], can be used for distributions with bounded support, direct application of the best known existing bound to sub-Gaussian variables turns out to be order-level looser than the bound we establish here. [6, 23] studied the average Rademacher complexity of one-hidden-layer neural networks over Gaussian variables. Specializing our bound to the setting of [23] improves its bound, and specializing to the setting of [6] equals its bound.

1.3 Notations

We use bold-faced small and capital letters to denote vectors and matrices, respectively. Given a vector $w \in \mathbb{R}^h$, $\|w\|_2 = \big(\sum_{i=1}^{h} w_i^2\big)^{1/2}$ denotes the $\ell_2$ norm and $\|w\|_1 = \sum_{i=1}^{h} |w_i|$ denotes the $\ell_1$ norm, where $w_i$ denotes the $i$th coordinate of $w$. For a matrix $W = [W_{ij}]$, we use $\|W\|_F = \big(\sum_{i,j} W_{ij}^2\big)^{1/2}$ to denote its Frobenius norm, $\|W\|_{1,\infty}$ to denote the maximal $\ell_1$ norm of the row vectors of $W$, and $\|W\|$ to denote its spectral norm. For a distribution $\mu$, we denote by $\hat\mu_n$ its empirical distribution, which places probability $1/n$ on each of the $n$ samples drawn i.i.d. from $\mu$.

2 Preliminaries and Problem Formulations

In this section, we first introduce the neural net distance and the specifications of the corresponding neural networks. We then introduce the minimax estimation problem that we study in this paper.

2.1 Neural Net Distance

The neural net distance between two distributions $\mu$ and $\nu$ introduced in [3] is defined as
$$d_{\mathcal{F}_{nn}}(\mu, \nu) = \sup_{f \in \mathcal{F}_{nn}} \big|\mathbb{E}_{x \sim \mu} f(x) - \mathbb{E}_{x \sim \nu} f(x)\big|, \tag{3}$$
where $\mathcal{F}_{nn}$ is a class of neural networks. In this paper, given the domain $\mathcal{X} \subseteq \mathbb{R}^h$, we let $\mathcal{F}_{nn}$ be the following set of depth-$d$ neural networks of the form
$$f \in \mathcal{F}_{nn}: \ x \in \mathcal{X} \ \mapsto\ w_d^T\, \sigma_{d-1}\big(W_{d-1}\,\sigma_{d-2}(\cdots \sigma_1(W_1 x))\big), \tag{4}$$
where $W_i$, $i = 1, 2, \dots, d-1$, are parameter matrices, $w_d$ is a parameter vector (so that the output of the neural network is a scalar), and each $\sigma_i$ denotes the entry-wise activation function of layer $i$ for $i = 1, 2, \dots, d-1$, i.e., for an input $z \in \mathbb{R}^t$, $\sigma_i(z) := [\sigma_i(z_1), \sigma_i(z_2), \dots, \sigma_i(z_t)]^T$.

Throughout this paper, we adopt the following two assumptions on the activation functions in (4).

Assumption 1. All activation functions $\sigma_i(\cdot)$ for $i = 1, 2, \dots, d-1$ satisfy the following: $\sigma_i(\cdot)$ is continuous and non-decreasing with $\sigma_i(0) \ge 0$, and $\sigma_i(\cdot)$ is $L_i$-Lipschitz, where $L_i > 0$.

Assumption 2. For all activation functions $\sigma_i$, $i = 1, 2, \dots, d-1$, there exist positive constants $q(i)$ and $Q_\sigma(i)$ such that for any $0 \le x_1 \le x_2 \le q(i)$, $\sigma_i(x_2) - \sigma_i(x_1) \ge Q_\sigma(i)(x_2 - x_1)$.

Note that Assumptions 1 and 2 hold for a variety of commonly used activation functions, including ReLU, sigmoid, softplus and tanh. In particular, in Assumption 2, the existence of the constants $q(i)$ and $Q_\sigma(i)$ is more important than the particular values they take, which affect only the constant terms in our bounds presented later. For example, Assumption 2 holds for ReLU with any $q(i)$ and $Q_\sigma(i) = 1$, and holds for sigmoid with any $q(i) > 0$ and $Q_\sigma(i) = 1/(2 + 2e^{q(i)})$.

As shown in [2], the practical training of GANs is conducted over neural networks with parameters lying in a compact space. Thus, we consider the following two compact parameter sets, as taken in [5, 8, 22, 24]:
$$\mathcal{W}_{1,\infty} = \big\{W_i \in \mathbb{R}^{n_i \times n_{i+1}} : \|W_i\|_{1,\infty} \le M_{1,\infty}(i)\big\} \cup \big\{w_d \in \mathbb{R}^{n_d} : \|w_d\|_1 \le M_{1,\infty}(d)\big\},$$
$$\mathcal{W}_F = \big\{W_i \in \mathbb{R}^{n_i \times n_{i+1}} : \|W_i\|_F \le M_F(i)\big\} \cup \big\{w_d \in \mathbb{R}^{n_d} : \|w_d\|_2 \le M_F(d)\big\}. \tag{5}$$
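To make the objects just defined concrete, the following Python sketch (a minimal illustration, not code from the paper) computes the plug-in estimate $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ for a small ReLU network of the form (4) with Frobenius-norm-constrained weights, i.e., within the parameter set $\mathcal{W}_F$ of (5), by projected gradient ascent. The layer widths, norm radii $M_F(i)$, optimizer settings, and helper names such as `empirical_nn_distance` are illustrative assumptions.

```python
# Minimal sketch of the plug-in estimator d_{F_nn}(mu_hat_n, nu_hat_m):
# maximize the empirical mean difference over a bias-free ReLU network whose
# weight matrices are projected onto Frobenius-norm balls (the set W_F in (5)).
import torch
import torch.nn as nn

def project_frobenius_(linear: nn.Linear, radius: float) -> None:
    """In-place projection of a weight matrix onto {W : ||W||_F <= radius}."""
    with torch.no_grad():
        norm = linear.weight.norm(p="fro")
        if norm > radius:
            linear.weight.mul_(radius / norm)

def empirical_nn_distance(x, y, widths=(64, 64), radii=(2.0, 2.0, 2.0),
                          steps=500, lr=1e-2):
    """Estimate d_{F_nn}(mu_hat, nu_hat) from samples x ~ mu and y ~ nu."""
    dims = [x.shape[1], *widths, 1]            # scalar network output, as in (4)
    layers = [nn.Linear(dims[i], dims[i + 1], bias=False)
              for i in range(len(dims) - 1)]

    def critic(z):
        for layer in layers[:-1]:
            z = torch.relu(layer(z))
        return layers[-1](z)

    opt = torch.optim.SGD([p for l in layers for p in l.parameters()], lr=lr)
    for _ in range(steps):
        # The class is closed under f -> -f (negate the last layer), so
        # sup |E f(x) - E f(y)| = sup (E f(x) - E f(y)); maximize the signed gap.
        gap = critic(x).mean() - critic(y).mean()
        opt.zero_grad()
        (-gap).backward()                       # gradient ascent on the gap
        opt.step()
        for layer, r in zip(layers, radii):     # project back into W_F
            project_frobenius_(layer, r)
    with torch.no_grad():
        return (critic(x).mean() - critic(y).mean()).abs().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(1000, 10)                   # toy samples from mu
    y = torch.randn(1000, 10) + 0.5             # toy samples from nu (shifted mean)
    print(empirical_nn_distance(x, y))
```

Because the inner maximization is non-concave, the returned value is only a lower estimate of the exact supremum over $\mathcal{F}_{nn}$; the theory developed below concerns the exact supremum $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ itself.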
2.2 Minimax Estimation Problem

In this paper, we study the minimax estimation problem defined as follows. Suppose $\mathcal{P}$ is a subset of Borel probability measures of interest. Let $\hat d(n, m)$ denote any estimator of the neural net distance $d_{\mathcal{F}_{nn}}(\mu, \nu)$ constructed from the samples $\{x_i\}_{i=1}^{n}$ and $\{y_j\}_{j=1}^{m}$ generated i.i.d. from $\mu$ and $\nu$, respectively, with $\mu, \nu \in \mathcal{P}$. Our goal is to first find a lower bound $C_l(\mathcal{P}, n, m)$ on the estimation error such that
$$\inf_{\hat d(n,m)} \ \sup_{\mu,\nu \in \mathcal{P}} \ P\Big\{|d_{\mathcal{F}_{nn}}(\mu, \nu) - \hat d(n, m)| \ge C_l(\mathcal{P}, n, m)\Big\} > 0, \tag{6}$$
where $P$ is the probability measure with respect to the random samples $\{x_i\}_{i=1}^{n}$ and $\{y_i\}_{i=1}^{m}$. We then focus on the empirical estimator $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ and are interested in finding an upper bound $C_u(\mathcal{P}, n, m)$ on the estimation error such that for any arbitrarily small $\delta > 0$,
$$\sup_{\mu,\nu \in \mathcal{P}} \ P\Big\{|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)| \le C_u(\mathcal{P}, n, m)\Big\} \ge 1 - \delta. \tag{7}$$
Clearly, such an upper bound also holds if the left-hand side of (7) is defined in the same minimax sense as in (6).

It can be seen that the minimax estimation problem is defined with respect to the set $\mathcal{P}$ of distributions that $\mu$ and $\nu$ belong to. In this paper, we consider the set of all sub-Gaussian distributions over $\mathbb{R}^h$. We further divide the set into two subsets and analyze them separately, for which the technical tools are very different. The first set $\mathcal{P}_{uB}$ contains all sub-Gaussian distributions with unbounded support and bounded mean and variance. Specifically, we assume that there exist $\tau > 0$ and $\Gamma_{uB} > 0$ such that for any probability measure $\mu \in \mathcal{P}_{uB}$ and any vector $a \in \mathbb{R}^h$,
$$\mathbb{E}_{x \sim \mu}\, e^{a^T(x - \mathbb{E}(x))} \le e^{\|a\|_2^2 \tau^2 / 2}, \qquad \text{with } 0 < \tau \le \Gamma_{uB}, \ \|\mathbb{E}(x)\|_2 \le \Gamma_{uB}. \tag{8}$$
The second class $\mathcal{P}_B$ of distributions contains all sub-Gaussian distributions with bounded support $\mathcal{X} = \{x : \|x\|_2 \le \Gamma_B\} \subset \mathbb{R}^h$ (note that this class in fact includes all distributions with bounded support). These two mutually exclusive classes cover most probability distributions in practice.

3 Main Results

3.1 Minimax Lower Bound

We first develop the following minimax lower bound for the sub-Gaussian distribution class $\mathcal{P}_{uB}$ with unbounded support.

Theorem 1 (unbounded-support sub-Gaussian class $\mathcal{P}_{uB}$). Let $\mathcal{F}_{nn}$ be the set of neural networks defined by (4). For the parameter set $\mathcal{W}_F$ in (5), if $m^{-1} + n^{-1} < 3q(1)/(2M_F(1)\Gamma_{uB})$, then
$$\inf_{\hat d(n,m)} \ \sup_{\mu,\nu \in \mathcal{P}_{uB}} \ P\Big\{|d_{\mathcal{F}_{nn}}(\mu, \nu) - \hat d(n, m)| \ge C(\mathcal{P}_{uB}) \max\{n^{-1/2}, m^{-1/2}\}\Big\} \ge \frac{1}{3}, \tag{9}$$
where $\Phi(\cdot)$ is the cumulative distribution function (CDF) of the standard Gaussian distribution, and the constant $C(\mathcal{P}_{uB})$ depends only on $M_F(1)$, $M_F(d)$, $\Gamma_{uB}$, the term $1 - \Phi\big(q(1)/(2M_F(1)\Gamma_{uB})\big)$, the product $\prod_{i=1}^{d-1} Q_\sigma(i)$, and the constants $\Omega(i)$, $i = 2, 3, \dots, d-1$, which are given by the following recursion:
$$\Omega(2) = \min\Big\{M_F(2), \ \frac{q(2)}{\sigma_1(q(1))}\Big\}, \qquad \Omega(i) = \min\Big\{M_F(i), \ \frac{q(i)}{\sigma_{i-1}\big(\Omega(i-1)\cdots\Omega(2)\,\sigma_1(q(1))\big)}\Big\} \quad \text{for } i = 3, 4, \dots, d-1. \tag{11}$$
The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing $M_F(i)$ above with $M_{1,\infty}(i)$.

Theorem 1 implies that $d_{\mathcal{F}_{nn}}(\mu, \nu)$ cannot be estimated at a rate faster than $\max\{n^{-1/2}, m^{-1/2}\}$ by any estimator over the class $\mathcal{P}_{uB}$. The proof of Theorem 1 is based on Le Cam's method. Such a technique was also used in [26] to derive the minimax lower bound for estimating MMD. However, our technical development is quite different from that in [26]. Specifically, one major step of Le Cam's method is to lower-bound the difference arising from two hypothesis distributions. In the MMD case in [26], MMD can be expressed in a closed form for the chosen distributions. Hence, the lower bound in Le Cam's method can be derived based on such a closed form of MMD. In comparison, the neural net distance does not have a closed-form expression for the chosen distributions. As a result, our derivation involves lower-bounding the difference of the expectations of the neural network function with respect to the two corresponding distributions. Such developments require substantial machinery to deal with the complicated multi-layer structure of neural networks. See Appendix A.1 for more details.
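For orientation, the two-point reduction underlying Le Cam's method can be stated as follows; this is the standard generic form with notation adapted to our setting, not the specific construction of Appendix A.1, whose chosen distribution pairs and constants may differ. For any two pairs $(\mu_1, \nu_1), (\mu_2, \nu_2) \in \mathcal{P}_{uB}$,
$$\inf_{\hat d(n,m)} \ \sup_{\mu,\nu \in \mathcal{P}_{uB}} \ P\Big\{|d_{\mathcal{F}_{nn}}(\mu,\nu) - \hat d(n,m)| \ge \tfrac{\Delta}{2}\Big\} \ \ge\ \frac{1}{2}\Big(1 - \big\|\mu_1^{\otimes n} \otimes \nu_1^{\otimes m} - \mu_2^{\otimes n} \otimes \nu_2^{\otimes m}\big\|_{TV}\Big), \qquad \Delta = \big|d_{\mathcal{F}_{nn}}(\mu_1,\nu_1) - d_{\mathcal{F}_{nn}}(\mu_2,\nu_2)\big|.$$
The technical work described above is therefore to construct hypothesis pairs whose product measures are hard to distinguish (small total variation) while the neural-net-distance gap $\Delta$, i.e., the difference of expectations of a network function under the two hypotheses, remains of order $\max\{n^{-1/2}, m^{-1/2}\}$.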
For general neural networks, $C(\mathcal{P}_{uB})$ takes a complicated form, as described in Theorem 1. We next specialize to ReLU networks to illustrate how this constant depends on the neural network parameters.

Corollary 1. Under the setting of Theorem 1, suppose each activation function is ReLU, i.e., $\sigma_i(z) = \max\{0, z\}$, $i = 1, 2, \dots, d-1$. For the parameter set $\mathcal{W}_F$ and all $m, n \ge 1$, we have
$$\inf_{\hat d(n,m)} \ \sup_{\mu,\nu \in \mathcal{P}_{uB}} \ P\bigg\{|d_{\mathcal{F}_{nn}}(\mu, \nu) - \hat d(n, m)| \ge 0.08\,\Gamma_{uB} \prod_{i=1}^{d} M_F(i)\, \max\{n^{-1/2}, m^{-1/2}\}\bigg\} \ge \frac{1}{3}.$$
The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing $M_F(i)$ with $M_{1,\infty}(i)$.

Next, we provide the minimax lower bound for the distribution class $\mathcal{P}_B$ with bounded support. The proof (see Appendix B) is also based on Le Cam's method, but with the construction of distributions having bounded support sets, which are different from those for Theorem 1.

Theorem 2 (bounded-support class $\mathcal{P}_B$). Let $\mathcal{F}_{nn}$ be the set of neural networks defined by (4). For the parameter set $\mathcal{W}_F$, we have
$$\inf_{\hat d(n,m)} \ \sup_{\mu,\nu \in \mathcal{P}_B} \ P\Big\{|d_{\mathcal{F}_{nn}}(\mu, \nu) - \hat d(n, m)| \ge C(\mathcal{P}_B) \max\{n^{-1/2}, m^{-1/2}\}\Big\} \ge \frac{1}{3}, \tag{12}$$
where
$$C(\mathcal{P}_B) = 0.17\,\Big(M_F(d)\,\sigma_{d-1}\big(\cdots\sigma_1(M_F(1)\Gamma_B)\big) - M_F(d)\,\sigma_{d-1}\big(\cdots\sigma_1(-M_F(1)\Gamma_B)\big)\Big), \tag{13}$$
and all of the constants $M_F(i)$, $i = 1, 2, \dots, d$, in the second term on the right side of (13) carry negative signs. The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing $M_F(i)$ in (13) with $M_{1,\infty}(i)$.

Corollary 2. Under the setting of Theorem 2, suppose that each activation function is ReLU. For the parameter set $\mathcal{W}_F$, we have
$$\inf_{\hat d(n,m)} \ \sup_{\mu,\nu \in \mathcal{P}_B} \ P\bigg\{|d_{\mathcal{F}_{nn}}(\mu, \nu) - \hat d(n, m)| \ge 0.17\,\Gamma_B \prod_{i=1}^{d} M_F(i)\, \max\{n^{-1/2}, m^{-1/2}\}\bigg\} \ge \frac{1}{3}.$$
The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing $M_F(i)$ with $M_{1,\infty}(i)$.

3.2 Rademacher Complexity-based Upper Bound

In this subsection, we provide an upper bound on $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$, which serves as an upper bound on the minimax estimation error. Our main technical development lies in deriving the bound for the unbounded-support sub-Gaussian class $\mathcal{P}_{uB}$, which requires a number of new technical developments. We discuss its proof in Section 3.4.

Theorem 3 (unbounded-support sub-Gaussian class $\mathcal{P}_{uB}$). Let $\mathcal{F}_{nn}$ be the set of neural networks defined by (4), and suppose that $\mu, \nu \in \mathcal{P}_{uB}$ and that $\hat\mu_n, \hat\nu_m$ are their empirical measures. For a constant $\delta > 0$ satisfying $6h \min\{n, m\}\,(m^{-1} + n^{-1}) \ge 4\sqrt{\log(1/\delta)}$, we have:

(I) If the parameter set is $\mathcal{W}_F$ and each activation function satisfies $\sigma_i(\alpha x) = \alpha\sigma_i(x)$ for all $\alpha > 0$ (e.g., ReLU or leaky ReLU), then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$,
$$|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)| \le \Gamma_{uB} \prod_{i=1}^{d} M_F(i) \prod_{i=1}^{d-1} L_i \Big(\sqrt{6d\log 2 + 5h/4} + \sqrt{2h\log(1/\delta)}\Big)\big(n^{-1/2} + m^{-1/2}\big).$$

(II) If the parameter set is $\mathcal{W}_{1,\infty}$ and each activation function satisfies $\sigma_i(0) = 0$ (e.g., ReLU, leaky ReLU or tanh), then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$,
$$|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)| \le \Gamma_{uB} \prod_{i=1}^{d} M_{1,\infty}(i) \prod_{i=1}^{d-1} L_i \Big(\sqrt{2d\log 2 + 2\log h} + \sqrt{2h\log(1/\delta)}\Big)\big(n^{-1/2} + m^{-1/2}\big).$$

Corollary 3. Theorem 3 is directly applicable to ReLU networks with $L_i = 1$ for $i = 1, \dots, d-1$.

We next present an upper bound on $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$ for the bounded-support class $\mathcal{P}_B$. In such a case, each data sample $x_i$ satisfies $\|x_i\|_2 \le \Gamma_B$, and hence we apply the standard McDiarmid inequality [16] and the Rademacher complexity bounds in [8] to upper-bound $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$.
The detailed proof can be found in Appendix D.

Theorem 4 (bounded-support class $\mathcal{P}_B$). Let $\mathcal{F}_{nn}$ be the set of neural networks defined by (4), and suppose that $\mu, \nu \in \mathcal{P}_B$. Then, we have:

(I) If the parameter set is $\mathcal{W}_F$ and each activation function satisfies $\sigma_i(\alpha x) = \alpha\sigma_i(x)$ for all $\alpha > 0$, then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$,
$$|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)| \le \Gamma_B \prod_{i=1}^{d} M_F(i) \prod_{i=1}^{d-1} L_i \Big(\sqrt{d\log 2} + \sqrt{2\log(1/\delta)}\Big)\big(n^{-1/2} + m^{-1/2}\big).$$

(II) If the parameter set is $\mathcal{W}_{1,\infty}$ and each activation function satisfies $\sigma_i(0) = 0$, then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$,
$$|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)| \le \Gamma_B \prod_{i=1}^{d} M_{1,\infty}(i) \prod_{i=1}^{d-1} L_i \Big(\sqrt{d + 1 + \log h} + \sqrt{2\log(1/\delta)}\Big)\big(n^{-1/2} + m^{-1/2}\big).$$

Corollary 4. Theorem 4 is applicable to ReLU networks with $L_i = 1$ for $i = 1, \dots, d-1$.

As a comparison, the upper bound derived in [3] is linear in the total number of parameters of the neural networks, whereas our bound in Theorem 4 scales only with the square root of the depth $d$ (and the other terms in Theorem 4 match the lower bound in Corollary 2), which is much smaller.

3.3 Optimality of Minimax Estimation and Discussions

We compare the lower and upper bounds and make the following remarks on the optimality of the minimax estimation of the neural net distance.

For the unbounded-support sub-Gaussian class $\mathcal{P}_{uB}$, comparison of Theorems 1 and 3 indicates that the empirical estimator $d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)$ achieves the optimal minimax estimation rate $\max\{n^{-1/2}, m^{-1/2}\}$ as the sample size goes to infinity. Furthermore, for ReLU networks, comparison of Corollaries 1 and 3 implies that the lower and upper bounds match further in terms of $\Gamma_{uB}\prod_{i=1}^{d} M(i)\max\{n^{-1/2}, m^{-1/2}\}$, where $M(i)$ can be $M_F(i)$ or $M_{1,\infty}(i)$, indicating that both $\prod_{i=1}^{d}\|W_i\|_F$ and $\prod_{i=1}^{d}\|W_i\|_{1,\infty}$ capture the estimation accuracy. Such an observation is consistent with those made in [20] for the generalization error of training deep neural networks. Moreover, the mean norm $\|\mathbb{E}(x)\|_2$ and the variance parameter of the distributions also determine the estimation accuracy, due to the match of the bounds in $\Gamma_{uB}$. The same observations hold for the bounded-support class $\mathcal{P}_B$ by comparing Theorems 2 and 4 as well as Corollaries 2 and 4.

We further note that for ReLU networks, for both the unbounded-support sub-Gaussian class $\mathcal{P}_{uB}$ and the bounded-support class $\mathcal{P}_B$, there is a gap of $\sqrt{d+h}$ or $\sqrt{d+\log h}$ between the bounds (depending on the distribution class and the norm of the weight matrices). To close the gap, the size-independent bound on the Rademacher complexity in [8] appears appealing. However, such a bound is applicable only to the bounded-support class $\mathcal{P}_B$, and it helps to remove the dependence on $d$, but at the cost of sacrificing the rate (i.e., from $m^{-1/2} + n^{-1/2}$ to $m^{-1/4} + n^{-1/4}$). Consequently, such an upper bound matches the lower bound in Corollary 2 for ReLU networks over the network parameters, but not in terms of the sample size, and is interesting only in the regime where $d \ge \max\{n, m\}$. It is thus still an open problem, calling for future efforts, to close the gap of $\sqrt{d}$ for estimating the neural net distance.
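As a quick numerical illustration of the optimal $\max\{n^{-1/2}, m^{-1/2}\}$ rate discussed above, the following sketch (our own toy experiment, not from the paper) uses the simplest depth-one member of the class (4), $f(x) = w^T x$ with $\|w\|_2 \le M$, for which the supremum defining the neural net distance has the closed form $M\,\|\mathbb{E}_\mu x - \mathbb{E}_\nu x\|_2$. The distributions, the radius $M$, and the sample sizes are illustrative assumptions.

```python
# Toy check (not from the paper) of the n^{-1/2} estimation rate for the
# depth-one linear instance of the class (4), where the supremum over the
# norm ball has a closed form: d(mu, nu) = M * ||E_mu[x] - E_nu[x]||_2.
import numpy as np

rng = np.random.default_rng(0)
h, M = 10, 2.0
mean_mu, mean_nu = np.zeros(h), np.full(h, 0.3)
true_dist = M * np.linalg.norm(mean_mu - mean_nu)       # exact distance

for n in [100, 400, 1600, 6400, 25600]:                 # m = n here
    errs = []
    for _ in range(200):                                # average over trials
        x = rng.normal(mean_mu, 1.0, size=(n, h))
        y = rng.normal(mean_nu, 1.0, size=(n, h))
        est = M * np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))
        errs.append(abs(est - true_dist))
    # If the rate is ~ n^{-1/2}, err * sqrt(n) should stay roughly constant.
    print(n, np.mean(errs), np.mean(errs) * np.sqrt(n))
```

With $m = n$, the scaled error $\sqrt{n}\,|\hat d - d|$ in the last printed column should remain roughly constant as $n$ grows, consistent with the matching $n^{-1/2}$ lower and upper bounds above.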
3.4 Proof Outline for Theorem 3

In this subsection, we briefly explain the three major steps used to prove Theorem 3, because some of the intermediate steps correspond to theorems that can be of independent interest. The detailed proof can be found in Appendix C.

Step 1: A new McDiarmid-type inequality. To establish an upper bound on $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$, the standard McDiarmid inequality [16], which requires the bounded difference condition, is not applicable here, because the input data have unbounded support so that the functions in $\mathcal{F}_{nn}$ can be unbounded, e.g., ReLU neural networks. Such a challenge can be addressed by a generalized McDiarmid inequality for scalar sub-Gaussian variables established in [11]. However, the input data are vectors in our setting. Thus, we further generalize the result in [11] and establish the following new McDiarmid-type concentration inequality for unbounded sub-Gaussian random vectors and Lipschitz (possibly unbounded) functions. Such a development turns out to be nontrivial and requires further machinery and tail bound inequalities (see the detailed proof in Appendix C.1).

Theorem 5. Let $\{x_i\}_{i=1}^{n} \stackrel{\text{i.i.d.}}{\sim} \mu$ and $\{y_i\}_{i=1}^{m} \stackrel{\text{i.i.d.}}{\sim} \nu$ be two collections of random variables, where $\mu, \nu \in \mathcal{P}_{uB}$ are two unbounded-support sub-Gaussian distributions over $\mathbb{R}^h$. Suppose that $F: (\mathbb{R}^h)^{n+m} \mapsto \mathbb{R}$ is a function of $x_1, \dots, x_n, y_1, \dots, y_m$ which satisfies, for any $i$,
$$|F(x_1, \dots, x_i, \dots, y_m) - F(x_1, \dots, x_i', \dots, y_m)| \le L_F \|x_i - x_i'\|_2 / n,$$
$$|F(x_1, \dots, y_i, \dots, y_m) - F(x_1, \dots, y_i', \dots, y_m)| \le L_F \|y_i - y_i'\|_2 / m. \tag{14}$$
Then, for all $0 \le \epsilon \le 3h\Gamma_{uB}L_F\min\{m, n\}\,(n^{-1} + m^{-1})$,
$$P\big(F(x_1, \dots, x_n, \dots, y_m) - \mathbb{E}F(x_1, \dots, x_n, \dots, y_m) \ge \epsilon\big) \le \exp\left(-\frac{\epsilon^2 mn}{8h\Gamma_{uB}^2 L_F^2 (m+n)}\right). \tag{15}$$

Step 2: Upper bound based on Rademacher complexity. By applying Theorem 5, we derive an upper bound on $|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)|$ in terms of the average Rademacher complexity defined below.

Definition 1. The average Rademacher complexity $\mathcal{R}_n(\mathcal{F}_{nn}, \mu)$ corresponding to the distribution $\mu$ with $n$ samples is defined as $\mathcal{R}_n(\mathcal{F}_{nn}, \mu) = \mathbb{E}_{x,\epsilon}\big[\sup_{f \in \mathcal{F}_{nn}} \frac{1}{n}\sum_{i=1}^{n} \epsilon_i f(x_i)\big]$, where $\{x_i\}_{i=1}^{n}$ are generated i.i.d. from $\mu$ and $\{\epsilon_i\}_{i=1}^{n}$ are independent random variables chosen uniformly from $\{-1, +1\}$.

Then, we have the following result, with the proof provided in Appendix C.2. Recall that $L_i$ is the Lipschitz constant of the activation function $\sigma_i(\cdot)$.

Theorem 6. Let $\mathcal{F}_{nn}$ be the set of neural networks defined by (4). For the parameter set $\mathcal{W}_F$ defined in (5), suppose that $\mu, \nu \in \mathcal{P}_{uB}$ are two sub-Gaussian distributions satisfying (8) and $\hat\mu_n, \hat\nu_m$ are the empirical measures of $\mu, \nu$. If $6h \min\{n, m\}\,(m^{-1} + n^{-1}) \ge 4\sqrt{\log(1/\delta)}$, then with probability at least $1-\delta$ over the randomness of $\hat\mu_n$ and $\hat\nu_m$,
$$|d_{\mathcal{F}_{nn}}(\mu, \nu) - d_{\mathcal{F}_{nn}}(\hat\mu_n, \hat\nu_m)| \le 2\mathcal{R}_n(\mathcal{F}_{nn}, \mu) + 2\mathcal{R}_m(\mathcal{F}_{nn}, \nu) + 2\Gamma_{uB}\prod_{i=1}^{d} M_F(i)\prod_{i=1}^{d-1} L_i\sqrt{2h\,(n^{-1} + m^{-1})\log(1/\delta)}. \tag{16}$$
The same result holds for the parameter set $\mathcal{W}_{1,\infty}$ by replacing $M_F(i)$ in (16) with $M_{1,\infty}(i)$.

Step 3: Average Rademacher complexity bound for unbounded sub-Gaussian variables. We derive an upper bound on the Rademacher complexity $\mathcal{R}_n(\mathcal{F}_{nn}, \mu)$. In particular, as we explain next, our upper bound is tighter than what is obtained by directly applying the existing bounds in [8, 22]. To see this, [8, 22] provided upper bounds on the data-dependent Rademacher complexity of neural networks defined by $\hat{\mathcal{R}}_n(\mathcal{F}_{nn}) = \mathbb{E}_{\epsilon}\big[\sup_{f \in \mathcal{F}_{nn}} \frac{1}{n}\sum_{i=1}^{n} \epsilon_i f(x_i)\big]$. For the parameter set $\mathcal{W}_F$, [22] showed that $\hat{\mathcal{R}}_n(\mathcal{F}_{nn})$ is bounded by $2^d \prod_{i=1}^{d} M_F(i)\prod_{i=1}^{d-1} L_i \sqrt{\sum_{i=1}^{n}\|x_i\|_2^2}\,\big/\,n$, and [8] further improved this bound to $\big(\sqrt{2\log(2)\,d} + 1\big)\prod_{i=1}^{d} M_F(i)\prod_{i=1}^{d-1} L_i \sqrt{\sum_{i=1}^{n}\|x_i\|_2^2}\,\big/\,n$. Directly applying this result to unbounded sub-Gaussian inputs $\{x_i\}_{i=1}^{n}$ yields
$$\mathbb{E}_x\,\hat{\mathcal{R}}_n(\mathcal{F}_{nn}) \le O\bigg(\Gamma_{uB}\prod_{i=1}^{d} M_F(i)\prod_{i=1}^{d-1} L_i\sqrt{\frac{dh}{n}}\bigg). \tag{17}$$
We next show that, by exploiting the sub-Gaussianity of the input data, we can provide an improved bound on the average Rademacher complexity. The detailed proof can be found in Appendix C.3.
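Before stating the improved bound (Theorem 7 below), the following sketch gives a purely numerical handle on the quantity in Definition 1: it Monte Carlo-averages, over fresh draws of the data and of the Rademacher signs, an approximate inner supremum computed by projected gradient ascent for a tiny two-layer ReLU network. This is an assumption-laden illustration rather than part of the analysis; the widths, radii, optimizer settings, and helper names (`sup_correlation`, `avg_rademacher`) are hypothetical choices.

```python
# Monte Carlo sketch of the average Rademacher complexity R_n(F_nn, mu) of
# Definition 1, for a two-layer ReLU network with Frobenius-norm constraints.
import torch
import torch.nn as nn

def sup_correlation(x, eps, radii=(2.0, 2.0), width=32, steps=300, lr=1e-2):
    """Approximate sup_{f in F_nn} (1/n) * sum_i eps_i * f(x_i) by projected ascent."""
    w1 = nn.Linear(x.shape[1], width, bias=False)
    w2 = nn.Linear(width, 1, bias=False)
    opt = torch.optim.SGD(list(w1.parameters()) + list(w2.parameters()), lr=lr)
    for _ in range(steps):
        corr = (eps * w2(torch.relu(w1(x))).squeeze(1)).mean()
        opt.zero_grad()
        (-corr).backward()                        # ascent on the correlation
        opt.step()
        with torch.no_grad():                     # project back into W_F
            for layer, r in zip((w1, w2), radii):
                nrm = layer.weight.norm(p="fro")
                if nrm > r:
                    layer.weight.mul_(r / nrm)
    with torch.no_grad():
        return (eps * w2(torch.relu(w1(x))).squeeze(1)).mean().item()

def avg_rademacher(n, h=10, trials=20):
    """Monte Carlo estimate of R_n(F_nn, mu) for mu = N(0, I_h)."""
    vals = []
    for _ in range(trials):
        x = torch.randn(n, h)                             # fresh sample from mu
        eps = torch.randint(0, 2, (n,)).float() * 2 - 1   # uniform +/-1 signs
        vals.append(sup_correlation(x, eps))
    return sum(vals) / len(vals)

if __name__ == "__main__":
    torch.manual_seed(0)
    for n in (50, 200, 800):
        print(n, avg_rademacher(n))    # should shrink roughly like n^{-1/2}
```

Because the inner maximization is solved only approximately, the printed values are lower estimates of $\mathcal{R}_n(\mathcal{F}_{nn}, \mu)$; they should nevertheless decay roughly like $n^{-1/2}$, in line with the bound (18) below.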
Theorem 7. Let $\mathcal{F}_{nn}$ be the set of neural networks defined by (4), and let $x_1, \dots, x_n \in \mathbb{R}^h$ be i.i.d. random samples generated from an unbounded-support sub-Gaussian distribution $\mu \in \mathcal{P}_{uB}$. Then:

(I) If the parameter set is $\mathcal{W}_F$ and the activation functions satisfy $\sigma_i(\alpha x) = \alpha\sigma_i(x)$ for all $\alpha > 0$, then
$$\mathcal{R}_n(\mathcal{F}_{nn}, \mu) \le \Gamma_{uB}\prod_{i=1}^{d} M_F(i)\prod_{i=1}^{d-1} L_i\,\sqrt{\frac{6d\log 2 + 5h/4}{n}}. \tag{18}$$

(II) If the parameter set is $\mathcal{W}_{1,\infty}$ and each activation function satisfies $\sigma_i(0) = 0$, then
$$\mathcal{R}_n(\mathcal{F}_{nn}, \mu) \le \Gamma_{uB}\prod_{i=1}^{d} M_{1,\infty}(i)\prod_{i=1}^{d-1} L_i\,\sqrt{\frac{d\log 2 + \log h}{n}}. \tag{19}$$

Theorem 7 indicates that for the parameter set $\mathcal{W}_F$, our upper bound in (18) replaces the order dependence $O(\sqrt{dh})$ in (17) with $O(\sqrt{d+h})$, and hence our proof yields an order-level improvement over directly using the existing bounds. The same observation can be made for the parameter set $\mathcal{W}_{1,\infty}$. Such an improvement is possible because our proof takes advantage of the sub-Gaussianity of the inputs, whereas the bounds in [8, 22] must hold for any data input (and hence for the worst-case data input). We also note that [23] provided an upper bound on the Rademacher complexity of one-hidden-layer neural networks for Gaussian inputs. Casting Lemma 3.2 in [23] to our setting of (18) yields
$$\mathcal{R}_n(\mathcal{F}_{nn}, \mu) \le O\big(\Gamma_{uB}\, M_F(2) M_F(1) L_1 \sqrt{n_1 h}\,/\sqrt{n}\big), \tag{20}$$
where $n_1$ is the number of neurons in the hidden layer. Compared with (20), our bound has an order-level $O(\sqrt{n_1})$ improvement.

Summary. Therefore, Theorem 3 follows by combining Theorems 6 and 7 and using the fact that $\sqrt{1/n + 1/m} < \sqrt{1/n} + \sqrt{1/m}$.

4 Conclusion

In this paper, we developed both lower and upper bounds for the minimax estimation of the neural net distance based on finite samples. Our results established the minimax optimality of the empirical estimator in terms of not only the sample size but also the norm of the parameter matrices of neural networks, which justifies its usage for training GANs.

Acknowledgments

The work was supported in part by the U.S. National Science Foundation under the grant CCF-1801855.

References

[1] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 2009.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In Proc. International Conference on Machine Learning (ICML), pages 214–223, 2017.
[3] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Proc. International Conference on Machine Learning (ICML), 2017.
[4] P. L. Bartlett, D. J. Foster, and M. J. Telgarsky. Spectrally-normalized margin bounds for neural networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 6241–6250, 2017.
[5] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, Nov 2002.
[6] S. S. Du and J. D. Lee. On the power of over-parametrization in neural networks with quadratic activation. arXiv preprint arXiv:1803.01206, 2018.
[7] G. Dziugaite, D. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Proc. of the 31st Conference on Uncertainty in Artificial Intelligence (UAI), pages 258–267, 2015.
[8] N. Golowich, A. Rakhlin, and O. Shamir. Size-independent sample complexity of neural networks. In Proc. Conference on Learning Theory (COLT), 2018. To appear.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
[10] D. Hsu, S. Kakade, and T. Zhang. A tail inequality for quadratic forms of sub-Gaussian random vectors. Electronic Communications in Probability, 17, 2012.
[11] A. Kontorovich. Concentration in unbounded metric spaces and algorithmic stability. In Proc. International Conference on Machine Learning (ICML), pages 28–36, 2014.
[12] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer, 1991.
[13] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In Proc. International Conference on Machine Learning (ICML), pages 1718–1727, 2015.
[14] T. Liang. How well can generative adversarial networks (GAN) learn densities: a nonparametric view. arXiv preprint arXiv:1712.08244, 2017.
[15] S. Liu, O. Bousquet, and K. Chaudhuri. Approximation and convergence properties of generative adversarial learning. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5551–5559, 2017.
[16] C. McDiarmid. On the method of bounded differences. Surveys in Combinatorics, 141(1):148–188, 1989.
[17] S. J. Montgomery-Smith. The distribution of Rademacher sums. Proceedings of the American Mathematical Society, 109(2):517–522, 1990.
[18] K. Muandet, K. Fukumizu, F. Dinuzzo, and B. Schölkopf. Learning from distributions via support measure machines. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 10–18, 2012.
[19] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
[20] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. Exploring generalization in deep learning. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5949–5958, 2017.
[21] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. In Proc. International Conference on Learning Representations (ICLR), 2018.
[22] B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In Proc. Conference on Learning Theory (COLT), pages 1376–1401, 2015.
[23] S. Oymak. Learning compact neural networks with regularization. arXiv preprint arXiv:1802.01223, 2018.
[24] T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 901–909, 2016.
[25] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550–1599, 2012.
[26] I. O. Tolstikhin, B. K. Sriperumbudur, and B. Schölkopf. Minimax estimation of maximum mean discrepancy with radial kernels. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 1930–1938, 2016.
[27] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In Proc. International Conference on Learning Representations (ICLR), 2017.
[28] P. Zhang, Q. Liu, D. Zhou, T. Xu, and X. He. On the discrimination-generalization tradeoff in GANs. In Proc. International Conference on Learning Representations (ICLR), 2018.
[29] S. Zou, Y. Liang, and H. V. Poor. Nonparametric detection of geometric structures over networks. IEEE Transactions on Signal Processing, 65(19):5034–5046, 2015.