# Automated Spectral Kernel Learning

The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Jian Li,¹·² Yong Liu,¹·² Weiping Wang¹
¹ Institute of Information Engineering, Chinese Academy of Sciences
² School of Cyber Security, University of Chinese Academy of Sciences
{lijian9026, liuyong, wangweiping}@iie.ac.cn

## Abstract

The generalization performance of kernel methods is largely determined by the kernel, but spectral representations of stationary kernels are both input-independent and output-independent, which limits their applications on complicated tasks. In this paper, we propose an efficient learning framework that incorporates the process of finding suitable kernels into model training. Using non-stationary spectral kernels and backpropagation w.r.t. the objective, we obtain favorable spectral representations that depend on both inputs and outputs. Further, based on Rademacher complexity, we derive data-dependent generalization error bounds, in which we investigate the effect of the relevant factors and introduce regularization terms to improve the performance. Extensive experimental results validate the effectiveness of the proposed algorithm and coincide with our theoretical findings.

## Introduction

Kernel methods have achieved great success in many conventional domains over the past decades, yet they show relatively inferior performance on complicated tasks nowadays. A fundamental limitation of common kernels is that they are both stationary and monotonic (Bengio, Delalleau, and Roux 2006). The stationary property means that stationary kernels depend only on the distance $x - x'$ and are free of the inputs themselves. The monotonic property indicates that the values of stationary kernels decrease over the distance, ignoring long-range interdependence.

Spectral approaches were developed to fully characterize stationary kernels with concise representation forms, such as sparse spectrum kernels (Quiñonero-Candela et al. 2010), sparse mixture kernels (Wilson and Adams 2013), and random Fourier feature methods for large-scale settings (Rahimi and Recht 2007; Le, Sarlós, and Smola 2013; Li, Liu, and Wang 2019). With sound theoretical guarantees, namely Bochner's theorem (Rudin 1962; Stein 2012), spectral kernels are constructed from the inverse Fourier transform in the frequency domain. Although the approximate spectral representation provides an efficient approach for stationary spectral kernels, the performance of random features is limited because stationary kernels are input-independent and output-independent.

Figure 1: The overview of ASKL.

Yaglom's theorem, rather than Bochner's theorem, provides more general forms which encompass both stationary and non-stationary kernels via the inverse Fourier transform (Yaglom 1987; Samo and Roberts 2015). Recently, owing to their general and concise spectral statements, non-stationary kernels have been applied to Gaussian process regression (Remes, Heinonen, and Kaski 2017; Sun et al. 2019).

Hyperparameters of the kernel determine the performance of kernel methods, and kernel selection approaches have been well studied (Liu and Liao 2014; Li et al. 2017; Ding et al. 2018; Liu et al. 2018; 2019). However, because kernel selection is separated from model training, those techniques are inefficient and lead to undesirable performance.
In this paper, to achieve better performance for kernel methods, we propose an efficient algorithm, Automated Spectral Kernel Learning (ASKL), which learns suitable kernels and model weights together. A brief overview is illustrated in Figure 1. On the algorithmic front, ASKL consists of: (1) non-stationary kernels to obtain input-dependent features, (2) backpropagation w.r.t. the objective to make features output-dependent, and (3) regularization terms to achieve sharper generalization error bounds. On the theoretical front, the underpinning is Rademacher complexity theory, which indicates how the feature mappings affect the performance and suggests ways to refine the algorithm.

## Background

In ordinary supervised learning settings, training samples $\{(x_i, y_i)\}_{i=1}^n$ are drawn i.i.d. from a fixed but unknown distribution $\rho$ over $X \times Y$, where $X = \mathbb{R}^d$ is the input space and $Y \subseteq \mathbb{R}^K$ is the output space in single-valued ($K = 1$) or vector-valued ($K > 1$) form. The goal is to learn an estimator $f: X \to Y$ which outputs $K$ predictive labels. We define a standard hypothesis space for kernel methods
$$\mathcal{H} = \big\{ x \mapsto f(x) = W^T \phi(x) \big\},$$
where $W \in \mathbb{R}^{D \times K}$ is the model weight matrix and $\phi(x): \mathbb{R}^d \to \mathbb{R}^D$ is a nonlinear feature mapping. For kernel methods, $\phi(x)$ is an implicit feature mapping associated with a Mercer kernel $k(x, x') = \langle \phi(x), \phi(x') \rangle$. To improve computational efficiency while retaining favorable statistical properties, random Fourier features were proposed to approximate the kernel with explicit feature mappings $\phi(x)$ via $k(x, x') \approx \langle \phi(x), \phi(x') \rangle$ (Rahimi and Recht 2007).

In statistical learning theory, the supervised learning problem is to minimize the expected loss over $X \times Y$:
$$\inf_{f \in \mathcal{H}} \mathcal{E}(f), \qquad \mathcal{E}(f) = \int_{X \times Y} \ell(f(x), y)\, d\rho(x, y), \tag{1}$$
where $\ell$ is a loss function related to the specific task.

### Stationary Kernels

The connection between a stationary kernel $k(\tau)$ and its spectral density $s(\omega)$ is revealed in Bochner's theorem via the inverse Fourier transform.

Theorem 1 (Bochner's theorem). A stationary continuous kernel $k(x, x') = k(x - x')$ on $\mathbb{R}^d$ is positive definite if and only if it can be represented as
$$k(x - x') = \int_{\mathbb{R}^d} e^{i\omega^T (x - x')} s(\omega)\, d\omega, \tag{2}$$
where $s(\omega)$ is a non-negative measure.

The spectral density $s(\omega)$ is the probability density of the corresponding kernel. From the inverse Fourier transform, we see that spectral kernels are determined by the probability measure $s(\omega)$; e.g., $s(\omega)$ for a Gaussian kernel with width $\sigma$ corresponds to the Gaussian distribution $N(0, 1/\sigma)$.

### Non-Stationary Kernels

While stationary kernels ignore input-dependent information and long-range correlations, non-stationary kernels alleviate these restrictions because they depend on the inputs themselves (Samo and Roberts 2015).

Theorem 2 (Yaglom's theorem). A general kernel $k(x, x')$ is positive definite on $\mathbb{R}^d$ if and only if it admits the form
$$k(x, x') = \int_{\mathbb{R}^d \times \mathbb{R}^d} e^{i(\omega^T x - \omega'^T x')} \mu(d\omega, d\omega'), \tag{3}$$
where $\mu(d\omega, d\omega')$ is the Lebesgue-Stieltjes measure associated to some positive semi-definite (PSD) spectral density function $s(\omega, \omega')$ with bounded variations.

When $\mu$ is concentrated on the diagonal $\omega = \omega'$, the spectral characterization of stationary kernels in Bochner's theorem is recovered. Here $s(\omega, \omega')$ is a joint probability density.
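As a concrete reference point for the spectral view above, here is a minimal NumPy sketch of standard random Fourier features for a Gaussian kernel, the stationary baseline that the rest of the paper generalizes; the function name, array shapes, and bandwidth value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_rff(X, D=2000, sigma=1.0, seed=0):
    """Random Fourier features approximating the Gaussian kernel
    k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) via Bochner's theorem:
    frequencies are sampled from the kernel's spectral density N(0, 1/sigma^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    Omega = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # frequency matrix
    b = rng.uniform(0.0, 2 * np.pi, size=D)             # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ Omega + b)     # phi(x): R^d -> R^D

# Sanity check: <phi(x), phi(x')> should approximate k(x, x').
X = np.random.randn(5, 3)
Z = gaussian_rff(X, D=5000, sigma=1.0)
approx = Z @ Z.T
exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.abs(approx - exact).max())   # small approximation error, shrinking with D
```

Because the frequencies are drawn once from a fixed density and never revisited, this construction is input-independent and output-independent, which is exactly the limitation ASKL addresses.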
## Automated Spectral Kernel Learning

In this section, we devise a learning framework for arbitrary kernel-based supervised applications:
- We present the learning framework with its minimization objective, integrating empirical risk minimization (ERM) with regularizers on feature mappings and model weights.
- We derive the spectral representation of non-stationary spectral kernels based on Yaglom's theorem.
- We apply first-order gradient approaches to solve the minimization objective, updating the frequency matrices $\Omega, \Omega'$ together with the model weights $W$ via backpropagation.

### Learning Framework

The minimization of the expected loss (1) is hard to carry out in practical problems. In this paper, we add two regularization terms to the ERM, so the empirical objective $\widehat{\mathcal{E}}(W, \Omega, \Omega')$ becomes
$$\arg\min_{W, \Omega, \Omega'}\ \frac{1}{n} \sum_{i=1}^n \ell(f(x_i), y_i) + \lambda_1 \|W\|_* + \lambda_2 \|\phi(X)\|_F^2, \tag{4}$$
where both the feature mappings $\phi(X) \in \mathbb{R}^{D \times n}$ on all data and the estimator $f(x_i) = W^T \phi(x_i)$ use the non-stationary spectral representation $\phi(\cdot)$ given below in (6), namely $\sqrt{\tfrac{1}{2D}}\big[\cos(\Omega^T x + b) + \cos(\Omega'^T x + b')\big]$. The trace norm $\|W\|_*$ and the squared Frobenius norm $\|\phi(X)\|_F^2$ constrain the updates of the model weights $W$ and the frequency matrices $\Omega, \Omega'$, respectively. These two regularization terms, especially the norm on the feature mappings, are rarely used in regularized ERM, and they lead to better generalization performance, as proven in the next section.

In the objective (4), we update the model weights $W$ and the frequency pairs $\Omega, \Omega'$ to learn feature mappings that are both input-dependent (non-stationary kernels) and output-dependent (backpropagation towards the objective). As shown in Figure 2, the architecture can be regarded as a single-hidden-layer neural network with cosine activations. The spectral density surface $s(\omega, \omega')$ (a joint probability density), which determines the performance of spectral kernels, is optimized by updating the frequency matrices $\Omega, \Omega'$ via backpropagation (Rumelhart et al. 1988). In this structure, only $W$ and $\Omega, \Omega'$ are trainable and optimized towards the objective.

Figure 2: The architecture of the learning framework.

### Non-Stationary Spectral Kernel Representation

According to Yaglom's theorem, to produce a positive semi-definite (PSD) kernel, the spectral density needs to be a PSD function. To construct a PSD spectral density $s(\omega, \omega')$, we enforce the symmetry $s(\omega, \omega') = s(\omega', \omega)$ and include sufficient diagonal components $s(\omega, \omega)$ and $s(\omega', \omega')$. Integrating the corresponding exponential components, we obtain
$$E_{\omega, \omega'}(x, x') = \frac{1}{4}\Big[ e^{i(\omega^T x - \omega'^T x')} + e^{i(\omega'^T x - \omega^T x')} + e^{i(\omega^T x - \omega^T x')} + e^{i(\omega'^T x - \omega'^T x')} \Big],$$
and the PSD kernel can be rewritten as
$$k(x, x') = \int_{\mathbb{R}^d \times \mathbb{R}^d} E_{\omega, \omega'}(x, x')\, \mu(d\omega, d\omega'), \tag{5}$$
where $s(\omega, \omega')$ is the spectral density surface. Similar spectral representation forms for non-stationary kernels are also used in (Samo and Roberts 2015; Remes, Heinonen, and Kaski 2017). When $\omega = \omega'$, the non-stationary kernel in (5) degrades into a stationary kernel as in Bochner's theorem (2), and the probability density becomes the univariate $s(\omega)$.

In the stationary case, random Fourier features are used to approximate stationary kernels (Rahimi and Recht 2007). Similarly, in the non-stationary case, we can approximate (5) with Monte Carlo random sampling:
$$k(x, x') = \int_{\mathbb{R}^d \times \mathbb{R}^d} E_{\omega, \omega'}(x, x')\, \mu(d\omega, d\omega') = \mathbb{E}_{(\omega, \omega') \sim s}\, E_{\omega, \omega'}(x, x')$$
$$= \mathbb{E}_{(\omega, \omega') \sim s}\, \frac{1}{4}\Big[ \cos(\omega^T x - \omega'^T x') + \cos(\omega'^T x - \omega^T x') + \cos(\omega^T x - \omega^T x') + \cos(\omega'^T x - \omega'^T x') \Big]$$
$$\approx \frac{1}{4D} \sum_{i=1}^{D} \Big[ \cos(\omega_i^T x - \omega_i'^T x') + \cos(\omega_i'^T x - \omega_i^T x') + \cos(\omega_i^T x - \omega_i^T x') + \cos(\omega_i'^T x - \omega_i'^T x') \Big] = \langle \psi(x), \psi(x') \rangle,$$
where $(\omega_i, \omega_i')_{i=1}^D \overset{\text{i.i.d.}}{\sim} s(\omega, \omega')$, $D$ is the number of features, and the random Fourier feature mapping of the spectral kernel is
$$\psi(x) = \sqrt{\frac{1}{4D}} \begin{bmatrix} \cos(\Omega^T x) + \cos(\Omega'^T x) \\ \sin(\Omega^T x) + \sin(\Omega'^T x) \end{bmatrix}, \qquad \psi: \mathbb{R}^d \to \mathbb{R}^{2D}.$$
To alleviate computational costs, we instead use the following mapping $\phi: \mathbb{R}^d \to \mathbb{R}^D$:
$$\phi(x) = \sqrt{\frac{1}{2D}} \Big[ \cos(\Omega^T x + b) + \cos(\Omega'^T x + b') \Big], \tag{6}$$
where the frequency matrices $\Omega, \Omega' \in \mathbb{R}^{d \times D}$ collect the sampled frequencies $(\omega_i, \omega_i')_{i=1}^D$, i.e., $\Omega = [\omega_1, \cdots, \omega_D]$ and $\Omega' = [\omega_1', \cdots, \omega_D']$, and the phase vectors $b, b'$ are drawn uniformly from $[0, 2\pi]^D$. In fact, the spectral kernels induced by $\psi(x)$ and $\phi(x)$ are equivalent in expectation.

Remark 1. Equation (6) provides explicit feature mappings to approximate non-stationary kernels, where suitable frequency matrices $\Omega, \Omega'$ are the key to obtaining favorable performance. Standard random Fourier features generate the frequency matrices by sampling from an assigned spectral density $s(\omega, \omega')$; e.g., the frequency matrices correspond to random Gaussian matrices for Gaussian kernels. In the proposed algorithm ASKL, the frequency matrices $\Omega, \Omega'$ are jointly optimized together with the model weights $W$ during training.
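A minimal PyTorch sketch of the feature mapping in (6) with trainable frequency matrices; the class name, the Gaussian initialization of the frequencies, and the $\sqrt{1/(2D)}$ scaling follow the reconstruction above and are illustrative choices rather than the authors' released code.

```python
import math
import torch
import torch.nn as nn

class NonStationaryFeatures(nn.Module):
    """phi(x) = sqrt(1/(2D)) * [cos(Omega^T x + b) + cos(Omega'^T x + b')],
    the random-feature map of a non-stationary spectral kernel (Eq. 6).
    Omega and Omega_p are trainable, so the spectral density is learned."""

    def __init__(self, d, D=2000, sigma=1.0):
        super().__init__()
        # Initialize frequencies from a Gaussian density (the Gaussian-kernel baseline).
        self.Omega = nn.Parameter(torch.randn(d, D) / sigma)
        self.Omega_p = nn.Parameter(torch.randn(d, D) / sigma)
        # Phases are drawn uniformly from [0, 2*pi] and kept fixed.
        self.register_buffer("b", 2 * math.pi * torch.rand(D))
        self.register_buffer("b_p", 2 * math.pi * torch.rand(D))

    def forward(self, x):                                   # x: (n, d)
        z = torch.cos(x @ self.Omega + self.b) + torch.cos(x @ self.Omega_p + self.b_p)
        return z / math.sqrt(2 * self.Omega.shape[1])       # (n, D)
```

The induced kernel is then approximated by $k(x, x') \approx \langle \phi(x), \phi(x') \rangle$, and the same module is reused in the later sketches.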
### Update Trainable Matrices

As shown in the previous section, non-stationary spectral kernels are input-dependent. Further, the proposed framework ASKL optimizes the frequency matrices $\Omega, \Omega'$ of non-stationary kernels according to the objective via backpropagation, so the learned feature mappings are both input-dependent and output-dependent. ASKL can be solved by first-order gradient descent methods, such as stochastic gradient descent (SGD) and its variants Adadelta (Zeiler 2012) and Adam (Kingma and Ba 2014). Using backpropagation, we derive how to update the estimator weights $W$ and the frequency matrices $\Omega, \Omega'$ in the following analyses. More generally, we consider mini-batch gradient descent with $m$ examples per iteration; full gradient descent is the special case $m = n$ where all examples are used, and SGD corresponds to the special case $m = 1$ where only one example is used.

#### Update W in the Proximal Gradient Approach

To minimize the objective (4), we use first-order gradient descent to update $W$ in the direction of the negative gradient. The update of $W$ depends on the empirical loss and the trace norm in (4), but the trace norm is non-differentiable at many points in each dimension (unlike the hinge loss and ReLU, which are non-differentiable at only one point), so the derivative/subgradient of the trace norm cannot be used directly in standard gradient approaches. We employ singular value thresholding (SVT) to handle the trace norm with proximal gradient descent (Cai, Candès, and Shen 2010). We simplify the update of $W$ into two steps and put the detailed derivation in the Appendix.

1) Update $W$ with a gradient step on the empirical loss,
$$Q = W^t - \eta\, \nabla g(W^t), \tag{7}$$
where $\eta$ is the learning rate, $Q$ is an intermediate matrix, and the gradient of the empirical loss on an $m$-example mini-batch is
$$\nabla g(W) = \frac{1}{m} \sum_{i=1}^{m} \phi(x_i) \left[ \frac{\partial \ell(f(x_i), y_i)}{\partial f(x_i)} \right]^T.$$

2) Update $W$ with SVT on the trace norm,
$$W^{t+1} = U\, \mathrm{diag}\big(\{\sigma_j - \lambda_1 \eta\}_+\big)\, V^T, \tag{8}$$
where $Q = U \Sigma V^T$ is the singular value decomposition, $\Sigma = \mathrm{diag}(\{\sigma_j\}_{1 \le j \le r})$, and $r$ is the rank of $Q$.

#### Update Ω, Ω′

By the chain rule, the derivative of the objective w.r.t. $\Omega, \Omega'$ involves the empirical loss $\ell(f(x_i), y_i)$ and the squared Frobenius norm $\|\phi(X)\|_F^2$, both of which are differentiable with respect to $\Omega, \Omega'$. We derive the gradient of the minimization objective (4) w.r.t. $\Omega$ as an example:
$$\Omega^{t+1} = \Omega^t - \eta\, \frac{\partial \widehat{\mathcal{E}}(\Omega^t)}{\partial \Omega}, \tag{9}$$
where
$$\frac{\partial \widehat{\mathcal{E}}(\Omega)}{\partial \Omega} = \frac{1}{m} \sum_{i=1}^{m} \frac{\partial \ell(f(x_i), y_i)}{\partial \Omega} + \lambda_2 \frac{\partial \|\phi(X)\|_F^2}{\partial \Omega},$$
$$\frac{\partial \ell(f(x_i), y_i)}{\partial \Omega} = x_i \left[ \mathcal{D}_i\, W\, \frac{\partial \ell(f(x_i), y_i)}{\partial f(x_i)} \right]^T, \qquad \frac{\partial \|\phi(X)\|_F^2}{\partial \Omega} = \sum_{i=1}^{n} 2\, x_i\, \phi(x_i)^T \mathcal{D}_i,$$
and $\mathcal{D}_i$ is a $D \times D$ diagonal matrix filled with the vector $-\sqrt{\tfrac{1}{2D}} \sin(\Omega^T x_i + b)$.
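A minimal PyTorch sketch of the two-step update of $W$ in (7)-(8); `svt_step` and `update_W` are our own helper names, and the gradient `grad_W` is assumed to come from autograd on the batch loss.

```python
import torch

def svt_step(Q, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_* at Q (Eq. 8)."""
    U, S, Vh = torch.linalg.svd(Q, full_matrices=False)
    S = torch.clamp(S - tau, min=0.0)            # shrink singular values towards zero
    return U @ torch.diag(S) @ Vh

def update_W(W, grad_W, lr, lam1):
    """One proximal-gradient iteration for the trace-norm-regularized objective (4)."""
    Q = W - lr * grad_W                          # gradient step on the empirical loss (Eq. 7)
    return svt_step(Q, lam1 * lr)                # proximal step on lambda_1 * ||W||_* (Eq. 8)
```

For $\Omega$ and $\Omega'$, both the batch loss and $\lambda_2 \|\phi(X)\|_F^2$ are differentiable, so autograd with an ordinary optimizer step implements (9) directly; only $W$ needs the extra proximal step.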
#### Specific Loss Functions

For the gradients in (7) and (9), only the gradient of the loss function, $\partial \ell(f(x_i), y) / \partial f(x_i)$, remains task-specific.

Hinge loss for classification. Let the label $y = [0, \cdots, 0, 1, 0, \cdots, 0]^T$ be one-hot, where only the element of the true category is nonzero. The multi-class hinge loss is defined as
$$\ell(f(x_i), y) = \Big| 1 - \Big( y^T f(x_i) - \max_{y' \neq y} y'^T f(x_i) \Big) \Big|_+ .$$
The subgradient of the hinge loss w.r.t. the estimator is
$$\frac{\partial \ell(f(x_i), y)}{\partial f(x_i)} = \begin{cases} 0, & y^T f(x_i) - \max\limits_{y' \neq y} y'^T f(x_i) \ge 1, \\ \hat{y} - y, & \text{otherwise}, \end{cases}$$
where $\hat{y}$ denotes the one-hot vector of the most confusing class $\arg\max_{y' \neq y} y'^T f(x_i)$.

Squared loss for regression. Let $y$ be the $K$-dimensional vector-valued label, where $K > 1$ for multi-label regression and $K = 1$ for univariate regression. The squared loss is $\ell(f(x_i), y) = \|f(x_i) - y\|_2^2$, and its gradient is
$$\frac{\partial \ell(f(x_i), y)}{\partial f(x_i)} = 2\,\big(f(x_i) - y\big).$$

## Theoretical Guarantee

In this section, we study the generalization performance of the proposed algorithm ASKL. A data-dependent excess risk bound is derived, and we then explore how input-dependent and output-dependent feature mappings affect the statistical performance and how to improve the algorithm by utilizing additional regularization terms.

Definition 1. The empirical Rademacher complexity of the hypothesis space $\mathcal{H}$ is
$$\widehat{\mathfrak{R}}(\mathcal{H}) = \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} \epsilon_{ik} f_k(x_i) \right],$$
where $f_k(x_i)$ is the $k$-th value of the estimator $f(x_i)$ with $K$ outputs and the $\epsilon_{ik}$ are $n \times K$ independent Rademacher variables with equal probabilities $P(\epsilon_{ik} = +1) = P(\epsilon_{ik} = -1) = 1/2$. Its deterministic counterpart is $\mathfrak{R}(\mathcal{H}) = \mathbb{E}\, \widehat{\mathfrak{R}}(\mathcal{H})$.

Theorem 3 (Excess risk bound). Assume that $B = \sup_{f \in \mathcal{H}} \|W\|_* < \infty$ and that the loss function $\ell$ is $L$-Lipschitz for $\mathbb{R}^K$ equipped with the 2-norm. Then, with probability at least $1 - \delta$, the following excess risk bound holds:
$$\mathcal{E}(\hat{f}_n) - \mathcal{E}(f^*) \le 4\sqrt{2}\, L\, \widehat{\mathfrak{R}}(\mathcal{H}) + O\!\left( \sqrt{\frac{\log(1/\delta)}{n}} \right),$$
where $f^* \in \mathcal{H}$ is the most accurate estimator in the hypothesis space, $\hat{f}_n$ is the empirical estimator, and
$$\widehat{\mathfrak{R}}(\mathcal{H}) \le \frac{B}{n} \sqrt{K \sum_{i=1}^{n} \langle \phi(x_i), \phi(x_i) \rangle} \tag{10}$$
$$= \frac{B}{n} \sqrt{\frac{K}{2D} \sum_{i=1}^{n} \sum_{j=1}^{D} \Big[ \cos\big( (\omega_j - \omega_j')^T x_i \big) + 1 \Big]}. \tag{11}$$

The proof is given in the Appendix. The error bound is governed by the Rademacher complexity term. Since $\cos((\omega_j - \omega_j')^T x_i) \le 1$, the Rademacher complexity in (11) is naturally bounded by $\widehat{\mathfrak{R}}(\mathcal{H}) \le B \sqrt{K/n}$, so the convergence rate is
$$\mathcal{E}(\hat{f}_n) - \mathcal{E}(f^*) \le O\!\left( \sqrt{\frac{K}{n}} \right). \tag{12}$$

Based on these theoretical findings, we make several technical comments on how the relevant factors, including the feature mappings and the regularization terms, affect the generalization performance, and we suggest ways to refine the algorithm.

Influence of non-stationary kernels. As mentioned above, non-stationary kernels depend on the inputs themselves instead of only the distance between inputs. We explore the impact of non-stationary kernels through the Rademacher complexity in (11), which depends on the trace $\sum_{i=1}^n k(x_i, x_i)$. For stationary (shift-invariant) kernels, the spectral representation fixes the diagonal to $k(x_i, x_i) = \cos(\omega^T(x_i - x_i)) = 1$, so the trace of the kernel matrix is $\sum_{i=1}^n k(x_i, x_i) = n$, which corresponds to the worst case. For non-stationary kernels, $k(x_i, x_i) = \cos((\omega - \omega')^T x_i) \in [-1, 1]$, and for most instances $x_i$ the diagonal satisfies $k(x_i, x_i) < 1$, so the trace is $\sum_{i=1}^n k(x_i, x_i) \le n$. Therefore, the error bounds of non-stationary spectral kernels are much tighter than the bounds of stationary kernels, and non-stationary kernels achieve better generalization performance.

Influence of spectral learning. If the frequency matrices $\Omega, \Omega'$ are merely assigned according to a specific spectral density $s(\omega, \omega')$, the non-stationary kernels are input-dependent but output-independent. By dynamically optimizing $\Omega, \Omega'$ towards the objective, we acquire more powerful feature representations, and the spectral measure $s(\omega, \omega')$ of the optimal kernel can be estimated from the optimized frequency matrices. The learned feature mappings are dependent on both inputs and outputs, offering better feature representations. The kernel-trace quantity behind these observations is illustrated in the sketch below; the remaining two comments build on it.
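The two preceding comments hinge on the kernel trace $\sum_{i=1}^n \langle \phi(x_i), \phi(x_i) \rangle = \|\phi(X)\|_F^2$. A rough numerical check, reusing the `NonStationaryFeatures` sketch from earlier (the sizes and the stationary construction are illustrative assumptions, not the paper's setup):

```python
import torch

x = torch.randn(512, 10)                           # illustrative inputs (n=512, d=10)

nonstat = NonStationaryFeatures(d=10, D=500)       # independent Omega, Omega', b, b'
stat = NonStationaryFeatures(d=10, D=500)
with torch.no_grad():
    stat.Omega_p.copy_(stat.Omega)                 # Omega' = Omega and b' = b recover
    stat.b_p.copy_(stat.b)                         # a stationary (shift-invariant) map

# Trace of the approximate kernel matrix: sum_i <phi(x_i), phi(x_i)> = ||phi(X)||_F^2.
print(stat(x).pow(2).sum().item())                 # concentrates near n = 512
print(nonstat(x).pow(2).sum().item())              # noticeably smaller for this map
```

The stationary configuration pins the trace near $n$, while the non-stationary map leaves room for it to shrink, which is exactly the slack that the bound in (11) and the regularizers discussed next exploit.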
Using the trace norm $\|W\|_*$ as a regularizer. The convergence rate also depends on the constant $B = \sup_{f \in \mathcal{H}} \|W\|_* < \infty$, the supremum of the trace norm over the hypothesis space. As a result, minimizing the trace norm $\|W\|_*$ reduces $B$ and yields better error bounds. Based on Rademacher complexity theory, the use of the trace norm $\|W\|_*$ instead of the squared Frobenius norm $\|W\|_F^2$ as a regularization term was also explored for linear estimators in (Xu et al. 2016; Li et al. 2019).

Using the squared Frobenius norm $\|\phi(X)\|_F^2$ as a regularizer. From (10), the Rademacher complexity is bounded by the trace of the kernel, which can be written as
$$\sum_{i=1}^{n} \langle \phi(x_i), \phi(x_i) \rangle = \sum_{i=1}^{n} \|\phi(x_i)\|_2^2 = \|\phi(X)\|_F^2.$$
In standard learning theory based on Rademacher complexity, the kernel trace cannot be used to improve the learning algorithm, because the feature mappings are constant for given inputs; for example, all diagonals are $k(x_i, x_i) = 1$ for shift-invariant kernels, so the trace equals the number of examples $n$. Note that, instead of fixing the kernel parameters, our algorithm automatically learns the spectral density of the kernel, so the trace of the kernel is no longer a constant. We therefore use these two regularization terms to improve the performance and avoid overfitting.

## Experiments

In this section, we evaluate the empirical behavior of the proposed algorithm ASKL against related algorithms on several benchmark datasets, to demonstrate the effects of the factors used in our algorithm: the non-stationary spectral kernel, updating the spectral density with backpropagation, and the additional regularization terms.

### Experimental Setup

Based on random Fourier features, both our algorithm ASKL and the compared methods apply a nonlinear feature mapping into a fixed $D$-dimensional feature space, where we set $D = 2000$. We apply Gaussian kernels as the basic kernels, because Gaussian kernels succeed on many types of data by mapping inputs into an infinite-dimensional space; their frequency matrices $\Omega, \Omega'$ are i.i.d. drawn from Gaussian distributions $N(0, \sigma^2)$. The generalization ability of the algorithms is highly dependent on the parameters $\lambda_1, \lambda_2$ and the Gaussian kernel parameter $\sigma$ of the spectral kernels. For fair comparison, we tune these parameters to achieve optimal empirical performance for all algorithms on every dataset, using 5-fold cross-validation and grid search over candidate sets: the regularization parameters are selected from $\lambda_1, \lambda_2 \in \{10^{-10}, 10^{-9}, \cdots, 10^{-1}\}$ and the Gaussian kernel parameter from $\sigma \in \{2^{-10}, \cdots, 2^{10}\}$. Accuracy and mean squared error (MSE) are used to evaluate performance for classification and regression, respectively. We implement all algorithms in PyTorch and use Adam as the optimizer with 32 examples per mini-batch to solve the minimization problem.
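A minimal sketch of this tuning protocol, using the candidate grids above and a stand-in `cv_score` routine; a real run would train ASKL on four folds and validate on the fifth, and the dummy score here only serves to make the example runnable.

```python
import itertools
import math

# Candidate sets from the setup above:
# lambda_1, lambda_2 in {1e-10, ..., 1e-1}, Gaussian width sigma in {2^-10, ..., 2^10}.
lambda_grid = [10.0 ** k for k in range(-10, 0)]
sigma_grid = [2.0 ** k for k in range(-10, 11)]

def cv_score(lam1, lam2, sigma):
    """Stand-in for the 5-fold cross-validation score of ASKL under (lam1, lam2, sigma);
    a real implementation would return mean validation accuracy (or negative MSE)."""
    return -(abs(math.log10(lam1) + 5) + abs(math.log10(lam2) + 5) + abs(math.log2(sigma)))

best_lam1, best_lam2, best_sigma = max(
    itertools.product(lambda_grid, lambda_grid, sigma_grid),
    key=lambda params: cv_score(*params))
print(best_lam1, best_lam2, best_sigma)   # 1e-05 1e-05 1.0 for this dummy score
```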
### Compared Algorithms

To assess the effectiveness of the factors used in our algorithm, we compare the proposed algorithm with several relevant algorithms. As shown in Table 1, the compared methods are special cases of ASKL.

| Methods | Kernel | Spectral density | Regularizer |
|---|---|---|---|
| SK | Stationary | Assigned | $\|W\|_F^2$ |
| NSK | Non-stationary | Assigned | $\|W\|_F^2$ |
| SKL | Stationary | Learned | $\|W\|_F^2$ |
| NSKL | Non-stationary | Learned | $\|W\|_F^2$ |
| ASKL | Non-stationary | Learned | $\|W\|_*$, $\|\phi(X)\|_F^2$ |

Table 1: Compared algorithms.

(1) SK (Rahimi and Recht 2007): random Fourier features for stationary spectral kernels. This approach directly assigns the spectral density of a shift-invariant kernel and uses the regularizer $\|W\|_F^2$ on the model weights.
(2) NSK (Samo and Roberts 2015): similar to SK, but it uses the spectral representation of non-stationary kernels introduced in (6) with assigned frequency matrices.
(3) SKL (Huang et al. 2014): random Fourier features for stationary kernels with the squared Frobenius norm as the regularization term, where the spectral density is updated during training.
(4) NSKL: a special case of ASKL with a non-stationary spectral kernel and learned spectral density, but using the squared Frobenius norm $\|W\|_F^2$ on the model weights as the regularizer.

### Datasets

We evaluate the performance of the proposed learning framework ASKL and the compared algorithms on several publicly available datasets, covering both classification and regression tasks. In particular, we standardize the outputs of the regression tasks to $[0, 100]$ for better illustration. To obtain stable results, we run each method on each dataset 30 times with random partitions of 80% of the data for training and 20% for testing. These repeated test errors also allow us to estimate the statistical significance of the differences among methods. To explore the influence of the factors on convergence, we additionally record both the test accuracy and the objective on the MNIST dataset (LeCun et al. 1998).

### Empirical Results

Empirical results of all algorithms are shown in Table 2, where accuracy is used for classification tasks and root mean squared error (RMSE) for regression tasks. We bold the results with the best performance on each dataset and underline sub-optimal results that have no significant difference from the best ones, based on a pairwise t-test over the 30 repetitions of data splitting and training.

| Dataset | SK | NSK | SKL | NSKL | ASKL |
|---|---|---|---|---|---|
| Accuracy (↑) | | | | | |
| segment | 89.93±2.12 | 90.15±2.08 | 94.58±1.86 | 94.37±0.81 | **95.02±1.54** |
| satimage | 74.54±1.35 | 75.15±1.38 | 83.61±1.08 | 83.74±1.34 | **85.32±1.45** |
| USPS | 93.19±2.84 | 93.81±2.13 | 95.13±0.91 | 95.27±1.65 | **97.76±1.14** |
| pendigits | 96.93±1.53 | 97.39±1.41 | 98.19±2.30 | 98.28±1.68 | **99.06±1.26** |
| letter | 76.50±1.21 | 78.21±1.56 | 93.60±1.14 | 94.66±2.21 | **95.70±1.74** |
| poker | 49.80±2.11 | 51.85±0.97 | 54.27±2.72 | 54.69±1.68 | **54.85±1.28** |
| shuttle | 98.17±2.81 | 98.21±1.46 | 98.87±1.42 | 98.74±1.07 | **98.98±0.94** |
| MNIST | 96.03±2.21 | 96.45±2.16 | 96.67±1.61 | 98.03±1.16 | **98.26±1.78** |
| RMSE (↓) | | | | | |
| abalone | 10.09±0.42 | 9.71±0.28 | 8.35±0.28 | **7.85±0.42** | 7.88±0.16 |
| space_ga | 11.86±0.26 | 11.58±0.42 | 11.40±0.18 | 11.39±0.46 | **11.34±0.27** |
| cpusmall | 2.77±0.71 | 2.84±0.38 | 2.56±0.72 | 2.57±0.63 | **2.42±0.48** |
| cadata | 50.31±0.92 | 51.47±0.32 | 47.67±0.33 | 47.71±0.30 | **46.34±0.23** |

Table 2: Classification accuracy (%) for classification datasets and RMSE for regression datasets. (↑) means the larger the better, while (↓) indicates the smaller the better. We bold the numbers of the best method and underline the numbers of the other methods which are not significantly worse than the best one.

The results in Table 2 show the following. (1) The proposed algorithm ASKL outperforms the compared algorithms on almost all datasets, which coincides with our theoretical results. (2) The use of non-stationary kernels brings a notable performance improvement, although approaches based on stationary kernels still perform well on easy tasks, e.g., shuttle and cpusmall. (3) Due to the difference between assigned spectral densities and learned frequency matrices, there are significant performance gaps between the two groups {SK, NSK} and {SKL, NSKL}, especially on the more complicated datasets satimage and letter; this confirms that learning feature mappings in both input-dependent and output-dependent ways leads to better generalization performance. (4) The proposed algorithm ASKL usually provides better results than NSKL, which shows the effectiveness of the regularization terms $\|W\|_*$ and $\|\phi(X)\|_F^2$.
Overall, the experimental results demonstrate the excellent performance and stability of ASKL, corroborating the theoretical analysis.

Figure 3: Accuracy curves on MNIST.

Figure 4: Objective curves on MNIST.

During training, the test accuracy and the objective were recorded every 200 iterations with batch size 32. The evaluation results in Figure 3 and Figure 4 show that ASKL outperforms the other algorithms significantly. The accuracy curves and the objective curves are correlated: a smaller objective corresponds to higher accuracy. Figures 3 and 4 also empirically illustrate that ASKL reaches lower error than the compared methods with a fast convergence rate.

## Conclusion and Discussion

In this paper, we propose an automated kernel learning framework which jointly learns the spectral density and the estimator. Both the theoretical analysis and the experiments illustrate that the framework obtains significant improvements in generalization performance, owing to three factors: the non-stationary spectral kernel, backpropagation, and the two regularization terms. The use of non-stationary spectral kernels makes the feature mappings input-dependent, while updating the frequency matrices w.r.t. the objective via backpropagation guarantees that the feature mappings are output-dependent, yielding more powerful representation abilities. Further, we derive Rademacher complexity bounds for the algorithm, and to achieve sharper bounds we minimize the two regularization terms together with the ERM.

Connection with deep neural networks. The proposed learning framework is also a neural network with a single hidden layer and cosine activation, so the theoretical findings apply to this kind of neural network as well. The results can be extended to deep neural networks (DNNs) by stacking $\phi$ in a hierarchical structure: for example, with $l$ hidden layers the spectral kernel is $k(x, x') = \langle \phi_l(\phi_{l-1}(\cdots \phi_1(x))), \phi_l(\phi_{l-1}(\cdots \phi_1(x'))) \rangle$. The outputs of other activations such as sigmoid and ReLU can also be regarded as random feature mappings $\varphi(x)$ with corresponding kernel $k(x, x') = \langle \varphi(x), \varphi(x') \rangle$. This offers a possible way to understand deep neural networks from the kernel view based on Rademacher complexity theory.
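A minimal sketch of this hierarchical extension, stacking the `NonStationaryFeatures` module from the earlier sketch; the class name and the layer widths are illustrative choices of ours, not an architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class DeepSpectralNet(nn.Module):
    """Stack of spectral feature maps phi_l(phi_{l-1}(... phi_1(x))) followed by a
    linear readout W, giving the hierarchical kernel described in the discussion."""

    def __init__(self, d, widths=(256, 256), num_classes=10):
        super().__init__()
        layers, in_dim = [], d
        for D in widths:
            layers.append(NonStationaryFeatures(in_dim, D))   # reuse the earlier sketch
            in_dim = D
        self.features = nn.Sequential(*layers)
        self.W = nn.Linear(in_dim, num_classes, bias=False)   # model weights W

    def forward(self, x):
        return self.W(self.features(x))
```

The induced deep kernel is $k(x, x') = \langle \text{features}(x), \text{features}(x') \rangle$, matching the composition written above.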
## Appendix

First, we introduce common notation used in Rademacher complexity theory. The space of loss functions associated with $\mathcal{H}$ is denoted by
$$\mathcal{L} = \big\{ \ell(f(x), y) \,\big|\, f \in \mathcal{H} \big\}. \tag{13}$$

Definition 2 (Rademacher complexity on the loss space). Assume $\mathcal{L}$ is a space of loss functions as defined in (13). The empirical Rademacher complexity of $\mathcal{L}$ is
$$\widehat{\mathfrak{R}}(\mathcal{L}) = \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \epsilon_i\, \ell(f(x_i), y_i) \right],$$
where the $\epsilon_i$ are independent Rademacher random variables uniformly distributed over $\{\pm 1\}$. Its deterministic counterpart is $\mathfrak{R}(\mathcal{L}) = \mathbb{E}\, \widehat{\mathfrak{R}}(\mathcal{L})$.

Lemma 1 (Lemma 5 of (Cortes et al. 2016)). Let the loss function $\ell$ be $L$-Lipschitz for $\mathbb{R}^K$ equipped with the 2-norm, i.e., $|\ell(f(x), y) - \ell(f(x'), y')| \le L\, \|f(x) - f(x')\|_2$. Then the following contraction inequality holds:
$$\widehat{\mathfrak{R}}(\mathcal{L}) \le \sqrt{2}\, L\, \widehat{\mathfrak{R}}(\mathcal{H}).$$

### Proof of Theorem 3

Proof. A standard fact is that the deviation between the expected loss and its empirical mean can be controlled by the Rademacher averages over the loss space $\mathcal{L}$ (Lemma A.5 of (Bartlett et al. 2005)):
$$\mathcal{E}(\hat{f}_n) - \mathcal{E}(f^*) \le 2 \sup_{f \in \mathcal{H}} \big| \mathcal{E}(f) - \widehat{\mathcal{E}}(f) \big| \le 4\, \mathfrak{R}(\mathcal{L}). \tag{14}$$
Combining (14) with the contraction in Lemma 1 and the connection between $\mathfrak{R}(\mathcal{H})$ and $\widehat{\mathfrak{R}}(\mathcal{H})$ (Lemma A.4 of (Bartlett et al. 2005)), there holds, with probability at least $1 - \delta$,
$$\mathcal{E}(\hat{f}_n) - \mathcal{E}(f^*) \le 4\sqrt{2}\, L\, \widehat{\mathfrak{R}}(\mathcal{H}) + O\!\left( \sqrt{\frac{\log(1/\delta)}{n}} \right). \tag{15}$$
We then estimate the empirical Rademacher complexity:
$$\widehat{\mathfrak{R}}(\mathcal{H}) = \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{K} \epsilon_{ik} f_k(x_i) \right] = \frac{1}{n}\, \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{H}} \langle W, \Phi_\epsilon \rangle \right], \tag{16}$$
where $W, \Phi_\epsilon \in \mathbb{R}^{D \times K}$, $\langle W, \Phi_\epsilon \rangle = \mathrm{Tr}(W^T \Phi_\epsilon)$, and the matrix $\Phi_\epsilon$ is defined as
$$\Phi_\epsilon = \left[ \sum_{i=1}^{n} \epsilon_{i1} \phi(x_i),\ \sum_{i=1}^{n} \epsilon_{i2} \phi(x_i),\ \cdots,\ \sum_{i=1}^{n} \epsilon_{iK} \phi(x_i) \right].$$
Applying Hölder's inequality and the bound $\|W\|_* \le B$ to (16), we obtain
$$\mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{H}} \langle W, \Phi_\epsilon \rangle \right] \le \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{H}} \|W\|_*\, \|\Phi_\epsilon\|_F \right] \le B\, \mathbb{E}_{\epsilon} \big[ \|\Phi_\epsilon\|_F \big] \le B \sqrt{ \mathbb{E}_{\epsilon} \|\Phi_\epsilon\|_F^2 }. \tag{17}$$
We then bound $\mathbb{E}_{\epsilon} \|\Phi_\epsilon\|_F^2$ as follows:
$$\mathbb{E}_{\epsilon} \|\Phi_\epsilon\|_F^2 = \mathbb{E}_{\epsilon} \sum_{k=1}^{K} \Big\| \sum_{i=1}^{n} \epsilon_{ik} \phi(x_i) \Big\|_2^2 = \sum_{k=1}^{K} \mathbb{E}_{\epsilon} \sum_{i,j=1}^{n} \epsilon_{ik} \epsilon_{jk} \langle \phi(x_i), \phi(x_j) \rangle = K \sum_{i=1}^{n} \langle \phi(x_i), \phi(x_i) \rangle, \tag{18}$$
where the last step uses the independence of the Rademacher variables and the symmetry of $\langle \phi(x_i), \phi(x_i) \rangle$ shown in (6). The result is similar to (Bartlett and Mendelson 2002; Li et al. 2018). Applying the spectral representation (6) of non-stationary kernels, we further bound the Rademacher complexity:
$$\widehat{\mathfrak{R}}(\mathcal{H}) \le \frac{B}{n} \sqrt{ K \sum_{i=1}^{n} \langle \phi(x_i), \phi(x_i) \rangle } = \frac{B}{n} \sqrt{ \frac{K}{2D} \sum_{i=1}^{n} \sum_{j=1}^{D} \Big[ \cos\big( (\omega_j - \omega_j')^T x_i \big) + 1 \Big] }, \tag{19}$$
where $B = \sup_{f \in \mathcal{H}} \|W\|_*$. Substituting (19) into (15) completes the proof.

### Singular Value Thresholding (SVT)

In each iteration, to obtain a tight surrogate of the objective (4), we keep the trace norm $\|W\|_*$ and linearize only the empirical loss $g(W)$, which leads to the proximal gradient update (Parikh, Boyd, et al. 2014):
$$
\begin{aligned}
W^{t+1} &= \arg\min_W\ \lambda_1 \|W\|_* + g(W) \\
&= \arg\min_W\ \lambda_1 \|W\|_* + g(W^t) + \langle \nabla g(W^t), W - W^t \rangle + \frac{1}{2\eta} \|W - W^t\|_F^2 \\
&= \arg\min_W\ \lambda_1 \|W\|_* + \frac{1}{2\eta} \big\| W - \big( W^t - \eta\, \nabla g(W^t) \big) \big\|_F^2 \\
&= \arg\min_W\ \frac{1}{2} \|W - Q\|_F^2 + \lambda_1 \eta \|W\|_*,
\end{aligned}
$$
where $Q = W^t - \eta\, \nabla g(W^t)$ and $\eta$ is the learning rate.

Proposition 1 (Theorem 2.1 of (Cai, Candès, and Shen 2010)). Let $Q \in \mathbb{R}^{D \times K}$ have rank $r$ and singular value decomposition $Q = U \Sigma V^T$, where $U \in \mathbb{R}^{D \times r}$ and $V \in \mathbb{R}^{K \times r}$ have orthogonal columns and $\Sigma = \mathrm{diag}(\{\sigma_i\}_{1 \le i \le r})$. Then
$$\arg\min_W\ \frac{1}{2} \|W - Q\|_F^2 + \eta \|W\|_* = U \Sigma_\eta V^T, \qquad \Sigma_\eta = \mathrm{diag}\big( \{\sigma_i - \eta\}_+ \big).$$

Applying Proposition 1 (Cai, Candès, and Shen 2010; Lu et al. 2015; Chatterjee et al. 2015) to the surrogate above, we update $W$ in two steps in each iteration, as shown in (7) and (8).

## Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 61703396, No. 61673293), the CCF-Tencent Open Fund, the Youth Innovation Promotion Association CAS, the Excellent Talent Introduction of Institute of Information Engineering of CAS (No. Y7Z0111107), and the Beijing Municipal Science and Technology Project (No. Z191100007119002).

## References

Bartlett, P. L., and Mendelson, S. 2002. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research 3(Nov):463-482.

Bartlett, P. L.; Bousquet, O.; Mendelson, S.; et al. 2005. Local Rademacher complexities. The Annals of Statistics 33(4):1497-1537.

Bengio, Y.; Delalleau, O.; and Roux, N. L. 2006. The curse of highly variable functions for local kernel machines. In Advances in Neural Information Processing Systems, 107-114.

Cai, J.-F.; Candès, E. J.; and Shen, Z. 2010. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization 20(4):1956-1982.

Chatterjee, S., et al. 2015. Matrix estimation by universal singular value thresholding. The Annals of Statistics 43(1):177-214.

Cortes, C.; Kuznetsov, V.; Mohri, M.; and Yang, S. 2016. Structured prediction theory based on factor graph complexity. In Advances in Neural Information Processing Systems 29 (NIPS), 2514-2522.

Ding, L.; Liao, S.; Liu, Y.; Yang, P.; and Gao, X. 2018. Randomized kernel selection with spectra of multilevel circulant matrices. In Thirty-Second AAAI Conference on Artificial Intelligence.

Huang, P.-S.; Avron, H.; Sainath, T. N.; Sindhwani, V.; and Ramabhadran, B. 2014. Kernel methods match deep neural networks on TIMIT. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing, 205-209.
Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Le, Q.; Sarlós, T.; and Smola, A. 2013. Fastfood: Approximating kernel expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning (ICML), volume 85.

LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P.; et al. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278-2324.

Li, J.; Liu, Y.; Lin, H.; Yue, Y.; and Wang, W. 2017. Efficient kernel selection via spectral analysis. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), 2124-2130.

Li, J.; Liu, Y.; Yin, R.; Zhang, H.; Ding, L.; and Wang, W. 2018. Multi-class learning: From theory to algorithm. In Advances in Neural Information Processing Systems 31 (NeurIPS), 1591-1600.

Li, J.; Liu, Y.; Yin, R.; and Wang, W. 2019. Multi-class learning using unlabeled samples: Theory and algorithm. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).

Li, J.; Liu, Y.; and Wang, W. 2019. Distributed learning with random features. arXiv preprint arXiv:1906.03155.

Liu, Y., and Liao, S. 2014. Kernel selection with spectral perturbation stability of kernel matrix. Science China Information Sciences 57(11):1-10.

Liu, Y.; Lin, H.; Ding, L.-Z.; Wang, W.; and Liao, S. 2018. Fast cross-validation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), 2497-2503.

Liu, Y.; Liao, S.; Jiang, S.; Ding, L.; Lin, H.; and Wang, W. 2019. Fast cross-validation for kernel-based algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Lu, C.; Zhu, C.; Xu, C.; Yan, S.; and Lin, Z. 2015. Generalized singular value thresholding. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, 1805-1811.

Parikh, N.; Boyd, S.; et al. 2014. Proximal algorithms. Foundations and Trends in Optimization 1(3):127-239.

Quiñonero-Candela, J.; Rasmussen, C. E.; Figueiras-Vidal, A. R.; et al. 2010. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research 11(Jun):1865-1881.

Rahimi, A., and Recht, B. 2007. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 21 (NIPS), 1177-1184.

Remes, S.; Heinonen, M.; and Kaski, S. 2017. Non-stationary spectral kernels. In Advances in Neural Information Processing Systems 30 (NIPS), 4642-4651.

Rudin, W. 1962. Fourier analysis on groups, volume 12. Wiley Online Library.

Rumelhart, D. E.; Hinton, G. E.; Williams, R. J.; et al. 1988. Learning representations by back-propagating errors. Cognitive Modeling 5(3):1.

Samo, Y.-L. K., and Roberts, S. 2015. Generalized spectral kernels. arXiv preprint arXiv:1506.02236.

Stein, M. L. 2012. Interpolation of spatial data: Some theory for kriging. Springer Science & Business Media.

Sun, S.; Zhang, G.; Shi, J.; and Grosse, R. 2019. Functional variational Bayesian neural networks. arXiv preprint arXiv:1903.05779.

Wilson, A., and Adams, R. 2013. Gaussian process kernels for pattern discovery and extrapolation. In International Conference on Machine Learning, 1067-1075.

Xu, C.; Liu, T.; Tao, D.; and Xu, C. 2016. Local Rademacher complexity for multi-label learning. IEEE Transactions on Image Processing 25(3):1495-1507.

Yaglom, A. M. 1987. Correlation theory of stationary and related random functions. Volume I: Basic results. Springer.
Zeiler, M. D. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.