# Orthogonal Random Features

Felix Xinnan Yu, Ananda Theertha Suresh, Krzysztof Choromanski, Daniel Holtmann-Rice, Sanjiv Kumar
Google Research, New York
{felixyu, theertha, kchoro, dhr, sanjivk}@google.com

Abstract

We present an intriguing discovery related to Random Fourier Features: in Gaussian kernel approximation, replacing the random Gaussian matrix by a properly scaled random orthogonal matrix significantly decreases kernel approximation error. We call this technique Orthogonal Random Features (ORF), and provide theoretical and empirical justification for this behavior. Motivated by this discovery, we further propose Structured Orthogonal Random Features (SORF), which uses a class of structured discrete orthogonal matrices to speed up the computation. The method reduces the time cost from $O(d^2)$ to $O(d \log d)$, where $d$ is the data dimensionality, with almost no compromise in kernel approximation quality compared to ORF. Experiments on several datasets verify the effectiveness of ORF and SORF over the existing methods. We also provide discussions on using the same type of discrete orthogonal structure for a broader range of applications.

1 Introduction

Kernel methods are widely used in nonlinear learning [8], but they are computationally expensive for large datasets. Kernel approximation is a powerful technique for making kernel methods scalable: input features are mapped into a new space where dot products approximate the kernel well [19]. With accurate kernel approximation, efficient linear classifiers can be trained in the transformed space while retaining the expressive power of nonlinear methods [10, 21]. Formally, given a kernel $K(\cdot,\cdot): \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, kernel approximation methods seek a nonlinear transformation $\phi(\cdot): \mathbb{R}^d \to \mathbb{R}^{D}$ such that, for any $x, y \in \mathbb{R}^d$,
$$K(x, y) \approx \hat{K}(x, y) = \phi(x)^T \phi(y).$$

Random Fourier Features [19] are widely used for approximating smooth, shift-invariant kernels. The technique requires the kernel to exhibit two properties: 1) shift-invariance, i.e., $K(x, y) = K(\Delta)$ where $\Delta = x - y$; and 2) positive semi-definiteness of $K(\Delta)$ on $\mathbb{R}^d$. The second property guarantees that the Fourier transform of $K(\Delta)$ is a nonnegative function [3]. Let $p(w)$ be the Fourier transform of $K(\Delta)$. Then
$$K(x, y) = \int_{\mathbb{R}^d} p(w)\, e^{j w^T (x - y)}\, dw.$$
This means that one can treat $p(w)$ as a density and use Monte-Carlo sampling to derive the following nonlinear map for a real-valued kernel:
$$\phi(x) = \sqrt{1/D}\,\big[\sin(w_1^T x), \ldots, \sin(w_D^T x), \cos(w_1^T x), \ldots, \cos(w_D^T x)\big]^T,$$
where each $w_i$ is sampled i.i.d. from the distribution with density $p(w)$. Let $W = [w_1, \ldots, w_D]^T$. The linear transformation $Wx$ is central to the above computation, since:

- The choice of the matrix $W$ determines how well the estimated kernel converges to the actual kernel;
- The computation of $Wx$ has space and time costs of $O(Dd)$. This is expensive for high-dimensional data, especially since $D$ is often required to be larger than $d$ to achieve low approximation error.

Figure 1: Kernel approximation mean squared error (MSE) for the Gaussian kernel $K(x, y) = e^{-\|x - y\|^2/2\sigma^2}$, comparing RFF (random Gaussian) with ORF (random orthogonal). $D$: number of rows in the linear transformation $W$. $d$: input dimension. ORF imposes orthogonality on $W$ (Section 3).

In this work, we address both of the above issues.
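To make the RFF construction above concrete, the following is a minimal NumPy sketch of the standard RFF map for the Gaussian kernel. This is our own illustrative code (the function name and the self-check are ours), not the authors' implementation:

```python
import numpy as np

def rff_features(X, D, sigma, seed=0):
    """Random Fourier Features for the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)).

    X: (n, d) data matrix. Returns an (n, 2D) map phi such that
    phi(x) . phi(y) approximates K(x, y) in expectation.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # For the Gaussian kernel, p(w) is the density of N(0, I_d / sigma^2).
    W = rng.standard_normal((D, d)) / sigma
    WX = X @ W.T                                   # the O(nDd) bottleneck discussed above
    return np.hstack([np.sin(WX), np.cos(WX)]) / np.sqrt(D)

# Illustrative self-check on random data.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 16))
sigma = 4.0
phi = rff_features(X, D=4096, sigma=sigma)
exact = np.exp(-np.square(X[:, None] - X[None, :]).sum(-1) / (2 * sigma ** 2))
print(np.abs(phi @ phi.T - exact).max())           # shrinks as D grows
```

The ORF and SORF methods described below change only the matrix $W$ in this sketch; the feature map itself stays the same.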
We first show an intriguing discovery (Figure 1): by enforcing orthogonality on the rows of $W$, the kernel approximation error can be significantly reduced. We call this method Orthogonal Random Features (ORF). Section 3 describes the method and provides a theoretical explanation for the improved performance. Since both generating a $d \times d$ orthogonal matrix ($O(d^3)$ time and $O(d^2)$ space) and computing the transformation ($O(d^2)$ time and space) are prohibitively expensive for high-dimensional data, we further propose Structured Orthogonal Random Features (SORF) in Section 4. The idea is to replace the random orthogonal matrix by a class of special structured matrices consisting of products of binary diagonal matrices and Walsh-Hadamard matrices. SORF has fast computation time, $O(D \log d)$, and almost no extra memory cost (with an efficient in-place implementation). We show extensive experiments in Section 5. We also provide theoretical discussions in Section 6 on applying the structured matrices to a broader range of applications where random Gaussian matrices are used.

2 Related Works

Explicit nonlinear random feature maps have been constructed for many types of kernels, such as intersection kernels [15], generalized RBF kernels [22], skewed multiplicative histogram kernels [14], additive kernels [24], and polynomial kernels [11, 18]. In this paper, we focus on approximating Gaussian kernels following the seminal Random Fourier Features (RFF) framework [19], which has been extensively studied both theoretically and empirically [26, 20, 23].

Key to the RFF technique is Monte-Carlo sampling. It is well known that the convergence of Monte-Carlo methods can be improved substantially by carefully choosing a deterministic sequence instead of random samples [17]. Following this line of reasoning, Yang et al. [25] proposed to use low-displacement-rank sequences in RFF. Yu et al. [28] studied optimizing the sequences in a data-dependent fashion to achieve more compact maps. In contrast to the above works, this paper is motivated by an intriguing new discovery that using orthogonal random samples provides much faster convergence. Compared to [25], the proposed SORF method achieves both lower kernel approximation error and greatly reduced computation and memory costs. Furthermore, unlike [28], the results in this paper are data-independent.

Structured matrices have been used for speeding up dimensionality reduction [1], binary embedding [27], deep neural networks [5], and kernel approximation [13, 28, 7]. For the kernel approximation works in particular, the structured randomness leads to a minor loss of accuracy, but allows faster computation since the structured matrices enable the use of FFT-like algorithms. Furthermore, these matrices provide substantial model compression since they require only subquadratic (usually linear) space.

| Method | Extra Memory | Time | Lower error than RFF? |
|---|---|---|---|
| Random Fourier Features (RFF) [19] | $O(Dd)$ | $O(Dd)$ | - |
| Compact Nonlinear Map (CNM) [28] | $O(Dd)$ | $O(Dd)$ | Yes (data-dependent) |
| Quasi-Monte Carlo (QMC) [25] | $O(Dd)$ | $O(Dd)$ | Yes |
| Structured (Fastfood/circulant) [28, 13] | $O(D)$ | $O(D \log d)$ | No |
| Orthogonal Random Features (ORF) | $O(Dd)$ | $O(Dd)$ | Yes |
| Structured ORF (SORF) | $O(D)$ or $O(1)$ | $O(D \log d)$ | Yes |

Table 1: Comparison of different kernel approximation methods under the framework of Random Fourier Features [19]. We assume $D \geq d$. The proposed SORF method has $O(D)$ degrees of freedom. The computations can be implemented efficiently as in-place operations with fixed random seeds, so SORF can cost $O(1)$ extra space.
In comparison with the above works, our proposed methods ORF and SORF are more effective than RFF. In particular, SORF demonstrates both lower approximation error and better efficiency than RFF. Table 1 compares the space and time costs of the different techniques.

3 Orthogonal Random Features

Our goal is to approximate a Gaussian kernel of the form $K(x, y) = e^{-\|x - y\|^2/2\sigma^2}$. In what follows, we assume a square linear transformation matrix $W \in \mathbb{R}^{D \times d}$ with $D = d$. When $D < d$, we simply use the first $D$ dimensions of the result. When $D > d$, we use multiple independently generated random features and concatenate the results. We comment on this setting at the end of this section.

Recall that the linear transformation matrix of RFF can be written as
$$W_{\mathrm{RFF}} = \frac{1}{\sigma} G, \tag{1}$$
where $G \in \mathbb{R}^{d \times d}$ is a random Gaussian matrix, with every entry sampled independently from the standard normal distribution. Denote the approximate kernel based on $W_{\mathrm{RFF}}$ by $K_{\mathrm{RFF}}(x, y)$. For completeness, we first state the expectation and variance of $K_{\mathrm{RFF}}(x, y)$.

Lemma 1. (Appendix A.2) $K_{\mathrm{RFF}}(x, y)$ is an unbiased estimator of the Gaussian kernel, i.e., $E(K_{\mathrm{RFF}}(x, y)) = e^{-\|x - y\|^2/2\sigma^2}$. Let $z = \|x - y\|/\sigma$. The variance of $K_{\mathrm{RFF}}(x, y)$ is
$$\mathrm{Var}(K_{\mathrm{RFF}}(x, y)) = \frac{1}{2D}\left(1 - e^{-z^2}\right)^2.$$

The idea of Orthogonal Random Features (ORF) is to impose orthogonality on the linear transformation matrix. Note that one cannot achieve unbiased kernel estimation by simply replacing $G$ with an orthogonal matrix, since the norms of the rows of $G$ follow the $\chi$-distribution, while the rows of an orthogonal matrix have unit norm. The linear transformation matrix of ORF has the following form:
$$W_{\mathrm{ORF}} = \frac{1}{\sigma} S Q, \tag{2}$$
where $Q$ is a uniformly distributed random orthogonal matrix¹. The set of rows of $Q$ forms a basis of $\mathbb{R}^d$. $S$ is a diagonal matrix whose diagonal entries are sampled i.i.d. from the $\chi$-distribution with $d$ degrees of freedom, so that the norms of the rows of $SQ$ and of $G$ are identically distributed.

Denote the approximate kernel based on $W_{\mathrm{ORF}}$ by $K_{\mathrm{ORF}}(x, y)$. The following shows that $K_{\mathrm{ORF}}(x, y)$ is an unbiased estimator of the kernel, and that it has lower variance than RFF.

Theorem 1. $K_{\mathrm{ORF}}(x, y)$ is an unbiased estimator of the Gaussian kernel, i.e., $E(K_{\mathrm{ORF}}(x, y)) = e^{-\|x - y\|^2/2\sigma^2}$. Let $D \leq d$ and $z = \|x - y\|/\sigma$. There exists a function $f$ such that, for all $z$, the variance of $K_{\mathrm{ORF}}(x, y)$ is bounded by
$$\mathrm{Var}(K_{\mathrm{ORF}}(x, y)) \leq \frac{1}{2D}\left(\left(1 - e^{-z^2}\right)^2 - \frac{D - 1}{d}\, e^{-z^2} z^4\right) + \frac{f(z)}{d^2}.$$

¹We first generate the random Gaussian matrix $G$ in (1). $Q$ is the orthogonal matrix obtained from the QR decomposition of $G$. $Q$ is distributed uniformly on the Stiefel manifold (the space of all orthogonal matrices) by the Bartlett decomposition theorem [16].

Figure 2: (a) $\mathrm{Var}(K_{\mathrm{ORF}}(x, y))/\mathrm{Var}(K_{\mathrm{RFF}}(x, y))$ when $d$ is large and $d = D$; $z = \|x - y\|/\sigma$. (b) Simulation of $\mathrm{Var}(K_{\mathrm{ORF}}(x, y))/\mathrm{Var}(K_{\mathrm{RFF}}(x, y))$ when $D = d$, for $d = 2, 4, 8, 16, 32, \infty$; note that the empirical variance is the mean squared error (MSE). (c) Empirical distribution of $z$ for several datasets (letter, forest, usps, cifar, mnist, gisette), when $\sigma$ is set to the mean distance to the 50th-nearest neighbor for samples from the dataset. The count is normalized so that the area under each curve is 1. Observe that most points in all datasets have $z < 2$; as shown in (a), for these values of $z$, ORF has much smaller variance than standard RFF.

Proof. We first prove unbiasedness. Let $\mathbf{z} = (x - y)/\sigma$ and $z = \|\mathbf{z}\|$. Then
$$E(K_{\mathrm{ORF}}(x, y)) = E\left[\frac{1}{D}\sum_{i=1}^{D} \cos(w_i^T \mathbf{z})\right].$$
By the definition of ORF, $w_1, w_2, \ldots, w_D$ are $D$ random vectors given by $w_i = s_i u_i$, where $u_1, u_2, \ldots, u_d$ is a uniformly chosen random orthonormal basis of $\mathbb{R}^d$ and the $s_i$ are independent $\chi$-distributed random variables with $d$ degrees of freedom. It is easy to show that each $w_i$ is distributed according to $N(0, I_d)$, and hence, by Bochner's theorem, $E[\cos(w_i^T \mathbf{z})] = e^{-z^2/2}$.

We now give a proof sketch for the variance. Let $a_i = \cos(w_i^T \mathbf{z})$. Then
$$\mathrm{Var}(K_{\mathrm{ORF}}(x, y)) = \frac{1}{D^2}\Big(\sum_{i}\big(E[a_i^2] - E[a_i]^2\big) + \sum_{i \neq j}\big(E[a_i a_j] - E[a_i]E[a_j]\big)\Big) = \frac{(1 - e^{-z^2})^2}{2D} + \frac{D - 1}{D}\big(E[a_1 a_2] - e^{-z^2}\big),$$
where the last equality follows from symmetry. The first term in the resulting expression is exactly the variance of RFF. In order for ORF to have lower variance, $E[a_1 a_2] - e^{-z^2}$ must be negative. We use the following lemma to quantify this term.

Lemma 2. (Appendix A.3) There is a function $f$ such that, for any $z$,
$$E[a_i a_j] - e^{-z^2} \leq -\frac{e^{-z^2} z^4}{2d} + \frac{f(z)}{d^2}.$$

Therefore, for large $d$ and $D \leq d$, the ratio of the variance of ORF to that of RFF is
$$\frac{\mathrm{Var}(K_{\mathrm{ORF}}(x, y))}{\mathrm{Var}(K_{\mathrm{RFF}}(x, y))} \approx 1 - \frac{(D - 1)\, e^{-z^2} z^4}{d\left(1 - e^{-z^2}\right)^2}. \tag{3}$$

Figure 2(a) shows the ratio of the variance of ORF to that of RFF when $D = d$ and $d$ is large. First notice that this ratio is always smaller than 1, and hence ORF always provides an improvement over conventional RFF. Interestingly, we gain significantly for small values of $z$. In fact, when $z \to 0$ and $d \to \infty$, the ratio is roughly $z^2$ (note that $e^x \approx 1 + x$ as $x \to 0$), so the error of ORF becomes negligible relative to that of RFF. Figure 2(b) shows empirical simulations of this ratio. The variance ratio is close to the large-$d$ limit (3) even when $d = 32$, a fairly low-dimensional setting in real-world cases.

Recall that $z = \|x - y\|/\sigma$. This means that ORF preserves the kernel value especially well for data points that are close, thereby retaining the local structure of the dataset. Furthermore, in practice $\sigma$ is typically not set too small in order to prevent overfitting; a common rule of thumb is to set $\sigma$ to the average distance to the 50th-nearest neighbor in the dataset. In Figure 2(c), we plot the distribution of $z$ for several datasets with this choice of $\sigma$. These distributions are all concentrated in the regime where ORF yields substantial variance reduction.

The above analysis assumes $D \leq d$. Empirically, for RFF, $D$ needs to be larger than $d$ in order to achieve low approximation error. In that case, we independently generate and apply the transformation (2) multiple times and concatenate the results. The next corollary bounds the variance for this case.

Corollary 1. Let $D = m \cdot d$ for a positive integer $m$, and let $z = \|x - y\|/\sigma$. There exists a function $f$ such that, for all $z$, the variance of $K_{\mathrm{ORF}}(x, y)$ is bounded by
$$\mathrm{Var}(K_{\mathrm{ORF}}(x, y)) \leq \frac{1}{2D}\left(\left(1 - e^{-z^2}\right)^2 - \frac{d - 1}{d}\, e^{-z^2} z^4\right) + \frac{f(z)}{d^2}.$$

Figure 3: Simulations of bias and variance of ORF0 and SORF. $z = \|x - y\|/\sigma$. (a) $E(K_{\mathrm{ORF0}}(x, y)) - e^{-z^2/2}$. (b) $E(K_{\mathrm{SORF}}(x, y)) - e^{-z^2/2}$. (c) $\mathrm{Var}(K_{\mathrm{ORF0}}(x, y))/\mathrm{Var}(K_{\mathrm{RFF}}(x, y))$. (d) $\mathrm{Var}(K_{\mathrm{SORF}}(x, y))/\mathrm{Var}(K_{\mathrm{RFF}}(x, y))$. Each point on the curves is based on 20,000 choices of the random matrices and two fixed points at distance $z$. Even at $d = 32$, the bias is close to 0 and the variance ratio is close to that of $d = \infty$ (Figure 2(a)).
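As an illustration of the ORF construction in (2), here is a minimal NumPy sketch (our own illustrative code, not the authors' implementation); for $D > d$ it concatenates independent $d \times d$ blocks as in Corollary 1:

```python
import numpy as np

def orf_matrix(d, D, sigma, seed=0):
    """Build W_ORF = (1/sigma) S Q, stacked block-wise up to D rows.

    Q: uniformly random orthogonal matrix (QR decomposition of a Gaussian matrix).
    S: diagonal with i.i.d. chi(d)-distributed entries, so the rows of SQ have the
       same norm distribution as the rows of a Gaussian matrix G.
    """
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(int(np.ceil(D / d))):
        G = rng.standard_normal((d, d))
        Q, _ = np.linalg.qr(G)                      # random orthogonal matrix
        s = np.sqrt(rng.chisquare(df=d, size=d))    # chi-distributed row scales
        blocks.append(s[:, None] * Q)
    return np.vstack(blocks)[:D] / sigma

def orf_features(X, W):
    """Same feature map as RFF; only the linear transformation W changes."""
    WX = X @ W.T
    return np.hstack([np.sin(WX), np.cos(WX)]) / np.sqrt(W.shape[0])
```

Swapping `orf_matrix` for the Gaussian matrix in the RFF sketch of Section 1 is the entire change; the downstream feature map and classifier are untouched.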
4 Structured Orthogonal Random Features

In the previous section, we presented Orthogonal Random Features (ORF) and provided a theoretical explanation for their effectiveness. Since generating orthogonal matrices in high dimensions is expensive, here we propose a fast version of ORF by imposing structure on the orthogonal matrices. This method provides drastic memory and time savings with minimal compromise in kernel approximation quality. Note that previous works on fast kernel approximation using structured matrices do not use structured orthogonal matrices [13, 28, 7].

Let us first introduce a simplified version of ORF: replace $S$ in (2) by the scalar $\sqrt{d}$. We call this method ORF0. The transformation matrix thus has the following form:
$$W_{\mathrm{ORF0}} = \frac{\sqrt{d}}{\sigma} Q. \tag{4}$$

Theorem 2. (Appendix B) Let $K_{\mathrm{ORF0}}(x, y)$ be the approximate kernel computed with the linear transformation matrix (4). Let $D \leq d$ and $z = \|x - y\|/\sigma$. There exists a function $f$ such that the bias of $K_{\mathrm{ORF0}}(x, y)$ satisfies
$$\left|E(K_{\mathrm{ORF0}}(x, y)) - e^{-z^2/2}\right| \leq \frac{e^{-z^2/2}\, z^4}{4d} + \frac{f(z)}{d^2},$$
and the variance satisfies
$$\mathrm{Var}(K_{\mathrm{ORF0}}(x, y)) \leq \frac{1}{2D}\left(\left(1 - e^{-z^2}\right)^2 - \frac{D - 1}{d}\, e^{-z^2} z^4\right) + \frac{f(z)}{d^2}.$$

The above implies that when $d$ is large, $K_{\mathrm{ORF0}}(x, y)$ is a good estimate of the kernel with low variance. Figure 3(a) shows that even for relatively small $d$, the estimate is almost unbiased. Figure 3(c) shows that for $d \geq 32$, the variance ratio is very close to that of $d = \infty$. We also find empirically that ORF0 provides very similar MSE to ORF on real-world datasets.

Figure 4: Kernel approximation mean squared error (MSE) for the Gaussian kernel $K(x, y) = e^{-\|x - y\|^2/2\sigma^2}$, comparing RFF, ORF, SORF, QMC (Digital Net), circulant, and Fastfood on (a) LETTER ($d = 16$), (b) FOREST ($d = 64$), (c) USPS ($d = 256$), (d) CIFAR ($d = 512$), (e) MNIST ($d = 1024$), and (f) GISETTE ($d = 4096$). $D$: number of transformations. $d$: input feature dimension. For each dataset, $\sigma$ is chosen to be the mean distance to the 50th-nearest neighbor over 1,000 sampled datapoints; empirically, this yields good classification results. The curves for SORF and ORF overlap.

We now introduce Structured Orthogonal Random Features (SORF). It replaces the random orthogonal matrix $Q$ of ORF0 in (4) by a special type of structured matrix $H D_1 H D_2 H D_3$:
$$W_{\mathrm{SORF}} = \frac{\sqrt{d}}{\sigma}\, H D_1 H D_2 H D_3, \tag{5}$$
where the $D_i \in \mathbb{R}^{d \times d}$, $i = 1, 2, 3$, are diagonal sign-flipping matrices whose diagonal entries are sampled from the Rademacher distribution, and $H$ is the normalized Walsh-Hadamard matrix.

Computing $W_{\mathrm{SORF}}\, x$ has time cost $O(d \log d)$, since multiplication by a diagonal matrix takes $O(d)$ time and multiplication by $H$ takes $O(d \log d)$ time using the fast Walsh-Hadamard transform. The computation of SORF can also be carried out with almost no extra memory, because both the sign flipping and the Walsh-Hadamard transform can be implemented efficiently as in-place operations [9].

Figures 3(b) and 3(d) show the bias and variance of SORF. Although the curves for small $d$ differ from those of ORF, when $d$ is large ($d > 32$ in practice) the kernel estimate is almost unbiased and the variance ratio converges to that of ORF. In other words, SORF provides almost identical kernel approximation quality to ORF. This is also confirmed by the experiments in Section 5. In Section 6, we provide theoretical discussions showing that the structure in (5) can also be applied in many scenarios beyond kernel approximation where random Gaussian matrices are used.
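For concreteness, the following is a minimal sketch of the SORF transform (5) built on an in-place fast Walsh-Hadamard transform. It is our own illustrative code under the assumption that $d$ is a power of 2 (as in the experiments), not the authors' implementation:

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform along the last axis.

    The length must be a power of 2. Unnormalized: equivalent to multiplying
    by sqrt(d) * H, where H is the normalized Walsh-Hadamard matrix.
    """
    d = x.shape[-1]
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h]
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x

def sorf_transform(X, sigma, seed=0):
    """Apply W_SORF = (sqrt(d)/sigma) H D1 H D2 H D3 to the rows of X, as in (5)."""
    d = X.shape[-1]
    rng = np.random.default_rng(seed)
    D = rng.choice([-1.0, 1.0], size=(3, d))        # Rademacher diagonals D1, D2, D3
    Z = X.astype(float)
    for i in (2, 1, 0):                             # apply D3, H, D2, H, D1, H in turn
        Z = fwht(Z * D[i])
    # Each unnormalized FWHT carries an extra factor sqrt(d) versus the normalized H,
    # so the three transforms contribute d^{3/2}; combined with sqrt(d)/sigma in (5),
    # the overall scale is sqrt(d) / (sigma * d^{3/2}) = 1 / (sigma * d).
    return Z / (sigma * d)
```

The $[\sin, \cos]$ feature map is then applied exactly as in RFF and ORF; for $D > d$, several independent draws of $(D_1, D_2, D_3)$ are concatenated.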
| Dataset | Method | D = 2d | D = 4d | D = 6d | D = 8d | D = 10d | Exact |
|---|---|---|---|---|---|---|---|
| letter (d = 16) | RFF | 76.44 ± 1.04 | 81.61 ± 0.46 | 85.46 ± 0.56 | 86.58 ± 0.99 | 87.84 ± 0.59 | 90.10 |
| | ORF | 77.49 ± 0.95 | 82.49 ± 1.16 | 85.41 ± 0.60 | 87.17 ± 0.40 | 87.73 ± 0.63 | |
| | SORF | 76.18 ± 1.20 | 81.63 ± 0.77 | 84.43 ± 0.92 | 85.71 ± 0.52 | 86.78 ± 0.53 | |
| forest (d = 64) | RFF | 77.61 ± 0.23 | 78.92 ± 0.30 | 79.29 ± 0.24 | 79.57 ± 0.21 | 79.85 ± 0.10 | 80.43 |
| | ORF | 77.88 ± 0.24 | 78.71 ± 0.19 | 79.38 ± 0.19 | 79.63 ± 0.21 | 79.54 ± 0.15 | |
| | SORF | 77.64 ± 0.20 | 78.88 ± 0.14 | 79.31 ± 0.12 | 79.50 ± 0.14 | 79.56 ± 0.09 | |
| usps (d = 256) | RFF | 94.27 ± 0.38 | 94.98 ± 0.10 | 95.43 ± 0.22 | 95.66 ± 0.25 | 95.71 ± 0.18 | 95.57 |
| | ORF | 94.21 ± 0.51 | 95.26 ± 0.25 | 96.46 ± 0.18 | 95.52 ± 0.20 | 95.76 ± 0.17 | |
| | SORF | 94.45 ± 0.39 | 95.20 ± 0.43 | 95.51 ± 0.34 | 95.46 ± 0.34 | 95.67 ± 0.15 | |
| cifar (d = 512) | RFF | 73.19 ± 0.23 | 75.06 ± 0.33 | 75.85 ± 0.30 | 76.28 ± 0.30 | 76.54 ± 0.31 | 78.71 |
| | ORF | 73.59 ± 0.44 | 75.06 ± 0.28 | 76.00 ± 0.26 | 76.29 ± 0.26 | 76.69 ± 0.09 | |
| | SORF | 73.54 ± 0.26 | 75.11 ± 0.21 | 75.76 ± 0.21 | 76.48 ± 0.24 | 76.47 ± 0.28 | |
| mnist (d = 1024) | RFF | 94.83 ± 0.13 | 95.48 ± 0.10 | 95.85 ± 0.07 | 96.02 ± 0.06 | 95.98 ± 0.05 | 97.14 |
| | ORF | 94.95 ± 0.25 | 95.64 ± 0.06 | 95.85 ± 0.09 | 95.95 ± 0.08 | 96.06 ± 0.07 | |
| | SORF | 94.98 ± 0.18 | 95.48 ± 0.08 | 95.77 ± 0.09 | 95.98 ± 0.05 | 96.02 ± 0.07 | |
| gisette (d = 4096) | RFF | 97.68 ± 0.28 | 97.74 ± 0.11 | 97.66 ± 0.25 | 97.70 ± 0.16 | 97.74 ± 0.05 | 97.60 |
| | ORF | 97.56 ± 0.17 | 97.72 ± 0.15 | 97.80 ± 0.07 | 97.64 ± 0.09 | 97.68 ± 0.04 | |
| | SORF | 97.64 ± 0.17 | 97.62 ± 0.04 | 97.64 ± 0.11 | 97.68 ± 0.08 | 97.70 ± 0.14 | |

Table 2: Classification accuracy based on SVM. ORF and SORF provide competitive classification accuracy for a given $D$. "Exact" is a kernel SVM trained with the exact Gaussian kernel. Note that in all settings SORF is faster than RFF and ORF by a factor of $O(d/\log d)$; for example, on gisette with $D = 2d$, SORF provides roughly a 10x speedup over RFF and ORF.

5 Experiments

Kernel Approximation. We first evaluate kernel approximation performance on six datasets. The input feature dimension $d$ is set to a power of 2 by padding with zeros or subsampling features. Figure 4 compares the mean squared error (MSE) of all methods. For a fixed $D$, the kernel approximation MSE exhibits the following ordering:

SORF $\approx$ ORF < QMC [25] < RFF [19] < other fast kernel approximations [13, 28].

By imposing orthogonality on the linear transformation matrix, Orthogonal Random Features (ORF) achieve significantly lower approximation error than Random Fourier Features (RFF). Structured Orthogonal Random Features (SORF) have almost identical MSE to ORF. All other fast kernel approximation methods, such as circulant [28] and Fastfood [13], have higher MSE. We also include Digital Net, the best-performing method among the Quasi-Monte Carlo techniques [25]; its MSE is lower than that of RFF, but still higher than that of ORF and SORF. The order of time cost for a fixed $D$ is

SORF $\approx$ other fast kernel approximations [13, 28] < ORF = QMC [25] = RFF [19].

Remarkably, SORF has both better computational efficiency and better kernel approximation quality than the other methods.

We also apply ORF and SORF to classification tasks. Table 2 shows classification accuracy for the different kernel approximation techniques with a (linear) SVM classifier. SORF is competitive with or better than RFF, with greatly reduced time and space costs.

The Role of σ. A very small $\sigma$ leads to overfitting, and a very large $\sigma$ provides no discriminative power for classification. Throughout the experiments, $\sigma$ for each dataset is chosen to be the mean distance to the 50th-nearest neighbor, which empirically yields good classification results [28] (a sketch of this heuristic is given below). As shown in Section 3, the relative improvement over RFF is positively correlated with $\sigma$. Figures 5(a)(b) verify this on the mnist dataset. Notice that the proposed methods (ORF and SORF) consistently improve over RFF.
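The 50th-nearest-neighbor rule mentioned above can be implemented in a few lines; the sketch below is our own (the helper name `sigma_50nn` is hypothetical) and assumes the dataset is small enough for a brute-force distance computation:

```python
import numpy as np

def sigma_50nn(X, n_samples=1000, k=50, seed=0):
    """Mean distance from sampled points to their k-th nearest neighbor (k = 50),
    the rule of thumb used for choosing the Gaussian kernel bandwidth sigma."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=min(n_samples, X.shape[0]), replace=False)
    S = X[idx]
    # Brute-force squared distances from the sampled points to the whole dataset;
    # fine for moderate-sized datasets.
    d2 = np.square(S[:, None, :] - X[None, :, :]).sum(-1)
    # Index k (not k - 1) skips the sampled point itself, which sits at distance 0.
    kth = np.sort(np.sqrt(d2), axis=1)[:, k]
    return float(kth.mean())
```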
Simplifying SORF. The SORF transformation consists of three Hadamard-Diagonal blocks. A natural question is whether fewer computations and less randomness can achieve similar empirical performance. Figure 5(c) shows that reducing the number of blocks to two (HDHD) gives similar performance, while reducing to one block (HD) leads to large error.

Figure 5: (a)(b) MSE on mnist with different $\sigma$: (a) $\sigma = 0.5 \times$ the 50NN distance, (b) $\sigma = 2 \times$ the 50NN distance, comparing RFF, ORF, and SORF. (c) Effect of using less randomness on mnist: HDHDHD is the proposed SORF method, HDHD reduces the number of Hadamard-Diagonal blocks to two, and HD uses only one such block.

6 Analysis and General Applicability of the Hadamard-Diagonal Structure

We provide theoretical discussions of SORF in this section. We first show that for large $d$, SORF gives a nearly unbiased estimate of the Gaussian kernel.

Theorem 3. (Appendix C) Let $K_{\mathrm{SORF}}(x, y)$ be the approximate kernel computed with the linear transformation matrix $\frac{\sqrt{d}}{\sigma} H D_1 H D_2 H D_3$. Let $z = \|x - y\|/\sigma$. Then
$$\left|E(K_{\mathrm{SORF}}(x, y)) - e^{-z^2/2}\right| \leq \frac{6z}{\sqrt{d}}.$$

Even though SORF is nearly unbiased, proving tight variance and concentration guarantees analogous to those of ORF remains an open question. The following discussion provides a sketch in that direction. We first state a simple property of RFF.

Lemma 3. Let $W$ be a random Gaussian matrix as in RFF. For a given $\mathbf{z}$, the distribution of $W\mathbf{z}$ is $N(0, \|\mathbf{z}\|^2 I_d)$.

Note that $W\mathbf{z}$ in RFF can thus be written as $Rg$, where $R$ is a scaled orthogonal matrix whose rows each have norm $\|\mathbf{z}\|_2$ and $g$ is distributed according to $N(0, I_d)$; the distribution of $Rg$ is $N(0, \|\mathbf{z}\|^2 I_d)$, identical to that of $W\mathbf{z}$. The concentration results for RFF use the fact that the projections of a Gaussian vector $g$ onto the orthogonal directions given by $R$ are independent. We show that $\sqrt{d}\, H D_1 H D_2 H D_3\, \mathbf{z}$ has similar properties. In particular, it can be written as $\tilde{R}\tilde{g}$, where the rows of $\tilde{R}$ are near-orthogonal (with high probability) and have norm $\|\mathbf{z}\|_2$, and the vector $\tilde{g}$ is close to Gaussian ($\tilde{g}$ has independent sub-Gaussian entries); hence the projections behave near-independently. Specifically, $\tilde{g} = \mathrm{vec}(D_1)$ (the vector of diagonal entries of $D_1$), and $\tilde{R}$ is a function of $D_2$, $D_3$, and $\mathbf{z}$.

Theorem 4. (Appendix D) For a given $\mathbf{z}$, there exists an $\tilde{R}$ (a function of $D_2$, $D_3$, and $\mathbf{z}$) such that $\sqrt{d}\, H D_1 H D_2 H D_3\, \mathbf{z} = \tilde{R}\,\mathrm{vec}(D_1)$. Each row of $\tilde{R}$ has norm $\|\mathbf{z}\|_2$, and for any $t \geq 1/d$, with probability at least $1 - d\, e^{-c\, t^{2/3} d^{1/3}}$, the inner product between any two rows of $\tilde{R}$ is at most $t\,\|\mathbf{z}\|_2^2$, where $c$ is a constant.

The above result can also be applied to settings beyond kernel approximation. In the appendix, we show empirically that the same scheme can be successfully applied to angle estimation, where the nonlinear map is the non-smooth $\mathrm{sign}(\cdot)$ function [4]. We note that the $H D_1 H D_2 H D_3$ structure has also recently been used in fast cross-polytope LSH [2, 12, 6].
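To make the decomposition in Theorem 4 concrete, here is a small numerical check. It is our own sketch and assumes the explicit form $\tilde{R} = \sqrt{d}\, H\, \mathrm{diag}(H D_2 H D_3 \mathbf{z})$, which follows from $D_1 v = \mathrm{diag}(v)\, \mathrm{vec}(D_1)$; the paper's Appendix D may parameterize $\tilde{R}$ differently:

```python
import numpy as np
from scipy.linalg import hadamard

d = 256
rng = np.random.default_rng(0)
H = hadamard(d) / np.sqrt(d)                         # normalized Walsh-Hadamard matrix
d1, d2, d3 = rng.choice([-1.0, 1.0], size=(3, d))    # diagonals of D1, D2, D3
z = rng.standard_normal(d)

# Left-hand side of Theorem 4: sqrt(d) H D1 H D2 H D3 z.
lhs = np.sqrt(d) * H @ (d1 * (H @ (d2 * (H @ (d3 * z)))))

# Candidate R~ = sqrt(d) H diag(v) with v = H D2 H D3 z (a function of D2, D3, z only).
v = H @ (d2 * (H @ (d3 * z)))
R = np.sqrt(d) * H * v[None, :]

print(np.allclose(lhs, R @ d1))                      # R~ vec(D1) reproduces the transform
print(np.allclose(np.linalg.norm(R, axis=1), np.linalg.norm(z)))  # rows have norm ||z||
# Row inner products (relative to ||z||^2) concentrate near 0 as d grows.
G = (R @ R.T) / np.linalg.norm(z) ** 2
off_diag = G - np.eye(d)
print(np.abs(off_diag).mean(), np.abs(off_diag).max())
```

The shrinking off-diagonal inner products are the near-orthogonality that Theorem 4 quantifies.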
7 Conclusions

We have demonstrated that imposing orthogonality on the transformation matrix can greatly reduce the kernel approximation MSE of Random Fourier Features when approximating Gaussian kernels. We further proposed a type of structured orthogonal matrix with substantially lower computation and memory costs. We provided theoretical insights indicating that the Hadamard-Diagonal block structure can generally be used in place of random Gaussian matrices in a broader range of applications. Our method can also be generalized to other types of kernels, such as general shift-invariant kernels and polynomial kernels, based on Schoenberg's characterization as in [18].

References

[1] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In STOC, 2006.
[2] A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt. Practical and optimal LSH for angular distance. In NIPS, 2015.
[3] S. Bochner. Harmonic Analysis and the Theory of Probability. Dover Publications, 1955.
[4] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[5] Y. Cheng, F. X. Yu, R. S. Feris, S. Kumar, A. Choudhary, and S.-F. Chang. An exploration of parameter redundancy in deep networks with circulant projections. In ICCV, 2015.
[6] K. Choromanski, F. Fagan, C. Gouy-Pailler, A. Morvan, T. Sarlos, and J. Atif. TripleSpin: a generic compact paradigm for fast machine learning computations. arXiv, 2016.
[7] K. Choromanski and V. Sindhwani. Recycling randomness with structure for sublinear time kernel expansions. ICML, 2015.
[8] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[9] B. J. Fino and V. R. Algazi. Unified matrix treatment of the fast Walsh-Hadamard transform. IEEE Transactions on Computers, (11):1142-1146, 1976.
[10] T. Joachims. Training linear SVMs in linear time. In KDD, 2006.
[11] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
[12] C. Kennedy and R. Ward. Fast cross-polytope locality-sensitive hashing. arXiv, 2016.
[13] Q. Le, T. Sarlós, and A. Smola. Fastfood: approximating kernel expansions in loglinear time. In ICML, 2013.
[14] F. Li, C. Ionescu, and C. Sminchisescu. Random Fourier approximations for skewed multiplicative histogram kernels. Pattern Recognition, pages 262-271, 2010.
[15] S. Maji and A. C. Berg. Max-margin additive classifiers for detection. In ICCV, 2009.
[16] R. J. Muirhead. Aspects of Multivariate Statistical Theory, volume 197. John Wiley & Sons, 2009.
[17] H. Niederreiter. Quasi-Monte Carlo Methods. Wiley Online Library, 2010.
[18] J. Pennington, F. Yu, and S. Kumar. Spherical random features for polynomial kernels. In NIPS, 2015.
[19] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[20] A. Rudi, R. Camoriano, and L. Rosasco. Generalization properties of learning with random features. arXiv:1602.04474, 2016.
[21] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3-30, 2011.
[22] V. Sreekanth, A. Vedaldi, A. Zisserman, and C. Jawahar. Generalized RBF feature maps for efficient detection. In BMVC, 2010.
[23] B. Sriperumbudur and Z. Szabó. Optimal rates for random Fourier features. In NIPS, 2015.
[24] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480-492, 2012.
[25] J. Yang, V. Sindhwani, H. Avron, and M. Mahoney. Quasi-Monte Carlo feature maps for shift-invariant kernels. In ICML, 2014.
[26] T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS, 2012.
[27] F. X. Yu, S. Kumar, Y. Gong, and S.-F. Chang. Circulant binary embedding. In ICML, 2014.
[28] F. X. Yu, S. Kumar, H. Rowley, and S.-F. Chang. Compact nonlinear maps and circulant extensions. arXiv:1503.03893, 2015.
[29] X. Zhang, F. X. Yu, R. Guo, S. Kumar, S. Wang, and S.-F. Chang. Fast orthogonal projection based on Kronecker product. In ICCV, 2015.