# Subspace Embeddings for the Polynomial Kernel

Haim Avron (IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, haimav@us.ibm.com)
Huy L. Nguyễn (Simons Institute, UC Berkeley, Berkeley, CA 94720, hlnguyen@cs.princeton.edu)
David P. Woodruff (IBM Almaden Research Center, San Jose, CA 95120, dpwoodru@us.ibm.com)

Abstract

Sketching is a powerful dimensionality reduction tool for accelerating statistical learning algorithms. However, its applicability has been limited to a certain extent, since the crucial ingredient, the so-called oblivious subspace embedding, can only be applied to data spaces with an explicit representation as the column span or row span of a matrix, while in many settings learning is done in a high-dimensional space implicitly defined by the data matrix via a kernel transformation. We propose the first fast oblivious subspace embeddings that are able to embed a space induced by a non-linear kernel without explicitly mapping the data to the high-dimensional space. In particular, we propose an embedding for mappings induced by the polynomial kernel. Using the subspace embeddings, we obtain the fastest known algorithms for computing an implicit low rank approximation of the higher-dimensional mapping of the data matrix, and for computing an approximate kernel PCA of the data, as well as doing approximate kernel principal component regression.

1 Introduction

Sketching has emerged as a powerful dimensionality reduction technique for accelerating statistical learning techniques such as $\ell_p$-regression, low rank approximation, and principal component analysis (PCA) [12, 5, 14]. For natural settings of parameters, this technique has led to the first asymptotically optimal algorithms for a number of these problems, often providing considerable speedups over exact algorithms. Behind many of these remarkable algorithms is a mathematical apparatus known as an oblivious subspace embedding (OSE). An OSE is a data-independent random transform which is, with high probability, an approximate isometry over the embedded subspace, i.e., $\|Sx\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$, where $S$ is the OSE, $V$ is the embedded subspace, and $\|\cdot\|$ is some norm of interest. For the OSE to be useful in applications, it is crucial that applying it to a vector or to a collection of vectors (a matrix) can be done faster than the intended downstream use.

So far, all OSEs proposed in the literature are for embedding subspaces that have a representation as the column space or row space of an explicitly provided matrix, or close variants of it that admit a fast multiplication given an explicit representation (e.g., [1]). This is quite unsatisfactory in many statistical learning settings. In many cases the input may be described by a moderately sized n-by-d sample-by-feature matrix A, but the actual learning is done in a much higher (possibly infinite) dimensional space, by mapping each row of A to a high-dimensional feature space. Using the kernel trick, one can access the high-dimensional mapped data points through an inner product space, and thus avoid computing the mapping explicitly. This enables learning in the high-dimensional space even if explicitly computing the mapping (if at all possible) is prohibitive. In such a setting, computing the explicit mapping just to compute an OSE is usually unreasonable, if not impossible (e.g., if the feature space is infinite-dimensional).
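To make the kernel trick concrete, here is a minimal numpy illustration (our own sketch, not part of the paper): for the homogeneous degree-q polynomial kernel, the inner product of the explicit $d^q$-dimensional expansions equals $\langle x, y\rangle^q$, so it can be evaluated in O(d) time without ever forming the expansion. All dimensions below are arbitrary toy choices.

```python
import numpy as np

def phi(x, q):
    """Explicit degree-q polynomial feature map: the q-fold tensor
    (Kronecker) product of x with itself, flattened to length d**q."""
    out = x
    for _ in range(q - 1):
        out = np.kron(out, x)
    return out

rng = np.random.default_rng(0)
d, q = 5, 3
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Kernel trick: the inner product in the d**q-dimensional feature space
# equals (<x, y>)**q, so it never has to be formed explicitly.
lhs = phi(x, q) @ phi(y, q)
rhs = (x @ y) ** q
print(np.isclose(lhs, rhs))  # True
```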
The main motivation for this paper is the following question: is it possible to design OSEs that operate on the high-dimensional space without explicitly mapping the data to that space? We propose the first fast oblivious subspace embeddings for spaces induced by a non-linear kernel without explicitly mapping the data to the high-dimensional space. In particular, we propose an OSE for mappings induced by the polynomial kernel. We then show that the OSE can be used to obtain faster algorithms for the polynomial kernel. Namely, we obtain faster algorithms for approximate kernel PCA and principal component regression. We now elaborate on these contributions.

Subspace Embedding for Polynomial Kernel Maps. Let $k(x, y) = (\langle x, y\rangle + c)^q$ for some constant $c \ge 0$ and positive integer $q$. This is the degree-q polynomial kernel function. Without loss of generality we assume that $c = 0$, since a non-zero $c$ can be handled by adding a coordinate of value $\sqrt{c}$ to all of the data points. Let $\phi(x)$ denote the function that maps a d-dimensional vector $x$ to the $d^q$-dimensional vector formed by taking the product of all subsets of $q$ coordinates of $x$, i.e., $\phi(v) = v \otimes \cdots \otimes v$ ($q$-fold tensor product), and let $\phi(A)$ denote the application of $\phi$ to the rows of $A$. $\phi$ is the map that corresponds to the polynomial kernel, that is, $k(x, y) = \langle\phi(x), \phi(y)\rangle$, so learning with the data matrix $A$ and the polynomial kernel corresponds to using $\phi(A)$ instead of $A$ in a method that uses linear modeling. We describe a distribution over $d^q \times O(3^q n^2/\epsilon^2)$ sketching matrices $S$ so that the mapping $\phi(A) \cdot S$ can be computed in $O(\mathrm{nnz}(A)\,q) + \mathrm{poly}(3^q n/\epsilon)$ time, where $\mathrm{nnz}(A)$ denotes the number of non-zero entries of $A$. We show that with constant probability arbitrarily close to 1, simultaneously for all n-dimensional vectors $z$, $\|z^\top \phi(A) S\|_2 = (1 \pm \epsilon)\|z^\top \phi(A)\|_2$, that is, the entire row space of $\phi(A)$ is approximately preserved. Additionally, the distribution does not depend on $A$, so it defines an OSE. It is important to note that while the literature has proposed transformations for non-linear kernels that generate an approximate isometry (e.g., kernel PCA), or methods that are data independent (like random Fourier features [17]), no previous method had both properties, and thus none constitutes an OSE. Both properties are crucial for the algorithmic applications we propose (which we discuss next).

Applications: Approximate Kernel PCA, PCR. We say an $n \times k$ matrix $V$ with orthonormal columns spans a rank-k $(1 + \epsilon)$-approximation of an $n \times d$ matrix $A$ if $\|A - V V^\top A\|_F \le (1 + \epsilon)\|A - A_k\|_F$, where $\|A\|_F$ is the Frobenius norm of $A$ and $A_k = \arg\min_{X \text{ of rank } k} \|A - X\|_F$. We state our results for constant $q$. In $O(\mathrm{nnz}(A)) + n\,\mathrm{poly}(k/\epsilon)$ time an $n \times k$ matrix $V$ with orthonormal columns can be computed, for which $\|\phi(A) - V V^\top \phi(A)\|_F \le (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F$, where $[\phi(A)]_k$ denotes the best rank-k approximation to $\phi(A)$. The k-dimensional subspace $V$ of $\mathbb{R}^n$ can be thought of as an approximation to the top $k$ left singular vectors of $\phi(A)$. The only alternative algorithm we are aware of that does not take time at least $d^q$ would be to first compute the Gram matrix $\phi(A)\,\phi(A)^\top$ in $O(n^2 d)$ time, and then compute a low rank approximation; while this computation can also exploit sparsity in $A$, it is much slower since the Gram matrix is often dense and requires $\Omega(n^2)$ time just to write down. Given $V$, we show how to obtain a low rank approximation to $\phi(A)$. Our algorithm computes three matrices $V$, $U$, and $R$, for which $\|\phi(A) - V \cdot U \cdot \phi(R)\|_F \le (1 + \epsilon)\|\phi(A) - [\phi(A)]_k\|_F$.
This representation is useful, since given a point $y \in \mathbb{R}^d$, we can compute $\phi(R) \cdot \phi(y)$ quickly using the kernel trick. The total time to compute the low rank approximation is $O(\mathrm{nnz}(A)) + (n + d)\,\mathrm{poly}(k/\epsilon)$. This is considerably faster than standard kernel PCA, which first computes the Gram matrix of $\phi(A)$. We also show how the subspace $V$ can be used to regularize and speed up various learning algorithms with the polynomial kernel. For example, we can use the subspace $V$ to solve regression problems of the form $\min_x \|Vx - b\|_2$, an approximate form of principal component regression [8]. This can serve as a form of regularization, which is required as the problem $\min_x \|\phi(A)x - b\|_2$ is usually underdetermined. A popular alternative form of regularization is kernel ridge regression, which requires $O(n^2 d)$ operations. As $\mathrm{nnz}(A) \le nd$, our method is again faster.

Our Techniques and Related Work. Pagh recently introduced the TENSORSKETCH algorithm [14], which combines the earlier COUNTSKETCH of Charikar et al. [3] with the Fast Fourier Transform (FFT) in a clever way. Pagh originally applied TENSORSKETCH to compressed matrix multiplication. Pham and Pagh then showed that TENSORSKETCH can also be used for statistical learning with the polynomial kernel [16]. However, it was unclear whether TENSORSKETCH can be used to approximately preserve entire subspaces of points (and thus be used as an OSE). Indeed, Pham and Pagh show that a fixed point $v \in \mathbb{R}^d$ has the property that, for the TENSORSKETCH sketching matrix $S$, $\|\phi(v)\,S\|_2 = (1 \pm \epsilon)\|\phi(v)\|_2$ with constant probability. To obtain a high probability bound using their results, the authors take a median of several independent sketches. Given a high probability bound, one can use a net argument to show that the sketch is correct for all vectors in an n-dimensional subspace of $\mathbb{R}^{d^q}$. The median operation results in a non-convex embedding, however, and it is not clear how to efficiently solve optimization problems in the sketch space with such an embedding. Moreover, since $n$ independent sketches are needed for probability $1 - \exp(-n)$, the running time would be at least $n \cdot \mathrm{nnz}(A)$, whereas we seek only $\mathrm{nnz}(A)$ time.

Recently, Clarkson and Woodruff [5] showed that COUNTSKETCH can be used to provide a subspace embedding, that is, simultaneously for all $v \in V$, $\|\phi(v)\,S\|_2 = (1 \pm \epsilon)\|\phi(v)\|_2$. TENSORSKETCH can be seen as a very restricted form of COUNTSKETCH, where the additional restrictions enable its fast running time on inputs which are tensor products. In particular, the hash functions in TENSORSKETCH are only 3-wise independent. Nelson and Nguyen [13] showed that COUNTSKETCH still provides a subspace embedding if the entries are chosen from a 4-wise independent distribution. We significantly extend their analysis, and in particular show that 3-wise independence suffices for COUNTSKETCH to provide an OSE, and that TENSORSKETCH indeed provides an OSE. We stress that all previous work on sketching the polynomial kernel suffers from the drawback described above: it provides no provable guarantees for preserving an entire subspace, which is needed, e.g., for low rank approximation.
This is true even of the sketching methods for polynomial kernels that do not use TENSORSKETCH [10, 7], as those methods only provide tail bounds for preserving the norm of a fixed vector, and have the aforementioned problems when extended to a subspace, i.e., boosting the success probability enough to union bound over the vectors of a net for the subspace would require increasing the running time by a factor equal to the dimension of the subspace.

After we show that TENSORSKETCH is an OSE, we need to show how to use it in applications. An unusual aspect is that for a TENSORSKETCH matrix $S$, we can compute $\phi(A) \cdot S$ very efficiently, as shown by Pagh [14], but no comparably efficient way of computing $S \cdot \phi(A)$ is known; indeed, for degree-2 polynomial kernels this can be shown to be as hard as general rectangular matrix multiplication. In general, even writing down $S \cdot \phi(A)$ would take a prohibitive $d^q$ amount of time. We thus need to design algorithms which only sketch on one side of $\phi(A)$.

Another line of research related to ours is that on random feature maps, pioneered in the seminal paper of Rahimi and Recht [17] and extended by several subsequent papers, including a recent fast variant [11]. The goal in this line of research is to construct randomized feature maps $\Psi(\cdot)$ so that the Euclidean inner product $\langle\Psi(u), \Psi(v)\rangle$ closely approximates the value of $k(u, v)$, where $k$ is the kernel; the mapping $\Psi(\cdot)$ depends on the kernel. Theoretical analysis has so far focused on showing that $\langle\Psi(u), \Psi(v)\rangle$ is indeed close to $k(u, v)$. This is also the kind of approach that Pham and Pagh [16] use to analyze TENSORSKETCH. The problem with this kind of analysis is that it is hard to relate it to downstream metrics like generalization error, and thus, in a sense, the algorithm remains a heuristic. In contrast, our approach based on OSEs provides a mathematical framework for analyzing the mappings, for reasoning about their downstream use, and for utilizing various tools from numerical linear algebra in conjunction with them, as we show in this paper. We also note that, contrary to random feature maps, TENSORSKETCH is attuned to taking advantage of possible input sparsity; e.g., the method of Le et al. [11] requires computing the Walsh-Hadamard transform, whose running time is independent of the sparsity.

2 Background: COUNTSKETCH and TENSORSKETCH

We start by describing the COUNTSKETCH transform [3]. Let $m$ be the target dimension. When applied to d-dimensional vectors, the transform is specified by a 2-wise independent hash function $h : [d] \to [m]$ and a 2-wise independent sign function $s : [d] \to \{-1, +1\}$. When applied to $v$, the value at coordinate $i$ of the output, $i = 1, 2, \ldots, m$, is $\sum_{j : h(j) = i} s(j)\, v_j$. Note that COUNTSKETCH can be represented as an $m \times d$ matrix in which the j-th column contains a single non-zero entry $s(j)$ in the $h(j)$-th row.

We now describe the TENSORSKETCH transform [14]. Suppose we are given a point $v \in \mathbb{R}^d$, and so $\phi(v) \in \mathbb{R}^{d^q}$, and the target dimension is again $m$. The transform is specified using $q$ 3-wise independent hash functions $h_1, \ldots, h_q : [d] \to [m]$, and $q$ 4-wise independent sign functions $s_1, \ldots, s_q : [d] \to \{+1, -1\}$. TENSORSKETCH applied to $v$ is then COUNTSKETCH applied to $\phi(v)$ with hash function $H : [d^q] \to [m]$ and sign function $S : [d^q] \to \{+1, -1\}$ defined as follows:
$$H(i_1, \ldots, i_q) = h_1(i_1) + h_2(i_2) + \cdots + h_q(i_q) \bmod m,$$
$$S(i_1, \ldots, i_q) = s_1(i_1) \cdot s_2(i_2) \cdots s_q(i_q).$$
It is well-known that if $H$ is constructed this way, then it is 3-wise independent [2, 15]. Unlike the work of Pham and Pagh [16], which only used that $H$ is 2-wise independent, our analysis needs this stronger property of $H$.
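For concreteness, here is a small numpy sketch (ours, not the authors' code) that mirrors the definitions above: a CountSketch routine, and a brute-force TensorSketch that explicitly forms $\phi(v)$ together with the composite hash $H$ and sign $S$. It is only meant to illustrate the construction; for simplicity it draws fully random hash and sign functions instead of the 3-wise and 4-wise independent families used in the analysis, and it is far too slow for realistic $d$ and $q$.

```python
import numpy as np

rng = np.random.default_rng(1)

def countsketch(v, h, s, m):
    """CountSketch of v: coordinate i of the output is sum_{j: h(j)=i} s(j) v_j."""
    out = np.zeros(m)
    np.add.at(out, h, s * v)
    return out

def tensorsketch_explicit(v, hs, ss, m):
    """TensorSketch by brute force: CountSketch applied to the explicit phi(v),
    with H(i_1..i_q) = sum_j h_j(i_j) mod m and S(i_1..i_q) = prod_j s_j(i_j)."""
    q = len(hs)
    phi_v, H, S = np.ones(1), np.zeros(1, dtype=int), np.ones(1)
    for j in range(q):
        phi_v = np.kron(phi_v, v)
        H = (H[:, None] + hs[j][None, :]).ravel()  # index arithmetic mirrors the kron ordering
        S = np.kron(S, ss[j])
    return countsketch(phi_v, H % m, S, m)

d, q, m = 4, 3, 16
v = rng.standard_normal(d)
# Fully random h_j and s_j stand in for the 3-wise/4-wise independent families.
hs = [rng.integers(0, m, size=d) for _ in range(q)]
ss = [rng.choice([-1.0, 1.0], size=d) for _ in range(q)]
sk = tensorsketch_explicit(v, hs, ss, m)
print(np.linalg.norm(sk), np.linalg.norm(v) ** q)  # E[||sk||^2] = ||phi(v)||^2 = ||v||^(2q)
```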
The TENSORSKETCH transform can be applied to $v$ without computing $\phi(v)$, as follows. First, for $\ell = 1, 2, \ldots, q$, compute the polynomial
$$p_\ell(x) = \sum_{i=0}^{m-1} \Big(\sum_{j : h_\ell(j) = i} v_j\, s_\ell(j)\Big)\, x^i.$$
A calculation [14] shows that the coefficient of $x^i$ in
$$\prod_{\ell=1}^{q} p_\ell(x) \bmod (x^m - 1)$$
is $\sum_{(j_1, \ldots, j_q) : H(j_1, \ldots, j_q) = i} v_{j_1} \cdots v_{j_q}\, S(j_1, \ldots, j_q)$; that is, the coefficients of the product of the $q$ polynomials modulo $(x^m - 1)$ form the value of TENSORSKETCH$(v)$. Pagh observed that this product of polynomials can be computed in $O(qm\log m)$ time using the Fast Fourier Transform. As it takes $O(q\,\mathrm{nnz}(v))$ time to form the $q$ polynomials, the overall time to compute TENSORSKETCH$(v)$ is $O(q(\mathrm{nnz}(v) + m\log m))$.
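The FFT computation can be prototyped directly. The sketch below (again ours, with fully random hash and sign functions standing in for the k-wise independent families) computes TENSORSKETCH(v) by CountSketching v once per hash pair and multiplying the resulting polynomials modulo $x^m - 1$ via numpy's FFT; with the same hs and ss it reproduces the output of the brute-force version in the previous snippet.

```python
import numpy as np

def tensorsketch_fft(v, hs, ss, m):
    """TensorSketch(v) without forming phi(v): CountSketch v with each (h_l, s_l),
    then multiply the q degree-(m-1) polynomials mod (x^m - 1) via the FFT."""
    prod = np.ones(m, dtype=complex)
    for h, s in zip(hs, ss):
        p = np.zeros(m)
        np.add.at(p, h, s * v)   # polynomial p_l: coefficient i is sum_{h_l(j)=i} s_l(j) v_j
        prod *= np.fft.fft(p)    # pointwise product of DFTs <=> convolution mod (x^m - 1)
    return np.real(np.fft.ifft(prod))

rng = np.random.default_rng(1)
d, q, m = 4, 3, 16
v = rng.standard_normal(d)
hs = [rng.integers(0, m, size=d) for _ in range(q)]
ss = [rng.choice([-1.0, 1.0], size=d) for _ in range(q)]
# Matches the brute-force CountSketch of phi(v) when the same hs, ss are used.
print(tensorsketch_fft(v, hs, ss, m))
```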
3 TENSORSKETCH is an Oblivious Subspace Embedding

Let $S$ be the $d^q \times m$ matrix such that TENSORSKETCH$(v)$ is $\phi(v)\,S$ for a randomly selected TENSORSKETCH. Notice that $S$ is a random matrix. In the rest of the paper, we refer to such a matrix as a TENSORSKETCH matrix with an appropriate number of columns, i.e., the number of hash buckets. We will show that $S$ is an oblivious subspace embedding for subspaces in $\mathbb{R}^{d^q}$ for appropriate values of $m$. Notice that $S$ has exactly one non-zero entry per row. The index of the non-zero entry in row $(i_1, \ldots, i_q)$ is $H(i_1, \ldots, i_q) = \sum_{j=1}^{q} h_j(i_j) \bmod m$. Let $\delta_{a,b}$ be the indicator random variable of whether $S_{a,b}$ is non-zero. The sign of the non-zero entry in row $(i_1, \ldots, i_q)$ is $S(i_1, \ldots, i_q) = \prod_{j=1}^{q} s_j(i_j)$. Our main result is that the embedding matrix $S$ of TENSORSKETCH can be used for approximate matrix products and is a subspace embedding (OSE).

Theorem 1 (Main Theorem). Let $S$ be the $d^q \times m$ matrix such that TENSORSKETCH$(v)$ is $\phi(v)S$ for a randomly selected TENSORSKETCH. The matrix $S$ satisfies the following two properties.

1. (Approximate Matrix Product:) Let $A$ and $B$ be matrices with $d^q$ rows. For $m \ge (2 + 3^q)/(\epsilon^2\delta)$, we have
$$\Pr\big[\|A^\top S S^\top B - A^\top B\|_F^2 \le \epsilon^2 \|A\|_F^2 \|B\|_F^2\big] \ge 1 - \delta.$$
2. (Subspace Embedding:) Consider a fixed k-dimensional subspace $V$. If $m \ge k^2(2 + 3^q)/(\epsilon^2\delta)$, then with probability at least $1 - \delta$, $\|xS\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$.

Algorithm 1: k-Space
1: Input: $A \in \mathbb{R}^{n \times d}$, $\epsilon \in (0, 1]$, integer $k$.
2: Output: $V \in \mathbb{R}^{n \times k}$ with orthonormal columns which spans a rank-k $(1 + \epsilon)$-approximation to $\phi(A)$.
3: Set the parameters $m = \Theta(3^q k^2 + k/\epsilon)$ and $r = \Theta(3^q m^2/\epsilon^2)$.
4: Let $S$ be a $d^q \times m$ TENSORSKETCH and $T$ be an independent $d^q \times r$ TENSORSKETCH.
5: Compute $\phi(A)\,S$ and $\phi(A)\,T$.
6: Let $U$ be an orthonormal basis for the column space of $\phi(A)\,S$.
7: Let $W$ be the $m \times k$ matrix containing the top $k$ left singular vectors of $U^\top \phi(A)\,T$.
8: Output $V = UW$.

We establish the theorem via two lemmas. The first lemma proves the approximate matrix product property via a careful second moment analysis. Due to space constraints, a proof is included only in the supplementary material version of the paper.

Lemma 2. Let $A$ and $B$ be matrices with $d^q$ rows. For $m \ge (2 + 3^q)/(\epsilon^2\delta)$, we have
$$\Pr\big[\|A^\top S S^\top B - A^\top B\|_F^2 \le \epsilon^2 \|A\|_F^2 \|B\|_F^2\big] \ge 1 - \delta.$$

The second lemma proves that the subspace embedding property follows from the approximate matrix product property.

Lemma 3. Consider a fixed k-dimensional subspace $V$. If $m \ge k^2(2 + 3^q)/(\epsilon^2\delta)$, then with probability at least $1 - \delta$, $\|xS\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$.

Proof. Let $B$ be a $d^q \times k$ matrix whose columns form an orthonormal basis of $V$. Thus, we have $B^\top B = I_k$ and $\|B\|_F^2 = k$. The condition that $\|xS\| = (1 \pm \epsilon)\|x\|$ simultaneously for all $x \in V$ is equivalent to the condition that the singular values of $B^\top S$ are bounded by $1 \pm \epsilon$. By Lemma 2, for $m \ge (2 + 3^q)/((\epsilon/k)^2\delta)$, with probability at least $1 - \delta$, we have
$$\|B^\top S S^\top B - B^\top B\|_F^2 \le (\epsilon/k)^2 \|B\|_F^4 = \epsilon^2.$$
Thus, we have $\|B^\top S S^\top B - I_k\|_2 \le \|B^\top S S^\top B - I_k\|_F \le \epsilon$. In other words, the squared singular values of $B^\top S$ are bounded by $1 \pm \epsilon$, implying that the singular values of $B^\top S$ are also bounded by $1 \pm \epsilon$. Note that $\|A\|_2$ for a matrix $A$ denotes its operator norm.

4 Applications

4.1 Approximate Kernel PCA and Low Rank Approximation

We say an $n \times k$ matrix $V$ with orthonormal columns spans a rank-k $(1 + \epsilon)$-approximation of an $n \times d$ matrix $A$ if $\|A - V V^\top A\|_F \le (1 + \epsilon)\|A - A_k\|_F$. Algorithm k-Space (Algorithm 1) finds an $n \times k$ matrix $V$ which spans a rank-k $(1 + \epsilon)$-approximation of $\phi(A)$. Before proving the correctness of the algorithm, we start with two key lemmas. Proofs are included only in the supplementary material version of the paper.

Lemma 4. Let $S \in \mathbb{R}^{d^q \times m}$ be a randomly chosen TENSORSKETCH matrix with $m = \Omega(3^q k^2 + k/\epsilon)$. Let $UU^\top$ be the $n \times n$ projection matrix onto the column space of $\phi(A)\,S$. Then if $[U^\top \phi(A)]_k$ is the best rank-k approximation to the matrix $U^\top \phi(A)$, we have
$$\|U[U^\top \phi(A)]_k - \phi(A)\|_F \le (1 + O(\epsilon))\,\|\phi(A) - [\phi(A)]_k\|_F.$$

Lemma 5. Let $UU^\top$ be as in Lemma 4. Let $T \in \mathbb{R}^{d^q \times r}$ be a randomly chosen TENSORSKETCH matrix with $r = O(3^q m^2/\epsilon^2)$, where $m = \Omega(3^q k^2 + k/\epsilon)$. Suppose $W$ is the $m \times k$ matrix whose columns are the top $k$ left singular vectors of $U^\top \phi(A)\,T$. Then,
$$\|UWW^\top U^\top \phi(A) - \phi(A)\|_F \le (1 + \epsilon)\,\|\phi(A) - [\phi(A)]_k\|_F.$$

Theorem 6 (Polynomial Kernel Rank-k Space). For the polynomial kernel of degree $q$, in $O(\mathrm{nnz}(A)\,q) + n\,\mathrm{poly}(3^q k/\epsilon)$ time, Algorithm k-Space finds an $n \times k$ matrix $V$ which spans a rank-k $(1 + \epsilon)$-approximation of $\phi(A)$.

Proof. By Lemma 4 and Lemma 5, the output $V = UW$ spans a rank-k $(1 + \epsilon)$-approximation to $\phi(A)$. It only remains to argue the time complexity. The sketches $\phi(A)\,S$ and $\phi(A)\,T$ can be computed in $O(\mathrm{nnz}(A)\,q) + n\,\mathrm{poly}(3^q k/\epsilon)$ time. In $n\,\mathrm{poly}(3^q k/\epsilon)$ time, the matrix $U$ can be obtained from $\phi(A)\,S$ and the product $U^\top \phi(A)\,T$ can be computed. Given $U^\top \phi(A)\,T$, the matrix $W$ of top $k$ left singular vectors can be computed in $\mathrm{poly}(3^q k/\epsilon)$ time, and in $n\,\mathrm{poly}(3^q k/\epsilon)$ time the product $V = UW$ can be computed. Hence the overall time is $O(\mathrm{nnz}(A)\,q) + n\,\mathrm{poly}(3^q k/\epsilon)$, and the theorem follows.

We now show how to find a low rank approximation to $\phi(A)$. A proof is included in the supplementary material version of the paper.

Theorem 7 (Polynomial Kernel PCA and Low Rank Factorization). For the polynomial kernel of degree $q$, in $O(\mathrm{nnz}(A)\,q) + (n + d)\,\mathrm{poly}(3^q k/\epsilon)$ time, we can find an $n \times k$ matrix $V$, a $k \times \mathrm{poly}(k/\epsilon)$ matrix $U$, and a $\mathrm{poly}(k/\epsilon) \times d$ matrix $R$ for which
$$\|V \cdot U \cdot \phi(R) - \phi(A)\|_F \le (1 + \epsilon)\,\|\phi(A) - [\phi(A)]_k\|_F.$$
The success probability of the algorithm is at least .6, which can be amplified with independent repetition.

Note that Theorem 7 implies the rowspace of $\phi(R)$ contains a k-dimensional subspace $L$ with $d^q \times d^q$ projection matrix $LL^\top$ for which $\|\phi(A)LL^\top - \phi(A)\|_F \le (1 + \epsilon)\,\|\phi(A) - [\phi(A)]_k\|_F$, that is, $L$ provides an approximation to the space spanned by the top $k$ principal components of $\phi(A)$.
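To tie the pieces together, here is a toy end-to-end version of Algorithm 1 (k-Space), reusing tensorsketch_fft from the earlier snippet. The sketch sizes m and r below are hand-picked small constants rather than the $\Theta(3^q k^2 + k/\epsilon)$ and $\Theta(3^q m^2/\epsilon^2)$ the theorem requires, and the hash functions are fully random, so this illustrates the data flow rather than the formal guarantees.

```python
import numpy as np

def tensorsketch_matrix(A, q, m, rng):
    """Apply TensorSketch row-wise: returns phi(A) @ S for a fresh sketch S,
    using the FFT-based tensorsketch_fft from the previous snippet."""
    n, d = A.shape
    hs = [rng.integers(0, m, size=d) for _ in range(q)]
    ss = [rng.choice([-1.0, 1.0], size=d) for _ in range(q)]
    return np.vstack([tensorsketch_fft(A[i], hs, ss, m) for i in range(n)])

def k_space(A, q, k, m, r, rng):
    """Toy version of Algorithm 1 (k-Space): returns an n x k orthonormal V
    intended to span an approximate rank-k space of phi(A)."""
    PS = tensorsketch_matrix(A, q, m, rng)   # phi(A) S,  n x m
    PT = tensorsketch_matrix(A, q, r, rng)   # phi(A) T,  n x r (independent sketch)
    U, _ = np.linalg.qr(PS)                  # orthonormal basis of the column space of phi(A) S
    W = np.linalg.svd(U.T @ PT, full_matrices=False)[0][:, :k]  # top-k left singular vectors
    return U @ W                             # V = U W

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 6))
V = k_space(A, q=2, k=5, m=60, r=200, rng=rng)
print(V.shape)  # (200, 5)
```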
4.2 Regularizing Learning with the Polynomial Kernel

Consider learning with the polynomial kernel. Even if $d \ll n$, it might be that even for low values of $q$ we have $d^q \gg n$. This makes a number of learning algorithms underdetermined, and increases the chance of overfitting. The problem is even more severe if the input matrix $A$ has a lot of redundancy in it (noisy features). To address this, many learning algorithms add a regularizer, e.g., ridge terms. Here we propose to regularize by using rank-k approximations to the matrix (where $k$ is the regularization parameter that is controlled by the user). With the tools developed in the previous subsection, this not only serves as a regularization but also as a means of accelerating the learning. We now show two different methods that can be regularized using this approach.

4.2.1 Approximate Kernel Principal Component Regression

If $d^q > n$, then linear regression with $\phi(A)$ becomes underdetermined and exact fitting to the right hand side is possible, and in more than one way. One form of regularization is Principal Component Regression (PCR), which first uses PCA to project the data onto the principal components, and then continues with linear regression in this space. We now introduce the following approximate version of PCR.

Definition 8. In the Approximate Principal Component Regression problem (Approximate PCR), we are given an $n \times d$ matrix $A$ and an $n \times 1$ vector $b$, and the goal is to find a vector $x \in \mathbb{R}^k$ and an $n \times k$ matrix $V$ with orthonormal columns spanning a rank-k $(1 + \epsilon)$-approximation to $A$, for which $x = \arg\min_x \|Vx - b\|_2$.

Notice that if $A$ is a rank-k matrix, then Approximate PCR coincides with ordinary least squares regression with respect to the column space of $A$. While PCR would require solving the regression problem with respect to the top $k$ singular vectors of $A$, in general finding these $k$ vectors exactly results in unstable computation, and they cannot be found by an efficient linear sketch. This would occur, e.g., if the k-th singular value $\sigma_k$ of $A$ is very close (or equal) to $\sigma_{k+1}$. We therefore relax the definition to only require that the regression problem be solved with respect to some $k$ vectors which span a rank-k $(1 + \epsilon)$-approximation to $A$. The following is our main theorem for Approximate PCR.

Theorem 9 (Polynomial Kernel Approximate PCR). For the polynomial kernel of degree $q$, in $O(\mathrm{nnz}(A)\,q) + n\,\mathrm{poly}(3^q k/\epsilon)$ time one can solve the Approximate PCR problem, namely, one can output a vector $x \in \mathbb{R}^k$ and an $n \times k$ matrix $V$ with orthonormal columns spanning a rank-k $(1 + \epsilon)$-approximation to $\phi(A)$, for which $x = \arg\min_x \|Vx - b\|_2$.

Proof. Applying Theorem 6, we can find an $n \times k$ matrix $V$ with orthonormal columns spanning a rank-k $(1 + \epsilon)$-approximation to $\phi(A)$ in $O(\mathrm{nnz}(A)\,q) + n\,\mathrm{poly}(3^q k/\epsilon)$ time. At this point, one can solve the regression problem $\arg\min_x \|Vx - b\|_2$ exactly in $O(nk)$ time, since the minimizer is $x = V^\top b$.

4.2.2 Approximate Kernel Canonical Correlation Analysis

In Canonical Correlation Analysis (CCA) we are given two matrices $A$, $B$ and we wish to find directions in which the spaces spanned by their columns are correlated. Due to space constraints, details appear only in the supplementary material version of the paper.

5 Experiments

We report two sets of experiments whose goal is to demonstrate that the k-Space algorithm (Algorithm 1) is useful as a feature extraction algorithm. We use standard classification and regression datasets. In the first set of experiments, we compare ordinary $\ell_2$ regression to approximate principal component $\ell_2$ regression, where the approximate principal components are extracted using k-Space (we use RLSC for classification). Specifically, as explained in Section 4.2.1, we use k-Space to compute $V$ and then use regression on $V$ (for one dataset we also add an additional ridge regularization). To predict, we notice that $V = \phi(A) \cdot S \cdot R^{-1} \cdot W$, where $R$ is the R factor of $\phi(A) \cdot S$, so $S \cdot R^{-1} \cdot W$ defines a mapping to the approximate principal components. So, to predict on a matrix $A_t$, we first compute $\phi(A_t) \cdot S \cdot R^{-1} \cdot W$ (using TENSORSKETCH to compute $\phi(A_t) \cdot S$ fast) and then multiply by the coefficients found by the regression.
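The prediction pipeline just described can be illustrated as follows (a toy sketch reusing tensorsketch_fft from the earlier snippet; the synthetic data, dimensions, and absence of a ridge term are placeholder choices). The key points are that the same sketch S, i.e., the same hash and sign functions, must be reused on the test data, and that the map applied at prediction time is $S R^{-1} W$.

```python
import numpy as np

def make_sketch(d, q, m, rng):
    """Fix the hash/sign functions so the same sketch S can be re-applied to test data."""
    hs = [rng.integers(0, m, size=d) for _ in range(q)]
    ss = [rng.choice([-1.0, 1.0], size=d) for _ in range(q)]
    return lambda A: np.vstack([tensorsketch_fft(row, hs, ss, m) for row in A])

rng = np.random.default_rng(3)
n, d, q, k, m, r = 500, 6, 3, 10, 80, 240
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
At = rng.standard_normal((100, d))                    # "test" points

sketch_S, sketch_T = make_sketch(d, q, m, rng), make_sketch(d, q, r, rng)
PS = sketch_S(A)                                      # phi(A) S
Q, R = np.linalg.qr(PS)                               # phi(A) S = Q R (assumes full column rank)
W = np.linalg.svd(Q.T @ sketch_T(A), full_matrices=False)[0][:, :k]

V = Q @ W                                             # training features (approx. principal components)
coef = np.linalg.lstsq(V, b, rcond=None)[0]           # approximate PCR: equals V^T b (orthonormal columns)

features_test = sketch_S(At) @ np.linalg.solve(R, W)  # phi(A_t) S R^{-1} W
pred = features_test @ coef
print(pred.shape)                                     # (100,)
```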
In all the experiments, $\phi(\cdot)$ is defined using the kernel $k(u, v) = (u^\top v + 1)^3$. While k-Space is efficient and gives an embedding in time that is faster than explicitly expanding the feature map, or than using kernel PCA, there is still some non-negligible overhead in using it. Therefore, we also experimented with feature extraction using only a subset of the training set. Specifically, we first sample the dataset, and then use k-Space to compute the mapping $S \cdot R^{-1} \cdot W$. We apply this mapping to the entire dataset before doing regression. The results are reported in Table 1. Since k-Space is randomized, we report the mean and standard deviation of 5 runs. For all datasets, learning with the extracted features yields better generalization errors than learning with the original features. Extracting the features using only a sample of the training set results in only slightly worse generalization errors.

With regard to the MNIST dataset, we caution the reader not to compare the generalization results to the ones obtained using the polynomial kernel (as reported in the literature). In our experiments we do not use the polynomial kernel on the entire dataset, but rather use it to extract features (i.e., do principal component regularization) using only a subset of the examples (only 5,000 examples out of 60,000). One can expect worse results, but this is a more realistic strategy for very large datasets. On very large datasets it is typically unrealistic to use the polynomial kernel on the entire dataset, and approximation techniques, like the ones we suggest, are necessary.

We use a similar setup in the second set of experiments, now using linear SVM instead of regression (we run only on the classification datasets). The results are reported in Table 2. Although the gap is smaller, we see again that generally the extracted features lead to better generalization errors.

We remark that it is not our goal to show that k-Space is the best feature extraction algorithm for the classification algorithms we considered (RLSC and SVM), or that it is the fastest, but rather that it can be used to extract features of higher quality than the original ones. In fact, in our experiments, while for a fixed number of extracted features k-Space produces better features than simply using TENSORSKETCH, it is also more expensive in terms of time. If that additional time is used to do learning or prediction with TENSORSKETCH with more features, we overall get better generalization error (we do not report the results of these experiments). However, feature extraction is widely applicable, and there can be cases where having fewer high-quality features is beneficial, e.g., performing multiple learning tasks on the same data, or a very expensive learning task.

Table 1: Comparison of testing error using regression with the original features and with features extracted using k-Space. In the table, n is the number of training instances, d is the number of features per instance, and nt is the number of instances in the test set. Regression stands for ordinary $\ell_2$ regression. PCA Regression stands for approximate principal component $\ell_2$ regression. Sampled PCA Regression stands for approximate principal component $\ell_2$ regression where only ns samples from the training set are used for computing the feature extraction. In PCA Regression and Sampled PCA Regression, k features are extracted. In k-Space we use m = O(k) and r = O(k), with the ratios m/k and r/k as detailed in the table. For classification tasks, the percent of testing points incorrectly predicted is reported. For regression tasks, we report $\|y_p - y\|_2 / \|y\|_2$, where $y_p$ is the vector of predicted values and $y$ is the ground truth.
| Dataset | Regression | PCA Regression | Sampled PCA Regression |
|---|---|---|---|
| MNIST (classification), n = 60,000, d = 784, nt = 10,000 | 14% | Out of Memory | 7.9% ± 0.06% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| CPU (regression), n = 6,554, d = 21, nt = 819 | 12% | 4.3% ± 1.0% (k = 200, m/k = 4, r/k = 8) | 3.6% ± 0.1% (k = 200, ns = 2000, m/k = 4, r/k = 8) |
| ADULT (classification), n = 32,561, d = 123, nt = 16,281 | 15.3% | 15.2% ± 0.1% (k = 500, m/k = 2, r/k = 4) | 15.2% ± 0.03% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| CENSUS (regression), n = 18,186, d = 119, nt = 2,273 | 7.1% | 6.5% ± 0.2% (k = 500, m/k = 4, r/k = 8, λ = 0.001) | 6.8% ± 0.4% (k = 500, ns = 5000, m/k = 4, r/k = 8, λ = 0.001) |
| USPS (classification), n = 7,291, d = 256, nt = 2,007 | 13.1% | 7.0% ± 0.2% (k = 200, m/k = 4, r/k = 8) | 7.5% ± 0.3% (k = 200, ns = 2000, m/k = 4, r/k = 8) |

Table 2: Comparison of testing error using SVM with the original features and with features extracted using k-Space. In the table, n is the number of training instances, d is the number of features per instance, and nt is the number of instances in the test set. SVM stands for linear SVM. PCA SVM stands for using k-Space to extract features, and then using linear SVM. Sampled PCA SVM stands for the same, where only ns samples from the training set are used for computing the feature extraction. In PCA SVM and Sampled PCA SVM, k features are extracted. In k-Space we use m = O(k) and r = O(k), with the ratios m/k and r/k as detailed in the table. For classification tasks, the percent of testing points incorrectly predicted is reported.

| Dataset | SVM | PCA SVM | Sampled PCA SVM |
|---|---|---|---|
| MNIST (classification), n = 60,000, d = 784, nt = 10,000 | 8.4% | Out of Memory | 6.1% ± 0.1% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| ADULT (classification), n = 32,561, d = 123, nt = 16,281 | 15.0% | 15.1% ± 0.1% (k = 500, m/k = 2, r/k = 4) | 15.2% ± 0.1% (k = 500, ns = 5000, m/k = 2, r/k = 4) |
| USPS (classification), n = 7,291, d = 256, nt = 2,007 | 8.3% | 7.2% ± 0.2% (k = 200, m/k = 4, r/k = 8) | 7.5% ± 0.3% (k = 200, ns = 2000, m/k = 4, r/k = 8) |

6 Conclusions and Future Work

Sketching based dimensionality reduction has so far been limited to linear models. In this paper, we describe the first oblivious subspace embeddings for a non-linear kernel expansion (the polynomial kernel), opening the door for sketching based algorithms for a multitude of problems involving kernel transformations. We believe this represents a significant expansion of the capabilities of sketching based algorithms. However, the polynomial kernel has a finite expansion, and this finiteness is quite useful in the design of the embedding, while many popular kernels induce an infinite-dimensional mapping. We propose that the next step in expanding the reach of sketching based methods for statistical learning is to design oblivious subspace embeddings for non-finite kernel expansions, e.g., the expansions induced by the Gaussian kernel.
References

[1] H. Avron, V. Sindhwani, and D. P. Woodruff. Sketching structured matrices for faster nonlinear regression. In Advances in Neural Information Processing Systems (NIPS), 2013.
[2] L. Carter and M. N. Wegman. Universal classes of hash functions. J. Comput. Syst. Sci., 18(2):143–154, 1979.
[3] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3–15, 2004.
[4] K. L. Clarkson and D. P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 2009.
[5] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), 2013.
[6] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM J. Matrix Analysis Applications, 30(2):844–881, 2008.
[7] R. Hamid, Y. Xiao, A. Gittens, and D. DeCoste. Compact random feature maps. In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014.
[8] I. T. Jolliffe. A note on the use of principal components in regression. Journal of the Royal Statistical Society, Series C, 31(3):300–303, 1982.
[9] R. Kannan, S. Vempala, and D. P. Woodruff. Principal component analysis and higher correlations for distributed data. In Proceedings of the 27th Conference on Learning Theory (COLT), 2014.
[10] P. Kar and H. Karnick. Random feature maps for dot product kernels. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[11] Q. Le, T. Sarlós, and A. Smola. Fastfood: Approximating kernel expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[12] M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123–224, 2011.
[13] J. Nelson and H. Nguyen. OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings. In 54th IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2013.
[14] R. Pagh. Compressed matrix multiplication. ACM Trans. Comput. Theory, 5(3):9:1–9:17, 2013.
[15] M. Patrascu and M. Thorup. The power of simple tabulation hashing. J. ACM, 59(3):14, 2012.
[16] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 239–247, New York, NY, USA, 2013. ACM.
[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), 2007.