# Regularized Weighted Low Rank Approximation

Frank Ban (UC Berkeley / Google, fban@berkeley.edu), David Woodruff (Carnegie Mellon University, dwoodruf@cs.cmu.edu), Qiuyi (Richard) Zhang (UC Berkeley / Google, qiuyi@berkeley.edu)

The classical low rank approximation problem is to find a rank-$k$ matrix $UV$ (where $U$ has $k$ columns and $V$ has $k$ rows) that minimizes the Frobenius norm of $A - UV$. Although this problem can be solved efficiently, we study an NP-hard variant of this problem that involves weights and regularization. A previous paper of Razenshteyn, Song, and Woodruff [RSW16] derived a polynomial time algorithm for weighted low rank approximation with constant rank. We derive provably sharper guarantees for the regularized version by obtaining parameterized complexity bounds in terms of the statistical dimension rather than the rank, allowing for a rank-independent runtime that can be significantly faster. Our improvement comes from applying sharper matrix concentration bounds, using a novel conditioning technique, and proving structural theorems for regularized low rank problems.

1 Introduction

In the weighted low rank approximation problem, one is given a matrix $M \in \mathbb{R}^{n \times d}$, a weight matrix $W \in \mathbb{R}^{n \times d}$, and an integer parameter $k$, and would like to find factors $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ so as to minimize
$$\|W \circ (M - UV)\|_F^2 = \sum_{i,j} W_{i,j}^2 \big(M_{i,j} - \langle U_{i,:}, V_{:,j}\rangle\big)^2,$$
where $U_{i,:}$ denotes the $i$-th row of $U$ and $V_{:,j}$ denotes the $j$-th column of $V$. W.l.o.g., we assume $n \ge d$. This is a weighted version of the classical low rank approximation problem, which is the special case when $W_{i,j} = 1$ for all $i$ and $j$. One often considers the approximate version of this problem, in which one is given an approximation parameter $\varepsilon \in (0, 1)$ and would like to find $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ so that
$$\|W \circ (M - UV)\|_F^2 \le (1 + \varepsilon) \min_{U' \in \mathbb{R}^{n \times k},\, V' \in \mathbb{R}^{k \times d}} \|W \circ (M - U'V')\|_F^2. \qquad (1)$$

Weighted low rank approximation extends the classical low rank approximation problem in many ways. While in principal component analysis one typically first subtracts off the mean to make the matrix $M$ have mean 0, this does not fix the problem of differing variances. Indeed, imagine one of the columns of $M$ has much larger variance than the others. Then in classical low rank approximation with $k = 1$, it could suffice to simply fit this single high variance column and ignore the remaining entries of $M$. Weighted low rank approximation, on the other hand, can correct for this by re-weighting each of the entries of $M$ to give them similar variance; this allows the low rank approximation $UV$ to capture more of the remaining data. This technique is often used in gene expression analysis and co-occurrence matrices; we refer the reader to [SJ03] and the Wikipedia entry on weighted low rank approximation (https://en.wikipedia.org/wiki/Low-rank_approximation#Weighted_low-rank_approximation_problems). The well-studied problem of matrix completion is also a special case of weighted low rank approximation, where the entries $W_{i,j}$ are binary, and has numerous applications in recommendation systems and other settings with missing data. Unlike its classical variant, weighted low rank approximation is known to be NP-hard [GG11]. Classical low rank approximation can be solved quickly via the singular value decomposition, which is often sped up with sketching techniques [Woo14, PW15, TYUC17].
However, in the weighted setting, under a standard complexity-theoretic assumption known as the Random Exponential Time Hypothesis (see, e.g., Assumption 1.3 in [RSW16] for a discussion), there is a fixed constant $\varepsilon_0 \in (0, 1)$ for which any algorithm achieving (1) with constant probability and for $\varepsilon = \varepsilon_0$, even for $k = 1$, requires $2^{\Omega(r)}$ time, where $r$ is the number of distinct columns of the weight matrix $W$. Furthermore, as shown in Theorem 1.4 of [RSW16], this holds even if $W$ has both at most $r$ distinct rows and at most $r$ distinct columns.

Despite the above hardness, in a number of applications the parameter $r$ may be small. Indeed, in a matrix in which the rows correspond to users and the columns correspond to ratings of a movie, such as in the Netflix matrix, one may have a small number of categories of movies. In this case, one may want to use the same column in $W$ for each movie in the same category. It may thus make sense to renormalize user ratings based on the category of movies being watched. Note that any number of distinct rows of $W$ is possible here, as different users may have completely different ratings, but there is just one distinct column of $W$ per category of movie. In some settings one may simultaneously have a small number of distinct rows and a small number of distinct columns. This may occur if, say, the users are also categorized into a small number of groups. For example, the users may be grouped by age, and one may want to weight ratings of different categories of movies based on age. That is, maybe cartoon ratings of younger users should be given higher weight, while historical films rated by older users should be given higher weight.

Motivated by such applications when $r$ is small, [RSW16] propose several parameterized complexity algorithms. They show that when $W$ has at most $r$ distinct rows and $r$ distinct columns, there is an algorithm solving (1) in $2^{O(k^2 r/\varepsilon)}\,\mathrm{poly}(n)$ time. If $W$ has at most $r$ distinct columns but any number of distinct rows, there is an algorithm achieving (1) in $2^{O(k^2 r^2/\varepsilon)}\,\mathrm{poly}(n)$ time. Note that these bounds imply that for constant $k$ and $\varepsilon$, even if $r$ is as large as $\Theta(\log n)$ in the first case, and $\Theta(\sqrt{\log n})$ in the second case, the corresponding algorithm is polynomial time. In [RSW16], the authors also consider the case when the rank of the weight matrix $W$ is at most $r$, which includes the $r$ distinct rows and columns setting, as well as the $r$ distinct columns setting above, as special cases. In this case the authors achieve an $n^{O(k^2 r/\varepsilon)}$ time algorithm. Note that this is only polynomial time if $k$, $r$, and $\varepsilon$ are each fixed constants, and unlike the algorithms for the other two settings, this algorithm is not fixed parameter tractable, meaning its running time cannot be written as $f(k, r, 1/\varepsilon) \cdot \mathrm{poly}(nd)$, where $f$ is a function independent of $n$ and $d$.

There are also other algorithms for weighted low rank approximation, though they do not have provable guarantees, or require strong assumptions on the input. There are gradient-based approaches of Shpak [Shp90] and alternating minimization approaches of Lu et al. [LPW97, LA03], which were refined and used in practice by Srebro and Jaakkola [SJ03]. However, neither of these has provable guarantees. While there is some work that has provable guarantees, it makes incoherence assumptions on the low rank factors of $M$, as well as assumptions that the weight matrix $W$ is spectrally close to the all ones matrix [LLR16] and that there are no 0 weights.
1.1 Our Contributions

The only algorithms with provable guarantees that do not make assumptions on the inputs are slow, and inherently so given the above hardness results. Motivated by this and the widespread use of regularization in machine learning, we propose to look at provable guarantees for regularized weighted low rank approximation. In one version of this problem, where the parameter $r$ corresponds to the rank of the weight matrix $W$, we are given a matrix $M \in \mathbb{R}^{n \times d}$, a weight matrix $W \in \mathbb{R}^{n \times d}$ with rank $r$, and a target integer $k > 0$, and we consider the problem
$$\min_{U \in \mathbb{R}^{n \times k},\, V \in \mathbb{R}^{k \times d}} \|W \circ (UV - M)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2.$$
Let $U^*, V^*$ minimize this objective and let OPT be the minimum value. Regularization is a common technique to avoid overfitting and to solve ill-posed problems. It has been applied in the context of weighted low rank approximation [DN11], though so far the only such results known for weighted low rank approximation with regularization are heuristic. In this paper we give the first provable bounds, without any assumptions on the input, for regularized weighted low rank approximation.

Importantly, we show that regularization improves our running times for weighted low rank approximation, as specified below. Intuitively, the complexity of regularized problems depends on the statistical dimension or effective dimension of the underlying problem, which is often significantly smaller than the number of parameters in the regularized setting. Let $U^*$ and $V^*$ denote the optimal low-rank matrix approximation factors, let $D_{W_{i,:}}$ denote the diagonal matrix with the $i$-th row of $W$ along the diagonal, and let $D_{W_{:,j}}$ denote the diagonal matrix with the $j$-th column of $W$ along the diagonal.

Improving the Exponent: We first show how to improve the $n^{O(k^2 r/\varepsilon)}$ time algorithm of [RSW16] to a running time of $n^{O((s+\log(1/\varepsilon))rk/\varepsilon)}$. Here $s$ is defined to be the maximum statistical dimension of $V^* D_{W_{i,:}}$ and $D_{W_{:,j}} U^*$, over all $i = 1, \ldots, n$ and $j = 1, \ldots, d$, where the statistical dimension of a matrix $M$ is:

Definition 1. Let $\mathrm{sd}_\lambda(M) = \sum_i 1/(1 + \lambda/\sigma_i^2)$ denote the statistical dimension of $M$ with regularizing weight $\lambda$ (here the $\sigma_i$ are the singular values of $M$).

Note that this maximum value $s$ is always at most $k$, and for any $s \ge \log(1/\varepsilon)$ our bound directly improves upon the previous time bound. Our improvement requires us to sketch matrices with $k$ columns down to $s/\varepsilon$ rows, where $s/\varepsilon$ is potentially smaller than $k$. This is particularly interesting since most previous provable sketching results for low rank approximation cannot have sketch sizes smaller than the rank, as following the standard analysis would lead to solving a regression problem on a matrix with fewer rows than columns. Thus, we introduce the notion of an upper and a lower distortion factor ($K_S$ and $\kappa_S$ below) and show that the lower distortion factor satisfies tail bounds only on a smaller-rank subspace of size $s/\varepsilon$, which can be smaller than $k$. Directly following the analysis of [RSW16] would cause the lower distortion factor to be infinite. The upper distortion factor also satisfies tail bounds, via a more powerful matrix concentration result not used previously. Furthermore, we apply a novel conditioning technique that conditions on the product of the upper and lower distortion factors on separate subspaces, whereas previous work only conditions on the condition number of a specific subspace. We next considerably strengthen the above result by showing an $n^{O(r^2(s+\log(1/\varepsilon))^2/\varepsilon^2)}$ time algorithm.
This shows that the rank $k$ need not be in the exponent of the algorithm at all! We do this via a novel projection argument in the objective (sketching on the right), which was not done in [RSW16] and which also improves a previous result for the classical setting in [ACW17]. To gain some perspective on this result, suppose $\varepsilon$ is a large constant, close to 1, and $r$ is a small constant. Then our algorithm runs in $n^{O(s^2)}$ time, as opposed to the algorithm of [RSW16], which runs in $n^{O(k^2)}$ time. We stress that in a number of applications, the effective dimension $s$ may be a very small constant, close to 1, even though the rank parameter $k$ can be considerably larger. This occurs, for example, if there is a single dominant singular value, or if the singular values are geometrically decaying. Concretely, it is realistic that $k$ could be $\Theta(\log n)$ while $s = \Theta(1)$, in which case our algorithm is the first polynomial time algorithm for this setting.

Improving the Base: We can further optimize by removing our dependence on $n$ in the base. The non-negative rank of an $n \times d$ matrix $A$ is defined to be the least $r$ such that there exist factors $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{r \times d}$ where $A = UV$ and both $U$ and $V$ have non-negative entries. By applying a novel rounding procedure, if in addition the non-negative rank of $W$ is at most $r'$, then we can obtain a fixed-parameter tractable algorithm running in time $2^{O(r' r^2(s+\log(1/\varepsilon))^2/\varepsilon^2)}\,\mathrm{poly}(n)$. Note that $r \le r'$, where $r$ is the rank of $W$. Note also that if $W$ has at most $r$ distinct rows or columns, then its non-negative rank is also at most $r$, since we can replace the entries of $W$ with their absolute values without changing the objective function, while still preserving the property of at most $r$ distinct rows and/or columns. Consequently, we significantly improve the algorithms for a small number of distinct rows and/or columns of [RSW16], as our exponent is independent of $k$. Thus, even if $k = \Theta(n)$ but the statistical dimension $s = O(\sqrt{\log n})$, for constant $r'$ and $\varepsilon$ our algorithm is polynomial time, while the best previous algorithm would be exponential time.

We also give ways, other than non-negative rank, of improving the running time. Supposing again that the rank of $W$ is $r$, we apply iterative techniques from linear system solving, such as Richardson's iteration and preconditioning, to further improve the running time. We show that instead of an $n^{\mathrm{poly}(rs/\varepsilon)}$ time algorithm, we can obtain algorithms with running time roughly $(\sigma^2/\lambda)^{\mathrm{poly}(rs/\varepsilon)}\,\mathrm{poly}(n)$ or $(u_W/l_W)^{\mathrm{poly}(rs/\varepsilon)}\,\mathrm{poly}(n)$, where $\sigma$ is the maximum singular value of $V^* D_{W_{i,:}}$ and $D_{W_{:,j}} U^*$ over all $i = 1, \ldots, n$ and $j = 1, \ldots, d$, while $u_W$ is the maximum absolute value of an entry of $W$ and $l_W$ the minimum absolute value of an entry. In a number of settings one may have $\sigma^2/\lambda = O(1)$ or $u_W/l_W = O(1)$, in which case we again obtain fixed parameter tractable algorithms.

Empirical Evaluation: Finally, we give an empirical evaluation of our results. While the main goal of our work is to obtain the first algorithms with provable guarantees for regularized weighted low rank approximation, we can also use them to guide heuristics in practice. In particular, alternating minimization is a common heuristic for weighted low rank approximation. We consider a sketched version of alternating minimization to speed up each iteration. We show that in the regularized case, the dimension of the sketch can be significantly smaller if the statistical dimension is small, which is consistent with our theoretical results.
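To make the role of the statistical dimension concrete, the following NumPy sketch (illustrative only; the matrix sizes, the geometric decay rate, and the choice $\lambda = 1$ are our own toy parameters, not from the paper) computes $\mathrm{sd}_\lambda(M)$ from Definition 1 and shows that for a matrix with geometrically decaying singular values it can be far smaller than the rank $k$.

```python
import numpy as np

def statistical_dimension(M, lam):
    """sd_lambda(M) = sum_i 1 / (1 + lam / sigma_i^2), Definition 1."""
    sigma = np.linalg.svd(M, compute_uv=False)
    return np.sum(1.0 / (1.0 + lam / sigma**2))

# Toy example: a rank-50 matrix with geometrically decaying singular values.
rng = np.random.default_rng(0)
n, d, k, lam = 1000, 200, 50, 1.0
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
V, _ = np.linalg.qr(rng.standard_normal((d, k)))
sigmas = 100.0 * (0.5 ** np.arange(k))          # geometric decay of singular values
M = U @ np.diag(sigmas) @ V.T

print("rank k =", k)
print("sd_lambda(M) =", statistical_dimension(M, lam))   # much smaller than k
```

In this example only the handful of singular values exceeding $\sqrt{\lambda}$ contribute meaningfully to the sum, which is exactly the regime in which our exponents beat the rank-based exponents of [RSW16].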
2 Preliminaries

We let $\|\cdot\|_F$ denote the Frobenius norm of a matrix and let $\circ$ denote the elementwise (Hadamard) matrix multiplication operator. We write $x \in [a, b] \cdot y$ if $ay \le x \le by$. For a matrix $M$, let $M_{i,:}$ denote its $i$-th row and $M_{:,j}$ its $j$-th column. For $v \in \mathbb{R}^n$, let $D_v$ denote the $n \times n$ diagonal matrix whose $i$-th diagonal entry equals $v_i$. For a matrix $M$ with non-negative entries $M_{ij}$, let $\mathrm{nnr}(M)$ denote the non-negative rank of $M$. Let $\mathrm{sr}(M) = \|M\|_F^2/\|M\|^2$ denote the stable rank of $M$. Let $\mathcal{D}$ denote a distribution over $\ell \times n$ sketch matrices; in our setting, these are matrices whose entries are Gaussian random variables with mean 0 and variance $1/\ell$ (or $\ell \times n$ CountSketch matrices [Woo14]).

Definition 2. For $S$ sampled from a distribution of matrices $\mathcal{D}$ and a matrix $M$ with $n$ rows, let $c_S(M) \ge 1$ denote the smallest (possibly infinite) number such that $\|SMv\|^2 \in [c_S(M)^{-1}, c_S(M)] \cdot \|Mv\|^2$ for all $v$.

Definition 3. For $S$ sampled from $\mathcal{D}$ and a matrix $M$, let $K_S(M) \ge 1$ denote the smallest number such that $\|SMv\|^2 \le K_S(M)\|Mv\|^2$ for all $v$.

Definition 4. For $S$ sampled from $\mathcal{D}$ and a matrix $M$, let $\kappa_S(M) \le 1$ denote the largest number such that $\|SMv\|^2 \ge \kappa_S(M)\|Mv\|^2$ for all $v$.

Note that by definition, $c_S(M)$ equals the maximum of $K_S(M)$ and $1/\kappa_S(M)$. We define the condition number of a matrix $A$ to be $c_A = K_A(I)/\kappa_A(I)$.

2.1 Previous Techniques

Building upon the initial framework established in [RSW16], we apply a polynomial system solver to solve weighted regularized LRA to high accuracy. By applying standard sketching guarantees, the number of variables $v$ can be made a polynomial function of $k$, $1/\varepsilon$, and $r$ that is independent of $n$.

Theorem 1 ([Ren92a, Ren92b, BPR96]). Given a real polynomial system $P(x_1, x_2, \ldots, x_v)$ having $v$ variables and $m$ polynomial constraints $f_i(x_1, \ldots, x_v)\,\Delta_i\,0$, where $\Delta_i \in \{\ge, =, \le\}$, $d$ is the maximum degree of all polynomials, and $H$ is the maximum bitsize of the coefficients of the polynomials, one can determine if there exists a solution to $P$ in $(md)^{O(v)}\,\mathrm{poly}(H)$ time.

Intuitively, the addition of regularization means we only need to preserve directions with high spectral weight in order to preserve our low rank approximation well enough. The dimension of the subspace spanned by these important directions is exactly the statistical dimension of the problem, allowing us to sketch to a size less than $k$ that still provably preserves our low rank approximation well enough. In line with this intuition, we use an important lemma from [CNW16].

Lemma 2.1. Let $A, B$ be matrices with $n$ rows and let $S$, sampled from $\mathcal{D}$, have $\ell = \Omega\!\big(\tfrac{1}{\gamma^2}(K + \log(1/\varepsilon))\big)$ rows and $n$ columns. Then
$$\Pr\Big[\|A^T S^T S B - A^T B\| > \gamma \sqrt{\big(\|A\|^2 + \|A\|_F^2/K\big)\big(\|B\|^2 + \|B\|_F^2/K\big)}\Big] \le \varepsilon.$$
In particular, if we choose $K \ge \mathrm{sr}(A) + \mathrm{sr}(B)$, then for some small constant $\gamma_0$,
$$\Pr\big[\|A^T S^T S B - A^T B\| > \gamma_0 \|A\|\,\|B\|\big] \le \varepsilon.$$

3 Multiple Regression Sketches

In this section, we prove our main structural theorem, which allows us to sketch regression matrices down to the size of the statistical dimension of the matrices while maintaining provable guarantees. Specifically, to approximately solve a sum of regression problems, we are able to reduce the dimension of the problem to the maximum statistical dimension of the regression matrices.

Theorem 2. Let $M^{(1)}, \ldots, M^{(d)} \in \mathbb{R}^{n \times k}$ and let $b^{(1)}, \ldots, b^{(d)} \in \mathbb{R}^n$ be column vectors. Let $S \in \mathbb{R}^{\ell \times n}$ be sampled from $\mathcal{D}$ with $\ell = \Omega\!\big(\tfrac{1}{\varepsilon}(s + \log(1/\varepsilon))\big)$ and $s = \max_i\{\mathrm{sd}_\lambda(M^{(i)})\}$. Define $x^{(i)} = \mathrm{argmin}_x\, \|M^{(i)}x - b^{(i)}\|^2 + \lambda\|x\|^2$ and $y^{(i)} = \mathrm{argmin}_y\, \|S(M^{(i)}y - b^{(i)})\|^2 + \lambda\|y\|^2$.
Then, with constant probability,
$$\sum_{i=1}^d \Big(\|M^{(i)}y^{(i)} - b^{(i)}\|^2 + \lambda\|y^{(i)}\|^2\Big) \le (1 + \varepsilon) \sum_{i=1}^d \Big(\|M^{(i)}x^{(i)} - b^{(i)}\|^2 + \lambda\|x^{(i)}\|^2\Big).$$

We note that a simple union bound would incur a factor of $\log(d)$ in the sketching dimension $\ell$. While this might seem mild at first, the algorithms we consider are exponential in $\ell$, implying that we would be unable to derive polynomial time algorithms for solving weighted low rank approximation even when the input and weight matrix are both of constant rank. Therefore, we need an average-case version of the sketching guarantees to hold; however, this is not always the case, since $\ell$ is small and applying Lemma 2.1 naïvely only gives a probability bound. Ultimately, we must condition on a combination of sketching guarantees holding and carefully analyze the expectation in separate cases.

4 Algorithms

In this section, we present a fast algorithm for solving regularized weighted low rank approximation. Our algorithm exploits the structure of low-rank approximation as a sum of regression problems and applies the main structural theorem of the previous section to significantly reduce the number of variables in the optimization process. Note that we can write
$$\|W \circ (UV - A)\|_F^2 = \sum_{i=1}^n \|U_{i,:} V D_{W_{i,:}} - A_{i,:} D_{W_{i,:}}\|^2 = \sum_{j=1}^d \|D_{W_{:,j}} U V_{:,j} - D_{W_{:,j}} A_{:,j}\|^2.$$

4.1 Using the Polynomial Solver with Sketching

Now we sample Gaussian sketch matrices $S' \in \mathbb{R}^{d \times O((s+\log(1/\varepsilon))/\varepsilon)}$ and $S'' \in \mathbb{R}^{O((s+\log(1/\varepsilon))/\varepsilon) \times n}$ from $\mathcal{D}$. We let $P^{(i)}$ denote $V D_{W_{i,:}} S'$ and $Q^{(j)}$ denote $S'' D_{W_{:,j}} U$. The matrices $P^{(i)}$ and $Q^{(j)}$ can be encoded using $O\!\big(\tfrac{s+\log(1/\varepsilon)}{\varepsilon} kr\big)$ variables. For fixed $P^{(i)}$ and $Q^{(j)}$ we can define the sketched regression problems
$$\min_{U_{i,:}} \|U_{i,:} P^{(i)} - A_{i,:} D_{W_{i,:}} S'\|^2 + \lambda\|U_{i,:}\|^2 \quad \text{and} \quad \min_{V_{:,j}} \|Q^{(j)} V_{:,j} - S'' D_{W_{:,j}} A_{:,j}\|^2 + \lambda\|V_{:,j}\|^2.$$

Algorithm 1 Regularized Weighted Low Rank Approximation

procedure REGWEIGHTEDLOWRANK($A, W, \lambda, s, k, \varepsilon$)
  Sample a Gaussian sketch $S' \in \mathbb{R}^{d \times O((s+\log(1/\varepsilon))/\varepsilon)}$ from $\mathcal{D}$.
  Sample a Gaussian sketch $S'' \in \mathbb{R}^{O((s+\log(1/\varepsilon))/\varepsilon) \times n}$ from $\mathcal{D}$.
  Create matrix variables $P^{(i)} \in \mathbb{R}^{k \times O((s+\log(1/\varepsilon))/\varepsilon)}$ and $Q^{(j)} \in \mathbb{R}^{O((s+\log(1/\varepsilon))/\varepsilon) \times k}$ for $i, j$ from 1 to $r$. ▷ Variables used in the polynomial system solver
  Use Cramer's rule to express $\hat{U}_{i,:} = A_{i,:} D_{W_{i,:}} S' (P^{(i)})^T (P^{(i)}(P^{(i)})^T + \lambda I_k)^{-1}$ as a rational function of the variables $P^{(i)}$; similarly, $\hat{V}_{:,j} = ((Q^{(j)})^T Q^{(j)} + \lambda I_k)^{-1} (Q^{(j)})^T S'' D_{W_{:,j}} A_{:,j}$. ▷ $\hat{U}, \hat{V}$ are now rational functions of the variables $P, Q$
  Solve $\min_{\hat{U}, \hat{V}} \|W \circ (\hat{U}\hat{V} - A)\|_F^2 + \lambda\|\hat{U}\|_F^2 + \lambda\|\hat{V}\|_F^2$ and apply binary search to find $\hat{U}, \hat{V}$. ▷ Optimization with the polynomial solver of Theorem 1 in the variables $P, Q$
  return $\hat{U}, \hat{V}$

Since the minimizers of the sketched regression problems satisfy
$$\hat{U}_{i,:} = A_{i,:} D_{W_{i,:}} S' (P^{(i)})^T (P^{(i)}(P^{(i)})^T + \lambda I_k)^{-1}, \qquad \hat{V}_{:,j} = ((Q^{(j)})^T Q^{(j)} + \lambda I_k)^{-1} (Q^{(j)})^T S'' D_{W_{:,j}} A_{:,j},$$
$\hat{U}$ and $\hat{V}$ can be encoded as rational functions over $O\!\big(\tfrac{(s+\log(1/\varepsilon))kr}{\varepsilon}\big)$ variables by Cramer's rule. By Theorem 2, we can argue that $\min_{\hat{U},\hat{V}} \|W \circ (\hat{U}\hat{V} - A)\|_F^2 + \lambda\|\hat{U}\|_F^2 + \lambda\|\hat{V}\|_F^2$ is a good approximation to $\|W \circ (U^* V^* - A)\|_F^2 + \lambda\|U^*\|_F^2 + \lambda\|V^*\|_F^2$ with constant probability, and so in particular such a good approximation exists. By using the polynomial system feasibility checkers described in Theorem 1, following similar procedures, and doing binary search, we get a polynomial system of degree $O(nk)$ in $O\!\big(\tfrac{s+\log(1/\varepsilon)}{\varepsilon} kr\big)$ variables after simplifying, and so our polynomial solver runs in time $n^{O((s+\log(1/\varepsilon))kr/\varepsilon)} \log^{O(1)}(\Delta/\delta)$.

Theorem 3. Given matrices $A, W \in \mathbb{R}^{n \times d}$ and $\varepsilon < 0.1$ such that
1. $\mathrm{rank}(W) = r$,
2. the non-zero entries of $A$ and $W$ are multiples of $\delta > 0$,
3. all entries of $A$ and $W$ are at most $\Delta$ in absolute value, and
4. $s = \max_{i,j}\{\mathrm{sd}_\lambda(V^* D_{W_{i,:}}), \mathrm{sd}_\lambda(D_{W_{:,j}} U^*)\} < k$,
there is an algorithm to find $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ in time $n^{O((s+\log(1/\varepsilon))kr/\varepsilon)} \log^{O(1)}(\Delta/\delta)$ such that $\|W \circ (UV - A)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2 \le (1+\varepsilon)\,\mathrm{OPT}$.
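As a sanity check on the guarantee of Theorem 2, the following NumPy sketch (illustrative only, not the paper's implementation; the problem sizes, the column scaling, $\lambda$, and $\varepsilon$ are arbitrary choices) solves a single ridge regression exactly and after applying a Gaussian sketch with roughly $(s+\log(1/\varepsilon))/\varepsilon$ rows, which can be smaller than $k$, and compares the two objective values.

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_solve(M, b, lam):
    """argmin_x ||Mx - b||^2 + lam * ||x||^2 via the normal equations."""
    k = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(k), M.T @ b)

def ridge_cost(M, b, lam, x):
    return np.linalg.norm(M @ x - b)**2 + lam * np.linalg.norm(x)**2

def statistical_dimension(M, lam):
    sigma = np.linalg.svd(M, compute_uv=False)
    return np.sum(1.0 / (1.0 + lam / sigma**2))

# A tall regression problem whose design matrix has small statistical dimension.
n, k, lam, eps = 2000, 60, 10.0, 0.5
M = rng.standard_normal((n, k)) @ np.diag(5.0 * 0.6 ** np.arange(k))
b = rng.standard_normal(n)

s = statistical_dimension(M, lam)
ell = int(np.ceil((s + np.log(1 / eps)) / eps))    # sketch size ~ (s + log(1/eps))/eps, possibly < k
S = rng.standard_normal((ell, n)) / np.sqrt(ell)   # Gaussian sketch from D

x_opt = ridge_solve(M, b, lam)                     # exact minimizer x^(i)
y_sk = ridge_solve(S @ M, S @ b, lam)              # sketched minimizer y^(i)

print(f"statistical dimension s = {s:.2f}, sketch rows = {ell}, k = {k}")
print("exact cost   :", ridge_cost(M, b, lam, x_opt))
print("sketched cost:", ridge_cost(M, b, lam, y_sk))  # typically within a (1+eps) factor
```

The normal-equation closed form used by `ridge_solve` is the same expression that Algorithm 1 hands to the polynomial system solver, namely $\hat{U}_{i,:} = A_{i,:} D_{W_{i,:}} S' (P^{(i)})^T (P^{(i)}(P^{(i)})^T + \lambda I_k)^{-1}$, written there as a rational function of the variables $P^{(i)}$.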
4.2 Removing the Rank Dependence

Note that the running time of our algorithm still depends on $k$, the dimension that we are reducing to. To remove this, we prove a structural theorem about low rank approximation of matrices with low statistical dimension.

Theorem 4. Given matrices $A, W \in \mathbb{R}^{n \times d}$ and $\varepsilon < 0.1$ such that $\mathrm{rank}(W) = r$, and letting $s = \max_{i,j}\{\mathrm{sd}_\lambda(V^* D_{W_{i,:}}), \mathrm{sd}_\lambda(D_{W_{:,j}} U^*)\} < k$, if we let $\mathrm{OPT}(k)$ denote
$$\min_{U \in \mathbb{R}^{n \times k},\, V \in \mathbb{R}^{k \times d}} \|W \circ (UV - A)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2,$$
then $\mathrm{OPT}(O(r(s + \log(1/\varepsilon))/\varepsilon)) \le (1 + \varepsilon)\,\mathrm{OPT}(k)$.

Combining Theorem 3 and Theorem 4, we obtain our final theorem. We note that this also improves the running time bounds for un-weighted regularized low rank approximation in Section 3 of [ACW17].

Theorem 5. Given matrices $A, W \in \mathbb{R}^{n \times d}$ and $\varepsilon < 0.1$ satisfying the conditions of Theorem 3, there is an algorithm to find $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ in time $n^{O(r^2(s+\log(1/\varepsilon))^2/\varepsilon^2)} \log^{O(1)}(\Delta/\delta)$ such that $\|W \circ (UV - A)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2 \le (1+\varepsilon)\,\mathrm{OPT}$.

5 Reducing the Degree of the Solver

5.1 Non-negative Weight Matrix and Non-negative Rank

In the case where $W$ has rank $r$ with only $r$ distinct columns (up to scaling), we are able to improve the running time to $\mathrm{poly}(n)\,2^{O(r^3(s+\log(1/\varepsilon))^2/\varepsilon^2)}$ by showing that the degree of the polynomial system is $O(rk)$ as opposed to $O(nk)$. Specifically, the $O(nk)$ degree comes from clearing the denominators of the rational expressions that arise from naïvely using and analyzing Cramer's rule; in this section, we demonstrate different techniques to avoid the dependence on $n$. We also show the same running time bound under the more relaxed assumption of bounded non-negative rank, which is always at most the number of distinct columns.

Theorem 6. Given matrices $A, W \in \mathbb{R}^{n \times d}$ and $\varepsilon < 0.1$, suppose the conditions of Theorem 3 hold. Furthermore, suppose we are given $Y, Z \ge 0$ such that $W = YZ$ and $Y$ and $Z^T$ each have $\mathrm{nnr}(W) = r'$ columns. Then there is an algorithm to find $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ in time $\mathrm{poly}(n)\,2^{O(r' r^2(s+\log(1/\varepsilon))^2/\varepsilon^2)} \log^{O(1)}(\Delta/\delta)$ such that $\|W \circ (UV - A)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2 \le (1+\varepsilon)\,\mathrm{OPT}$.

5.2 Richardson's Iteration

Note that the current polynomial solver uses Cramer's rule to solve $\min_{U_{i,:}} \|U_{i,:}P^{(i)} - A_{i,:}D_{W_{i,:}}S'\|^2 + \lambda\|U_{i,:}\|^2$ via
$$U_{i,:} = A_{i,:} D_{W_{i,:}} S' (P^{(i)})^T (P^{(i)}(P^{(i)})^T + \lambda I_k)^{-1}.$$
We want to use Richardson's iteration instead, to avoid rational expressions and the dependence on $n$ in the degree that comes from clearing the denominator.

Theorem 7 (Preconditioned Richardson, [CKP+17]). Let $A, B$ be symmetric PSD matrices such that $\ker(A) = \ker(B)$ and $A \preceq B \preceq \tfrac{1}{\gamma} A$. Then, for any $b$, if $x_0 = 0$ and $x_{i+1} = x_i - B^{-1}(Ax_i - b)$, we have $\|x_t - A^{-1}b\| \le \varepsilon\|A^{-1}b\|$ for $t = O(\log(c_B/\varepsilon)/\gamma)$. Furthermore, we may express $x_t$ as a polynomial of degree $O(t)$ in the entries of $B^{-1}$ and $A$.

Theorem 8. Given matrices $A, W \in \mathbb{R}^{n \times d}$ and $\varepsilon < 0.1$, suppose the conditions of Theorem 3 hold. Furthermore, let $\sigma = \max_{i,j}\{\sigma_1(V^* D_{W_{i,:}}), \sigma_1(D_{W_{:,j}} U^*)\}$. Then there is an algorithm to find $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ in time $\mathrm{poly}(n)\,(\sigma^2/\lambda)^{\mathrm{poly}(r\ell/\varepsilon)}$, where $\ell = O(s + \log(1/\varepsilon))$, such that $\|W \circ (UV - A)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2 \le (1+\varepsilon)\,\mathrm{OPT}$ plus a small additive error.

5.3 Preconditioned GD

Instead of directly using Richardson's iteration, we may first apply a preconditioner. The right preconditioner can also be guessed, at the cost of increasing the number of variables. Note that multiple preconditioners may be used, but for now we demonstrate the power of a single preconditioner.

Theorem 9. Given matrices $A, W \in \mathbb{R}^{n \times d}$ and $\varepsilon < 0.1$, suppose the conditions of Theorem 8 hold. Furthermore, suppose $0 < l_W \le |W_{i,j}| \le u_W$ for all entries of $W$. Then there is an algorithm to find $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times d}$ in time $\mathrm{poly}(n)\,(u_W/l_W)^{\mathrm{poly}(r\ell/\varepsilon)}$, where $\ell = O(s + \log(1/\varepsilon))$, such that $\|W \circ (UV - A)\|_F^2 + \lambda\|U\|_F^2 + \lambda\|V\|_F^2 \le (1+\varepsilon)\,\mathrm{OPT}$ plus a small additive error.
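The following NumPy sketch illustrates the iteration of Theorem 7 on a ridge normal-equation system of the form arising in Section 5.2 (it is not the paper's solver, which runs the iteration symbolically inside the polynomial system; the simple preconditioner $B = \beta I$, the problem sizes, and the target accuracy are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)

def preconditioned_richardson(A, B_inv, b, t):
    """Iterate x_{i+1} = x_i - B^{-1}(A x_i - b) for t steps, starting from x_0 = 0
    (Theorem 7 style, with B a preconditioner satisfying ker(B) = ker(A))."""
    x = np.zeros_like(b)
    for _ in range(t):
        x = x - B_inv @ (A @ x - b)
    return x

# Ridge normal-equation system A x = b with A = P P^T + lam * I, as in Section 5.2.
k, lam = 40, 1.0
P = rng.standard_normal((k, 3 * k))
A = P @ P.T + lam * np.eye(k)
b = rng.standard_normal(k)

# Simple preconditioner B = beta * I with beta an upper bound on ||A||_2,
# so that A <= B <= (1/gamma) * A in the PSD order, with gamma = lam / beta.
beta = np.linalg.norm(A, 2)
B_inv = np.eye(k) / beta
gamma = lam / beta
t = int(np.ceil(np.log(1e6) / gamma))              # ~ log(1/err) / gamma iterations

x_exact = np.linalg.solve(A, b)
x_rich = preconditioned_richardson(A, B_inv, b, t)
print("relative error:", np.linalg.norm(x_rich - x_exact) / np.linalg.norm(x_exact))
```

With $B = \beta I$ the iterate after $t$ steps is a degree-$O(t)$ polynomial in the entries of $A$ and $b$, which is exactly the property Theorems 8 and 9 exploit to avoid clearing denominators of rational expressions.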
6 Experiments

The goal of our experiments was to show that sketching down to the statistical dimension can be applied to regularized weighted low rank approximation without sacrificing overall accuracy in the objective function, as our theory predicts. We combine sketching with a common practical alternating minimization heuristic for solving regularized weighted low rank approximation, rather than implementing a polynomial system solver. At each step of the algorithm, we have a candidate $U$ and $V$ and we perform a best-response step in which we either update $U$ to give the best regularized weighted low rank approximation cost for $V$, or update $V$ to give the best regularized weighted low rank approximation cost for $U$. We used a synthetic dataset and several real datasets (connectus, NIPS, landmark, and language) [DH11, PJST17]. All our experiments ran on a MacBook Pro (2012) with 8GB RAM and a 2.5GHz Intel Core i5 processor.

Figure 1: Regularized weighted low-rank approximations with λ = 0.556 for landmark, λ = 314 for NIPS, and λ = 1 for the synthetic dataset.

For all datasets, the task was to find a rank $k = 50$ decomposition of a given matrix $A$. For the experiments of Figure 1 and Figure 2, we generated dense weight matrices $W$ with the same shape as $A$ and with each entry being 1 with probability 0.8, 0.1 with probability 0.15, and 0.01 with probability 0.05. For the experiments of Figure 3, we generated binary weight matrices in which each entry was 1 with probability 0.9. Note that this setting corresponds to a regularized form of matrix completion. We set the regularization parameter $\lambda$ to a variety of values (described in the figure captions) to illustrate the performance of the algorithm in different settings. For the synthetic dataset, we generated matrices $A$ with dimensions $10000 \times 1000$ by picking random orthogonal vectors as singular vectors and setting one singular value equal to 10000, making the rest small enough that the statistical dimension of $A$ would be approximately 2. For the real datasets, we chose the connectus, landmark, and language datasets [DH11] and the NIPS dataset [PJST17]. We sampled 1000 rows from each adjacency or word count matrix to form a matrix $B$ and then let $A$ be the radial basis function kernel of $B$.

We performed three algorithms on each dataset: Singular Value Decomposition, Alternating Minimization without Sketching, and Alternating Minimization with Sketching. We parameterized the experiments by the sketch size $t$, which took values in {10, 15, 20, 25, 30, 35, 40, 45, 50}. For each value of $t$ we generated a weight matrix and either generated a synthetic dataset or sampled a real dataset as described above, then tested our three algorithms. For the SVD, we simply took the best rank-$k$ approximation to $A$ as given by the top $k$ singular vectors, using the built-in svd function in numpy's linear algebra package. For Alternating Minimization without Sketching, we initialized the low rank factors $U$ and $V$ to be random subsets of the rows and columns of $A$, respectively, and then performed $n = 25$ steps of alternating minimization. For Alternating Minimization with Sketching, we initialized $U$ and $V$ the same way, but performed $n = 25$ best-response updates in the sketched space, as in Theorem 3. The sketch $S$ was chosen to be a CountSketch matrix with $t$ rows. Based on Theorem 5, we computed a rank-$t$ ($t < k$) approximation of $A$ whenever we used a sketch of size $t$.
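To make the sketched best-response step concrete, the NumPy sketch below shows one sketched update of $U$ with $V$ held fixed: each row of $U$ becomes a small ridge regression on sketched, weighted data. This is a simplified stand-in for the experimental code (the CountSketch construction, toy sizes, the weight distribution, and the initialization are illustrative assumptions, not the exact setup above).

```python
import numpy as np

rng = np.random.default_rng(3)

def countsketch(d, t):
    """d x t CountSketch matrix: one random +/-1 entry per row, in a random column."""
    S = np.zeros((d, t))
    cols = rng.integers(0, t, size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    S[np.arange(d), cols] = signs
    return S

def sketched_update_U(A, W, V, lam, t):
    """One sketched best-response step for U (V fixed): each row U_{i,:} solves
    min ||U_{i,:} (V D_{W_i} S) - (A_{i,:} D_{W_i} S)||^2 + lam * ||U_{i,:}||^2."""
    n, d = A.shape
    k = V.shape[0]
    S = countsketch(d, t)
    U = np.zeros((n, k))
    for i in range(n):
        Pi = (V * W[i, :]) @ S           # V D_{W_{i,:}} S, a k x t matrix
        bi = (A[i, :] * W[i, :]) @ S     # A_{i,:} D_{W_{i,:}} S
        U[i, :] = np.linalg.solve(Pi @ Pi.T + lam * np.eye(k), Pi @ bi)
    return U

# Toy instance (sizes, weights, and lambda are illustrative).
n, d, k, t, lam = 200, 100, 10, 30, 1.0
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d)) + 0.1 * rng.standard_normal((n, d))
W = rng.choice([1.0, 0.1], size=(n, d), p=[0.8, 0.2])
V = A[rng.choice(n, k, replace=False), :]      # initialize V from random rows of A
U = sketched_update_U(A, W, V, lam, t)
print("objective:", np.linalg.norm(W * (U @ V - A))**2 + lam * (np.linalg.norm(U)**2 + np.linalg.norm(V)**2))
```

Alternating such sketched updates of $U$ and $V$ (with a matching column-side sketch for the $V$ update) is the procedure whose objective values and runtimes we report next.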
We plotted the objective value of the low rank approximation for the connectus, NIPS, and synthetic datasets (the other datasets, as well as a different family of weight matrices, are discussed in the supplementary material) for each value of $t$ and each algorithm in Figure 1. The experiment with the landmark dataset in Figure 1 used a regularization parameter value of λ = 0.556, while the experiments with the NIPS and synthetic datasets used a value of λ = 1. Objective values are given in units of 1,000 in the Frobenius norm. Both forms of alternating minimization greatly outperform the low rank approximation given by the SVD. Alternating minimization with sketching comes within a factor of 1.5 of alternating minimization without sketching, and can sometimes slightly outperform it (see the supplementary material for additional discussion), showing that performing CountSketch at each best-response step does not result in a critically suboptimal objective value. The runtime of alternating minimization with sketching varies from being around 2 times as fast as alternating minimization without sketching (when the sketch size is $t = 10$) to being around 1.4 times as fast (when the sketch size is $t = 50$). Table 1 shows the runtimes for the non-synthetic experiments of Figure 1.

Figure 2: Regularized weighted low-rank approximations with λ = 2.754 for language, λ = 1 for NIPS, and λ = 1.982 for landmark.

Figure 3: Regularized weighted low-rank approximations with binary weights and λ = 1.

| t  | landmark (w/ sketching) | NIPS (w/ sketching) | landmark (w/o sketching) | NIPS (w/o sketching) |
|----|------------------------|---------------------|--------------------------|----------------------|
| 10 | 54.31 | 49.10 | 126.22 | 104.50 |
| 15 | 53.58 | 50.33 | 113.80 | 105.75 |
| 20 | 57.65 | 51.80 | 119.17 | 104.28 |
| 25 | 65.53 | 56.43 | 121.69 | 104.35 |
| 30 | 68.68 | 57.34 | 123.51 | 105.42 |
| 35 | 72.22 | 62.66 | 129.87 | 100.50 |
| 40 | 79.94 | 63.48 | 123.65 | 101.75 |
| 45 | 81.22 | 67.73 | 109.02 | 104.93 |
| 50 | 72.77 | 73.11 | 100.61 | 101.77 |

Table 1: Runtimes in seconds for alternating minimization with and without sketching.

Acknowledgements: Part of this work was done while D. Woodruff was visiting Google Mountain View as well as the Simons Institute for the Theory of Computing, and was supported in part by Office of Naval Research (ONR) grant N00014-18-1-2562.

References

[ACW17] Haim Avron, Kenneth L. Clarkson, and David P. Woodruff. Sharper bounds for regularized data fitting. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2017, August 16-18, 2017, Berkeley, CA, USA, pages 27:1-27:22, 2017.

[BPR96] Saugata Basu, Richard Pollack, and Marie-Françoise Roy. On the combinatorial and algebraic complexity of quantifier elimination. Journal of the ACM (JACM), 43(6):1002-1045, 1996.

[CD08] Zizhong Chen and Jack J. Dongarra. Condition numbers of Gaussian random matrices. CoRR, abs/0810.0800, 2008.

[CKP+17] Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford, and Adrian Vladu. Almost-linear-time algorithms for Markov chains and new spectral primitives for directed graphs. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 410-419. ACM, 2017.

[CNW16] Michael B. Cohen, Jelani Nelson, and David P. Woodruff. Optimal approximate matrix product in terms of stable rank.
In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi, editors, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), volume 55 of Leibniz International Proceedings in Informatics (LIPIcs), pages 11:1-11:14, Dagstuhl, Germany, 2016. Schloss Dagstuhl-Leibniz-Zentrum für Informatik.

[DH11] Timothy A. Davis and Yifan Hu. The University of Florida sparse matrix collection. ACM Trans. Math. Softw., 38(1):1:1-1:25, December 2011.

[DN11] Saptarshi Das and Arnold Neumaier. Regularized low rank approximation of weighted data sets. Preprint, 2011.

[GG11] Nicolas Gillis and François Glineur. Low-rank matrix approximation with weights or missing data is NP-hard. SIAM Journal on Matrix Analysis and Applications, 32(4):1149-1165, 2011.

[LA03] Wu-Sheng Lu and Andreas Antoniou. New method for weighted low-rank approximation of complex-valued matrices and its application for the design of 2-D digital filters. In ISCAS (3), pages 694-697, 2003.

[LLR16] Yuanzhi Li, Yingyu Liang, and Andrej Risteski. Recovery guarantee of weighted low-rank approximation via alternating minimization. In International Conference on Machine Learning, pages 2358-2367, 2016.

[LPW97] W.-S. Lu, S.-C. Pei, and P.-H. Wang. Weighted low-rank approximation of general complex matrices and its application in the design of 2-D digital filters. In IEEE Transactions on Circuits and Systems, volume 44, pages 650-655, 1997.

[PJST17] Valerio Perrone, Paul A. Jenkins, Dario Spanò, and Yee Whye Teh. Poisson random fields for dynamic feature models. Journal of Machine Learning Research, 18:127:1-127:45, 2017.

[PW15] Mert Pilanci and Martin J. Wainwright. Randomized sketches of convex programs with sharp guarantees. IEEE Transactions on Information Theory, 61(9):5096-5115, 2015.

[Ren92a] James Renegar. On the computational complexity and geometry of the first-order theory of the reals. Part I: Introduction. Preliminaries. The geometry of semi-algebraic sets. The decision problem for the existential theory of the reals. Journal of Symbolic Computation, 13(3):255-299, 1992.

[Ren92b] James Renegar. On the computational complexity and geometry of the first-order theory of the reals. Part II: The general decision problem. Preliminaries for quantifier elimination. Journal of Symbolic Computation, 13(3):301-327, 1992.

[RSW16] Ilya P. Razenshteyn, Zhao Song, and David P. Woodruff. Weighted low rank approximations with provable guarantees. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 250-263, 2016.

[Shp90] D. Shpak. A weighted-least-squares matrix decomposition method with applications to the design of two-dimensional digital filters. In IEEE Thirty Third Midwest Symposium on Circuits and Systems, 1990.

[SJ03] Nathan Srebro and Tommi S. Jaakkola. Weighted low-rank approximations. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA, pages 720-727, 2003.

[TYUC17] Joel A. Tropp, Alp Yurtsever, Madeleine Udell, and Volkan Cevher. Practical sketching algorithms for low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications, 38(4):1454-1485, 2017.

[Woo14] David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1-157, 2014.