# Median Matrix Completion: from Embarrassment to Optimality

Weidong Liu¹, Xiaojun Mao², Raymond K. W. Wong³

*Equal contribution. ¹School of Mathematical Sciences and MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200240, China. ²School of Data Science, Fudan University, Shanghai, 200433, China. ³Department of Statistics, Texas A&M University, College Station, TX 77843, U.S.A. Correspondence to: Xiaojun Mao. Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

In this paper, we consider matrix completion with absolute deviation loss and obtain an estimator of the median matrix. Despite several appealing properties of the median, the non-smooth absolute deviation loss leads to a computational challenge for large-scale data sets, which are increasingly common among matrix completion problems. A simple solution to large-scale problems is parallel computing. However, an embarrassingly parallel approach often leads to inefficient estimators. Based on the idea of pseudo data, we propose a novel refinement step, which turns such inefficient estimators into a rate (near-)optimal matrix completion procedure. The refined estimator is an approximation of a regularized least absolute deviation estimator, and therefore not an ordinary regularized empirical risk estimator. This leads to a non-standard analysis of asymptotic behaviors. Empirical results are also provided to confirm the effectiveness of the proposed method.

## 1. Introduction

Matrix completion (MC) has recently gained a substantial amount of popularity among researchers and practitioners due to its wide applications, as well as various related theoretical advances (Candès & Recht, 2009; Candès & Plan, 2010; Koltchinskii et al., 2011; Klopp, 2014). Perhaps the most well-known example of an MC problem is the Netflix prize problem (Bennett & Lanning, 2007), the goal of which is to predict missing entries of a partially observed matrix of movie ratings. Two commonly shared challenges among MC problems are the high dimensionality of the matrix and a huge proportion of missing entries. For instance, the Netflix data has less than 1% of observed entries of a matrix with around $5 \times 10^5$ rows (customers) and $2 \times 10^4$ columns (movies). With technological advances in data collection, we are confronting increasingly large matrices nowadays.

Without any structural assumption on the target matrix, it is well known that MC is an ill-posed problem. A popular and often well-justified assumption is low-rankness, which however leads to a challenging and non-convex rank minimization problem (Srebro et al., 2005). The seminal works of Candès & Recht (2009); Candès & Tao (2010); Gross (2011) showed that, when the entries are observed without noise, a perfect recovery of a low-rank matrix can be achieved by convex optimization with a near-minimal order of sample size, with high probability. As for the noisy setting, some earlier work (Candès & Plan, 2010; Keshavan et al., 2010; Chen et al., 2019b) focused on arbitrary, not necessarily random, noise. In general, the arbitrariness may prevent asymptotic recovery even in a probability sense. Recently, a significant number of works (e.g.,
Bach, 2008; Koltchinskii et al., 2011; Negahban & Wainwright, 2011; Rohde & Tsybakov, 2011; Negahban & Wainwright, 2012; Klopp, 2014; Cai & Zhou, 2016; Fan et al., 2019; Xia & Yuan, 2019) have targeted more amenable random error models, under which (near-)optimal estimators have been proposed. Among these works, the trace regression model is one of the most popular due to its regression formulation. Assume $N$ independent pairs $(X_k, Y_k)$, for $k = 1, \dots, N$, are observed, where the $X_k$'s are random design matrices of dimension $n_1 \times n_2$ and the $Y_k$'s are response variables in $\mathbb{R}$. The trace regression model assumes
$$Y_k = \mathrm{tr}(X_k^{\mathrm{T}} A_\star) + \epsilon_k, \qquad k = 1, \dots, N, \qquad (1.1)$$
where $\mathrm{tr}(A)$ denotes the trace of a matrix $A$, and $A_\star \in \mathbb{R}^{n_1 \times n_2}$ is an unknown target matrix. Moreover, the elements of $\epsilon = (\epsilon_1, \dots, \epsilon_N)$ are $N$ i.i.d. random noise variables independent of the design matrices. In the MC setup, the design matrices $X_k$'s are assumed to lie in the set of canonical bases
$$\mathcal{X} = \{ e_j(n_1) e_k(n_2)^{\mathrm{T}} : j = 1, \dots, n_1;\ k = 1, \dots, n_2 \}, \qquad (1.2)$$
where $e_j(n_1)$ is the $j$-th unit vector in $\mathbb{R}^{n_1}$, and $e_k(n_2)$ is the $k$-th unit vector in $\mathbb{R}^{n_2}$. Most methods then apply a regularized empirical risk minimization (ERM) framework with a quadratic loss. It is well known that the quadratic loss is most suitable for light-tailed (sub-Gaussian) errors, and leads to non-robust estimations. In the era of big data, a thorough and accurate data cleaning step, as part of data preprocessing, becomes virtually impossible. In this regard, one could argue that robust estimations are more desirable, due to their reliable performances even in the presence of outliers and violations of model assumptions. While robust statistics is a well-studied area with a rich history (Davies, 1993; Huber, 2011), many robust methods were developed for small data by today's standards, and are deemed too computationally intensive for big data or complex models. This work can be treated as part of the general effort to broaden the applicability of robust methods to modern data problems.

### 1.1. Related Work

Many existing robust MC methods adopt regularized ERM and assume observations are obtained from a low-rank-plus-sparse structure $A_\star + S + E$, where the low-rank matrix $A_\star$ is the target uncontaminated component; the sparse matrix $S$ models the gross corruptions (outliers) located at a small proportion of entries; and $E$ is an optional (dense) noise component. As gross corruptions are already taken into account, many methods with the low-rank-plus-sparse structure are based on the quadratic loss. Chandrasekaran et al. (2011); Candès et al. (2011); Chen et al. (2013); Li (2013) considered the noiseless setting (i.e., no $E$) with an element-wise sparse $S$. Chen et al. (2011) studied the noiseless model with a column-wise sparse $S$. Under the model with element-wise sparse $S$, Wong & Lee (2017) looked into the setting of arbitrary (not necessarily random) noise $E$, while Klopp et al. (2017) and Chen et al. (2020) studied random (sub-Gaussian) noise models for $E$. In particular, it was shown in Proposition 3 of Wong & Lee (2017) that in the regularized ERM framework, a quadratic loss with an element-wise $\ell_1$ penalty on the sparse component is equivalent to a direct application of a Huber loss without the sparse component. Roughly speaking, this class of robust methods, based on the low-rank-plus-sparse structure, can be understood as regularized ERMs with Huber loss.
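To illustrate the equivalence noted in Proposition 3 of Wong & Lee (2017), the following Python sketch (ours, not from the paper; the function names are hypothetical) checks numerically, entry by entry, that profiling out an $\ell_1$-penalized sparse component from a quadratic loss yields exactly the Huber loss.

```python
import numpy as np

def huber(r, lam):
    """Huber loss with threshold lam."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= lam, 0.5 * r ** 2, lam * np.abs(r) - 0.5 * lam ** 2)

def quad_plus_l1_profiled(r, lam):
    """min_s 0.5*(r - s)^2 + lam*|s|; the minimizer is the soft-thresholded s."""
    r = np.asarray(r, dtype=float)
    s = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return 0.5 * (r - s) ** 2 + lam * np.abs(s)

resid = np.linspace(-5.0, 5.0, 101)
assert np.allclose(huber(resid, lam=1.3), quad_plus_l1_profiled(resid, lam=1.3))
```

The check works for any threshold value, which is exactly why the tuning parameter of the Huber loss plays the same role as the tuning parameter of the entry-wise $\ell_1$ penalty.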
Another class of robust MC methods is based on the absolute deviation loss, formally defined in (2.1). The minimizer of the corresponding risk has an interpretation as a median (see Section 2.1), and so the regularized ERM framework that applies the absolute deviation loss is coined median matrix completion (Elsener & van de Geer, 2018; Alquier et al., 2019). In the trace regression model, if the medians of the noise variables are zero, the median MC estimator can be treated as a robust estimator of $A_\star$. Although the median is one of the most commonly used robust statistics, median MC methods have not been studied until recently. Elsener & van de Geer (2018) derived the asymptotic behavior of the trace-norm regularized estimators under both the absolute deviation loss and the Huber loss. Their convergence rates match the rate obtained in Koltchinskii et al. (2011) under certain conditions. More complete asymptotic results have been developed in Alquier et al. (2019), which derived the minimax rates of convergence for any Lipschitz loss function, including the absolute deviation loss. To the best of our knowledge, the only existing computational algorithm for median MC in the literature is proposed by Alquier et al. (2019): an alternating direction method of multipliers (ADMM) algorithm developed for quantile MC, with median MC as a special case. However, this algorithm is slow and not scalable to large matrices due to the non-smooth nature of both the absolute deviation loss and the regularization term.

Despite the computational challenges, the absolute deviation loss has a few appealing properties as compared to the Huber loss. First, the absolute deviation loss is tuning-free, while the Huber loss has a tuning parameter, which is equivalent to the tuning parameter in the entry-wise $\ell_1$ penalty in the low-rank-plus-sparse model. Second, the absolute deviation loss is generally more robust than the Huber loss. Third, the minimizer of the expected absolute deviation loss is naturally tied to the median, and is generally more interpretable than the minimizer of the expected Huber loss (which may vary with its tuning parameter).

### 1.2. Our Goal and Contributions

Our goal is to develop a robust and scalable estimator for median MC in large-scale problems. The proposed estimator approximately solves the regularized ERM with the non-differentiable absolute deviation loss. It is obtained through two major stages: (1) a fast and simple initial estimation via embarrassingly parallel computing, and (2) a refinement stage based on pseudo data. As pointed out earlier (with more details in Section 2.2), the existing computational strategy (Alquier et al., 2019) does not scale well with the dimensions of the matrix. Inspired by Mackey et al. (2015), a simple strategy is to divide the target matrix into small sub-matrices, perform median MC on every sub-matrix in an embarrassingly parallel fashion, and then naively concatenate all estimates of these sub-matrices to form an initial estimate of the target matrix; see the sketch after this paragraph. In this way, most computations are done on much smaller sub-matrices, and hence this computational strategy is much more scalable. However, since low-rankness is generally a global (whole-matrix) structure, the lack of communication between the computations of different sub-matrices leads to sub-optimal estimation (Mackey et al., 2015). The key innovation of this paper is a fast refinement stage, which transforms the regularized ERM with absolute deviation loss into a regularized ERM with quadratic loss, for which many fast algorithms exist, via the idea of pseudo data.
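The following Python sketch (ours; a minimal illustration, not the authors' implementation) shows the embarrassingly parallel strategy described above: the index set is cut into blocks, a per-block median MC solver (here a placeholder `solve_block` standing in for the ADMM of Alquier et al., 2019) is run on each block in parallel, and the block estimates are concatenated.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def block_slices(n, m):
    """Split indices 0..n-1 into contiguous blocks of (roughly) size m."""
    return [slice(i, min(i + m, n)) for i in range(0, n, m)]

def distributed_initial_estimate(rows, cols, y, shape, m1, m2, solve_block):
    """Embarrassingly parallel initial estimator: run `solve_block` (any median
    MC solver returning a block-sized estimate) on each sub-matrix, then place
    the block estimates back into their original positions."""
    n1, n2 = shape
    A0 = np.zeros(shape)
    jobs = []
    with ProcessPoolExecutor() as pool:
        for rs in block_slices(n1, m1):
            for cs in block_slices(n2, m2):
                in_block = ((rows >= rs.start) & (rows < rs.stop) &
                            (cols >= cs.start) & (cols < cs.stop))
                jobs.append((rs, cs, pool.submit(
                    solve_block,
                    rows[in_block] - rs.start, cols[in_block] - cs.start,
                    y[in_block], (rs.stop - rs.start, cs.stop - cs.start))))
        for rs, cs, fut in jobs:
            A0[rs, cs] = fut.result()   # concatenate block estimates
    return A0
```

Because each call of `solve_block` touches only an $m_1 \times m_2$ block, the per-SVD cost inside the block solver drops from $O(n_1^2 n_2 + n_1 n_2^2)$ to $O(m_1^2 m_2 + m_1 m_2^2)$, as discussed in Section 2.2.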
Motivated by Chen et al. (2019a), we develop the pseudo data based on a Newton-Raphson iteration in expectation. The construction of the pseudo data requires only a rough initial estimate (see Condition (C5) in Section 3), which is obtained in the first stage. As compared to Huber-loss-based methods (the sparse-plus-low-rank model), the underlying absolute deviation loss is non-differentiable, leading to computational difficulty for large-scale problems. The proposed strategy involves a novel refinement stage to efficiently combine and improve the embarrassingly parallel sub-matrix estimations. We are able to show theoretically that this refinement stage improves the convergence rate of the sub-optimal initial estimator to near-optimal order, as good as the computationally expensive median MC estimator of Alquier et al. (2019). To the best of our knowledge, this theoretical guarantee for distributed computing is the first of its kind in the literature of matrix completion.

## 2. Model and Algorithms

### 2.1. Regularized Least Absolute Deviation Estimator

Let $A_\star = (A_{\star,ij})_{i,j=1}^{n_1,n_2} \in \mathbb{R}^{n_1 \times n_2}$ be an unknown high-dimensional matrix. Assume the $N$ pairs of observations $\{(X_k, Y_k)\}_{k=1}^{N}$ satisfy the trace regression model (1.1) with noise $\{\epsilon_k\}_{k=1}^{N}$. The design matrices are assumed to be i.i.d. random matrices taking values in $\mathcal{X}$ of (1.2). Let $\pi_{st} = \Pr(X_k = e_s(n_1) e_t(n_2)^{\mathrm{T}})$ be the probability of observing (a noisy realization of) the $(s,t)$-th entry of $A_\star$ and denote $\Pi = (\pi_{1,1}, \dots, \pi_{n_1,n_2})^{\mathrm{T}}$. Instead of the uniform sampling where $\pi_{st} \equiv \pi$ (Koltchinskii et al., 2011; Rohde & Tsybakov, 2011; Elsener & van de Geer, 2018), our setup allows the sampling probabilities to differ across entries, as in Klopp (2014); Lafond (2015); Cai & Zhou (2016); Alquier et al. (2019). See Condition (C1) for more details. Overall, $(Y_1, X_1, \epsilon_1), \dots, (Y_N, X_N, \epsilon_N)$ are i.i.d. tuples of random variables. For notational simplicity, we let $(Y, X, \epsilon)$ be a generic independent tuple of random variables with the same distribution as $(Y_1, X_1, \epsilon_1)$.

Without additional specification, the noise variable $\epsilon$ is not identifiable. For example, one can subtract a constant from all entries of $A_\star$ and add this constant to the noise. To identify the noise, we assume $\Pr(\epsilon \le 0) = 0.5$, which naturally leads to an interpretation of $A_\star$ as a median, i.e., $A_{\star,ij}$ is the median of $Y \mid X = e_i(n_1) e_j(n_2)^{\mathrm{T}}$. If the noise distribution is symmetric and light-tailed (so that the expectation exists), then $\mathbb{E}(\epsilon_k) = 0$, and $A_\star$ is also the mean matrix ($A_{\star,ij} = \mathbb{E}(Y \mid X = e_i(n_1) e_j(n_2)^{\mathrm{T}})$), which aligns with the target of common MC techniques (Elsener & van de Geer, 2018). Let $f$ be the probability density function of the noise. For the proposed method, the required condition on $f$ is specified in Condition (C3) of Section 3, which is fairly mild and is satisfied by many heavy-tailed distributions whose expectation may not exist.

Define a hypothesis class $\mathcal{B}(a, n, m) = \{A \in \mathbb{R}^{n \times m} : \|A\|_\infty \le a\}$, where $a > 0$ is such that $A_\star \in \mathcal{B}(a, n_1, n_2)$. In this paper, we use the absolute deviation loss instead of the common quadratic loss (e.g., Candès & Plan, 2010; Koltchinskii et al., 2011; Klopp, 2014). According to Section 4 of the Supplementary Material of Elsener & van de Geer (2018), $A_\star$ is also characterized as the minimizer of the population risk:
$$A_\star = \arg\min_{A \in \mathcal{B}(a, n_1, n_2)} \mathbb{E}\,\big|Y - \mathrm{tr}(X^{\mathrm{T}} A)\big|. \qquad (2.1)$$
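As a quick numerical illustration of why (2.1) yields a median (a sketch of ours, not part of the paper): for a single entry, the value $t$ minimizing the empirical absolute deviation $\sum_i |y_i - t|$ is the sample median, even when the noise is so heavy-tailed that the mean is useless.

```python
import numpy as np

rng = np.random.default_rng(1)
y = 2.0 + rng.standard_cauchy(10_000)          # Cauchy noise around a true value of 2.0
grid = np.linspace(-2.0, 6.0, 801)
abs_risk = np.abs(y[:, None] - grid[None, :]).mean(axis=0)   # empirical E|Y - t| over the grid
print(grid[abs_risk.argmin()], np.median(y), np.mean(y))
# The absolute-deviation minimizer and the median are both close to 2.0,
# while the sample mean can be arbitrarily far off under Cauchy noise.
```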
To encourage a low-rank solution, one natural candidate is the following regularized empirical risk estimator (Elsener & van de Geer, 2018; Alquier et al., 2019):
$$\widehat{A}_{\mathrm{LADMC}} = \arg\min_{A \in \mathcal{B}(a, n_1, n_2)} \frac{1}{N} \sum_{k=1}^{N} \big|Y_k - \mathrm{tr}(X_k^{\mathrm{T}} A)\big| + \lambda_N \|A\|_*, \qquad (2.2)$$
where $\|A\|_*$ denotes the nuclear norm and $\lambda_N \ge 0$ is a tuning parameter. The nuclear norm is a convex relaxation of the rank, which facilitates both the optimization and the analysis of statistical properties (Candès & Recht, 2009). Due to the non-differentiability of the absolute deviation loss, the objective function in (2.2) is the sum of two non-differentiable terms, rendering common computational strategies based on the proximal gradient method (e.g., Mazumder et al., 2010; Wong & Lee, 2017) inapplicable. To the best of our knowledge, there is only one existing computational algorithm for (2.2), which is based on a direct application of the alternating direction method of multipliers (ADMM) (Alquier et al., 2019). However, this algorithm is slow and not scalable in practice when the sample size and the matrix dimensions are large, possibly due to the non-differentiable nature of the loss. We aim to derive a computationally efficient method for estimating the median matrix $A_\star$ in large-scale MC problems. More specifically, the proposed method consists of two stages: (1) an initial estimation via distributed computing (Section 2.2) and (2) a refinement stage to achieve near-optimal estimation (Section 2.3).

### 2.2. Distributed Initial Estimator

As in many large-scale problems, it is common to harness distributed computing to overcome computational barriers. Motivated by Mackey et al. (2015), we divide the underlying matrix into several sub-matrices, estimate each sub-matrix separately in an embarrassingly parallel fashion, and then combine them to form a computationally efficient (initial) estimator of $A_\star$.

Figure 1. An example of dividing a matrix into sub-matrices.

For the convenience of notation, suppose there exist integers $m_1$, $m_2$, $l_1$ and $l_2$ such that $l_1 = n_1/m_1$ and $l_2 = n_2/m_2$. (Otherwise, the following description can be easily extended with $l_1 = \lceil n_1/m_1 \rceil$ and $l_2 = \lceil n_2/m_2 \rceil$, which leads to slightly different sizes for several sub-matrices.) We divide the row indices $1, \dots, n_1$ evenly into $l_1$ subsets, where each subset contains $m_1$ indices, and similarly divide the column indices $1, \dots, n_2$ evenly into $l_2$ subsets. We then obtain $l_1 l_2$ sub-matrices, denoted by $A_{\star,l} \in \mathbb{R}^{m_1 \times m_2}$ for $l = 1, \dots, l_1 l_2$. See Figure 1 for a pictorial illustration. Let $\Omega_l$ be the index set of the observed entries within the $l$-th sub-matrix $A_{\star,l}$, and $N_l$ be the corresponding number of observed entries. Next, we apply the ADMM algorithm of Alquier et al. (2019) to each sub-matrix $A_{\star,l}$ and obtain the corresponding median estimator:
$$\widehat{A}_{\mathrm{LADMC},l} = \arg\min_{A_l \in \mathcal{B}(a, m_1, m_2)} \frac{1}{N_l} \sum_{k \in \Omega_l} \big|Y_k - \mathrm{tr}(X_{l,k}^{\mathrm{T}} A_l)\big| + \lambda_{N_l,l} \|A_l\|_*, \qquad (2.3)$$
where $X_{l,k}$ is the corresponding sub-design matrix of dimension $m_1 \times m_2$ and $\lambda_{N_l,l} \ge 0$ is a tuning parameter. Note that the most computationally intensive sub-routine in the ADMM algorithm of Alquier et al. (2019) is (repeated applications of) the SVD. For sub-matrices of dimension $m_1 \times m_2$, the computational complexity of a single SVD is reduced from $O(n_1^2 n_2 + n_1 n_2^2)$ to $O(m_1^2 m_2 + m_1 m_2^2)$. After we have obtained all the $\widehat{A}_{\mathrm{LADMC},l}$ for $l = 1, \dots, l_1 l_2$,
we can put these estimators of the sub-matrices back together according to their original positions in the target matrix (see Figure 1) and form an initial estimator $\widehat{A}_{\mathrm{LADMC},0}$. This computational strategy is conceptually simple and easily implementable. However, despite the low-rank estimation for each sub-matrix, combining them directly cannot guarantee the low-rankness of $\widehat{A}_{\mathrm{LADMC},0}$. Also, the convergence rate of $\widehat{A}_{\mathrm{LADMC},0}$ is not guaranteed to be (near-)optimal as long as $m_1, m_2$ are of smaller order than $n_1, n_2$, respectively; see Theorem 1(i) in Section 3. However, for computational benefits, it is desirable to choose small $m_1, m_2$. In the next section, we leverage this initial estimator and formulate a refinement stage.

### 2.3. The Idea of Refinement

The proposed refinement stage is based on a form of pseudo data, which leverages the idea of the Newton-Raphson iteration. To describe this idea, we start from the stochastic optimization problem (2.1). Write the loss function as $L(A; \{Y, X\}) = |Y - \mathrm{tr}(X^{\mathrm{T}} A)|$. To solve this stochastic optimization problem, the population version of the Newton-Raphson iteration takes the following form:
$$\mathrm{vec}(A_1) = \mathrm{vec}(\widehat{A}_0) - H(\widehat{A}_0)^{-1} \mathbb{E}_{(Y,X)}\big[ l(\widehat{A}_0; \{Y, X\}) \big], \qquad (2.4)$$
where $(Y, X)$ is defined in Section 2.1 to be independent of the data; $\mathrm{vec}(A)$ is the vectorization of the matrix $A$; $\widehat{A}_0$ is an initial estimator (to be specified below); and $l(A; \{Y, X\})$ is the sub-gradient of $L(A; \{Y, X\})$ with respect to $\mathrm{vec}(A)$. One can show that the population Hessian matrix takes the form
$$H(A) = 2\,\mathbb{E}_{(Y,X)}\big[ f\{\mathrm{tr}(X^{\mathrm{T}}(A - A_\star))\}\,\mathrm{vec}(X)\mathrm{vec}(X)^{\mathrm{T}} \big];$$
in particular, $H(A_\star) = 2 f(0)\,\mathrm{diag}(\Pi)$, where we recall that $\Pi = (\pi_{1,1}, \dots, \pi_{n_1,n_2})^{\mathrm{T}}$ is the vector of observation probabilities, and $\mathrm{diag}(\cdot)$ transforms a vector into a diagonal matrix whose diagonal is that vector. Also, it can be shown that
$$\mathbb{E}_{(Y,X)}\big[l(A; \{Y, X\})\big] = \mathbb{E}_{(Y,X)}\big[\mathrm{vec}(X)\big( 2\, I\{Y - \mathrm{tr}(X^{\mathrm{T}} A) \le 0\} - 1 \big)\big].$$
Recall that $f(x)$ is the density function of the noise $\epsilon$. Using $H(A_\star) = 2 f(0)\,\mathrm{diag}(\Pi)$ in (2.4), when the initial estimator $\widehat{A}_0$ is close to the minimizer $A_\star$, we obtain the following approximation:
$$\begin{aligned}
\mathrm{vec}(A_1) &\approx \mathrm{vec}(\widehat{A}_0) - [2 f(0)\,\mathrm{diag}(\Pi)]^{-1} \mathbb{E}_{(Y,X)}\big[ l(\widehat{A}_0; \{Y, X\}) \big] \\
&= [\mathrm{diag}(\Pi)]^{-1} \mathbb{E}_{(Y,X)}\Big[ \mathrm{vec}(X) \Big\{ \mathrm{vec}(X)^{\mathrm{T}} \mathrm{vec}(\widehat{A}_0) - [f(0)]^{-1} \big( I\{ Y - \mathrm{tr}(X^{\mathrm{T}} \widehat{A}_0) \le 0 \} - \tfrac{1}{2} \big) \Big\} \Big] \\
&= [\mathrm{diag}(\Pi)]^{-1} \mathbb{E}_{(Y,X)}\big[ \mathrm{vec}(X)\, \tilde{Y}^{o} \big] \qquad (2.5) \\
&= \big\{ \mathbb{E}_{(Y,X)}[\mathrm{vec}(X)\mathrm{vec}(X)^{\mathrm{T}}] \big\}^{-1} \mathbb{E}_{(Y,X)}\big[ \mathrm{vec}(X)\, \tilde{Y}^{o} \big],
\end{aligned}$$
where we used $\mathrm{diag}(\Pi) = \mathbb{E}_{(Y,X)}[\mathrm{vec}(X)\mathrm{vec}(X)^{\mathrm{T}}]$ and defined the theoretical pseudo data
$$\tilde{Y}^{o} = \mathrm{tr}(X^{\mathrm{T}} \widehat{A}_0) - [f(0)]^{-1} \Big( I\big\{ Y - \mathrm{tr}(X^{\mathrm{T}} \widehat{A}_0) \le 0 \big\} - \tfrac{1}{2} \Big).$$
Clearly, (2.5) is the vectorization of the solution to $\arg\min_{A} \mathbb{E}_{(Y,X)}\{\tilde{Y}^{o} - \mathrm{tr}(X^{\mathrm{T}} A)\}^2$, where $\mathrm{tr}(X^{\mathrm{T}} A) = \mathrm{vec}(X)^{\mathrm{T}} \mathrm{vec}(A)$. From this heuristic argument, we can approximate the population Newton update by a least squares solution based on the pseudo data $\tilde{Y}^{o}$, when we start from an $\widehat{A}_0$ close enough to $A_\star$.

Without the knowledge of $f(0)$, the pseudo data cannot be used. In the above, $f(0)$ can easily be estimated by the kernel density estimator
$$\hat{f}(0) = \frac{1}{N h} \sum_{k=1}^{N} K\!\left( \frac{Y_k - \mathrm{tr}(X_k^{\mathrm{T}} \widehat{A}_0)}{h} \right),$$
where $K(x)$ is a kernel function satisfying Condition (C4) and $h > 0$ is the bandwidth. For each $1 \le k \le N$, we define the actual pseudo data used in our proposed procedure as
$$\tilde{Y}_k = \mathrm{tr}(X_k^{\mathrm{T}} \widehat{A}_0) - [\hat{f}(0)]^{-1} \Big( I\big\{ Y_k - \mathrm{tr}(X_k^{\mathrm{T}} \widehat{A}_0) \le 0 \big\} - \tfrac{1}{2} \Big),$$
and $\tilde{Y} = (\tilde{Y}_k)$. In finite samples, regularization is imposed to estimate the high-dimensional parameter $A_\star$. Using $\tilde{Y}$, one natural candidate for the estimator of $A_\star$ is given by
$$\widehat{A} = \arg\min_{A \in \mathcal{B}(a, n_1, n_2)} \frac{1}{N} \sum_{k=1}^{N} \big\{ \tilde{Y}_k - \mathrm{tr}(X_k^{\mathrm{T}} A) \big\}^2 + \lambda_N \|A\|_*, \qquad (2.6)$$
where $\|\cdot\|_*$ is the nuclear norm and $\lambda_N \ge 0$ is the tuning parameter.
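As an illustration of the refinement step (a sketch of ours under simplifying assumptions, not the authors' code): estimate $f(0)$ with a kernel density estimate of the residuals, build the pseudo data $\tilde{Y}_k$, and then solve the quadratic-loss problem (2.6). Here the quadratic problem is handled by plain proximal gradient with singular-value soft-thresholding, but any standard nuclear-norm-regularized least squares solver could be plugged in; the kernel below is the standard bi-weight, not necessarily the paper's exact choice.

```python
import numpy as np

def biweight_kernel(u):
    """A compactly supported kernel, K(u) = (15/16)(1 - u^2)^2 on [-1, 1].
    (Any kernel satisfying Condition (C4) could be used here.)"""
    u = np.asarray(u)
    return np.where(np.abs(u) <= 1.0, 15.0 / 16.0 * (1.0 - u ** 2) ** 2, 0.0)

def make_pseudo_data(rows, cols, y, A_prev, h):
    """Pseudo data: Y~_k = fit_k - (1{resid_k <= 0} - 1/2) / f_hat(0)."""
    fitted = A_prev[rows, cols]                          # tr(X_k^T A_prev)
    resid = y - fitted
    f0_hat = biweight_kernel(resid / h).sum() / (len(y) * h)
    f0_hat = max(f0_hat, 1e-12)                          # guard against a degenerate bandwidth
    return fitted - (np.where(resid <= 0.0, 1.0, 0.0) - 0.5) / f0_hat

def refine_step(rows, cols, y, A_prev, h, lam, n_iter=100, step=1.0):
    """One refinement: nuclear-norm-regularized least squares on the pseudo data,
    solved by proximal gradient (gradient step + singular-value soft-thresholding)."""
    N = len(y)
    y_tilde = make_pseudo_data(rows, cols, y, A_prev, h)
    A = A_prev.copy()
    for _ in range(n_iter):
        grad = np.zeros_like(A)
        np.add.at(grad, (rows, cols), (A[rows, cols] - y_tilde) / N)  # grad of (1/2N) sum (.)^2
        U, s, Vt = np.linalg.svd(A - step * grad, full_matrices=False)
        A = (U * np.maximum(s - step * lam, 0.0)) @ Vt                # prox of lam * ||A||_*
    return A
```

Repeating `refine_step` with updated bandwidths $h_t$ and tuning parameters $\lambda_{N,t}$ gives the iterative procedure summarized in Algorithm 1.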
If $\tilde{Y}$ is replaced by $Y$, the optimization (2.6) is a common nuclear-norm regularized empirical risk estimator with quadratic loss, which has been well studied in the literature (Candès & Recht, 2009; Candès & Plan, 2010; Koltchinskii et al., 2011; Klopp, 2014). Therefore, given $\tilde{Y}$, the corresponding computational algorithms can be adopted to solve (2.6). Note that the pseudo data are based on an initial estimator $\widehat{A}_0$. In Section 3, we show that any initial estimator that fulfills Condition (C5) can be improved by (2.6), which is therefore called a refinement step. It is easy to verify that the initial estimator $\widehat{A}_{\mathrm{LADMC},0}$ of Section 2.2 fulfills this condition. Note that an initial estimator like $\widehat{A}_{\mathrm{LADMC},0}$ introduces complicated dependencies among the entries of $\tilde{Y}$, which brings new challenges in analyzing (2.6), as opposed to the common estimator based on $Y$ with independent entries.

From our theory (Section 3), the refined estimator (2.6) improves upon the initial estimator. Depending on how bad the initial estimator is, a single refinement step may not be enough to achieve a (near-)optimal estimator. But this can be remedied by reapplying the refinement step again and again. In Section 3, we show that a finite number of applications of the refinement step is enough; in our numerical experiments, 4-5 applications usually produce enough improvement. Write $\widehat{A}^{(1)} = \widehat{A}$ given in (2.6) for the estimator from the first iteration; we can then construct an iterative procedure to estimate $A_\star$. In particular, let $\widehat{A}^{(t-1)}$ be the estimator from the $(t-1)$-th iteration. Define
$$\hat{f}^{(t)}(0) = \frac{1}{N h_t} \sum_{k=1}^{N} K\!\left( \frac{Y_k - \mathrm{tr}(X_k^{\mathrm{T}} \widehat{A}^{(t-1)})}{h_t} \right),$$
where $K(x)$ is the same smoothing function used to estimate $f(0)$ in the first step and $h_t > 0$ is the bandwidth for the $t$-th iteration. Similarly, for each $1 \le k \le N$, define
$$\tilde{Y}^{(t)}_k = \mathrm{tr}(X_k^{\mathrm{T}} \widehat{A}^{(t-1)}) - [\hat{f}^{(t)}(0)]^{-1} \Big( I\big\{ Y_k - \mathrm{tr}(X_k^{\mathrm{T}} \widehat{A}^{(t-1)}) \le 0 \big\} - \tfrac{1}{2} \Big). \qquad (2.7)$$
We propose the following estimator:
$$\widehat{A}^{(t)} = \arg\min_{A \in \mathcal{B}(a, n_1, n_2)} \frac{1}{N} \sum_{k=1}^{N} \big\{ \tilde{Y}^{(t)}_k - \mathrm{tr}(X_k^{\mathrm{T}} A) \big\}^2 + \lambda_{N,t} \|A\|_*, \qquad (2.8)$$
where $\lambda_{N,t}$ is the tuning parameter in the $t$-th iteration. To summarize, we list the full algorithm in Algorithm 1.

## 3. Theoretical Guarantee

To begin with, we introduce several notations. Let $m_+ = m_1 + m_2$, $m_{\max} = \max\{m_1, m_2\}$ and $m_{\min} = \min\{m_1, m_2\}$. Similarly, write $n_+ = n_1 + n_2$, $n_{\max} = \max\{n_1, n_2\}$ and $n_{\min} = \min\{n_1, n_2\}$. For a given matrix $A = (A_{ij}) \in \mathbb{R}^{n_1 \times n_2}$, let $\sigma_i(A)$ denote the $i$-th largest singular value of $A$. Let $\|A\| = \sigma_1(A)$, $\|A\|_\infty = \max_{i,j} |A_{ij}|$, $\|A\|_F = \big(\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} A_{ij}^2\big)^{1/2}$ and $\|A\|_* = \sum_{i=1}^{n_{\min}} \sigma_i(A)$ be the spectral norm (operator norm), the infinity norm, the Frobenius norm and the trace norm of the matrix $A$, respectively. Define a class of matrices $\mathcal{C}(n_1, n_2) = \{A \in \mathbb{R}^{n_1 \times n_2} : \|A\|_\infty \le 1\}$. Denote the rank of $A_\star$ by $r = \mathrm{rank}(A_\star)$ for simplicity. With these notations, we describe the following conditions, which are useful in our theoretical analysis.

(C1) For each $k = 1, \dots, N$, the design matrix $X_k$ takes values in the canonical basis $\mathcal{X}$ defined in (1.2). There exist positive constants $\underline{c}$ and $\bar{c}$ such that for any $(s, t) \in \{1, \dots, n_1\} \times \{1, \dots, n_2\}$,
$$\underline{c}/(n_1 n_2) \le \Pr\big(X_k = e_s(n_1) e_t(n_2)^{\mathrm{T}}\big) \le \bar{c}/(n_1 n_2).$$

**Algorithm 1** Distributed Least Absolute Deviation Matrix Completion

Input: Observed data pairs $\{X_k, Y_k\}$ for $k = 1, \dots, N$; the number of observations $N$; the dimensions $n_1, n_2$ of the design matrices; the dimensions $m_1, m_2$ of the sub-matrices used to construct the initial estimator and the split subsets $\Omega_l$ for $l = 1, \dots, l_1 l_2$; a kernel function $K$; a sequence of bandwidths $h_t$; and the regularization parameters $\lambda_{N,t}$ for $t = 1, \dots, T$.
1: Get the robust low-rank estimator of each $A_{\star,l}$ by solving the minimization problem (2.3) in parallel.
2: Set $\widehat{A}^{(0)}$ to be the initial estimator $\widehat{A}_{\mathrm{LADMC},0}$ formed by putting the $\widehat{A}_{\mathrm{LADMC},l}$ together.
3: **for** $t = 1, 2, \dots, T$ **do**
4: Compute $\hat{f}^{(t)}(0) := (N h_t)^{-1} \sum_{k=1}^{N} K\big(h_t^{-1}(Y_k - \mathrm{tr}\{X_k^{\mathrm{T}} \widehat{A}^{(t-1)}\})\big)$.
5: Construct the pseudo data $\{\tilde{Y}^{(t)}_k\}$ by equation (2.7).
6: Plug in the pseudo data $\{\tilde{Y}^{(t)}_k\}$ and compute the estimator $\widehat{A}^{(t)}$ by solving the minimization problem (2.8).
7: **end for**
Output: The final estimator $\widehat{A}^{(T)}$.

(C2) The local dimensions $m_1, m_2$ of each block satisfy $m_1 \asymp n_1^c$ and $m_2 \asymp n_2^c$ for some $0 < c < 1$. The numbers of observations in the blocks are comparable for all $l = 1, \dots, l_1 l_2$, i.e., $N_l \asymp m_1 m_2 N/(n_1 n_2)$.

(C3) The density function $f(\cdot)$ is Lipschitz continuous (i.e., $|f(x) - f(y)| \le C_L |x - y|$ for any $x, y \in \mathbb{R}$ and some constant $C_L > 0$). Moreover, there exists a constant $c_f > 0$ such that $f(u) \ge c_f$ for any $|u| \le 2a$. Also, $\Pr(\epsilon_k \le 0) = 0.5$ for each $k = 1, \dots, N$.

**Theorem 1** (Alquier et al. (2019), Theorem 4.6; initial estimator). Suppose that Conditions (C1)-(C3) hold and $A_\star \in \mathcal{B}(a, n_1, n_2)$. For each $l = 1, \dots, n_1 n_2/(m_1 m_2)$, assume that there exists a matrix with rank at most $s_l$ in $A_{\star,l} + (\rho_{s_l}/20)\,\mathcal{C}(m_1, m_2)$, where $\rho_{s_l} = C_\rho (s_l m_1 m_2) \sqrt{\log(m_+)/(m_+ N_l)}$ with a universal constant $C_\rho$.

(i) Then there exist universal constants $c(\underline{c}, \bar{c})$ and $C$ such that with $\lambda_{N_l,l} = c(\underline{c}, \bar{c}) \sqrt{\log(m_+)/(m_{\min} N_l)}$, the estimator $\widehat{A}_{\mathrm{LADMC},l}$ in (2.3) satisfies
$$\frac{\|\widehat{A}_{\mathrm{LADMC},l} - A_{\star,l}\|_F^2}{m_1 m_2} \le C \max\left\{ \frac{s_l\, m_{\max} \log(m_+)}{N_l},\ \|A_{\star,l}\|_\infty \left( \frac{m_{\max} \log(m_+)}{N_l} \right)^{1/2} \right\}$$
with probability at least $1 - C \exp\big(-C s_l m_{\max} \log(m_+)\big)$.

(ii) Moreover, by putting these $l_1 l_2$ estimators $\widehat{A}_{\mathrm{LADMC},l}$ together, for the same constant $C$ as in (i), the initial estimator $\widehat{A}_{\mathrm{LADMC},0}$ satisfies
$$\frac{\|\widehat{A}_{\mathrm{LADMC},0} - A_\star\|_F^2}{n_1 n_2} \le C \max\left\{ \frac{\big(\sum_{l=1}^{l_1 l_2} s_l\big) m_{\max} \log(m_+)}{N},\ \max_{1 \le l \le l_1 l_2} \|A_{\star,l}\|_\infty \left( \frac{(n_1 n_2)\, m_{\max} \log(m_+)}{m_1 m_2 N} \right)^{1/2} \right\}$$
with probability at least $1 - C \exp\big(\log(n_1 n_2) - C m_{\max} \log(m_+)\big)$.

From Theorem 1, we can guarantee the convergence of the sub-matrix estimator $\widehat{A}_{\mathrm{LADMC},l}$ when $m_1, m_2 \to \infty$. For the initial estimator $\widehat{A}_{\mathrm{LADMC},0}$, under Condition (C2) and the assumption that all the sub-matrices are low-rank ($s_l \asymp 1$ for all $l$), we require the number of observations $N \ge C_1 (m_1 m_2)^{-1} (n_1 n_2) m_{\max} \log(m_+)$ for some constant $C_1$ to ensure convergence. As for the rate of convergence, $\sqrt{(n_1 n_2) m_{\max} \log(m_+)/(N m_1 m_2)}$ is slower than the classical optimal rate $\sqrt{r\, n_{\max} \log(n_+)/N}$ when $m_1, m_2$ are of smaller order than $n_1, n_2$, respectively.

(C4) Assume the kernel function $K(\cdot)$ is integrable with $\int K(u)\,du = 1$. Moreover, assume that $K(\cdot)$ satisfies $K(u) = 0$ if $|u| \ge 1$. Further, assume that $K(\cdot)$ is differentiable and its derivative $K'(\cdot)$ is bounded.

(C5) The initial estimator $\widehat{A}_0$ satisfies $(n_1 n_2)^{-1/2} \|\widehat{A}_0 - A_\star\|_F = O_P\big((n_1 n_2)^{-1/2} a_N\big)$, where the initial rate $(n_1 n_2)^{-1/2} a_N = o(1)$.

For notational consistency, denote the initial rate $a_{N,0} = a_N$ and define
$$a_{N,t} = \sqrt{\frac{r\,(n_1 n_2)\, n_{\max} \log(n_+)}{N}} + \frac{a_{N,t-1}^2}{n_{\min}}, \qquad t \ge 1. \qquad (3.2)$$

**Theorem 2** (Repeated refinement). Suppose that Conditions (C1)-(C5) hold and $A_\star \in \mathcal{B}(a, n_1, n_2)$. By choosing the bandwidth $h_t \asymp (n_1 n_2)^{-1/2} a_{N,t-1}$, where $a_{N,t}$ is defined as in (3.2), and taking
$$\lambda_{N,t} = C\left( \sqrt{\frac{\log(n_+)}{n_{\min} N}} + \frac{a_{N,t-1}^2}{n_{\min}(n_1 n_2)} \right),$$
where $C$ is a sufficiently large constant, we have
$$\frac{\|\widehat{A}^{(t)} - A_\star\|_F^2}{n_1 n_2} = O_P\left( \sqrt{\frac{\log(n_+)}{N}} + \frac{r\, n_{\max} \log(n_+)}{N} + \frac{a_{N,t-1}^4}{n_{\min}^2 (n_1 n_2)} \right). \qquad (3.3)$$

When the iteration number $t = 1$, this corresponds to a one-step refinement from the initial estimator $\widehat{A}_0$. For the right-hand side of (3.3), note that both the first term $\sqrt{\log(n_+)/N}$ and the second term $r\, n_{\max} \log(n_+)/N$ appear in the error bounds of existing works (Elsener & van de Geer, 2018; Alquier et al., 2019). The bound has an extra third term $a_{N,0}^4/(n_{\min}^2 (n_1 n_2))$ due to the initial estimator.
After one round of refinement, one can see that the third term $a_{N,0}^4/(n_{\min}^2 (n_1 n_2))$ in (3.3) is faster than $a_{N,0}^2/(n_1 n_2)$, the convergence rate of the initial estimator (see Condition (C5)), because $n_{\min}^{-2} a_{N,0}^2 = o(1)$. As the iteration number $t$ increases, Theorem 2 shows that the estimator can be refined again and again, until a near-optimal rate of convergence is achieved. It can be shown that once the iteration number $t$ exceeds a certain threshold, which depends only on logarithmic factors of $r$, $n_{\max}$, $n_{\min}$, $N$ and $a_{N,0}$, the second term in the definition (3.2) of $a_{N,t}$ is dominated by the first term, and the convergence rate of $\widehat{A}^{(t)}$ becomes $r\, n_{\max} N^{-1} \log(n_+)$, which is the near-optimal rate (optimal up to a logarithmic factor relative to $r\, n_{\max} N^{-1}$). Note that the required number of iterations $t$ is usually small due to the logarithmic dependence.

### 3.1. Main Lemma and Proof Outline

For the $t$-th refinement, let $\xi^{(t)}_k = \tilde{Y}^{(t)}_k - \langle X_k, A_\star \rangle$ be the residual of the pseudo data. Also, define the stochastic term $\Sigma^{(t)} = N^{-1} \sum_{k=1}^{N} \xi^{(t)}_k X_k$. To provide an upper bound on $(n_1 n_2)^{-1} \|\widehat{A}^{(t)} - A_\star\|_F^2$ in Theorem 2, we follow the standard arguments used in the corresponding key theorems of, e.g., Koltchinskii et al. (2011); Klopp (2014). The key is to control the spectral norm of the stochastic term $\Sigma^{(t)}$. A specific challenge of our setup is the dependency among the residuals $\{\xi^{(t)}_i\}$. We tackle this with the following lemma.

**Lemma 1.** Suppose that Conditions (C1)-(C5) hold and $A_\star \in \mathcal{B}(a, n_1, n_2)$. For any iteration $t \ge 1$, choose the bandwidth $h_t \asymp (n_1 n_2)^{-1/2} a_{N,t}$, where $a_{N,t}$ is defined as in (3.2). Then we have
$$\|\Sigma^{(t)}\| = O_P\left( \sqrt{\frac{\log(n_+)}{n_{\min} N}} + \frac{a_{N,t-1}^2}{n_{\min}(n_1 n_2)} \right).$$

We now give a proof outline of Lemma 1 for $t = 1$; the same argument can be applied iteratively to obtain the repeated-refinement result stated in Lemma 1. In our proof, we decompose the stochastic term $\Sigma^{(1)}$ into three components: $H_N(\widehat{A}_0)$, the centered term $(N \hat{f}(0))^{-1} \sum_{i=1}^{N} \big[X_i\, I\{\epsilon_i \le 0\} - X_i/2\big]$, and $\hat{f}(0)^{-1} U_N$, where $U_N = \sup_{\|A - A_\star\|_F \le a_N} \|B_N(A)\|$ and $B_N(A)$ is an empirical process built from the increments $X_i\big( I\{\epsilon_i \le \mathrm{tr}(X_i^{\mathrm{T}}(A - A_\star))\} - I\{\epsilon_i \le 0\}\big)$ centered at their expectations. We then control their spectral norms separately. For $H_N(\widehat{A}_0)$, we first bound $|v^{\mathrm{T}} H_N(\widehat{A}_0) u|$ for fixed $u$ and $v$ with $\|u\| = \|v\| = 1$, by separating the random variables $X_k$ and $\epsilon_k$ from $\widehat{A}_0 - A_\star$, and then applying the exponential inequality in Lemma 1 of Cai & Liu (2011). To control the spectral norm, we take the supremum over $u$ and $v$; the corresponding uniform bound can be derived using an $\epsilon$-net argument. The same technique can be used to handle the term $U_N$. Therefore, we can bound the spectral norms of $H_N(\widehat{A}_0)$ and $U_N$ for any initial estimator that satisfies Condition (C5). As for the term $(N \hat{f}(0))^{-1} \sum_{i=1}^{N} [X_i\, I\{\epsilon_i \le 0\} - X_i/2]$, we first note that it is not difficult to control the simplified version $(N f(0))^{-1} \sum_{i=1}^{N} [X_i\, I\{\epsilon_i \le 0\} - X_i/2]$, with $f(0)$ in place of $\hat{f}(0)$. To control the target term, we provide Proposition A.1 in the supplementary materials, which shows that $|\hat{f}(0) - f(0)| = O_P\big(\sqrt{1/(Nh)} + a_N/\sqrt{n_1 n_2}\big)$.

## 4. Experiments

### 4.1. Synthetic Data

We conducted a simulation study in which we fixed the dimensions to $n_1 = n_2 = 400$. In each simulated data set, the target matrix $A_\star$ was generated as $UV^{\mathrm{T}}$, where the entries of $U \in \mathbb{R}^{n_1 \times r}$ and $V \in \mathbb{R}^{n_2 \times r}$ were all drawn independently from the standard normal distribution $N(0, 1)$. Here $r$ was set to 3, so $A_\star = UV^{\mathrm{T}}$ was a low-rank matrix. A proportion 0.2 of the entries was observed, which corresponds to $N = 32{,}000$ observations. We adopted the uniform missing mechanism, where all entries had the same chance of being observed.
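For concreteness, the following sketch (ours, not the authors' code) generates one synthetic data set under this design; the noise generator is a plug-in matching the four settings S1-S4 listed next.

```python
import numpy as np

rng = np.random.default_rng(2020)
n1 = n2 = 400
r = 3
A_star = rng.standard_normal((n1, r)) @ rng.standard_normal((n2, r)).T  # rank-3 target

mask = rng.random((n1, n2)) < 0.2              # uniform observation mechanism, ~20% observed
rows, cols = np.nonzero(mask)
N = rows.size                                  # roughly 0.2 * 400 * 400 = 32,000

noise = {"S1": lambda n: rng.standard_normal(n),     # N(0, 1)
         "S2": lambda n: rng.standard_cauchy(n),     # Cauchy(0, 1)
         "S3": lambda n: rng.exponential(1.0, n),    # exp(1)
         "S4": lambda n: rng.standard_t(1, n)}       # t with 1 degree of freedom
Y = A_star[rows, cols] + noise["S2"](N)              # e.g., the heavy-tailed Cauchy setting
```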
We considered the following four noise distributions:

- S1 Normal: $\epsilon \sim N(0, 1)$.
- S2 Cauchy: $\epsilon \sim \mathrm{Cauchy}(0, 1)$.
- S3 Exponential: $\epsilon \sim \exp(1)$.
- S4 $t$-distribution with 1 degree of freedom: $\epsilon \sim t_1$.

We note that the Cauchy distribution is a very heavy-tailed distribution whose first moment (expectation) does not exist. For each of these four settings, we repeated the simulation 500 times.

Denote the proposed median MC procedure given in Algorithm 1 by DLADMC (Distributed Least Absolute Deviations Matrix Completion). Due to Theorem 1(ii), $\|\widehat{A}_{\mathrm{LADMC},0} - A_\star\|_F = O_P\big(\sqrt{(n_1 n_2)^2 m_{\max} \log(m_+)/(m_1 m_2 N)}\big)$, so we fixed $a_N = a_{N,0} = c_1 \sqrt{(n_1 n_2)^2 m_{\max} \log(m_+)/(m_1 m_2 N)}$ with the constant $c_1 = 0.1$. In our experience, a smaller $c_1$ leads to similar results. As $h \asymp (n_1 n_2)^{-1/2} a_N$, the bandwidth $h$ was simply set to $h = c_2 (n_1 n_2)^{-1/2} a_N$ and, similarly, $h_t = c_2 (n_1 n_2)^{-1/2} a_{N,t}$, where $a_{N,t}$ was defined by (3.2) and $c_2 = 0.1$. In addition, all the tuning parameters $\lambda_{N,t}$ in Algorithm 1 were chosen by validation; namely, we minimized the absolute deviation loss evaluated on an independently generated validation set with the same dimensions $n_1, n_2$. For the choice of the kernel function $K(\cdot)$, we adopted the commonly used bi-weight kernel function
$$K(x) = \begin{cases} -\dfrac{315}{64} x^6 + \dfrac{735}{64} x^4 - \dfrac{525}{64} x^2 + \dfrac{105}{64}, & |x| \le 1, \\ 0, & |x| > 1. \end{cases}$$
It is easy to verify that $K(\cdot)$ satisfies Condition (C4) in Section 3. If we compute $e = \|\widehat{A}^{(t)} - \widehat{A}^{(t-1)}\|_F^2 / \|\widehat{A}^{(t-1)}\|_F^2$ and stop the algorithm once $e \le 10^{-5}$, the algorithm typically requires only 4-5 iterations. We simply report the results of the estimators with $T = 4$ or $T = 5$ iterations in Algorithm 1 (depending on the noise distribution).

We compared the performance of the proposed method (DLADMC) with three other approaches:

- (a) BLADMC: Blocked Least Absolute Deviation Matrix Completion, i.e., the initial estimator $\widehat{A}_{\mathrm{LADMC},0}$ proposed in Section 2.2, with the number of row subsets $l_1 = 2$ and the number of column subsets $l_2 = 2$.
- (b) ACL: Least Absolute Deviation Matrix Completion with nuclear norm penalty, based on the computationally expensive ADMM algorithm proposed by Alquier et al. (2019).
- (c) MHT: The squared-loss estimator with nuclear norm penalty proposed by Mazumder et al. (2010).

The tuning parameters of these four methods were chosen based on the same validation set. We followed the selection procedure in Section 9.4 of Mazumder et al. (2010) to choose $\lambda$. Instead of fixing $K$ to 1.5 or 2 as in Mazumder et al. (2010), we chose $K$ by an additional pair of training and validation sets (aside from the 500 simulated data sets). We did this for every method to ensure a fair comparison. The performance of all methods was evaluated via the root mean squared error (RMSE) and the mean absolute error (MAE). The estimated ranks are also reported.

Table 1. The average RMSEs, MAEs, estimated ranks and their standard errors (in parentheses) of DLADMC, BLADMC, ACL and MHT over 500 simulations. The number in the first column within the parentheses represents $T$ in Algorithm 1 for DLADMC.
| Setting | Metric | DLADMC | BLADMC | ACL | MHT |
|---|---|---|---|---|---|
| S1 | RMSE | 0.5920 (0.0091) | 0.7660 (0.0086) | 0.5518 (0.0081) | 0.4607 (0.0070) |
| | MAE | 0.4273 (0.0063) | 0.5615 (0.006) | 0.4031 (0.0056) | 0.3375 (0.0047) |
| | rank | 52.90 (2.51) | 400 (0.00) | 400 (0.00) | 36.89 (1.79) |
| S2 | RMSE | 0.9395 (0.0544) | 1.7421 (0.3767) | 1.8236 (1.1486) | 106.3660 (918.5790) |
| | MAE | 0.6735 (0.0339) | 1.2061 (0.1570) | 1.2434 (0.5828) | 1.4666 (2.2963) |
| | rank | 36.49 (7.94) | 272.25 (111.84) | 277.08 (170.99) | 1.25 (0.50) |
| S3 | RMSE | 0.4868 (0.0092) | 0.6319 (0.0090) | 0.4164 (0.0074) | 0.4928 (0.0083) |
| | MAE | 0.3418 (0.0058) | 0.4484 (0.0057) | 0.3121 (0.0054) | 0.3649 (0.0058) |
| | rank | 66.66 (1.98) | 400 (0.00) | 400 (0.00) | 37.91 (1.95) |
| S4 | RMSE | 1.1374 (0.8945) | 1.6453 (0.2639) | 1.4968 (0.6141) | 98.851 (445.4504) |
| | MAE | 0.8317 (0.7370) | 1.1708 (0.1307) | 1.0792 (0.3803) | 1.4502 (1.1135) |
| | rank | 47.85 (13.22) | 249.16 (111.25) | 237.05 (182.68) | 1.35 (0.71) |

From Table 1, we can see that both DLADMC and MHT produced low-rank estimators, while BLADMC and ACL could not reduce the rank by much. As expected, when the noise is Gaussian (S1), MHT performed best in terms of RMSE and MAE; meanwhile, DLADMC and ACL were close to each other and slightly worse than MHT. It is not surprising that BLADMC was the worst, due to its simple way of combining sub-matrices. As for Setting S3, ACL outperformed the other three methods, while the performances of DLADMC and MHT were close. For the heavy-tailed Settings S2 and S4, our proposed DLADMC performed significantly better than ACL, and MHT failed. Moreover, to investigate whether the refinement step can be isolated from the distributed optimization, we ran the refinement step on an initial matrix that was synthetically generated by adding small perturbations to the ground-truth matrix $A_\star$, as suggested by a reviewer. We provide these results in Section B.1 of the supplementary material.

### 4.2. Real-World Data

We tested the various methods on the MovieLens-100K dataset (https://grouplens.org/datasets/movielens/100k/). This data set consists of 100,000 movie ratings provided by 943 viewers on 1,682 movies, with ratings ranging from 1 to 5. To evaluate the performance of different methods, we directly used the data splittings from the data provider, which split the data into two sets; we refer to them as Raw A and Raw B. Similar to Alquier et al. (2019), we added artificial outliers by randomly changing 20% of the ratings equal to 5 in the two sets, Raw A and Raw B, to 1, constructing Out A and Out B, respectively. To avoid rows and columns that contain too few observations, we only kept the rows and columns with at least 20 ratings. The resulting target matrix $A_\star$ is of dimension $739 \times 918$. Before applying the four methods described in Section 4.1, the data were preprocessed by a bi-scaling procedure (Mazumder et al., 2010). For the proposed DLADMC, we fixed the iteration number to 7. It is noted that the relative-error stopping criterion (in Section 4.1) did not trigger a stop within the first 7 iterations, where 7 is just a user-specified upper bound in the implementation. To understand the effect of this bound, we provide additional analysis of this upper bound in Section 2.2 of the supplementary material; briefly, our conclusions in the rest of this section are not sensitive to this choice of upper bound. The tuning parameters for all the methods were chosen by 5-fold cross-validation. The RMSEs, MAEs, estimated ranks and the total computing time (in seconds) are reported in Table 2.
For a fair comparison, we recorded the time of each method in the experiment with the selected tuning parameter.

Table 2. The RMSEs, MAEs and estimated ranks of DLADMC, BLADMC, ACL and MHT under dimensions $n_1 = 739$ and $n_2 = 918$.

| Data | Metric | DLADMC | BLADMC | ACL | MHT |
|---|---|---|---|---|---|
| Raw A | RMSE | 0.9235 | 0.9451 | 0.9258 | 0.9166 |
| | MAE | 0.7233 | 0.7416 | 0.7252 | 0.7196 |
| | rank | 41 | 530 | 509 | 57 |
| | time (s) | 254.33 | 65.64 | 393.40 | 30.16 |
| Raw B | RMSE | 0.9352 | 0.9593 | 0.9376 | 0.9304 |
| | MAE | 0.7300 | 0.7498 | 0.7323 | 0.7280 |
| | rank | 51 | 541 | 521 | 58 |
| | time (s) | 244.73 | 60.30 | 448.55 | 29.60 |
| Out A | RMSE | 1.0486 | 1.0813 | 1.0503 | 1.0820 |
| | MAE | 0.8568 | 0.8833 | 0.8590 | 0.8971 |
| | rank | 38 | 493 | 410 | 3 |
| | time (s) | 255.25 | 89.65 | 426.78 | 10.41 |
| Out B | RMSE | 1.0521 | 1.0871 | 1.0539 | 1.0862 |
| | MAE | 0.8616 | 0.8905 | 0.8628 | 0.9021 |
| | rank | 28 | 486 | 374 | 6 |
| | time (s) | 260.79 | 104.97 | 809.26 | 10.22 |

It is noted that on the raw data Raw A and Raw B, both the proposed DLADMC and the least absolute deviation estimator ACL performed similarly to the least squares estimator MHT, while BLADMC lost some efficiency due to the embarrassingly parallel computing. For the data sets with outliers, the proposed DLADMC and the least absolute deviation estimator ACL performed better than MHT. Although DLADMC and ACL had similar performance in terms of RMSEs and MAEs, DLADMC required a much lower computing cost.

As suggested by a reviewer, we also performed an experiment with a bigger data set (the MovieLens-1M dataset: 1,000,209 ratings of approximately 3,900 movies by 6,040 users). However, ACL is not scalable and, due to time limitations, we stopped the fitting of ACL when its running time exceeded five times that of the proposed DLADMC. In our analysis, we only compared the remaining methods. The conclusions were similar to those for the smaller MovieLens-100K dataset. The details are presented in Section B.2 of the supplementary material.

## 5. Conclusion

In this paper, we address the problem of median MC and obtain a computationally efficient estimator for large-scale MC problems. We construct the initial estimator in an embarrassingly parallel fashion and refine it through regularized least squares minimizations based on pseudo data. The corresponding non-standard asymptotic analysis is established, which shows that the proposed DLADMC achieves the (near-)oracle convergence rate. Numerical experiments are conducted to verify our conclusions.

## Acknowledgments

Weidong Liu's research is supported by the National Program on Key Basic Research Project (973 Program, 2018AAA0100704), NSFC Grants No. 11825104 and 11690013, the Youth Talent Support Program, and a grant from the Australian Research Council. Xiaojun Mao's research is supported by Shanghai Sailing Program 19YF1402800. Raymond K. W. Wong's research is partially supported by the National Science Foundation under Grants DMS-1806063, DMS-1711952 (subcontract) and CCF-1934904.

## References

Alquier, P., Cottet, V., and Lecué, G. Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions. The Annals of Statistics, 47(4):2117-2144, 2019.

Bach, F. R. Consistency of trace norm minimization. Journal of Machine Learning Research, 9(Jun):1019-1048, 2008.

Bennett, J. and Lanning, S. The Netflix prize. In Proceedings of KDD Cup and Workshop, volume 2007, pp. 35, 2007.

Cai, T. T. and Liu, W. Adaptive thresholding for sparse covariance matrix estimation. Journal of the American Statistical Association, 106(494):672-684, 2011.

Cai, T. T. and Zhou, W.-X. Matrix completion via max-norm constrained optimization. Electronic Journal of Statistics, 10(1):1493-1525, 2016.

Candès, E. J.
and Plan, Y. Matrix completion with noise. Proceedings of the IEEE, 98(6):925-936, 2010.

Candès, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.

Candès, E. J. and Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053-2080, 2010.

Candès, E. J., Li, X., Ma, Y., and Wright, J. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.

Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., and Willsky, A. S. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572-596, 2011.

Chen, X., Liu, W., Mao, X., and Yang, Z. Distributed high-dimensional regression under a quantile loss function. arXiv preprint arXiv:1906.05741, 2019a.

Chen, Y., Xu, H., Caramanis, C., and Sanghavi, S. Robust matrix completion and corrupted columns. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 873-880, 2011.

Chen, Y., Jalali, A., Sanghavi, S., and Caramanis, C. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324-4337, 2013.

Chen, Y., Chi, Y., Fan, J., Ma, C., and Yan, Y. Noisy matrix completion: Understanding statistical guarantees for convex relaxation via nonconvex optimization. arXiv preprint arXiv:1902.07698, 2019b.

Chen, Y., Fan, J., Ma, C., and Yan, Y. Bridging convex and nonconvex optimization in robust PCA: Noise, outliers, and missing data. arXiv preprint arXiv:2001.05484, 2020.

Davies, P. L. Aspects of robust linear regression. The Annals of Statistics, 21(4):1843-1899, 1993.

Elsener, A. and van de Geer, S. Robust low-rank matrix estimation. The Annals of Statistics, 46(6B):3481-3509, 2018.

Fan, J., Gong, W., and Zhu, Z. Generalized high-dimensional trace regression via nuclear norm regularization. Journal of Econometrics, 212(1):177-202, 2019.

Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548-1566, 2011.

Huber, P. J. Robust Statistics. Springer, 2011.

Keshavan, R. H., Montanari, A., and Oh, S. Matrix completion from noisy entries. Journal of Machine Learning Research, 11(69):2057-2078, 2010.

Klopp, O. Noisy low-rank matrix completion with general sampling distribution. Bernoulli, 20(1):282-303, 2014.

Klopp, O., Lounici, K., and Tsybakov, A. B. Robust matrix completion. Probability Theory and Related Fields, 169(1-2):523-564, 2017.

Koltchinskii, V., Lounici, K., and Tsybakov, A. B. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 39(5):2302-2329, 2011.

Lafond, J. Low rank matrix completion with exponential family noise. In Conference on Learning Theory, pp. 1224-1243, 2015.

Li, X. Compressed sensing and matrix completion with constant proportion of corruptions. Constructive Approximation, 37(1):73-99, 2013.

Mackey, L., Talwalkar, A., and Jordan, M. I. Distributed matrix completion and robust factorization. Journal of Machine Learning Research, 16(28):913-960, 2015.

Mazumder, R., Hastie, T., and Tibshirani, R. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11(80):2287-2322, 2010.

Negahban, S. and Wainwright, M. J. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069-1097, 2011.

Negahban, S. and Wainwright, M. J.
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13(53):1665-1697, 2012.

Rohde, A. and Tsybakov, A. B. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39(2):887-930, 2011.

Srebro, N., Rennie, J., and Jaakkola, T. S. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems, pp. 1329-1336, 2005.

Wong, R. K. W. and Lee, T. C. M. Matrix completion with noisy entries and outliers. Journal of Machine Learning Research, 18(147):1-25, 2017.

Xia, D. and Yuan, M. Statistical inferences of linear forms for noisy matrix completion. arXiv preprint arXiv:1909.00116, 2019.