Sparse PCA with Oracle Property

Quanquan Gu, Zhaoran Wang, Han Liu
Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, USA
qgu@princeton.edu, zhaoran@princeton.edu, hanliu@princeton.edu

Abstract

In this paper, we study the estimation of the $k$-dimensional sparse principal subspace of a covariance matrix $\Sigma$ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, is of exact rank $k$, and attains a $\sqrt{s/n}$ statistical rate of convergence, where $s$ is the subspace sparsity level and $n$ the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.

1 Introduction

Principal Component Analysis (PCA) aims at recovering the top $k$ leading eigenvectors $u_1, \ldots, u_k$ of the covariance matrix $\Sigma$ from the sample covariance matrix $\hat{\Sigma}$. In applications where the dimension $p$ is much larger than the sample size $n$, classical PCA can be inconsistent [12]. To avoid this problem, one common assumption is that the leading eigenvector $u_1$ of the population covariance matrix $\Sigma$ is sparse, i.e., the number of nonzero elements in $u_1$ is smaller than the sample size, $s = |\mathrm{supp}(u_1)| < n$. This gives rise to Sparse Principal Component Analysis (sparse PCA). In the past decade, significant progress has been made toward the methodological development [13, 8, 30, 22, 7, 14, 28, 19, 27] as well as the theoretical understanding [12, 20, 1, 24, 21, 4, 6, 3, 2, 18, 15] of sparse PCA.

However, all of the above studies focus on estimating the leading eigenvector $u_1$. When the top $k$ eigenvalues of $\Sigma$ are not distinct, there exist multiple groups of leading eigenvectors that are equivalent up to rotation. To address this problem, it is reasonable to de-emphasize individual eigenvectors and to instead focus on their span $\mathcal{U}$, i.e., the principal subspace of variation. [23, 25, 16, 27] proposed Subspace Sparsity, which defines sparsity on the projection matrix onto the subspace $\mathcal{U}$, i.e., $\Pi^* = UU^\top$, as the number of nonzero entries on the diagonal of $\Pi^*$, i.e., $s = |\mathrm{supp}(\mathrm{diag}(\Pi^*))|$. They proposed to estimate the principal subspace rather than the principal eigenvectors of $\Sigma$, based on $\ell_{1,1}$-norm regularization over a convex body called the Fantope [9], which provides a tight relaxation for simultaneous rank and orthogonality constraints on the positive semidefinite cone. The convergence rate of their estimator is $O\big(\lambda_1/(\lambda_k - \lambda_{k+1}) \cdot s\sqrt{\log p/n}\big)$, where $\lambda_k$, $k = 1, \ldots, p$, denotes the $k$-th largest eigenvalue of $\Sigma$.
Moreover, their support recovery relies on the limited correlation condition (LCC) [16], which is similar to the irrepresentable condition in sparse linear regression. We note that [1] also analyzed the semidefinite relaxation of sparse PCA; however, they only considered the rank-1 principal subspace under the stringent spiked covariance model, where the population covariance matrix is block diagonal.

In this paper, we aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. Based on recent progress on penalized M-estimators with nonconvex penalty functions [17, 26], we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. It estimates the $k$-dimensional principal subspace of a population matrix $\Sigma$ based on its empirical version $\hat{\Sigma}$. In particular, under a weak assumption on the magnitude of the projection matrix, i.e.,
$$\min_{(i,j)\in T} |\Pi^*_{ij}| \;\geq\; \nu + C\,\frac{\sqrt{k}\,\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}},$$
where $T$ is the support of $\Pi^*$, $\nu$ is a parameter of the nonconvex penalty, and $C$ is a universal constant, one estimator within this family exactly recovers the oracle solution with high probability and is exactly of rank $k$. It is worth noting that, unlike the linear regression setting, where estimators that recover the oracle solution often have nonconvex formulations, our estimator here is obtained from a convex optimization problem and has a unique global solution (even though we use a nonconvex penalty, the problem as a whole remains convex, because the regularizer also contains the strongly convex term $\frac{\tau}{2}\|\Pi\|_F^2$). Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model [1] or the limited correlation condition [16]. Moreover, it attains the same convergence rate as standard PCA, as if the support of the true projection matrix were provided a priori. More specifically, the Frobenius norm error of the estimator $\hat{\Pi}$ is bounded with high probability as
$$\|\hat{\Pi} - \Pi^*\|_F \;\leq\; C\,\frac{\sqrt{k}\,\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}},$$
where $k$ is the dimension of the subspace.

As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. This estimator is based on nonconvex optimization. With a suitable choice of the regularization parameter, we show that any local optimum of the optimization problem is a good estimator for the projection matrix of the true principal subspace. In particular, we show that the Frobenius norm error of the estimator $\hat{\Pi}$ is bounded with high probability as
$$\|\hat{\Pi} - \Pi^*\|_F \;\leq\; C\,\frac{\lambda_1}{\lambda_k - \lambda_{k+1}}\bigg(\sqrt{\frac{s_1 s}{n}} + \sqrt{\frac{m_1 m_2 \log p}{n}}\bigg),$$
where $s_1$, $m_1$, and $m_2$ are all no larger than $s$. Evidently, this is sharper than the convergence rate proved in [23]. Note that the above rate consists of two terms: the $O(\sqrt{s_1 s/n})$ term corresponds to the entries of the projection matrix satisfying the previous assumption (i.e., with large magnitude), while the $O(\sqrt{m_1 m_2 \log p/n})$ term corresponds to the entries violating the previous assumption (i.e., with small magnitude). Finally, we present numerical experiments on synthetic datasets that support our theoretical analysis.

The rest of this paper is arranged as follows. Section 2 introduces two estimators for the principal subspace of a covariance matrix. Section 3 analyzes the statistical properties of the two estimators.
We present an algorithm for solving the estimators in Section 4. Section 5 shows the experiments on synthetic datasets. Section 6 concludes this work with remarks.

Notation. Let $[p]$ be shorthand for $\{1, \ldots, p\}$. For matrices $A$, $B$ of compatible dimension, $\langle A, B\rangle := \mathrm{tr}(A^\top B)$ is the Frobenius inner product, and $\|A\|_F^2 = \langle A, A\rangle$ is the squared Frobenius norm. $\|x\|_q$ is the usual $\ell_q$ norm, with $\|x\|_0$ defined as the number of nonzero entries of $x$. $\|A\|_{a,b}$ is the $(a,b)$-norm, defined as the $\ell_b$ norm of the vector of rowwise $\ell_a$ norms of $A$; e.g., $\|A\|_{1,\infty}$ is the maximum absolute row sum. $\|A\|_2$ is the spectral norm of $A$, and $\|A\|_*$ is the trace norm (nuclear norm) of $A$. For a symmetric matrix $A$, we define $\lambda_1(A) \geq \lambda_2(A) \geq \ldots \geq \lambda_p(A)$ to be the eigenvalues of $A$ with multiplicity. When the context is obvious, we write $\lambda_j = \lambda_j(A)$ as shorthand.

2 The Proposed Estimators

In this section, we present a family of estimators for the principal subspace of the population covariance matrix, based on the semidefinite relaxation of sparse PCA with novel regularizations. Before going into the details of the proposed estimators, we first present the formal definition of principal subspace estimation.

2.1 Problem Definition

Let $\Sigma \in \mathbb{R}^{p\times p}$ be an unknown covariance matrix with eigen-decomposition
$$\Sigma = \sum_{i=1}^{p} \lambda_i u_i u_i^\top,$$
where $\lambda_1 \geq \ldots \geq \lambda_p$ are the eigenvalues (with multiplicity) and $u_1, \ldots, u_p \in \mathbb{R}^p$ are the associated eigenvectors. The $k$-dimensional principal subspace of $\Sigma$ is the subspace spanned by $u_1, \ldots, u_k$. The projection matrix onto the $k$-dimensional principal subspace is
$$\Pi^* = \sum_{i=1}^{k} u_i u_i^\top = UU^\top,$$
where $U = [u_1, \ldots, u_k]$ is an orthonormal matrix. The reason why the principal subspace is more appealing is that it avoids the identifiability problem of eigenvectors when the eigenvalues are not distinct. In fact, we only need to assume $\lambda_k - \lambda_{k+1} > 0$ instead of $\lambda_1 > \ldots > \lambda_k > \lambda_{k+1}$; then the principal subspace projection $\Pi^*$ is unique and identifiable. We also assume that $k$ is fixed.

Next, we introduce the definition of Subspace Sparsity [25], which can be seen as an extension of the conventional eigenvector sparsity used in sparse PCA.

Definition 1 (Subspace Sparsity [25]). The projection $\Pi^*$ onto the subspace spanned by the eigenvectors of $\Sigma$ corresponding to its $k$ largest eigenvalues satisfies $\|U\|_{2,0} \leq s$, or equivalently $\|\mathrm{diag}(\Pi^*)\|_0 \leq s$.

In the extreme case $k = 1$, the support of the projection matrix onto the rank-1 principal subspace is the same as the support of the sparse leading eigenvector. The problem of principal subspace estimation is the following: given an i.i.d. sample $\{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^p$ drawn from an unknown distribution with zero mean and covariance matrix $\Sigma$, we aim to estimate $\Pi^*$ based on the empirical covariance matrix $\hat{\Sigma} \in \mathbb{R}^{p\times p}$, given by $\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n x_i x_i^\top$. We are particularly interested in the high-dimensional setting, where $p \to \infty$ as $n \to \infty$, in sharp contrast to the conventional setting where $p$ is fixed and $n \to \infty$. Now we are ready to design a family of estimators for $\Pi^*$.
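To make the quantities above concrete, the following minimal numpy sketch (illustrative only; the sparse eigenvector pattern and the sizes $p = 128$, $n = 80$, $s = 5$, $k = 1$ mirror synthetic dataset I of Section 5) builds a covariance matrix with a sparse leading eigenvector, forms the true projection matrix $\Pi^* = UU^\top$, reads off the subspace sparsity $s = |\mathrm{supp}(\mathrm{diag}(\Pi^*))|$, and computes the sample covariance $\hat{\Sigma}$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k, s = 128, 80, 1, 5            # illustrative sizes, mirroring Section 5

# Sparse leading eigenvector: only the first s entries are nonzero.
u1 = np.zeros(p)
u1[:s] = 1.0 / np.sqrt(s)

# Complete an orthonormal basis and assign eigenvalues (lambda_1 = 100, rest = 1).
Q, _ = np.linalg.qr(np.column_stack([u1, rng.standard_normal((p, p - 1))]))
eigvals = np.ones(p)
eigvals[0] = 100.0
Sigma = (Q * eigvals) @ Q.T            # Sigma = Q diag(eigvals) Q^T

# True projection matrix onto the k-dimensional principal subspace.
U = Q[:, :k]
Pi_star = U @ U.T                      # Pi* = U U^T
support = np.flatnonzero(np.abs(np.diag(Pi_star)) > 1e-12)
assert len(support) == s               # subspace sparsity |supp(diag(Pi*))| = s

# Sample covariance from n i.i.d. zero-mean observations.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Sigma_hat = X.T @ X / n
```

These quantities ($\hat{\Sigma}$, $\Pi^*$, and its diagonal support) are exactly the inputs and targets of the estimators introduced next.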
2.2 A Family of Sparse PCA Estimators

Given a sample covariance matrix $\hat{\Sigma} \in \mathbb{R}^{p\times p}$, we propose a family of sparse principal subspace estimators $\hat{\Pi}_\tau$, defined as solutions of the following semidefinite relaxation of sparse PCA:
$$\hat{\Pi}_\tau = \mathop{\mathrm{argmin}}_{\Pi}\; -\langle \hat{\Sigma}, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2 + P_\lambda(\Pi), \quad \text{subject to } \Pi \in \mathcal{F}^k, \qquad (1)$$
where $\tau \geq 0$ and $\lambda > 0$ are regularization parameters, $\mathcal{F}^k$ is a convex body called the Fantope [9, 23], defined as
$$\mathcal{F}^k = \{X :\ 0 \preceq X \preceq I \ \text{ and }\ \mathrm{tr}(X) = k\},$$
and $P_\lambda(\Pi)$ is a decomposable nonconvex penalty, i.e., $P_\lambda(\Pi) = \sum_{i,j=1}^p p_\lambda(\Pi_{ij})$. Typical nonconvex penalties include the smoothly clipped absolute deviation (SCAD) penalty [10] and the minimax concave penalty (MCP) [29], which can eliminate the estimation bias and attain more refined statistical rates of convergence [17, 26]. For example, the MCP penalty is defined as
$$p_\lambda(t) = \lambda \int_0^{|t|} \Big(1 - \frac{z}{b\lambda}\Big)_+\, dz = \Big(\lambda|t| - \frac{t^2}{2b}\Big)\mathbb{1}(|t| \leq b\lambda) + \frac{b\lambda^2}{2}\,\mathbb{1}(|t| > b\lambda), \qquad (2)$$
where $b > 0$ is a fixed parameter. An important property of these nonconvex penalties is that $p_\lambda(t)$ can be written as the sum of the $\ell_1$ penalty and a concave part $q_\lambda(t)$: $p_\lambda(t) = \lambda|t| + q_\lambda(t)$. For example, if $p_\lambda(t)$ is the MCP penalty, the corresponding $q_\lambda(t)$ is
$$q_\lambda(t) = -\frac{t^2}{2b}\,\mathbb{1}(|t| \leq b\lambda) + \Big(\frac{b\lambda^2}{2} - \lambda|t|\Big)\mathbb{1}(|t| > b\lambda).$$
We rely on the following regularity conditions on $p_\lambda(t)$ and its concave component $q_\lambda(t)$:
(a) $p'_\lambda(t) = 0$ for $|t| \geq \nu$, where $\nu > 0$ is a constant;
(b) $q'_\lambda(t)$ is monotone and Lipschitz continuous, i.e., for $t' \geq t$ there exists a constant $\zeta \geq 0$ such that $-\zeta \leq \big(q'_\lambda(t') - q'_\lambda(t)\big)/(t' - t)$;
(c) $q_\lambda(t)$ and $q'_\lambda(t)$ pass through the origin, i.e., $q_\lambda(0) = q'_\lambda(0) = 0$;
(d) $q'_\lambda(t)$ is bounded, i.e., $|q'_\lambda(t)| \leq \lambda$ for any $t$.
The above conditions hold for a variety of nonconvex penalty functions. For example, for the MCP penalty in (2), we have $\nu = b\lambda$ and $\zeta = 1/b$. It is easy to show that when $\tau > \zeta$, the problem in (1) is strongly convex and therefore has a unique solution. We note that [16] also introduced the regularization term $\frac{\tau}{2}\|\Pi\|_F^2$ in their estimator. However, our motivation is quite different from theirs: we introduce this term because it is essential for the estimator in (1) to achieve the oracle property, provided that the magnitude of every entry in the population projection matrix is sufficiently large. We call (1) with $\tau > \zeta$ the Convex Sparse PCA Estimator. Note that the constraint $\Pi \in \mathcal{F}^k$ only guarantees that the rank of $\hat{\Pi}$ is at least $k$; however, we can prove that our estimator is exactly of rank $k$. This is in contrast to [23], where a post-projection step is needed to ensure that their estimator is of rank $k$.

2.3 Nonconvex Sparse PCA Estimator

In the case that the magnitude of the entries of the population projection matrix $\Pi^*$ violates the previous assumption, (1) with $\tau > \zeta$ no longer enjoys the desired oracle property. To this end, we consider another estimator from the family in (1), obtained with $\tau = 0$:
$$\hat{\Pi}_{\tau=0} = \mathop{\mathrm{argmin}}_{\Pi}\; -\langle \hat{\Sigma}, \Pi\rangle + P_\lambda(\Pi), \quad \text{subject to } \Pi \in \mathcal{F}^k. \qquad (3)$$
Since $-\langle \hat{\Sigma}, \Pi\rangle$ is an affine function and $P_\lambda(\Pi)$ is nonconvex, the optimization problem in (3) is nonconvex. We simply refer to this estimator as the Nonconvex Sparse PCA Estimator. We will prove that it achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA [23], even when the previous assumption on the magnitude of the projection matrix is violated.

It is worth noting that although our estimators in (1) and (3) are for the projection matrix $\Pi^*$ of the principal subspace, we can also provide an estimator of $U$. By definition, the true subspace satisfies $\Pi^* = UU^\top$, so an estimator $\hat{U}$ can be computed from $\hat{\Pi}$ via eigenvalue decomposition: we set the columns of $\hat{U}$ to be the top $k$ leading eigenvectors of $\hat{\Pi}$. In case the top $k$ eigenvalues of $\hat{\Pi}$ are the same, we follow the standard PCA convention and rotate the eigenvectors by a rotation matrix $R$ such that $(\hat{U}R)^\top \hat{\Sigma} (\hat{U}R)$ is diagonal. Then $\hat{U}R$ is an orthonormal basis for the estimated principal subspace, and can be used for visualization and dimension reduction.
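A minimal numpy sketch of this post-processing step follows (the function name and the handling of the within-subspace rotation are implementation choices, not prescribed by the paper):

```python
import numpy as np

def subspace_basis(Pi_hat, Sigma_hat, k):
    """Extract an orthonormal basis of the estimated principal subspace from the
    estimated projection matrix Pi_hat: take its top-k eigenvectors, then rotate
    them so that (U R)^T Sigma_hat (U R) is diagonal, following the standard PCA
    convention described above."""
    _, evecs = np.linalg.eigh(Pi_hat)            # eigenvalues in ascending order
    U = evecs[:, -k:]                             # columns spanning the estimated subspace
    # Rotate within the subspace to diagonalize U^T Sigma_hat U.
    _, R = np.linalg.eigh(U.T @ Sigma_hat @ U)    # R is a k x k orthogonal matrix
    return U @ R[:, ::-1]                         # order columns by decreasing variance
```

For $k = 1$ this reduces to taking the leading eigenvector of $\hat{\Pi}$.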
3 Statistical Properties of the Proposed Estimators

In this section, we present the statistical properties of two estimators in the family (1): one with $\tau > \zeta$ and one with $\tau = 0$. The proofs are included in the longer version of this paper. To evaluate the statistical performance of a principal subspace estimator, we measure the error between the estimated and the true projection matrices by the Frobenius norm error $\|\hat{\Pi} - \Pi^*\|_F$.

3.1 Oracle Property and Convergence Rate of Convex Sparse PCA

We first analyze the estimator in (1) with $\tau > \zeta$. We prove that the estimator $\hat{\Pi}$ in (1) recovers the support of $\Pi^*$ under a suitable condition on its magnitude. Before presenting this theorem, we introduce the oracle estimator, denoted by $\hat{\Pi}_O$. Recall that $S = \mathrm{supp}(\mathrm{diag}(\Pi^*))$. The oracle estimator $\hat{\Pi}_O$ is defined as
$$\hat{\Pi}_O = \mathop{\mathrm{argmin}}_{\mathrm{supp}(\mathrm{diag}(\Pi)) \subseteq S,\ \Pi \in \mathcal{F}^k} \mathcal{L}(\Pi), \qquad (4)$$
where $\mathcal{L}(\Pi) = -\langle \hat{\Sigma}, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2$. Note that the oracle estimator is not a practical estimator, because we do not know the true support $S$ in practice. The following theorem shows that, under suitable conditions, $\hat{\Pi}$ in (1) coincides with the oracle estimator $\hat{\Pi}_O$ with high probability, and therefore exactly recovers the support of $\Pi^*$.

Theorem 1 (Support Recovery). Suppose the nonconvex penalty $P_\lambda(\Pi) = \sum_{i,j=1}^p p_\lambda(\Pi_{ij})$ satisfies conditions (a) and (b), and that $\Pi^*$ satisfies
$$\min_{(i,j)\in T} |\Pi^*_{ij}| \;\geq\; \nu + C\,\frac{\sqrt{k}\,\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}.$$
For the estimator in (1) with regularization parameter $\lambda = C\lambda_1\sqrt{\log p/n}$ and $\tau > \zeta$, we have with probability at least $1 - 1/n^2$ that $\hat{\Pi} = \hat{\Pi}_O$, which further implies
$$\mathrm{supp}(\mathrm{diag}(\hat{\Pi})) = \mathrm{supp}(\mathrm{diag}(\hat{\Pi}_O)) = \mathrm{supp}(\mathrm{diag}(\Pi^*)) \quad\text{and}\quad \mathrm{rank}(\hat{\Pi}) = \mathrm{rank}(\hat{\Pi}_O) = k.$$

For example, if we use the MCP penalty, the magnitude assumption becomes
$$\min_{(i,j)\in T} |\Pi^*_{ij}| \;\geq\; Cb\lambda_1\sqrt{\frac{\log p}{n}} + C\,\frac{\sqrt{k}\,\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}.$$

Note that the estimator in (1) does not rely on any oracle knowledge of the true support. Theorem 1 shows that, with high probability, it is identical to the oracle estimator and thus exactly recovers the true support. Compared to existing support recovery results for sparse PCA [1, 16], our condition on the magnitude is weaker. The limited correlation condition [16] and the even stronger spiked covariance condition [1] impose constraints not only on the principal subspace corresponding to $\lambda_1, \ldots, \lambda_k$, but also on the non-signal part, i.e., the subspace corresponding to $\lambda_{k+1}, \ldots, \lambda_p$. Unlike these conditions, we only impose a condition on the signal part, i.e., on the magnitude of the entries of the projection matrix $\Pi^*$ corresponding to $\lambda_1, \ldots, \lambda_k$. We attribute the oracle property of our estimator to the novel regularization ($\frac{\tau}{2}\|\Pi\|_F^2$ plus a nonconvex penalty).

The oracle property immediately implies that, under the above magnitude condition, the estimator in (1) achieves the convergence rate of standard PCA, as if we knew the true support $S$ a priori. This is summarized in the following theorem.

Theorem 2. Under the same conditions as Theorem 1, we have with probability at least $1 - 1/n^2$ that
$$\|\hat{\Pi} - \Pi^*\|_F \;\leq\; C\,\frac{\sqrt{k}\,\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}$$
for some universal constant $C$. Evidently, the estimator attains a much sharper statistical rate of convergence than the state-of-the-art result proved in [23].
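The oracle estimator (4) serves as the benchmark throughout; in the experiments of Section 5 it is computed by applying standard PCA to the submatrix of $\hat{\Sigma}$ restricted to the true support $S$. A minimal numpy sketch of that computation (the function name and the embedding of the support block back into a $p \times p$ matrix are implementation details, not specified in the paper):

```python
import numpy as np

def oracle_projection(Sigma_hat, support, k):
    """Oracle benchmark: standard PCA on the submatrix of Sigma_hat supported on
    the true support S, embedded back into a p x p projection matrix. This mirrors
    how the oracle estimator is computed in the experiments of Section 5."""
    p = Sigma_hat.shape[0]
    block = Sigma_hat[np.ix_(support, support)]
    _, evecs = np.linalg.eigh(block)           # eigenvalues in ascending order
    U_S = evecs[:, -k:]                         # top-k eigenvectors of the |S| x |S| block
    Pi_oracle = np.zeros((p, p))
    Pi_oracle[np.ix_(support, support)] = U_S @ U_S.T
    return Pi_oracle
```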
3.2 Convergence Rate of Nonconvex Sparse PCA

We now analyze the estimator in (3), which is the special case of (1) with $\tau = 0$. We show that any local optimum of the nonconvex optimization problem in (3) is a good estimator. In other words, our theory applies to any projection matrix $\hat{\Pi}_{\tau=0} \in \mathbb{R}^{p\times p}$ that satisfies the first-order necessary condition (variational inequality) for being a local minimum of (3):
$$\big\langle \hat{\Pi}_{\tau=0} - \Pi,\; -\hat{\Sigma} + \nabla P_\lambda(\hat{\Pi}_{\tau=0}) \big\rangle \;\leq\; 0 \quad \text{for all } \Pi \in \mathcal{F}^k.$$
Recall that $S = \mathrm{supp}(\mathrm{diag}(\Pi^*))$ with $|S| = s$; let $T = S \times S$ with $|T| = s^2$, and $T^c = [p]\times[p] \setminus T$. For $(i,j) \in T_1 \subseteq T$ with $|T_1| = t_1$, we assume $|\Pi^*_{ij}| \geq \nu$, while for $(i,j) \in T_2 \subseteq T$ with $|T_2| = t_2$, we assume $|\Pi^*_{ij}| < \nu$. Clearly, $s^2 = t_1 + t_2$. There exists a minimal submatrix $A \in \mathbb{R}^{n_1 \times n_2}$ of $\Pi^*$ that contains all the elements indexed by $T_1$, with $s_1 = \min\{n_1, n_2\}$. There also exists a minimal submatrix $B \in \mathbb{R}^{m_1 \times m_2}$ of $\Pi^*$ that contains all the elements indexed by $T_2$. Note that in general $s_1 \leq s$, $m_1 \leq s$, and $m_2 \leq s$; in the worst case, $s_1 = m_1 = m_2 = s$.

Theorem 3. Suppose the nonconvex penalty $P_\lambda(\Pi) = \sum_{i,j=1}^p p_\lambda(\Pi_{ij})$ satisfies conditions (b), (c), and (d). For the estimator in (3) with regularization parameter $\lambda = C\lambda_1\sqrt{\log p/n}$ and $\zeta \leq (\lambda_k - \lambda_{k+1})/4$, with probability at least $1 - 4/p^2$, any local optimal solution $\hat{\Pi}_{\tau=0}$ satisfies
$$\|\hat{\Pi}_{\tau=0} - \Pi^*\|_F \;\leq\; \underbrace{\frac{4C\lambda_1\sqrt{s_1}}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}}_{T_1:\ |\Pi^*_{ij}| \geq \nu} \;+\; \underbrace{\frac{12C\lambda_1\sqrt{m_1 m_2}}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{\log p}{n}}}_{T_2:\ |\Pi^*_{ij}| < \nu}.$$

Note that the upper bound decomposes into two parts according to the magnitude of the entries $|\Pi^*_{ij}|$, $1 \leq i, j \leq p$, of the true projection matrix. We have the following comments. On the one hand, for the strong signals, i.e., $|\Pi^*_{ij}| \geq \nu$, we achieve a convergence rate of $O\big(\lambda_1\sqrt{s_1}/(\lambda_k - \lambda_{k+1})\cdot\sqrt{s/n}\big)$. Since $s_1$ is at most $s$, the worst-case rate is $O\big(\lambda_1/(\lambda_k - \lambda_{k+1})\cdot s/\sqrt{n}\big)$, which is sharper than the rate proved in [23], i.e., $O\big(\lambda_1/(\lambda_k - \lambda_{k+1})\cdot s\sqrt{\log p/n}\big)$; when $s_1 < s$, the convergence rate is even sharper. On the other hand, for the weak signals, i.e., $|\Pi^*_{ij}| < \nu$, we achieve a convergence rate of $O\big(\lambda_1\sqrt{m_1 m_2}/(\lambda_k - \lambda_{k+1})\cdot\sqrt{\log p/n}\big)$. Since both $m_1$ and $m_2$ are at most $s$, the worst-case rate is $O\big(\lambda_1/(\lambda_k - \lambda_{k+1})\cdot s\sqrt{\log p/n}\big)$, which is the same as the rate proved in [23]; when $m_1 m_2 < s^2$, the convergence rate is sharper than that in [23]. The above discussion clearly demonstrates the advantage of our estimator, which essentially benefits from the nonconvex penalty.

4 Optimization Algorithm

In this section, we present an optimization algorithm to solve (1) and (3). Since (3) is the special case of (1) with $\tau = 0$, it suffices to develop an algorithm for solving (1). Because (1) has both a nonsmooth regularization term and a nontrivial constraint set $\mathcal{F}^k$, it is difficult to directly apply gradient descent and its variants. Following [23], we present an alternating direction method of multipliers (ADMM) algorithm. The proposed ADMM algorithm can efficiently compute the global optimum of (1); it can also find a local optimum of (3). It is worth noting that other algorithms, such as the Peaceman-Rachford splitting method [11], can also be used to solve (1). We introduce an auxiliary variable $\Phi \in \mathbb{R}^{p\times p}$ and consider the following equivalent form of (1):
$$\mathop{\mathrm{argmin}}_{\Pi, \Phi}\; -\langle \hat{\Sigma}, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2 + P_\lambda(\Phi), \quad \text{subject to } \Pi = \Phi,\ \Pi \in \mathcal{F}^k. \qquad (5)$$
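The $\Pi$-update of the ADMM scheme derived below (Line 5 of Algorithm 1) reduces to a Euclidean projection onto the Fantope $\mathcal{F}^k$, which has a simple-form solution [23, 16]: keep the eigenvectors of the input, shift its eigenvalues by a scalar, clip them to $[0, 1]$, and choose the shift so that the clipped eigenvalues sum to $k$. A minimal numpy sketch of this projection (the bisection search for the shift and the function name are implementation choices, not taken from the paper):

```python
import numpy as np

def fantope_projection(A, k, tol=1e-10, max_iter=200):
    """Euclidean projection of a symmetric matrix A onto the Fantope
    F^k = {X : 0 <= X <= I, tr(X) = k}: keep the eigenvectors of A and replace
    each eigenvalue gamma_i by min(max(gamma_i - theta, 0), 1), with theta chosen
    so that the clipped eigenvalues sum to k."""
    gamma, V = np.linalg.eigh((A + A.T) / 2)

    def clipped_sum(theta):
        return np.clip(gamma - theta, 0.0, 1.0).sum()

    # clipped_sum is continuous and non-increasing in theta, so bisection applies:
    # at theta = min(gamma) - 1 it equals p >= k, at theta = max(gamma) it equals 0.
    lo, hi = gamma.min() - 1.0, gamma.max()
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        if clipped_sum(mid) > k:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    w = np.clip(gamma - (lo + hi) / 2.0, 0.0, 1.0)
    return (V * w) @ V.T                 # V diag(w) V^T
```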
The augmented Lagrangian function corresponding to (5) is
$$L(\Pi, \Phi, \Theta) = \mathbb{1}_{\mathcal{F}^k}(\Pi) - \langle \hat{\Sigma}, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2 + P_\lambda(\Phi) + \langle \Theta, \Pi - \Phi\rangle + \frac{\rho}{2}\|\Pi - \Phi\|_F^2, \qquad (6)$$
where $\mathbb{1}_{\mathcal{F}^k}(\cdot)$ is the indicator function of the Fantope constraint, $\Theta \in \mathbb{R}^{p\times p}$ is the Lagrange multiplier associated with the equality constraint $\Pi = \Phi$ in (5), and $\rho > 0$ is a penalty parameter that enforces this equality constraint. The detailed update scheme is described in Algorithm 1. In detail, the first subproblem (Line 5 of Algorithm 1) is solved by projecting $\frac{\rho}{\rho+\tau}\Phi^{(t)} - \frac{1}{\rho+\tau}\Theta^{(t)} + \frac{1}{\rho+\tau}\hat{\Sigma}$ onto the Fantope $\mathcal{F}^k$; this projection has a simple-form solution, as shown by [23, 16]. The second subproblem (Line 6 of Algorithm 1) is solved by the generalized soft-thresholding operator, as shown by [5, 17].

Algorithm 1: Solving the convex relaxation (5) using ADMM.
1: Input: covariance matrix estimator $\hat{\Sigma}$
2: Parameters: regularization parameters $\lambda > 0$, $\tau \geq 0$; penalty parameter $\rho > 0$ of the augmented Lagrangian; maximum number of iterations $T$
3: $\Pi^{(0)} \leftarrow 0$, $\Phi^{(0)} \leftarrow 0$, $\Theta^{(0)} \leftarrow 0$
4: For $t = 0, \ldots, T-1$
5: $\quad \Pi^{(t+1)} \leftarrow \mathrm{argmin}_{\Pi \in \mathcal{F}^k}\ \frac{1}{2}\big\|\Pi - \big(\tfrac{\rho}{\rho+\tau}\Phi^{(t)} - \tfrac{1}{\rho+\tau}\Theta^{(t)} + \tfrac{1}{\rho+\tau}\hat{\Sigma}\big)\big\|_F^2$
6: $\quad \Phi^{(t+1)} \leftarrow \mathrm{argmin}_{\Phi}\ \frac{1}{2}\big\|\Phi - \big(\Pi^{(t+1)} + \tfrac{1}{\rho}\Theta^{(t)}\big)\big\|_F^2 + \tfrac{1}{\rho}P_\lambda(\Phi)$
7: $\quad \Theta^{(t+1)} \leftarrow \Theta^{(t)} + \rho\,\big(\Pi^{(t+1)} - \Phi^{(t+1)}\big)$
8: End For
9: Output: $\Pi^{(T)}$
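A minimal Python sketch of Algorithm 1 follows (a sketch under stated assumptions, not the authors' implementation). It reuses the `fantope_projection` helper sketched after (5), and implements the Line-6 update for the MCP penalty as the element-wise firm-thresholding operator, which is the closed form of the MCP proximal map when $b\rho > 1$; the function names and default parameter values are assumptions for illustration.

```python
import numpy as np

def mcp_prox(Z, lam, b, rho):
    """Element-wise minimizer of (rho/2)(t - z)^2 + p_lambda(t) for the MCP
    penalty (firm thresholding); assumes b * rho > 1 so each scalar problem is convex."""
    absZ = np.abs(Z)
    shrunk = np.sign(Z) * (absZ - lam / rho) / (1.0 - 1.0 / (b * rho))
    return np.where(absZ <= lam / rho, 0.0, np.where(absZ <= b * lam, shrunk, Z))

def sparse_pca_admm(Sigma_hat, k, lam, tau=0.0, b=3.0, rho=1.0, T=500):
    """ADMM updates of Algorithm 1 for the reformulation (5).
    tau > zeta = 1/b gives the Convex Sparse PCA estimator (1);
    tau = 0 gives the Nonconvex Sparse PCA estimator (3).
    fantope_projection is the helper sketched after Eq. (5)."""
    p = Sigma_hat.shape[0]
    Pi = np.zeros((p, p))
    Phi = np.zeros((p, p))
    Theta = np.zeros((p, p))
    for _ in range(T):
        # Line 5: Fantope projection of a weighted combination of Phi, Theta, Sigma_hat.
        Pi = fantope_projection((rho * Phi - Theta + Sigma_hat) / (rho + tau), k)
        # Line 6: generalized soft-thresholding (MCP proximal map).
        Phi = mcp_prox(Pi + Theta / rho, lam, b, rho)
        # Line 7: dual variable update.
        Theta = Theta + rho * (Pi - Phi)
    return Pi
```

In the experiments below, the MCP parameter is $b = 3$ and the regularization parameter $\lambda$ is chosen by cross validation.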
5 Experiments

In this section, we conduct simulations on synthetic datasets to validate the effectiveness of the estimators proposed in Section 2. We generate two synthetic datasets by designing two covariance matrices; each covariance matrix $\Sigma$ is constructed through its eigenvalue decomposition. In detail, for synthetic dataset I, we set $s = 5$ and $k = 1$. The leading eigenvalue of the covariance matrix $\Sigma$ is set to $\lambda_1 = 100$, and the corresponding eigenvector is sparse in the sense that only its first $s = 5$ entries are nonzero and equal to $1/\sqrt{5}$. The other eigenvalues are set to $\lambda_2 = \ldots = \lambda_p = 1$, and their eigenvectors are chosen arbitrarily. For synthetic dataset II, we set $s = 10$ and $k = 5$. The top-5 eigenvalues are set to $\lambda_1 = \ldots = \lambda_4 = 100$ and $\lambda_5 = 10$. We generate the corresponding eigenvectors by sampling their nonzero entries from a standard Gaussian distribution and then orthonormalizing them while keeping only the first $s = 10$ rows nonzero. The other eigenvalues are set to $\lambda_6 = \ldots = \lambda_p = 1$, and the associated eigenvectors are chosen arbitrarily. From the covariance matrix, the ground-truth rank-$k$ projection matrix $\Pi^*$ can be computed immediately. Note that synthetic dataset II is more challenging than synthetic dataset I, because the smallest magnitude of $\Pi^*$ in dataset I is 0.2, while that in dataset II is much smaller (about $10^{-3}$). We sample $n = 80$ i.i.d. observations from a normal distribution $N(0, \Sigma)$ with $p = 128$, and then compute the sample covariance matrix $\hat{\Sigma}$.

Since the focus of this paper is principal subspace estimation rather than principal eigenvector estimation, it is sufficient to compare our proposed estimators (Convex SPCA in (1) and Nonconvex SPCA in (3)) with the estimator proposed in [23], which we refer to as Fantope SPCA. Note that Fantope SPCA is the pioneering and state-of-the-art estimator for principal subspace estimation in sparse PCA. However, since Fantope SPCA uses the convex penalty $\|\Pi\|_{1,1}$ on the projection matrix $\Pi$, the estimator is biased [29]. We also compare our proposed estimators with the oracle estimator in (4), which is not a practical estimator but indicates the best results we could hope to achieve. To obtain the oracle estimator, we apply standard PCA to the submatrix of the sample covariance $\hat{\Sigma}$ supported on the true support; the true support is known here because we use synthetic datasets. To evaluate the performance of the above estimators, we report the Frobenius norm error $\|\hat{\Pi} - \Pi^*\|_F$, and we use the True Positive Rate (TPR) and False Positive Rate (FPR) to evaluate support recovery: the larger the TPR and the smaller the FPR, the better the support recovery. Both of our estimators use the MCP penalty, though other nonconvex penalties such as SCAD could be used as well. In particular, we set $b = 3$; for Convex SPCA, we set $\tau = 2b$, which satisfies $\tau > \zeta = 1/b$. The regularization parameter $\lambda$ in our estimators, as well as in Fantope SPCA, is tuned by 5-fold cross validation on a held-out dataset. The experiments are repeated 20 times, and the means as well as the standard errors are reported. The empirical results on synthetic datasets I and II are displayed in Table 1.

Table 1: Empirical results for subspace estimation on synthetic datasets I and II (mean ± standard error over 20 repetitions).

Synthetic dataset I ($n = 80$, $p = 128$, $s = 5$, $k = 1$):

| Method | $\lVert\hat{\Pi} - \Pi^*\rVert_F$ | TPR | FPR |
|---|---|---|---|
| Oracle | 0.0289 ± 0.0134 | 1 | 0 |
| Fantope SPCA | 0.0317 ± 0.0149 | 1.0000 ± 0.0000 | 0.0146 ± 0.0218 |
| Convex SPCA | 0.0290 ± 0.0132 | 1.0000 ± 0.0000 | 0.0000 ± 0.0000 |
| Nonconvex SPCA | 0.0290 ± 0.0133 | 1.0000 ± 0.0000 | 0.0000 ± 0.0000 |

Synthetic dataset II ($n = 80$, $p = 128$, $s = 10$, $k = 5$):

| Method | $\lVert\hat{\Pi} - \Pi^*\rVert_F$ | TPR | FPR |
|---|---|---|---|
| Oracle | 0.1487 ± 0.0208 | 1 | 0 |
| Fantope SPCA | 0.2788 ± 0.0437 | 1.0000 ± 0.0000 | 0.8695 ± 0.1634 |
| Convex SPCA | 0.2031 ± 0.0331 | 1.0000 ± 0.0000 | 0.5814 ± 0.0674 |
| Nonconvex SPCA | 0.2041 ± 0.0326 | 1.0000 ± 0.0000 | 0.6000 ± 0.0829 |

It can be observed that both the Convex SPCA and Nonconvex SPCA estimators greatly outperform the Fantope SPCA estimator [23] on both datasets. In detail, on synthetic dataset I, where the magnitude of $\Pi^*$ is relatively large, our Convex SPCA estimator achieves the same estimation error and the same perfect support recovery as the oracle estimator. This is consistent with our theoretical results in Theorems 1 and 2. In addition, our Nonconvex SPCA estimator achieves very similar results to Convex SPCA. This is not surprising: provided that the magnitude of all the entries in $\Pi^*$ is large, Nonconvex SPCA attains a rate that is only a factor of $\sqrt{s}$ slower than Convex SPCA. Fantope SPCA cannot recover the support perfectly, because it detects several false positive entries. This implies that the LCC condition is stronger than our large-magnitude assumption and does not hold on this dataset. On synthetic dataset II, our Convex SPCA estimator does not perform as well as the oracle estimator. This is because the magnitude of $\Pi^*$ is small (about $10^{-3}$): given the sample size $n = 80$, the conditions of Theorem 1 are violated. Note, however, that Convex SPCA is still slightly better than Nonconvex SPCA, and both are much better than Fantope SPCA. This again illustrates the superiority of our estimators over the best existing approach, i.e., Fantope SPCA [23].

6 Conclusion

In this paper, we study the estimation of the $k$-dimensional principal subspace of a population matrix $\Sigma$ based on the sample covariance matrix $\hat{\Sigma}$. We proposed a family of estimators based on novel regularizations. The first estimator is based on convex optimization and is suitable for projection matrices with large-magnitude entries; it enjoys the oracle property and the same convergence rate as standard PCA. The second estimator is based on nonconvex optimization and attains a faster rate than existing principal subspace estimators, even when the large-magnitude assumption is violated.
Numerical experiments on synthetic datasets support our theoretical results.

Acknowledgement

We would like to thank the anonymous reviewers for their helpful comments. This research is partially supported by the grants NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841.

References

[1] A. Amini and M. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5B):2877–2921, 2009.
[2] Q. Berthet and P. Rigollet. Computational lower bounds for sparse PCA. arXiv preprint arXiv:1304.0828, 2013.
[3] Q. Berthet and P. Rigollet. Optimal detection of sparse principal components in high dimension. The Annals of Statistics, 41(4):1780–1815, 2013.
[4] A. Birnbaum, I. M. Johnstone, B. Nadler, D. Paul, et al. Minimax bounds for sparse PCA with noisy high-dimensional data. The Annals of Statistics, 41(3):1055–1084, 2013.
[5] P. Breheny and J. Huang. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics, 5(1):232–253, 2011.
[6] T. T. Cai, Z. Ma, Y. Wu, et al. Sparse PCA: Optimal rates and adaptive estimation. The Annals of Statistics, 41(6):3074–3110, 2013.
[7] A. d'Aspremont, F. Bach, and L. Ghaoui. Optimal solutions for sparse principal component analysis. The Journal of Machine Learning Research, 9:1269–1294, 2008.
[8] A. d'Aspremont, L. E. Ghaoui, M. I. Jordan, and G. R. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, pages 434–448, 2007.
[9] J. Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2011.
[10] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[11] B. He, H. Liu, Z. Wang, and X. Yuan. A strictly contractive Peaceman-Rachford splitting method for convex programming. SIAM Journal on Optimization, 24(3):1011–1040, 2014.
[12] I. Johnstone and A. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486):682–693, 2009.
[13] I. Jolliffe, N. Trendafilov, and M. Uddin. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics, 12(3):531–547, 2003.
[14] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research, 11:517–553, 2010.
[15] R. Krauthgamer, B. Nadler, and D. Vilenchik. Do semidefinite relaxations really solve sparse PCA? arXiv preprint arXiv:1306.3690, 2013.
[16] J. Lei and V. Q. Vu. Sparsistency and agnostic inference in sparse PCA. arXiv preprint arXiv:1401.6978, 2014.
[17] P.-L. Loh and M. J. Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. arXiv preprint arXiv:1305.2436, 2013.
[18] K. Lounici. Sparse principal component analysis with missing observations. In High Dimensional Probability VI, pages 327–356. Springer, 2013.
[19] Z. Ma. Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41(2):772–801, 2013.
[20] D. Paul and I. M. Johnstone. Augmented sparse principal component analysis for high dimensional data. arXiv preprint arXiv:1202.1242, 2012.
[21] D. Shen, H. Shen, and J. Marron. Consistency of sparse PCA in high dimension, low sample size contexts.
Journal of Multivariate Analysis, 115:317–333, 2013.
[22] H. Shen and J. Huang. Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99(6):1015–1034, 2008.
[23] V. Q. Vu, J. Cho, J. Lei, and K. Rohe. Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In NIPS, pages 2670–2678, 2013.
[24] V. Q. Vu and J. Lei. Minimax rates of estimation for sparse PCA in high dimensions. In International Conference on Artificial Intelligence and Statistics, pages 1278–1286, 2012.
[25] V. Q. Vu and J. Lei. Minimax sparse principal subspace estimation in high dimensions. The Annals of Statistics, 41(6):2905–2947, 2013.
[26] Z. Wang, H. Liu, and T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164–2201, 2014.
[27] Z. Wang, H. Lu, and H. Liu. Nonconvex statistical optimization: Minimax-optimal sparse PCA in polynomial time. arXiv preprint arXiv:1408.5352, 2014.
[28] X.-T. Yuan and T. Zhang. Truncated power method for sparse eigenvalue problems. The Journal of Machine Learning Research, 14(1):899–925, 2013.
[29] C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[30] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.