# Efficient Sparse PCA via Block-Diagonalization

Published as a conference paper at ICLR 2025

Alberto Del Pia & Dekun Zhou
University of Wisconsin-Madison
{delpia,dzhou44}@wisc.edu

Yinglun Zhu
University of California, Riverside
yzhu@ucr.edu

Sparse Principal Component Analysis (Sparse PCA) is a pivotal tool in data analysis and dimensionality reduction. However, Sparse PCA is a challenging problem in both theory and practice: it is known to be NP-hard, and current exact methods generally require exponential runtime. In this paper, we propose a novel framework to efficiently approximate Sparse PCA by (i) approximating the general input covariance matrix with a re-sorted block-diagonal matrix, (ii) solving the Sparse PCA sub-problem in each block, and (iii) reconstructing the solution to the original problem. Our framework is simple and powerful: it can leverage any off-the-shelf Sparse PCA algorithm and achieve significant computational speedups, with a minor additive error that is linear in the approximation error of the block-diagonal matrix. Suppose g(k, d) is the runtime of an algorithm that (approximately) solves Sparse PCA in dimension d with sparsity constant k. Our framework, when integrated with this algorithm, reduces the runtime to O((d/d′) · g(k, d′) + d²), where d′ ≤ d is the largest block size of the block-diagonal matrix. For instance, integrating our framework with the Branch-and-Bound algorithm reduces the complexity from g(k, d) = O(k³ · d^k) to O(k³ · d · (d′)^(k−1)), demonstrating exponential speedups if d′ is small.
We perform large-scale evaluations on many real-world datasets: for an exact Sparse PCA algorithm, our method achieves an average speedup factor of 100.50 while maintaining an average approximation error of 0.61%; for an approximate Sparse PCA algorithm, our method achieves an average speedup factor of 6.00 and an average approximation error of -0.91%, meaning that our method oftentimes finds better solutions.

1 INTRODUCTION

In this paper, we study the Sparse Principal Component Analysis (Sparse PCA) problem, a variant of the well-known Principal Component Analysis (PCA) problem. Similar to PCA, Sparse PCA involves finding a linear combination of d features that explains most of the variance. However, Sparse PCA distinguishes itself by requiring the use of only k ≪ d features, thus integrating a sparsity constraint. This constraint significantly enhances interpretability, which is essential in data analysis when dealing with a large number of features. Sparse PCA has found widespread application across various domains, including text data analysis (Zhang & Ghaoui, 2011), cancer research (Hsu et al., 2014), bioinformatics (Ma & Dai, 2011), and neuroscience (Zhuang et al., 2020). For further reading, we refer interested readers to a comprehensive survey on Sparse PCA (Zou & Xue, 2018). While being important and useful, Sparse PCA is a challenging problem: it is known to be NP-hard (Magdon-Ismail, 2017). Solving Sparse PCA exactly, such as through the Branch-and-Bound algorithm (Berk & Bertsimas, 2019), requires worst-case exponential runtime. Moreover, while there are near-optimal approximation algorithms that exhibit faster performance empirically, they still require worst-case exponential runtime (Bertsimas et al., 2022; Li & Xie, 2020; Cory-Wright & Pauphilet, 2022; Dey et al., 2022b; Zou et al., 2006; Dey et al., 2022a). This complexity impedes their applicability to large-scale datasets.
On the other hand, there are numerous polynomial-time approximation algorithms for Sparse PCA (Chowdhury et al., 2020; Li & Xie, 2020; Papailiopoulos et al., 2013; Chan et al., 2015; Del Pia, 2022; Asteris et al., 2015). These algorithms typically face trade-offs between efficiency and solution quality. Some algorithms are fast but yield sub-optimal solutions, others provide high-quality solutions at the cost of greater computational complexity, and a few achieve both efficiency and accuracy but only under specific statistical assumptions. In this paper, we introduce a novel framework designed to efficiently approximate Sparse PCA through matrix block-diagonalization. Our framework facilitates the use of off-the-shelf algorithms in a plug-and-play manner. Specifically, the framework comprises three principal steps: (i) Matrix Preparation: given a general input covariance matrix to Sparse PCA, generate a re-sorted block-diagonal matrix approximation. This includes finding a denoised input matrix by thresholding, grouping non-zero entries into blocks, and slicing the corresponding blocks of the original input matrix. (ii) Sub-Problem Solution: solve a Sparse PCA sub-problem, using any known algorithm, in each block of lower dimension, and find the best solution among the sub-problems. (iii) Solution Reconstruction: reconstruct the best solution to the original Sparse PCA problem. We illustrate these steps in Fig. 1. At a high level, our approach involves solving several sub-problems in smaller matrices instead of directly addressing Sparse PCA in a large input matrix (Step (ii)), which significantly reduces computational runtime. This is made possible by carefully constructing sub-problems (Step (i)) and efficiently reconstructing an approximate solution to the original problem (Step (iii)).

Figure 1: Illustration of our proposed approach, given a 9 × 9 covariance input matrix A.
(i) Entries away from zero are highlighted in the upper matrix (the original input A). We then group those entries into blocks, zero out the entries outside the blocks, sort the matrix, and obtain the lower block-diagonal approximation. Heatmaps present the values of the matrix entries, and the axes are indices of A. (ii) Extract sub-matrices from the block-diagonal approximation, and solve the sub-problems via a suitable Sparse PCA algorithm. (iii) Select the solution with the highest objective value obtained from the sub-problems. Construct a solution for the original Sparse PCA problem by mapping its non-zero entries to their original locations using the inverse mapping of the sorting process, and setting all other entries to zero.

Our framework significantly speeds up the computation of Sparse PCA, with negligible approximation error. We theoretically quantify the computational speedup and approximation error of our framework in Sections 3 and 4, and conduct large-scale empirical evaluations in Section 5. Next, we illustrate the performance of our method via a concrete example. Suppose one is given an input covariance matrix that is a noisy sorted block-diagonal matrix with 10 blocks, each of size 100, and is asked to solve Sparse PCA with sparsity constant 4. The Branch-and-Bound algorithm (Berk & Bertsimas, 2019) takes over 3600 seconds and obtains a solution with objective value 13.649. Our framework integrated with the Branch-and-Bound algorithm reduces the runtime to 17 seconds, a speedup by a factor of 211, and obtains a solution with objective value 13.637, an approximation error of 0.09%.

Our contributions. We summarize our contributions in the following:

- We propose a novel framework for approximately solving Sparse PCA via matrix block-diagonalization. This framework allows users to reuse any known algorithm designed for Sparse PCA in the literature in a very easy way, i.e., by simply applying the known algorithm to the blocks of the approximate matrix, and accelerates the computation.
To the best of our knowledge, our work is the first to apply block-diagonalization to solving the Sparse PCA problem.

- We provide approximation guarantees for our framework. We show that, when integrated with an approximation algorithm, our method can generally obtain a better multiplicative factor, at the cost of an additive error that is linear in the approximation error of the block-diagonal approximate matrix. Under certain assumptions, the additive error can also be improved. We also perform a time complexity analysis showing that for an (approximation) algorithm with runtime g(k, d), our framework could reduce the runtime to O((d/d′) · g(k, d′) + d²), where d is the dimension of the Sparse PCA input covariance matrix, k is the sparsity constant, and d′ is the largest block size in the block-diagonal approximate matrix.

- We show that, for a statistical model widely studied in robust statistics, there exists an efficient way to find a block-diagonal approximation matrix. Moreover, we provide a very simple way to extend to the setting where the statistical assumptions are dropped and where computational resources are taken into consideration.

- We conduct extensive experiments to evaluate our framework. Our empirical study shows that (i) when integrated with the Branch-and-Bound algorithm, our framework achieves an average speedup factor of 100.50 while maintaining an average approximation error of 0.61%, and (ii) when integrated with Chan's algorithm (Chan et al., 2015), our framework achieves an average speedup factor of 6.00 and an average approximation error of -0.91%, meaning that our method oftentimes finds better solutions.

Our techniques. In this paper, we also introduce several novel techniques that may be of independent interest. We characterize a structural property of an optimal solution to Sparse PCA when given a block-diagonal input matrix.
We establish that solving Sparse PCA for a block-diagonal matrix input is equivalent to addressing individual Sparse PCA sub-problems within the blocks, which consequently provides significant computational speedups. This result is non-trivial: while the support of optimal solutions might span multiple blocks, we prove that there always exists an optimal solution whose support is contained within a single block, ensuring the efficiency of our framework. Additionally, leveraging the defined structure, we propose a novel binary-search based method to identify the best block-diagonal approximation, particularly when computational resources are limited.

Paper organization. This paper is organized as follows. In Section 2, we introduce the Sparse PCA problem formally and provide the motivation for our proposed framework. In Section 3, we detail the operational procedures of our proposed framework, and develop guarantees on approximation errors as well as runtime complexity. In Section 4, we further extend our algorithms to learning problems either under (i) a statistical model, or (ii) a model-free setting, and provide theoretical guarantees respectively. In Section 5, we run large-scale empirical evaluations and demonstrate the efficacy of our proposed framework when integrated with both exact and approximate Sparse PCA algorithms. We defer the discussion of additional related work, including existing block-diagonalization methods in the literature, detailed proofs, and additional empirical results to the appendix.

2 PROBLEM SETTING AND MOTIVATION

In this section, we formally define the problem of interest in this paper, and then we explain the motivation behind our framework. We first define the Sparse PCA problem as follows:

    OPT := max xᵀAx  s.t.  ‖x‖₂ = 1, ‖x‖₀ ≤ k.    (SPCA)

Here, a symmetric and positive semidefinite input matrix A ∈ ℝ^(d×d) and a positive integer k ≤ d are given. The parameter k is known as the sparsity constant, an upper bound on the number of nonzero entries in the solution x.
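For tiny instances, (SPCA) can be solved by brute force: since A is positive semidefinite, enumerating all supports of size exactly k and taking the top eigenvalue of each principal submatrix attains the optimum. The sketch below is our own reference implementation for illustration (not the paper's Branch-and-Bound algorithm), with O(C(d, k) · k³) runtime:

```python
# Brute-force reference solver for (SPCA): enumerate size-k supports and
# take the largest eigenvalue of each principal submatrix. A hypothetical
# sketch for tiny instances only, not the paper's Branch-and-Bound method.
from itertools import combinations
import numpy as np

def spca_bruteforce(A: np.ndarray, k: int):
    """Return (OPT, x) for max x^T A x s.t. ||x||_2 = 1, ||x||_0 <= k."""
    d = A.shape[0]
    best_val, best_x = -np.inf, None
    for S in combinations(range(d), min(k, d)):
        sub = A[np.ix_(S, S)]                # principal submatrix A_{S,S}
        vals, vecs = np.linalg.eigh(sub)     # eigenvalues in ascending order
        if vals[-1] > best_val:
            best_val = vals[-1]
            best_x = np.zeros(d)
            best_x[list(S)] = vecs[:, -1]    # embed top eigenvector into R^d
    return best_val, best_x
```

Enumerating supports of size exactly k suffices here because, for a positive semidefinite A, enlarging the support can only increase the top eigenvalue of the principal submatrix.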
Motivating idea in this paper. The motivation for our method is illustrated through the following example, which demonstrates the efficiency of solving sub-problems. Suppose that a user is given an input block-diagonal covariance matrix A of size d, comprising d/d′ blocks of size d′ × d′ (assume that d is divisible by d′), and is asked to solve (SPCA) with sparsity constant k. The user can simply apply an exact algorithm (e.g., the Branch-and-Bound algorithm) to solve the large problem in time O(k³ · d^k). The user could also solve d/d′ sub-problems, one in each block, potentially obtaining an optimal solution (a fact we rigorously prove in Section 3.2.1), in time O((d/d′) · k³ · (d′)^k). This segmented approach yields a significant speedup factor of O((d/d′)^(k−1)). However, not all input matrices A have this block-diagonal structure, even after permuting rows and columns. One goal of our work is to develop a reasonable way to construct a (sorted) block-diagonal matrix approximation Ã to A such that the two are close enough. In this paper, we are interested in finding the following ε-matrix approximation to A:

Definition 1 (ε-matrix approximation). Given a matrix A ∈ ℝ^(d×d), denote by Aij the (i, j)-th entry of A. An ε-matrix approximation of A is defined as a d × d matrix Ã such that

    ‖A − Ã‖∞ := max_{i,j ∈ [d]} |Aij − Ãij| ≤ ε.

Moreover, we define the set of all ε-matrix approximations of A to be B(A, ε). In other words, Ã ∈ B(A, ε) if and only if Ã belongs to an ℓ∞-ball centered at A with radius ε. Our overarching strategy is that, if the input matrix is not block-diagonal, we find a matrix approximation in this ball that can be re-sorted to be block-diagonal. We then solve the Sparse PCA problem using this re-sorted matrix and reconstruct an approximate solution to the original problem, taking advantage of the computational speedups obtained by addressing smaller sub-problems.
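As a sanity check on Definition 1, note that entrywise thresholding at level ε (the denoising step the paper introduces later as Algorithm 1) always stays inside the ℓ∞-ball: each entry is either kept exactly or moved by at most ε. A minimal numerical check, with function names of our own:

```python
# Verify that thresholding |A_ij| <= eps to zero yields a member of the
# eps-ball B(A, eps) from Definition 1. Names are our own illustration.
import numpy as np

def in_eps_ball(A: np.ndarray, B: np.ndarray, eps: float) -> bool:
    """Check whether B is an eps-matrix approximation of A."""
    return np.max(np.abs(A - B)) <= eps

rng = np.random.default_rng(0)
G = rng.normal(size=(6, 6))
A = (G + G.T) / 2                          # symmetric test matrix
eps = 0.5
A_eps = np.where(np.abs(A) > eps, A, 0.0)  # entrywise thresholding
assert in_eps_ball(A, A_eps, eps)          # each entry moved by 0 or <= eps
```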
Additional notation. We adopt non-asymptotic big-oh notation: for functions f, g : Z → ℝ₊, we write f = O(g) (resp. f = Ω(g)) if there exists a constant C > 0 such that f(z) ≤ Cg(z) (resp. f(z) ≥ Cg(z)) for all z ∈ Z. For an integer n ∈ ℕ, we let [n] denote the set {1, 2, . . . , n}. For a vector z ∈ ℝ^d, we use zi to denote its i-th entry. We denote by supp(z) := {i ∈ [d] : zi ≠ 0} the support of z. For 1 ≤ q < ∞, we denote by ‖z‖q the q-norm of z, i.e., ‖z‖q := (Σ_{i=1}^d |zi|^q)^(1/q), and ‖z‖∞ := max_{i∈[d]} |zi|. For a matrix A ∈ ℝ^(n×m), we use Aij to denote its (i, j)-th entry, and define ‖A‖∞ := max_{i,j} |Aij|. For index sets S ⊆ [n] and T ⊆ [m], we denote by A_{S,T} the |S| × |T| sub-matrix of A with row index set S and column index set T. For matrices Ai ∈ ℝ^(di×di), i = 1, 2, . . . , p, we denote by diag(A1, A2, . . . , Ap) the (Σ_{i=1}^p di) × (Σ_{i=1}^p di) block-diagonal matrix with blocks A1, A2, . . . , Ap. For a square matrix B ∈ ℝ^(d×d), we define the size of B to be d.

3 OUR APPROACH

In this section, we introduce our framework for solving (SPCA) approximately via a block-diagonalization technique. We explain in detail the operational procedures of our framework in Section 3.1. In Section 3.2, we characterize the approximation guarantees and computational speedups of our framework. Note that, in this section, our proposed procedures require a predefined threshold ε > 0. In Section 4, we extend our algorithms to learning the threshold under a statistical model, as well as in a model-free setting.

3.1 ALGORITHMS

In this section, we outline the operational procedure of our proposed framework for solving (SPCA) approximately. The procedure commences with the input matrix A ∈ ℝ^(d×d) and a predefined threshold value ε > 0. Initially, a denoised matrix A_ε is derived through thresholding, as described in Algorithm 1. Subsequently, several sub-matrices and their corresponding sets of indices are extracted from A and A_ε by employing Algorithm 2, which groups an index pair (i, j) whenever (A_ε)ij ≠ 0.
This grouping mechanism can be efficiently implemented using a depth-first search, which iterates over each unvisited index i, examines all non-zero entries (A_ε)ij, and then visits each corresponding unvisited index j. The computational complexity of Algorithm 2 is bounded by O(d²). Finally, an approximate solution to (SPCA) is obtained by solving a Sparse PCA sub-problem in each block using a given (approximation) algorithm A, mapping the solutions back to the original problem, and selecting the best one. The operational details are further elaborated in Algorithm 3. (We note that on line 6 of Algorithm 3, if k is larger than the size of Ãi, a PCA problem is solved instead.)

Algorithm 1: Denoising procedure via thresholding
1: Input: Matrix A ∈ ℝ^(d×d), threshold ε > 0
2: Output: Denoised matrix A_ε ∈ ℝ^(d×d)
3: for (i, j) = (1, 1) to (d, d) do
4:     If |Aij| > ε, set (A_ε)ij ← Aij; otherwise set (A_ε)ij ← 0
5: Return A_ε

Algorithm 2: Matrix block-diagonalization
1: Input: Matrices A, A_ε ∈ ℝ^(d×d)
2: Output: Sub-matrices and their corresponding lists of indices
3: for each pair of indices (i, j) ∈ [d] × [d] do
4:     Group i and j together if |(A_ε)ij| > 0
5: Obtain the sets of indices {Si}_{i=1}^p that are grouped together
6: Return the sub-matrices A_{Si,Si} and the corresponding Si, for i ∈ [p]

3.2 APPROXIMATION GUARANTEES AND COMPUTATIONAL SPEEDUPS

In this section, we analyze the approximation guarantees and speedups of our framework, Algorithm 3. Before showing approximation guarantees, we first define the ε-intrinsic dimension of A:

Definition 2 (ε-intrinsic dimension). Let A ∈ ℝ^(d×d). The ε-intrinsic dimension of A, denoted by int dim(A, ε), is defined as

    int dim(A, ε) := min_{B ∈ B(A,ε)} lbs(B),

where lbs(B) denotes the largest block size of B, i.e., the largest among {|Si|}_{i∈[p]} after running lines 3-5 of Algorithm 2 with A_ε = B.
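A compact sketch of Algorithms 1 and 2, with function names of our own: thresholding is a single vectorized comparison, and grouping is a depth-first search over the graph whose edges are the surviving entries, which for a dense matrix gives the stated O(d²) bound.

```python
# A minimal sketch of Algorithms 1-2: threshold the matrix entrywise, then
# group indices into blocks as the connected components of the graph whose
# edges are the surviving entries. Function names are our own.
import numpy as np

def denoise(A: np.ndarray, eps: float) -> np.ndarray:
    """Algorithm 1: zero out all entries with |A_ij| <= eps."""
    return np.where(np.abs(A) > eps, A, 0.0)

def block_diagonalize(A: np.ndarray, A_eps: np.ndarray):
    """Algorithm 2: return (sub-matrix, index set) pairs via DFS, O(d^2)."""
    d = A.shape[0]
    seen, blocks = [False] * d, []
    for s in range(d):
        if seen[s]:
            continue
        stack, comp = [s], []
        seen[s] = True
        while stack:                     # iterative depth-first search
            i = stack.pop()
            comp.append(i)
            for j in range(d):
                if not seen[j] and A_eps[i, j] != 0.0:
                    seen[j] = True
                    stack.append(j)
        S = sorted(comp)
        blocks.append((A[np.ix_(S, S)], S))   # slice the ORIGINAL matrix A
    return blocks
```

Note that, as in the paper's Step (i), the sub-matrices are sliced from the original A, not from the thresholded A_ε; the thresholded matrix only determines the grouping.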
As we will see later in this section, this concept plays a crucial role in characterizing both the quality of the approximate solution obtained through our framework, Algorithm 3, and the computational speedup it achieves. Moreover, a matrix Ã such that lbs(Ã) = int dim(A, ε) can be efficiently identified using Algorithm 1, as shown in Lemma 1.

Lemma 1. Let A ∈ ℝ^(d×d) and ε > 0. Given input (A, ε), Algorithm 1 outputs an ε-matrix approximation of A, denoted by Ã, such that lbs(Ã) = int dim(A, ε), in time O(d²).

3.2.1 APPROXIMATION GUARANTEES

In this section, we provide approximation guarantees for our framework, Algorithm 3. We first show, in Theorem 1, that an optimal solution to (SPCA) with input (A, k) and an optimal solution with input (A_ε, k) are close to each other in terms of objective value:

Theorem 1. Let A ∈ ℝ^(d×d) be symmetric, k ≤ d, and ε > 0. Denote by A_ε the output of Algorithm 1 with input (A, ε), by x̃ an optimal solution to (SPCA) with input (A_ε, k), and by x* an optimal solution to (SPCA) with input (A, k). Then, it follows that

    (x*)ᵀAx* − x̃ᵀAx̃ ≤ 2kε.

Algorithm 3: Efficient Sparse PCA via block-diagonalization
1: Input: Matrix A ∈ ℝ^(d×d), positive integer k ≤ d, threshold ε > 0, algorithm A for (approximately) solving (SPCA)
2: Output: Approximate solution to (SPCA)
3: Obtain the denoised matrix A_ε using Algorithm 1 with input (A, ε)
4: Obtain sub-matrices {Ãi}_{i=1}^p and corresponding index sets {Si}_{i=1}^p via Algorithm 2 with input (A, A_ε)
5: for i = 1 to p do
6:     Solve (SPCA) approximately with input (Ãi, k) using algorithm A and obtain a solution xi
7:     Construct yi ∈ ℝ^d by placing (xi)j at entry Si(j), for each j in the block indices Si, and zeros elsewhere
8: Return the solution among {yi}_{i=1}^p with the largest value yiᵀAyi

Then, in Theorem 2, we show that solving (SPCA) exactly with a block-diagonal input matrix Ã is equivalent to addressing (SPCA) sub-problems in the blocks of Ã:

Theorem 2. Let Ã = diag(Ã1, Ã2, . . .
, Ãp) be a symmetric block-diagonal matrix. Denote by OPT the optimal value of (SPCA) with input pair (Ã, k), and by OPTi the optimal value of (SPCA) with input pair (Ãi, k), for i ∈ [p]. Then, one has OPT = max_{i∈[p]} OPTi.

Remark 1. Theorem 2 is particularly non-trivial: without this result, one might think the support of an optimal solution to (SPCA) may span different blocks. Solving (SPCA) with a block-diagonal input would consequently involve iterating over all possible sparsity constants (summing to k) for each block-diagonal sub-matrix, making the framework proposed in Section 3.1 nearly impractical.

We now extend Theorem 1 to incorporate the use of various approximation algorithms. We note that our extension improves the general bound proposed in Theorem 1, since Algorithm 3 finds a specific block-diagonal approximation of A. Furthermore, Theorem 3 suggests that our framework, Algorithm 3, can also be highly effective when combined with other approximation algorithms. Specifically, it has the potential to find a better solution, particularly when the value of ε is small.

Theorem 3. Let A ∈ ℝ^(d×d) be symmetric and positive semidefinite, let k be a positive integer, and denote by x* ∈ ℝ^d an optimal solution to (SPCA) with input (A, k). Suppose that an algorithm A for (SPCA) with input (A, k) finds an approximate solution x ∈ ℝ^d with multiplicative factor m(k, d) ≥ 1 and additive error a(k, d) ≥ 0, i.e., one has

    xᵀAx ≥ (x*)ᵀAx* / m(k, d) − a(k, d).

Furthermore, we assume that m(k, d) and a(k, d) are non-decreasing with respect to d. For ε > 0, denote by y ∈ ℝ^d the output of Algorithm 3 with input tuple (A, k, ε, A). Then, one has

    yᵀAy ≥ (x*)ᵀAx* / m(k, int dim(A, ε)) − a(k, int dim(A, ε)) − kε / m(k, int dim(A, ε)).

Remark 2. Denote d′ := int dim(A, ε) ≤ d. Theorem 3 implies that our framework, Algorithm 3, when given input (A, k, ε, A), finds an approximate solution with a better multiplicative factor m(k, d′), and an additive error of a(k, d′) + kε/m(k, d′).
Note that the additive error is also improved whenever a(k, d) ≥ a(k, int dim(A, ε)) + kε/m(k, int dim(A, ε)). In addition, we have found instances where our framework can indeed obtain better solutions when integrated with some approximation algorithms, as shown in Table 9 in Appendix C. Finally, we summarize in Table 1 the change in multiplicative factor and additive error when integrating our framework with some existing (approximation) algorithms for (SPCA).

Table 1: Comparison of multiplicative factors (MF) and additive errors (AE) when integrating our framework with different algorithms. The MISDP and Greedy algorithms are both studied in Li & Xie (2020). Note that our MF is always no larger than the original MF.

| Algorithm | MF | AE | Our MF | Our AE |
| --- | --- | --- | --- | --- |
| Chan et al. (2015) | min(√k, d^(1/3)) | 0 | min(√k, int dim(A, ε)^(1/3)) | kε / Our MF |
| MISDP | min(k, d/k) | 0 | min(k, int dim(A, ε)/k) | kε / Our MF |
| Greedy | k | 0 | k | ε |
| Exact algorithms | 1 | 0 | 1 | kε |
| Asteris et al. (2015) | 1 | δ | 1 | kε + δ |

3.2.2 COMPUTATIONAL SPEEDUPS

In this section, we characterize the runtime complexity of Algorithm 3. We will see that the ε-intrinsic dimension plays an important role here as well.

Proposition 1. Let A ∈ ℝ^(d×d), and let k be a positive integer. Suppose that, for a specific algorithm A, the time complexity of using A for (approximately) solving (SPCA) with input (A, k) is a function g(k, d) that is convex and non-decreasing with respect to d, and assume that g(k, 1) = 1. Then, for a given threshold ε > 0, the runtime of Algorithm 3 with input tuple (A, k, ε, A) is at most

    O( (d / int dim(A, ε)) · g(k, int dim(A, ε)) + d² ).

Remark 3. We remark that the assumption that g(k, d) is convex and non-decreasing with respect to d in Proposition 1 is a very weak assumption. Common examples include d^α and d^α log d with α ≥ 1.

Remark 4. In this remark, we explore the computational speedups provided by Algorithm 3 by modeling int dim(A, ε) as a function of A and ε.
We assume without loss of generality that ‖A‖∞ = 1, thereby setting int dim(A, 1) = 0. It is important to note that, while int dim(A, ε) is discontinuous, for the purpose of illustration in Table 2 it is treated as a continuous function of ε.

Table 2: Comparison of original runtimes and runtimes in our framework. We assume that the algorithm has runtime O(d^α), for some α ≥ 2.

| int dim(A, ε) | Runtime | Our Runtime | Speedup factor |
| --- | --- | --- | --- |
| d(1 − e^(−c(1−ε))) | O(d^α) | O(d^α (1 − e^(−c(1−ε)))^(α−1)) | O((1 − e^(−c(1−ε)))^(1−α)) |
| d(1 − ε) | O(d^α) | O(d^α (1 − ε)^(α−1)) | O((1 − ε)^(1−α)) |
| d(1 − ε)^m | O(d^α) | O(d^α (1 − ε)^((α−1)m)) | O((1 − ε)^((1−α)m)) |

4 EXTENSIONS

Our algorithm presented in Section 3.1 takes a known threshold as an input. In this section, we remove this constraint by extending our algorithm to various settings. Specifically, in Section 4.1, we consider learning under a statistical model, in which the denoising of a matrix can be done easily. In Section 4.2, we propose a practical learning approach without any assumptions.

4.1 LEARNING UNDER A STATISTICAL MODEL

A crucial aspect of our proposed framework involves determining an appropriate threshold ε, as required by Algorithm 3. While specifying such a threshold is generally necessary, there are many instances where it can be effectively estimated from the data. In this section, we discuss a model commonly employed in robust statistics (Comminges et al., 2021; Kotekal & Gao, 2023; 2024). This model finds broad applications across various fields, including clustering (Chen & Witten, 2022) and sparse linear regression (Minsker et al., 2022). Before presenting the formal definition, we first establish some foundational intuition about the model. Conceptually, if the input data matrix A is viewed as an empirical covariance matrix, then, based on the intuition that a feature is expected to exhibit strong correlations with only a few other features, A should be, or at least be close to, a block-diagonal matrix. This observation motivates us to consider Model 1:

Model 1.
The input matrix A ∈ ℝ^(d×d) is close to a block-diagonal symmetric matrix Ã ∈ ℝ^(d×d), i.e., A = Ã + E for some symmetric matrix E ∈ ℝ^(d×d) such that A is positive semidefinite and: (i) for i ≤ j, each entry Eij is drawn i.i.d. from a ϱ-sub-Gaussian distribution with mean zero and variance σ², where ϱ > 0 and σ² are unknown, but an upper bound u ≥ ϱ²/σ² is given; (ii) writing Ã = diag(Ã1, Ã2, . . . , Ãp), the size of each diagonal block Ãi is upper bounded by d′, where d′ ≤ Cd^α for some known constant C > 0 and α ∈ (0, 1), but d′ itself is unknown.

In assumption (i) of Model 1, the sub-Gaussian distribution is particularly favored, as this class of distributions is sufficiently broad to encompass not only the Gaussian distribution but also distributions of bounded variables, which are commonly observed in the noise components of real-world datasets. Assumption (ii) of Model 1 uses the constant α to quantify the extent to which features are clustered. The procedure for estimating ‖E‖∞ in Model 1 involves an estimate of the variance σ², adapted from Comminges et al. (2021), which is then multiplied by a constant multiple of √(log d). We present the details in Algorithm 4.

In the next theorem, we provide a high-probability approximation bound, as well as a runtime analysis, for Algorithm 3 using the threshold ε found via Algorithm 4. The results imply that, in Model 1, one can efficiently solve (SPCA) by exploiting the hidden sparse structure.

Proposition 2. Consider Model 1, and denote by ε the output of Algorithm 4 with input tuple (A, C, α, u). Let k ≤ d be a positive integer, and assume that d satisfies d^(1−α) > C0 (C + 1)

Algorithm 4: Estimation of ‖E‖∞ in Model 1
1: Input: Matrix A ∈ ℝ^(d×d), parameters C > 0, α ∈ (0, 1), and u from Model 1
2: Output: A number ε > 0 as an estimate of ‖E‖∞
3: Divide the set of indices {(i, j) ∈ [d] × [d] : i ≤ j} into m = (2C + 2)d^(1+α) disjoint subsets B1, B2, . . .
, Bm randomly, ensuring each subset Bi has cardinality lower bounded by ⌊(d² + d)/(2m)⌋
4: Initialize an array S to store the subset variances
5: for i = 1 to m do
6:     Si ← (1/|Bi|) Σ_{(i,j)∈Bi} A²ij
7: σ̂² ← median(S)
8: Return ε ← 2√u · σ̂ · √(3 log d)

· u log(8C + 8), for some large enough absolute constant C0 > 0. Then, the following holds with probability at least 1 − 2d^(−1) − exp{2 log d − d^(1+α)/(4C + 4)}:

(i) Denote by x̃* the optimal solution to (SPCA) with input (Ã, k). For an (approximation) algorithm A with multiplicative factor m(k, d) ≥ 1 and additive error a(k, d) ≥ 0, where the functions m and a are non-decreasing with respect to d, the output y of Algorithm 3 with input tuple (A, k, ε, A) satisfies

    yᵀÃy ≥ (x̃*)ᵀÃx̃* / m(k, d′) − a(k, d′) − (1 + √2) kε / m(k, d′).

(ii) If, in addition, the time complexity of using A for (approximately) solving (SPCA) with input (A, k) is a function g(k, d) that is convex and non-decreasing with respect to d and satisfies g(k, 1) = 1, then the runtime of Algorithm 3 with input tuple (A, k, ε, A) is at most O((d/d′) · g(k, d′) + d²).

Finally, we remark that the assumption d^(1−α) > C0 (C + 1) u log(8C + 8) holds when d is large enough, since C0, C, and u are usually constants that do not change with the dimension d.

4.2 A PRACTICAL LEARNING APPROACH

In this section, we describe a practical algorithm to estimate ε without prior statistical assumptions, while simultaneously providing an approximate solution to (SPCA). Recall from Algorithm 3 that, for a given threshold ε, a thresholded matrix A_ε is generated via Algorithm 1 from the original matrix A. This matrix A_ε is then used to approximate (SPCA) with a specified algorithm A. Our proposed approach in this section involves initially selecting an ε value within a predefined range, executing Algorithm 3 with this ε, and subsequently adjusting ε over the following iterations through a binary search process.
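The two threshold-selection routes of this section can be sketched together as follows: a median-based noise-scale estimate in the spirit of Algorithm 4, and a binary search over ε that enforces a block-size budget d0 in the spirit of the practical approach just described. This is our own simplified reading; the constants, the feasibility rule, and all function names are illustrative rather than the paper's exact procedures.

```python
# Two ways to pick the threshold eps (our own simplified sketches).
import numpy as np

def estimate_threshold(A, C, alpha, u, seed=0):
    """Algorithm 4-style estimate: median of subset second moments over the
    upper-triangular entries, scaled by ~ sqrt(u log d). Illustrative only."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    idx = [(i, j) for i in range(d) for j in range(i, d)]
    idx = [idx[t] for t in rng.permutation(len(idx))]    # random split
    m = max(1, min(len(idx), int((2 * C + 2) * d ** (1 + alpha))))
    moments = [np.mean([A[i, j] ** 2 for (i, j) in idx[t::m]])
               for t in range(m)]
    sigma = np.sqrt(np.median(moments))                  # robust noise scale
    return 2.0 * np.sqrt(u) * sigma * np.sqrt(3.0 * np.log(d))

def largest_block_size(A_eps):
    """lbs(.): size of the largest connected block of the thresholded matrix."""
    d = A_eps.shape[0]
    seen, best = [False] * d, 0
    for s in range(d):
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:
            i = stack.pop()
            size += 1
            for j in range(d):
                if not seen[j] and A_eps[i, j] != 0.0:
                    seen[j] = True
                    stack.append(j)
        best = max(best, size)
    return best

def tune_threshold(A, d0, delta=1e-3):
    """Algorithm 5-style binary search: find a small eps whose largest
    thresholded block fits within the computational budget d0."""
    L, U = 0.0, float(np.max(np.abs(A)))
    eps_star = U
    while U - L > delta:
        eps = (L + U) / 2.0
        A_eps = np.where(np.abs(A) > eps, A, 0.0)
        if largest_block_size(A_eps) > d0:
            L = eps                    # blocks too large: threshold harder
        else:
            eps_star, U = eps, eps     # feasible: try a smaller threshold
    return eps_star
```

In the full procedure, each feasible ε would additionally trigger a call to Algorithm 3 so that the best objective value found is tracked; the sketch above keeps only the block-size budget logic.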
Since efficiency is our major interest, we ask users to provide a parameter d0 indicating the maximum dimension they are willing to compute, due to computational resource limits. We defer the detailed description of this binary-search based algorithm to Algorithm 5. For clarity, let us denote by OPT(ε) the objective value (x_ε)ᵀAx_ε, where x_ε is the output of Algorithm 3 with input (A, k, ε, A). We have the following characterization of the approximation error and runtime complexity of Algorithm 5:

Theorem 4. Let A ∈ ℝ^(d×d) be symmetric and positive semidefinite, and let k be a positive integer. Suppose that an algorithm A for (SPCA) with input (A, k) finds an approximate solution x ∈ ℝ^d with multiplicative factor m(k, d) ≥ 1 and additive error a(k, d) ≥ 0, with m(k, d) and a(k, d) non-decreasing with respect to d. Suppose that Algorithm 5 with input (A, k, δ, A, d0) terminates with OPT(ε*) ≥ 0. Then one has

    OPT(ε*) ≥ OPT(0) / m(k, d0) − a(k, d0) − kε* / m(k, d0).

Suppose, moreover, that the time complexity of using A for (approximately) solving (SPCA) with input (A, k) is a function g(k, d) that is convex and non-decreasing with respect to d, with g(k, 1) = 1 and g(k, 0) = 0. Then the runtime of Algorithm 5 is at most O( log(‖A‖∞/δ) · ((d/d0) · g(k, d0) + d²) ).

Algorithm 5: A practical version of Algorithm 3 without a given threshold
1: Input: Matrix A ∈ ℝ^(d×d), positive integer k ≤ d, tolerance parameter δ, (approximate) algorithm A for (SPCA), and integer d0
2: Output: Approximate solution to (SPCA)
3: L ← 0, U ← ‖A‖∞; find OPT(U) via A
4: while U − L > δ do
5:     ε ← (L + U)/2
6:     Find A_ε via Algorithm 1, and the largest block size d′ := lbs(A_ε)
7:     if problems with largest block size d′ have been solved before then U ← ε; continue
8:     if d′ > d0 then L ← ε; continue
9:     Find OPT(ε) via Algorithm 3 with algorithm A
10:    if OPT(ε) does not improve then U ← ε
11: Return the best OPT(ε) found and the corresponding x_ε

Remark 5.
In this remark, we discuss the practical advantages of Algorithm 5. The first direct advantage is that Algorithm 5 does not rely on any assumptions, enabling users to apply the framework to any dataset. Moreover, the framework can be applied in scenarios where an improved estimate of the binary search range is known. For instance, if users possess prior knowledge of an appropriate threshold ε̄ for the binary search, they can refine the search interval in Algorithm 5 by setting the lower and upper bounds to L := a·ε̄ and U := b·ε̄ for some proper 0 ≤ a < b, improving efficiency by a speedup factor of O(log(‖A‖∞/δ) / log((b − a)ε̄/δ)). The second advantage, as detailed in Theorem 4, is that Algorithm 5 achieves a trade-off between computational efficiency and approximation error similar to that of Algorithm 3, as established in Theorem 3 and Proposition 1. The time complexity of Algorithm 5, compared to Algorithm 3, increases only by a logarithmic factor due to the binary search step, thus preserving its efficiency. The third advantage is that this algorithm takes computational resources into account, ensuring that each invocation of Algorithm 3 remains efficient.

5 EMPIRICAL RESULTS

In this section, we report the numerical performance of our proposed framework.

Datasets. We perform numerical tests on datasets widely studied in feature selection, including Colon Cov from Alon et al. (1999), Lymphoma Cov1 from Alizadeh et al. (1998), Reddit1500 and Reddit2000 from Dey et al. (2022b), Leukemia Cov, Lymphoma Cov2, and Prostate Cov from Dettling (2004), Arcene Cov and Dorothea Cov from Guyon et al. (2007), and GLI85Cov and GLABRA180Cov from Li (2020). We take the sample covariance matrices of the features in these datasets as our input matrices A. The dimensions of A range from 500 to 100000. Most of the matrices A are dense and hence are not block-diagonal, as discussed in Appendix C.1.

Baselines.
We use the state-of-the-art exact algorithm Branch-and-Bound (Berk & Bertsimas, 2019) and the state-of-the-art polynomial-time approximation algorithm (in terms of multiplicative factor), Chan's algorithm (Chan et al., 2015), as our baselines.

Notation. We calculate the approximation error between two methods using the formula (obj_Base − obj_Ours)/obj_Base × 100%, where obj_Base is the objective value obtained by a baseline algorithm and obj_Ours is the objective value obtained by our framework integrated with that baseline algorithm. We calculate the speedup factor by dividing the runtime of a baseline algorithm by the runtime of Algorithm 5 integrated with the baseline algorithm.

Results. In Table 3, we report the performance of Algorithm 5 when integrated with the Branch-and-Bound algorithm. We summarize the performance results for k = 2, 5, 10, 15 across various datasets. For each dataset, we report the average approximation error, the average speedup factor, and their corresponding standard deviations. Notably, the average speedup factor across all datasets is 100.50, demonstrating significant computational efficiency. Our theoretical analysis anticipates a trade-off between speedups and approximation errors; however, the empirical performance of our framework in terms of approximation accuracy is remarkably strong, with an average error of just 0.61% across all datasets. In Table 4, we report the performance of Algorithm 5 on larger datasets with larger k, when integrated with Chan's algorithm (Chan et al., 2015). We summarize the performance results for k = 200, 500, 1000, 2000 across various datasets. Chan's algorithm terminates in time O(d³) and returns an approximate solution to SPCA with multiplicative factor min{√k, d^{1/3}}.
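The two evaluation metrics just defined can be written down directly; a minimal sketch (the helper names are ours, not from the paper's code):

```python
def approximation_error_pct(obj_base, obj_ours):
    """Relative gap in percent, as defined in the Notation paragraph.

    A negative value means our framework found a better objective
    than the baseline algorithm."""
    return (obj_base - obj_ours) / obj_base * 100.0

def speedup_factor(time_base, time_ours):
    """Baseline runtime divided by the runtime of Algorithm 5
    integrated with that baseline."""
    return time_base / time_ours
```

For example, a baseline objective of 100 against our objective of 99 gives a 1% approximation error, while an objective of 101 gives a −1% error, i.e., an improvement over the baseline.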
The average speedup factor is 6.00, and it increases to 14.51 as the dimension of the dataset grows, while the average error across all four datasets remains remarkably below 0%, meaning that our framework finds better solutions than the standalone algorithm. These findings affirm the ability of our framework to handle diverse datasets and various problem sizes both efficiently and accurately. For a more detailed discussion, additional information on settings, and further empirical results, we refer readers to Appendix C.

Table 3: Summary of average approximation errors and average speedup factors for each dataset, compared to Branch-and-Bound. Standard deviations are reported in parentheses.

Dataset | Avg. Error (%) (Std. Dev) | Avg. Speedup (Std. Dev)
Cov Colon Cov | 0.00 (0.00) | 25.84 (38.69)
Lymphoma Cov1 | 0.00 (0.00) | 22.64 (15.87)
Reddit1500 | 0.00 (0.00) | 365.12 (238.24)
Reddit2000 | 0.06 (0.13) | 278.69 (164.93)
Leukemia Cov | 0.00 (0.00) | 71.39 (83.17)
Lymphoma Cov2 | 4.82 (5.78) | 6.95 (4.35)
Prostate Cov | 0.00 (0.00) | 19.90 (30.82)
Arcene Cov | 0.00 (0.00) | 13.50 (14.17)
Overall | 0.61 (2.42) | 100.50 (163.60)

Table 4: Summary of average approximation errors and average speedup factors for each dataset, compared to Chan's algorithm. Standard deviations are reported in parentheses. Datasets are sorted in ascending order according to their dimensions.

Dataset | Avg. Error (%) (Std. Dev) | Avg. Speedup (Std. Dev)
Arcene Cov | -0.08 (0.10) | 0.86 (0.28)
GLI85Cov | -1.06 (1.22) | 1.58 (0.21)
GLABRA180Cov | -0.19 (0.26) | 7.07 (0.62)
Dorothea Cov | -2.30 (0.62) | 14.51 (0.50)
Overall | -0.91 (1.11) | 6.00 (5.66)

6 DISCUSSION

In this paper, we address the Sparse PCA problem by introducing an efficient framework that accommodates any off-the-shelf algorithm designed for Sparse PCA in a plug-and-play fashion. Our approach leverages matrix block-diagonalization, thereby facilitating significant speedups when integrated with existing algorithms.
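As a back-of-the-envelope illustration of these speedups: with the Branch-and-Bound-style cost model g(k, d) = k³dᵏ quoted in the abstract, replacing one problem of dimension d by d/d′ problems of dimension d′ yields a predicted (d/d′)^{k−1} speedup. A minimal sketch (the function and cost model here are illustrative, not the paper's code):

```python
def predicted_speedup(d, d_small, k):
    """Ratio of g(k, d) to (d / d_small) * g(k, d_small) under the
    Branch-and-Bound-style cost model g(k, dim) = k^3 * dim^k.

    Algebraically this simplifies to (d / d_small) ** (k - 1)."""
    g = lambda dim: k ** 3 * dim ** k
    return g(d) / ((d / d_small) * g(d_small))
```

For instance, with d = 100, d′ = 10, and k = 3, the predicted speedup is (100/10)² = 100.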
Our framework is simple to implement, enhancing its practical applicability. We have conducted a thorough theoretical analysis of both the approximation error and the time complexity associated with our method. Moreover, extensive empirical evaluations on large-scale datasets demonstrate the efficacy of our framework, which achieves considerable improvements in runtime with only minor trade-offs in approximation error. Looking ahead, an important direction for future research involves extending our framework to handle the Sparse PCA problem with multiple principal components, while maintaining similar approximation and runtime guarantees. This extension could further broaden the applicability of our approach, addressing more complex scenarios in real-world applications.

7 ACKNOWLEDGEMENTS

A. Del Pia and D. Zhou are partially funded by AFOSR grant FA9550-23-1-0433. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Air Force Office of Scientific Research.

REFERENCES

Farid Alizadeh, Jean-Pierre A Haeberly, and Michael L Overton. Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results. SIAM Journal on Optimization, 8(3):746-768, 1998.

Uri Alon, Naama Barkai, Daniel A Notterman, Kurt Gish, Suzanne Ybarra, Daniel Mack, and Arnold J Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences, 96(12):6745-6750, 1999.

Arash A Amini and Martin J Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. In IEEE International Symposium on Information Theory, pp. 2454-2458, 2008.

Megasthenis Asteris, Dimitris Papailiopoulos, Anastasios Kyrillidis, and Alexandros G Dimakis.
Sparse PCA via bipartite matchings. Advances in Neural Information Processing Systems, 28, 2015.

Lauren Berk and Dimitris Bertsimas. Certifiably optimal sparse principal component analysis. Mathematical Programming Computation, 11:381-420, 2019.

Dimitris Bertsimas, Ryan Cory-Wright, and Jean Pauphilet. Solving large-scale sparse PCA to certifiable (near) optimality. Journal of Machine Learning Research, 23(13):1-35, 2022.

Siu On Chan, Dimitris Papailiopoulos, and Aviad Rubinstein. On the worst-case approximability of sparse PCA. arXiv preprint arXiv:1507.05950, 2015.

Yiqun T Chen and Daniela M Witten. Selective inference for k-means clustering. arXiv preprint arXiv:2203.15267, 2022.

Agniva Chowdhury, Petros Drineas, David P Woodruff, and Samson Zhou. Approximation algorithms for sparse principal component analysis. arXiv preprint arXiv:2006.12748, 2020.

Laetitia Comminges, Olivier Collier, Mohamed Ndaoud, and Alexandre B Tsybakov. Adaptive robust estimation in sparse vector model. The Annals of Statistics, 49(3):1347-1377, 2021.

Ryan Cory-Wright and Jean Pauphilet. Sparse PCA with multiple components. arXiv preprint arXiv:2209.14790, 2022.

Alberto Del Pia. Sparse PCA on fixed-rank matrices. Mathematical Programming, pp. 1-19, 2022.

Yash Deshpande, Andrea Montanari, et al. Sparse PCA via covariance thresholding. Journal of Machine Learning Research, 17(141):1-41, 2016.

Marcel Dettling. BagBoosting for tumor classification with gene expression data. Bioinformatics, 20(18):3583-3593, 2004.

Emilie Devijver and Mélina Gallopin. Block-diagonal covariance selection for high-dimensional Gaussian graphical models. Journal of the American Statistical Association, 113(521):306-314, 2018.

Santanu S Dey, Rahul Mazumder, and Guanyi Wang. Using ℓ1-relaxation and integer programming to obtain dual bounds for sparse PCA. Operations Research, 70(3):1914-1932, 2022a.

Santanu S Dey, Marco Molinaro, and Guanyi Wang.
Solving sparse principal component analysis with global support. Mathematical Programming, pp. 1-39, 2022b.

Yunzi Ding, Dmitriy Kunisky, Alexander S Wein, and Afonso S Bandeira. Subexponential-time algorithms for sparse PCA. Foundations of Computational Mathematics, pp. 1-50, 2023.

Tommaso d'Orsi, Pravesh K Kothari, Gleb Novikov, and David Steurer. Sparse PCA: algorithms, adversarial perturbations and certificates. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pp. 553-564. IEEE, 2020.

Salar Fattahi and Somayeh Sojoudi. Graphical lasso and thresholding: Equivalence and closed-form solutions. Journal of Machine Learning Research, 20(10):1-44, 2019.

Jiashi Feng, Zhouchen Lin, Huan Xu, and Shuicheng Yan. Robust subspace segmentation with block-diagonal prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3818-3825, 2014.

Isabelle Guyon, Jiwen Li, Theodor Mader, Patrick A Pletscher, Georg Schneider, and Markus Uhr. Competitive baseline methods set new standards for the NIPS 2003 feature selection benchmark. Pattern Recognition Letters, 28(12):1438-1444, 2007.

William W Hager, Dzung T Phan, and Jiajie Zhu. Projection algorithms for nonconvex minimization with application to sparse principal component analysis. Journal of Global Optimization, 65:657-676, 2016.

Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David Woodruff, and Amir Zandieh. HyperAttention: Long-context attention in near-linear time. In The Twelfth International Conference on Learning Representations, 2023.

Ying-Lin Hsu, Po-Yu Huang, and Dung-Tsa Chen. Sparse principal component analysis in cancer research. Translational Cancer Research, 3(3):182, 2014.

Subhodh Kotekal and Chao Gao. Sparsity meets correlation in Gaussian sequence model, 2023.

Subhodh Kotekal and Chao Gao. Optimal estimation of the null distribution in large-scale inference, 2024.

Jundong Li.
Feature selection datasets at ASU. https://jundongl.github.io/scikit-feature/OLD/datasets_old.html, 2020. Accessed: 2024-09-27.

Yongchun Li and Weijun Xie. Exact and approximation algorithms for sparse PCA. arXiv preprint arXiv:2008.12438, 2020.

Shuangge Ma and Ying Dai. Principal component analysis based methods in bioinformatics studies. Briefings in Bioinformatics, 12(6):714-722, 2011.

Malik Magdon-Ismail. NP-hardness and inapproximability of sparse PCA. Information Processing Letters, 126:35-38, 2017.

Rahul Mazumder and Trevor Hastie. Exact covariance thresholding into connected components for large-scale graphical lasso. The Journal of Machine Learning Research, 13(1):781-794, 2012.

Stanislav Minsker, Mohamed Ndaoud, and Lang Wang. Robust and tuning-free sparse linear regression via square-root slope. arXiv preprint arXiv:2210.16808, 2022.

Dimitris Papailiopoulos, Alexandros Dimakis, and Stavros Korokythakis. Sparse PCA through low-rank approximations. In International Conference on Machine Learning, pp. 747-755. PMLR, 2013.

Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.

Xiwen Wang, Jiaxi Ying, and Daniel Palomar. Learning large-scale MTP2 Gaussian graphical models via bridge-block decomposition. Advances in Neural Information Processing Systems, 36:73211-73231, 2023.

Xiao-Tong Yuan and Tong Zhang. Truncated power method for sparse eigenvalue problems. Journal of Machine Learning Research, 14(4), 2013.

Youwei Zhang and Laurent Ghaoui. Large-scale sparse principal component analysis with application to text data. Advances in Neural Information Processing Systems, 24, 2011.

Xiaowei Zhuang, Zhengshi Yang, and Dietmar Cordes. A technical review of canonical correlation analysis for neuroscience applications. Human Brain Mapping, 41(13):3807-3833, 2020.

Hui Zou and Lingzhou Xue.
A selective overview of sparse principal component analysis. Proceedings of the IEEE, 106(8):1311-1320, 2018.

Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265-286, 2006.

A ADDITIONAL RELATED WORK

In this section, we discuss additional related work. There have been many attempts at solving Sparse PCA in the literature, and we outline only a selective overview of mainstream methods. Algorithms that require worst-case exponential runtime include mixed-integer semidefinite programs (Bertsimas et al., 2022; Li & Xie, 2020; Cory-Wright & Pauphilet, 2022), mixed-integer quadratic programs (Dey et al., 2022b), non-convex programs (Zou et al., 2006; Dey et al., 2022a), and sub-exponential algorithms (Ding et al., 2023). On the other hand, there also exist a number of algorithms that take polynomial runtime to approximately solve Sparse PCA. To the best of our knowledge, the state-of-the-art approximation algorithm is proposed by Chan et al. (2015), which achieves a multiplicative factor of min{√k, d^{1/3}}. There exist many other (approximation) algorithms with polynomial runtime, including semidefinite relaxations (Chowdhury et al., 2020), greedy algorithms (Li & Xie, 2020), low-rank approximations (Papailiopoulos et al., 2013), and fixed-rank Sparse PCA (Del Pia, 2022). Apart from the work mentioned above, there is also a stream of work focusing on efficient estimation of sparse vectors in specific statistical models. For instance, for the statistical model known as the Spiked Covariance model, methods such as semidefinite relaxations (Amini & Wainwright, 2008; d'Orsi et al., 2020) and covariance thresholding (Deshpande et al., 2016) have been developed. Finally, local approximation algorithms and iterative algorithms have also been proposed for Sparse PCA (Yuan & Zhang, 2013; Hager et al., 2016).
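To make the flavor of such iterative methods concrete, here is a minimal sketch in the spirit of the truncated power method of Yuan & Zhang (2013): a power step followed by keeping only the k largest-magnitude coordinates and renormalizing. The function name and defaults are ours, not from any released code:

```python
def truncated_power_method(A, k, iters=200):
    """Sketch of truncated power iteration for SPCA: repeatedly apply A,
    truncate to the k largest-magnitude entries, and renormalize."""
    d = len(A)
    x = [1.0 / d ** 0.5] * d  # uniform unit-norm starting point
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(d)) for i in range(d)]
        # keep the k coordinates with largest magnitude, zero out the rest
        top = set(sorted(range(d), key=lambda i: -abs(y[i]))[:k])
        x = [y[i] if i in top else 0.0 for i in range(d)]
        norm = sum(v * v for v in x) ** 0.5
        if norm == 0.0:
            break
        x = [v / norm for v in x]
    return x
```

On a diagonal matrix diag(4, 1, 0.5) with k = 1, the iterate converges to the indicator of the largest diagonal entry, i.e., a 1-sparse unit vector supported on the first coordinate.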
The idea of decomposing a large-scale optimization problem into smaller sub-problems is a widely adopted approach in the optimization literature. For instance, Mazumder & Hastie (2012) leverage the connection between the thresholded input matrix and the support graph of the optimal solution in the graphical lasso problem (GL), enabling significant computational speedups by solving sub-problems. Building on this, Fattahi & Sojoudi (2019) improve the computational efficiency of these subroutines by providing closed-form solutions under an acyclic assumption and analyzing the graph's edge structure. Wang et al. (2023) address GL in Gaussian graphical models through bridge-block decomposition and derive closed-form solutions for these blocks. In addition, several methods in the literature involve or are similar to matrix block-diagonalization. For instance, Feng et al. (2014) employ block-diagonalization for subspace clustering and segmentation problems by mapping data matrices into block-diagonal matrices through linear transformations. More recently, Han et al. (2023) introduce the sorted LSH algorithm to sparsify the softmax matrix, thus reducing the computational time of the attention matrix through random projection-based bucketing of block indices. We remark that in all studies above, the underlying problems of interest, as well as the techniques, are different from ours, and the techniques developed therein cannot be directly applied to SPCA. A more closely related work (Devijver & Gallopin, 2018) explores block-diagonal covariance matrix estimation using hard thresholding techniques akin to ours. However, our framework diverges in two critical aspects: (i) they assume data is drawn from a Gaussian distribution, while we do not; indeed, we show in Section 4.2 that our method can be applied to any data inputs; and (ii) they use maximum likelihood estimation, whereas our Algorithm 5 uses iterative binary search to determine the optimal threshold ε.
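The thresholding-plus-binary-search loop just described can be sketched as follows. Here `largest_block_size` plays the role of lbs(Aε): it zeroes out off-diagonal entries with magnitude at most ε and returns the largest connected component of the resulting support graph (the idea behind Algorithm 1), and `practical_threshold_search` mimics the binary search of Algorithm 5. All names and the solver interface are our own illustrative choices, not the paper's code:

```python
def largest_block_size(A, eps):
    """Largest block of A after zeroing off-diagonal entries |A_ij| <= eps,
    computed as the largest connected component of the support graph."""
    d = len(A)
    parent = list(range(d))  # union-find over row/column indices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(d):
        for j in range(i + 1, d):
            if abs(A[i][j]) > eps:  # surviving entry links indices i and j
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    sizes = {}
    for i in range(d):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def practical_threshold_search(A, d0, delta, solve_blocks):
    """Binary search on the threshold eps in the spirit of Algorithm 5:
    a larger eps zeroes more entries and shrinks the blocks, so we raise
    eps while the largest block exceeds the budget d0, and otherwise try
    a smaller eps for better accuracy."""
    lo = 0.0
    hi = max(abs(A[i][j]) for i in range(len(A)) for j in range(len(A)))
    best = None
    while hi - lo > delta:
        eps = (lo + hi) / 2.0
        if largest_block_size(A, eps) > d0:
            lo = eps                     # blocks too large: threshold harder
        else:
            best = solve_blocks(A, eps)  # solve SPCA block by block
            hi = eps                     # feasible: try a smaller threshold
    return best
```

`solve_blocks` stands in for Algorithm 3 integrated with any off-the-shelf SPCA solver; only O(log(‖A‖/δ)) thresholds are ever tried.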
B DEFERRED PROOFS

In this section, we provide the proofs deferred in the paper.

B.1 PROOFS IN SECTION 3.2

Lemma 1. Let A ∈ R^{d×d} and ε > 0. Given input (A, ε), Algorithm 1 outputs an ε-approximation of A, denoted by Ã, such that lbs(Ã) = int dim(A, ε), in time O(d²).

Proof of Lemma 1. It is clear that Algorithm 1 has time complexity O(d²), so it suffices to show that the output Ã of Algorithm 1 with input (A, ε) satisfies lbs(Ã) = int dim(A, ε). Let A′ be an ε-approximation of A satisfying lbs(A′) = int dim(A, ε). We intend to show lbs(Ã) ≤ lbs(A′). WLOG we assume that A′ ∈ B(A, ε) is a block-diagonal matrix with blocks A′₁, A′₂, ..., A′ₚ, i.e., A′ = diag(A′₁, A′₂, ..., A′ₚ). Write A = (Aᵢⱼ)ᵢ,ⱼ∈[p] and Ã = (Ãᵢⱼ)ᵢ,ⱼ∈[p] in the same block pattern, where Aᵢᵢ, Ãᵢᵢ, and A′ᵢ have the same number of rows and columns. By the definition of B(A, ε), we have ‖Aᵢⱼ‖∞ ≤ ε for any 1 ≤ i ≠ j ≤ p. This further implies that, for any 1 ≤ i ≠ j ≤ p, Ãᵢⱼ is a zero matrix by the procedure described in Algorithm 1, and therefore lbs(Ã) ≤ lbs(A′). By the minimality of A′, we obtain lbs(Ã) = lbs(A′).

B.1.1 PROOFS IN SECTION 3.2.1

Theorem 1. Let A ∈ R^{d×d} be symmetric, k ≤ d, and ε > 0. Denote by Aε the output of Algorithm 1 with input (A, ε), by x̃ an optimal solution to SPCA with input (Aε, k), and by x* an optimal solution to SPCA with input (A, k). Then, it follows that (x*)ᵀAx* − x̃ᵀAx̃ ≤ 2kε.

Proof of Theorem 1. Note that
(x*)ᵀAx* − x̃ᵀAx̃ ≤ (x*)ᵀAx* − x̃ᵀAεx̃ + |x̃ᵀ(Aε − A)x̃| ≤ (x*)ᵀAx* − x̃ᵀAεx̃ + k‖A − Aε‖∞,
where the last inequality follows from Hölder's inequality and the fact that ‖x̃‖₁ ≤ √k. Therefore, it suffices to show that (x*)ᵀAx* − x̃ᵀAεx̃ ≤ k‖A − Aε‖∞.
Observe that
(x*)ᵀAx* − x̃ᵀAεx̃ = max_{‖x‖₂=1, ‖x‖₀≤k} xᵀAx − max_{‖x‖₂=1, ‖x‖₀≤k} xᵀAεx ≤ max_{‖x‖₂=1, ‖x‖₀≤k} xᵀ(A − Aε)x ≤ k‖A − Aε‖∞,
where the first inequality follows from the fact that a difference of maxima over the same feasible set is at most the maximum of the difference. Finally, combining this with ‖A − Aε‖∞ ≤ ε, we are done.

Theorem 2. Let Ã = diag(Ã₁, Ã₂, ..., Ãₚ) be a symmetric block-diagonal matrix. Denote by OPT the optimal value of SPCA with input pair (Ã, k), and by OPTᵢ the optimal value of SPCA with input pair (Ãᵢ, k), for i ∈ [p]. Then, one has OPT = maxᵢ∈[p] OPTᵢ.

Proof of Theorem 2. It is clear that OPT ≥ maxᵢ∈[p] OPTᵢ. It remains to show the reverse, i.e., OPT ≤ maxᵢ∈[p] OPTᵢ. Suppose not; then OPT > maxᵢ∈[p] OPTᵢ. There must exist principal submatrices Ã′₁, Ã′₂, ..., Ã′ₚ contained in Ã₁, Ã₂, ..., Ãₚ, respectively, such that diag(Ã′₁, Ã′₂, ..., Ã′ₚ) is a k×k matrix and
OPT = max_{‖x‖₂=1} xᵀ diag(Ã′₁, Ã′₂, ..., Ã′ₚ) x > maxᵢ∈[p] OPTᵢ.
This gives a contradiction, since
max_{‖x‖₂=1} xᵀ diag(Ã′₁, Ã′₂, ..., Ã′ₚ) x = maxᵢ∈[p] max_{‖x‖₂=1} xᵀÃ′ᵢx ≤ maxᵢ∈[p] max_{‖x‖₂=1, ‖x‖₀≤k} xᵀÃᵢx = maxᵢ∈[p] OPTᵢ.

Theorem 3. Let A ∈ R^{d×d} be symmetric and positive semidefinite, let k be a positive integer, and denote by x* ∈ R^d an optimal solution to SPCA with input (A, k). Suppose that an algorithm A for SPCA with input (A, k) finds an approximate solution x ∈ R^d to SPCA with multiplicative factor m(k, d) ≥ 1 and additive error a(k, d) ≥ 0, i.e., one has xᵀAx ≥ (x*)ᵀAx*/m(k, d) − a(k, d). Furthermore, we assume that m(k, d) and a(k, d) are non-decreasing with respect to d. For ε > 0, denote by y ∈ R^d the output of Algorithm 3 with input tuple (A, k, ε, A). Then, one has
yᵀAy ≥ (x*)ᵀAx*/m(k, int dim(A, ε)) − a(k, int dim(A, ε)) − kε/m(k, int dim(A, ε)).
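Theorem 2 is easy to check numerically on a tiny instance: brute-force SPCA on a block-diagonal matrix agrees with the maximum of the block optima. Below is a self-contained sketch (the helper names are ours; power iteration stands in for an eigensolver and is adequate for small PSD matrices):

```python
from itertools import combinations

def max_eigenvalue(M, iters=300):
    """Largest eigenvalue of a small symmetric PSD matrix via power iteration."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            return 0.0
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam

def spca_bruteforce(A, k):
    """Exact SPCA value by enumerating all k-subsets of coordinates and
    taking the largest eigenvalue of each principal submatrix (tiny d only)."""
    d = len(A)
    best = 0.0
    for S in combinations(range(d), min(k, d)):
        sub = [[A[i][j] for j in S] for i in S]
        best = max(best, max_eigenvalue(sub))
    return best
```

For Ã = diag([[2, 1], [1, 2]], [5]) and k = 2, both the full problem and the blockwise maximum give the value 5, as Theorem 2 predicts.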
Proof of Theorem 3. In the proof, we use the same notation as in Algorithm 3. We assume WLOG that the thresholded matrix Aε is a block-diagonal matrix, i.e., Aε = diag(Aε₁, Aε₂, ..., Aεₚ), where Aεᵢ := (Aε)_{Sᵢ,Sᵢ}. We also define Ã := diag(Ã₁, Ã₂, ..., Ãₚ), with Ãᵢ := A_{Sᵢ,Sᵢ}. It is clear that ‖A − Ã‖∞ ≤ ε. Denote by x̃* ∈ R^d an optimal solution to SPCA with input (Ã, k). By Theorem 2, one can assume WLOG that x̃* has zero entries for indices greater than d₁ := |S₁|; in other words, x̃* is found in the block Ã₁. Recall that we denote by xᵢ the solution found by A with input (Ãᵢ, k). Then, it is clear that
maxᵢ∈[p] xᵢᵀÃᵢxᵢ ≥ x₁ᵀÃ₁x₁ ≥ (x̃*)ᵀÃx̃*/m(k, d₁) − a(k, d₁) ≥ (x̃*)ᵀÃx̃*/m(k, int dim(A, ε)) − a(k, int dim(A, ε)).
Recall that yᵢ is the reconstructed solution. By the proof of Theorem 1, one obtains that
maxᵢ∈[p] yᵢᵀAyᵢ = maxᵢ∈[p] xᵢᵀÃᵢxᵢ ≥ (x̃*)ᵀÃx̃*/m(k, int dim(A, ε)) − a(k, int dim(A, ε)) ≥ ((x*)ᵀAx* − k‖A − Ã‖∞)/m(k, int dim(A, ε)) − a(k, int dim(A, ε)).

B.1.2 PROOFS IN SECTION 3.2.2

Proposition 1. Let A ∈ R^{d×d}, and let k be a positive integer. Suppose that for a specific algorithm A, the time complexity of using A for (approximately) solving SPCA with input (A, k) is a function g(k, d) which is convex and non-decreasing with respect to d. We assume that g(k, 1) = 1. Then, for a given threshold ε > 0, the runtime of Algorithm 3 with input tuple (A, k, ε, A) is at most
O( (d / int dim(A, ε)) · g(k, int dim(A, ε)) + d² ).

Proof of Proposition 1. Suppose that the largest block size of Ã ∈ B(A, ε) is equal to int dim(A, ε), and that Ã can be sorted into a block-diagonal matrix with p blocks Ã₁, Ã₂, ..., Ãₚ of block sizes d₁, d₂, ..., dₚ, respectively. WLOG we assume that p < d, otherwise the statement holds trivially. It is clear that the time complexity of running lines 5-8 in Algorithm 3 is upper bounded by a constant multiple of
max { Σᵢ₌₁ᵖ g(k, dᵢ) : 1 ≤ dᵢ ≤ int dim(A, ε) for all i ∈ [p], Σᵢ₌₁ᵖ dᵢ = d }. (2)
Since k is fixed, the optimization problem (2) maximizes a convex objective function over the polytope
P := { (d₁, d₂, ..., dₚ) ∈ Rᵖ : 1 ≤ dᵢ ≤ int dim(A, ε) for all i ∈ [p], Σᵢ₌₁ᵖ dᵢ = d }.
Therefore, the optimum of (2) is attained at an extreme point of P. An extreme point (d*₁, d*₂, ..., d*ₚ) of P is active at a linearly independent system, i.e., among all the d*ᵢ, p − 1 of them must be equal to 1 or int dim(A, ε), and the last one is taken such that the equality Σᵢ₌₁ᵖ d*ᵢ = d holds. Since there can be at most d/int dim(A, ε) many blocks in Ã with block size equal to int dim(A, ε), and since g(k, d) is increasing with respect to d, we obtain that the optimum of (2) is upper bounded by
(d / int dim(A, ε)) · g(k, int dim(A, ε)) + p ≤ (d / int dim(A, ε)) · g(k, int dim(A, ε)) + d.
We are then done by noticing that lines 3-4 in Algorithm 3 have a runtime bounded by O(d²).

B.2 PROOFS IN SECTION 4.1

To establish a connection with robust statistics, we begin by expressing each element of the input matrix A in Model 1 as follows:
Aᵢⱼ = Ãᵢⱼ + Eᵢⱼ := Ãᵢⱼ + σZᵢⱼ, for i ≤ j,
where the Zᵢⱼ are i.i.d. centered (ϱ²/σ²)-sub-Gaussian random variables with unit variance. Considering that Ã is block-diagonal and, according to (ii) in Model 1, has at most (d/d′)·(d′)² = O(d^{1+α}) nonzero entries, the methodology proposed by Comminges et al. (2021) can be employed to estimate σ² accurately using Algorithm 4. Inspired by the methods proposed in Comminges et al. (2021), we show that the number σ̂² found in Algorithm 4 provides a constant approximation ratio to the true variance σ² in Model 1 with high probability:

Proposition 3 (adapted from Proposition 1 in Deshpande et al. (2016)). Consider Model 1, and denote by PZ the distribution of the random variable Eᵢⱼ/σ. Let σ̂² be the number found in Algorithm 4 with input tuple (A, C, α).
There exist an absolute constant c > 0 such that for the dimension d of the input matrix A satisfying log d > log log (8C + 8) cσ2 2ϱ2 σ2 2ϱ2 + 1 + 1 we have that for distribution PZ of Zij, the variance σ2, one has a uniform lower bound on the accuracy of estimation: inf PZ inf σ>0 inf e A 0 Cd1+α PPZ,σ, e A Proof of Proposition 3. Recall that we write Aij = e Aij + Eij := e Aij + σ2Zij, i j, where Zij is an i.i.d. centered (ϱ2/σ2)-sub-Gaussian random variable with unit variance. Since d Cdα for some α (0, 1), WLOG we assume that d = Cdα for some C > 0, and for simplicity we assume that d1+α and C are all integers to avoid countless discussions of integrality. By Algorithm 4, one divides the index set {(i, j) [d] [d] : i j} into m = (2C + 2)d1+α disjoint subsets B1, B2, . . . , Bm, and each Bi has cardinality at lease k := (d2 + d)/(2m) . Denote the set J to be the set of indices i such that the set Bi contains only indices such that for any (k, l) Bi, e Akl = 0. It is clear that |J| is lower bounded by m Cdα+1 = (2C + 1)d1+α. By Bernstein s inequality (see, e.g., Corollary 2.8.3 in Vershynin (2018)), it is clear that there exists an absolute constant c > 0 such that (k,l) Bi E2 kl σ2 (k,l) Bi Z2 kl 1 2 exp c min σ4 2 exp c min σ4 Then, we denote I := [σ2/2, 3σ2/2], denote Ai to be the random event n 1 |Bi| P (k,l) Bi A2 kl I o , and denote 1Ai to be the indicator function of Ai. One obtains that 2 Cdα+1 2|J| exp c min σ4 Published as a conference paper at ICLR 2025 and by Hoeffding s inequality, one obtains that i J (1Ai E1Ai) m 2 Cdα+1 2|J| exp c min σ4 i J (1Ai E1Ai) t Since m = (2C + 2)dα+1, it is clear that m |J| (2C + 1)dα+1, and t dα+1 1 2(2C + 2) exp c min σ4 d1 α/(4C + 4) dα+1 1 where the last inequality follows from the fact that log d > log (4C + 4) log (8C + 8) cσ2 2ϱ2 σ2 2ϱ2 + 1 Hence, the probability of σ2 I is upper bounded by exp{ dα+1/(4C + 4)}. Proposition 2. 
Consider Model 1, and denote by ε̂ the output of Algorithm 4 with input tuple (A, C, α, u). Let k ≤ d be a positive integer, and assume that d satisfies d^{1−α} > C₀(C + 1)u log(8C + 8) for some large enough absolute constant C₀ > 0. Then, the following holds with probability at least 1 − 2d⁻¹ − exp{2 log d − d^{1+α}/(4C + 4)}:

(i) Denote by x̃* the optimal solution to SPCA with input (Ã, k). For an (approximation) algorithm A with multiplicative factor m(k, d) ≥ 1 and additive error a(k, d) ≥ 0, where the functions m and a are non-decreasing with respect to d, the output y of Algorithm 3 with input tuple (A, k, ε̂, A) satisfies
yᵀÃy ≥ (x̃*)ᵀÃx̃*/m(k, d′) − a(k, d′) − (1 + 2/m(k, d′))·kε̂.

(ii) If, in addition, the time complexity of using A for (approximately) solving SPCA with input (A, k) is a function g(k, d) which is convex and non-decreasing with respect to d, and satisfies g(k, 1) = 1, then the runtime of Algorithm 3 with input tuple (A, k, ε̂, A) is at most
O( (d/d′) · g(k, d′) + d² ).

Proof of Proposition 2. We first show that the probability that ‖E‖∞ ≤ ε̂ is at least 1 − 2d⁻¹ − exp{2 log d − d^{1+α}/(4C + 4)}. This follows directly from Proposition 3, which guarantees that σ̂² satisfies σ²/2 ≤ σ̂² ≤ 3σ²/2 with probability at least 1 − exp{−d^{1+α}/(4C + 4)}, together with the fact that the Eₖₗ are i.i.d. ϱ-sub-Gaussian variables and a union bound argument. Indeed, define the random event A := { σ²/2 ≤ σ̂² ≤ 3σ²/2 }. One can see that
P(|Eₖₗ| > ε̂) = P(|Eₖₗ| > ε̂ | A)·P(A) + P(|Eₖₗ| > ε̂ | Aᶜ)·P(Aᶜ) ≤ 2 exp{ −(6u²σ²/(2ϱ²)) log d } + exp{ −d^{1+α}/(4C + 4) } ≤ 2d⁻³ + exp{ −d^{1+α}/(4C + 4) },
where the first inequality uses the sub-Gaussian tail bound under the event A, and the last inequality follows from the assumption on d. By a union bound, one has
P(‖E‖∞ > ε̂) ≤ d²·P(|Eₖₗ| > ε̂) ≤ 2d⁻¹ + exp{ 2 log d − d^{1+α}/(4C + 4) }.
This implies that, with high probability, int dim(A, ε̂) ≤ d′. In the remainder of the proof, we will assume that the random event ‖E‖∞ ≤ ε̂ holds true. We are now ready to show the desired approximation error.
Denote by x* the optimal solution to SPCA with input (A, k). By the proof of Theorem 1, it is clear that (x*)ᵀAx* ≥ (x̃*)ᵀÃx̃* − kε̂, due to the fact that ‖Ã − A‖∞ ≤ ε̂. Combining this with Theorem 3, we have
yᵀÃy ≥ yᵀAy − kε̂ ≥ (x*)ᵀAx*/m(k, int dim(A, ε̂)) − a(k, int dim(A, ε̂)) − (1 + 1/m(k, int dim(A, ε̂)))·kε̂ ≥ (x̃*)ᵀÃx̃*/m(k, int dim(A, ε̂)) − a(k, int dim(A, ε̂)) − (1 + 2/m(k, int dim(A, ε̂)))·kε̂.
Finally, we obtain the desired runtime complexity via Proposition 1.

B.3 PROOFS IN SECTION 4.2

Theorem 4. Let A ∈ R^{d×d} be symmetric and positive semidefinite, and let k be a positive integer. Suppose that an algorithm A for SPCA with input (A, k) finds an approximate solution x ∈ R^d to SPCA with multiplicative factor m(k, d) ≥ 1 and additive error a(k, d) ≥ 0, with m(k, d) and a(k, d) being non-decreasing with respect to d. Suppose that Algorithm 5 with input (A, k, δ, A, d0) terminates with OPT(ε*) ≥ 0. Then one has
OPT(ε*) ≥ OPT(0)/m(k, d0) − a(k, d0) − kε*/m(k, d0).
Suppose that for A, the time complexity of using A for (approximately) solving SPCA with input (A, k) is a function g(k, d) which is convex and non-decreasing with respect to d, with g(k, 1) = 1 and g(k, 0) = 0. Then the runtime of Algorithm 5 is at most
O( log(‖A‖/δ) · ( (d/d0) · g(k, d0) + d² ) ).

Proof of Theorem 4. We will use Theorem 3 and Proposition 1 to prove this theorem. We first prove the approximation bound. Note that ‖A − Aε*‖∞ ≤ ε*. By Theorem 3,
OPT(ε*) ≥ (x*)ᵀAx*/m(k, int dim(A, ε*)) − a(k, int dim(A, ε*)) − kε*/m(k, int dim(A, ε*)) = OPT(0)/m(k, int dim(A, ε*)) − a(k, int dim(A, ε*)) − kε*/m(k, int dim(A, ε*)).
Note that Algorithm 5 is guaranteed to terminate with an ε* such that int dim(A, ε*) ≤ d0. Since the functions m(k, d) and a(k, d) are non-decreasing with respect to d, we have that m(k, int dim(A, ε*)) ≤ m(k, d0) and a(k, int dim(A, ε*)) ≤ a(k, d0), and therefore we obtain our desired approximation bound
OPT(ε*) ≥ OPT(0)/m(k, d0) − a(k, d0) − kε*/m(k, d0).
Here, the last inequality holds due to the fact that (i) if OPT(0) ≤ kε*, then since OPT(ε*) is nonnegative, the inequality trivially holds; and (ii) if OPT(0) > kε*, then the inequality also holds since a(k, d) and m(k, d) are non-decreasing with respect to d. Next, we show the time complexity. By Proposition 1, it is clear that the runtime of a single iteration of the while loop in Algorithm 5 is upper bounded by
O( (d / int dim(A, ε)) · g(k, int dim(A, ε)) + d² ), (3)
for some ε ≥ 0 such that int dim(A, ε) ≤ d0. Let ε₁ ≥ ε₂ be two positive numbers, so that int dim(A, ε₁) ≤ int dim(A, ε₂). Since
int dim(A, ε₁) = (int dim(A, ε₁) / int dim(A, ε₂)) · int dim(A, ε₂) + (1 − int dim(A, ε₁) / int dim(A, ε₂)) · 0,
by the convexity of g(k, ·), along with the fact that g(k, 0) = 0, we obtain that
g(k, int dim(A, ε₁)) ≤ (int dim(A, ε₁) / int dim(A, ε₂)) · g(k, int dim(A, ε₂)) + 0.
This implies that for any ε ≥ 0 such that int dim(A, ε) ≤ d0, we have
(d / int dim(A, ε)) · g(k, int dim(A, ε)) ≤ (d / d0) · g(k, d0),
and thus
⌈d / int dim(A, ε)⌉ · g(k, int dim(A, ε)) ≤ (d / int dim(A, ε)) · g(k, int dim(A, ε)) + g(k, int dim(A, ε)) ≤ (d / d0) · g(k, d0) + g(k, d0) ≤ 2 · (d / d0) · g(k, d0).
Combining this with (3), and the fact that at most O(log(‖A‖/δ)) iterations are executed in the while loop of Algorithm 5, we are done.

C ADDITIONAL EMPIRICAL SETTINGS AND RESULTS

In this section, we report additional empirical settings in Appendix C.1 and detailed empirical results in Appendices C.2 and C.3. We report empirical results under Model 1 in Appendix C.4. In Appendix C.5, we discuss the impact of the parameters d0 and δ in Algorithm 5.

C.1 EMPIRICAL SETTINGS

Datasets. We summarize the dimensions and the percentage of non-zero entries of the input matrices in Table 5. It is important to note that more than 70% of these input covariance matrices are dense.
We utilize these matrices to evaluate the practicality of our approach, demonstrating its effectiveness irrespective of whether the input matrix has a block-diagonal structure.

Table 5: Dimensions for each dataset and percentage of non-zero entries.

Dataset | Dimension (d) | Percentage of non-zero entries
Cov Colon Cov | 500 | 100%
Lymphoma Cov1 | 500 | 100%
Reddit1500 | 1500 | 5.30%
Reddit2000 | 2000 | 6.20%
Leukemia Cov | 3571 | 100%
Lymphoma Cov2 | 4026 | 100%
Prostate Cov | 6033 | 100%
Arcene Cov | 10000 | 100%
GLI85Cov | 22283 | 100%
GLABRA180Cov | 49151 | 100%
Dorothea Cov | 100000 | 77.65%

Parameters. (i) When integrating Algorithm 5 with the Branch-and-Bound algorithm, we choose the parameters ε̄ := 1, a := 0, b := ‖A‖, d0 := 30, and δ := 0.01‖A‖. (ii) When integrating Algorithm 5 with Chan's algorithm, we choose the parameters ε̄ := 1, a := 0, b := ‖A‖, d0 := 2k (twice the sparsity constant), and δ := 0.01‖A‖.

Early stopping. For experiments conducted with a positive semidefinite input matrix A, we add a very simple step to Algorithm 5: an extra stopping criterion that breaks the while loop as soon as a problem with largest block size d0 has been solved. This additional step helps further reduce the computational time of Algorithm 5.

Compute resources. We conducted all tests on a computing cluster equipped with 36 cores (2× 3.1 GHz Xeon Gold 6254 CPUs) and 768 GB of memory.

C.2 EMPIRICAL RESULTS WHEN INTEGRATED WITH THE BRANCH-AND-BOUND ALGORITHM

In this section, we provide in Table 6 the detailed empirical results of our framework when integrated with the Branch-and-Bound algorithm (previously summarized in Table 3). We provide statistics related to the best solution in Table 7, including the ratio between the corresponding ε* (the best threshold found) and ‖A‖, the percentage of zeros in the matrix Aε*, and the Jaccard index between the solution obtained by the baseline algorithm and that obtained by Algorithm 5, which is defined as
Jaccard index := |supp(x_Base) ∩ supp(x_Ours)| / |supp(x_Base) ∪ supp(x_Ours)|.
Here, $x_{\text{Base}}$ denotes the solution obtained by the baseline algorithm, which refers to the Branch-and-Bound algorithm in this section and to Chan's algorithm in Appendix C.3, and $x_{\text{Ours}}$ denotes the solution obtained by Algorithm 5 when integrated with the baseline algorithm. We also summarize the average approximation errors and average speedup factors with respect to k in Table 8.

From the data presented in Table 6, it is evident that our framework achieves exceptional performance metrics. The median approximation error is 0, with the 90th percentile approximation error not exceeding 0.19%. Additionally, the median speedup factor is 20.87. These statistics further underscore the efficiency and effectiveness of our framework, demonstrating its capability to provide substantial computational speedups while maintaining small approximation errors.

From the data in Table 7, we conclude that the Jaccard index is oftentimes one, and that changes in the Jaccard index align with changes in the approximation error. Additionally, for each dataset, the relative threshold value $\varepsilon^*/\|A\|$ and the percentage of zeros in $A_{\varepsilon^*}$ remain relatively stable across k, as the largest block size $d_0$ is set to 30, and the best solutions are typically found in blocks near this size.

Table 6: Comparison of runtime and objective values on real-world datasets, between Branch-and-Bound and Algorithm 5 integrated with Branch-and-Bound. d is the dimension of the Sparse PCA problem, and k is the sparsity constant. We set the time limit to 3600 seconds for each method. Approximation errors between the two methods and speedup factors are reported. We note that the objective value for Arcene Cov is scaled by $10^5$.

| Dataset | d | k | B&B Time (s) | B&B Objective | Ours Time (s) | Ours Objective | Error (%) | Speedup |
|---|---|---|---|---|---|---|---|---|
| Cov Colon Cov | 500 | 3 | 8 | 1059.49 | 1.20 | 1059.49 | 0.00 | 6.67 |
| Cov Colon Cov | 500 | 5 | 8 | 1646.45 | 1.08 | 1646.45 | 0.00 | 7.41 |
| Cov Colon Cov | 500 | 10 | 3600 | 2641.23 | 42.93 | 2641.23 | 0.00 | 83.86 |
| Cov Colon Cov | 500 | 15 | 3600 | 3496.6 | 666.95 | 3496.6 | 0.00 | 5.40 |
| Lymphoma Cov1 | 500 | 3 | 11 | 2701.61 | 1.32 | 2701.61 | 0.00 | 8.33 |
| Lymphoma Cov1 | 500 | 5 | 44 | 4300.5 | 1.51 | 4300.5 | 0.00 | 29.14 |
| Lymphoma Cov1 | 500 | 10 | 3600 | 6008.74 | 85.72 | 6008.74 | 0.00 | 42.00 |
| Lymphoma Cov1 | 500 | 15 | 3600 | 7628.66 | 324.6 | 7628.66 | 0.00 | 11.09 |
| Reddit1500 | 1500 | 3 | 75 | 946.12 | 3.68 | 946.12 | 0.00 | 20.38 |
| Reddit1500 | 1500 | 5 | 3600 | 980.97 | 6.86 | 980.97 | 0.00 | 524.78 |
| Reddit1500 | 1500 | 10 | 3600 | 1045.74 | 9.20 | 1045.74 | 0.00 | 391.30 |
| Reddit1500 | 1500 | 15 | 3600 | 1082.12 | 6.87 | 1082.12 | 0.00 | 524.02 |
| Reddit2000 | 2000 | 3 | 412 | 1311.36 | 8.70 | 1311.36 | 0.00 | 47.36 |
| Reddit2000 | 2000 | 5 | 3600 | 1397.36 | 9.03 | 1397.36 | 0.00 | 398.67 |
| Reddit2000 | 2000 | 10 | 3600 | 1523.82 | 13.18 | 1523.82 | 0.00 | 273.14 |
| Reddit2000 | 2000 | 15 | 3600 | 1605.48 | 9.10 | 1601.32 | 0.26 | 395.60 |
| Leukemia Cov | 3571 | 3 | 280 | 7.13 | 18.89 | 7.13 | 0.00 | 14.82 |
| Leukemia Cov | 3571 | 5 | 3600 | 10.29 | 18.64 | 10.29 | 0.00 | 193.13 |
| Leukemia Cov | 3571 | 10 | 3600 | 17.22 | 64.03 | 17.22 | 0.00 | 56.22 |
| Leukemia Cov | 3571 | 15 | 3600 | 21.87 | 168.47 | 21.87 | 0.00 | 21.37 |
| Lymphoma Cov2 | 4026 | 3 | 210 | 40.62 | 25.73 | 40.62 | 0.00 | 8.16 |
| Lymphoma Cov2 | 4026 | 5 | 144 | 63.66 | 26.75 | 63.66 | 0.00 | 5.38 |
| Lymphoma Cov2 | 4026 | 10 | 3600 | 78.29 | 293.42 | 69.27 | 11.52 | 12.27 |
| Lymphoma Cov2 | 4026 | 15 | 3600 | 93.46 | 1810.19 | 86.20 | 7.77 | 1.99 |
| Prostate Cov | 6033 | 3 | 102 | 8.19 | 57.35 | 8.19 | 0.00 | 1.78 |
| Prostate Cov | 6033 | 5 | 3600 | 12.92 | 54.63 | 12.92 | 0.00 | 65.90 |
| Prostate Cov | 6033 | 10 | 3600 | 24.38 | 407.14 | 24.38 | 0.00 | 8.84 |
| Prostate Cov | 6033 | 15 | 3600 | 34.98 | 1175.40 | 34.98 | 0.00 | 3.06 |
| Arcene Cov | 10000 | 3 | 88 | 3.36 | 138.07 | 3.36 | 0.00 | 0.64 |
| Arcene Cov | 10000 | 5 | 248 | 5.56 | 135.47 | 5.56 | 0.00 | 1.83 |
| Arcene Cov | 10000 | 10 | 3600 | 10.81 | 138.56 | 10.81 | 0.00 | 25.98 |
| Arcene Cov | 10000 | 15 | 3600 | 15.30 | 140.92 | 15.30 | 0.00 | 25.55 |
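Several of the statistics above (the percentage of zeros in $A_{\varepsilon^*}$, the largest block size) concern how thresholding splits the matrix into blocks. A minimal sketch of this decomposition, assuming numpy and scipy; the helper `block_sizes` is our own illustration, not the paper's Algorithm 1:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def block_sizes(A, eps):
    """Zero out off-diagonal entries with |A_ij| <= eps, then read the
    diagonal blocks of the re-sorted matrix as the connected components
    of the graph of surviving entries."""
    A = np.asarray(A, dtype=float)
    mask = np.abs(A) > eps
    np.fill_diagonal(mask, False)  # diagonal entries never merge blocks
    _, labels = connected_components(csr_matrix(mask), directed=False)
    return sorted((int(c) for c in np.bincount(labels)), reverse=True)

# Two 2x2 blocks survive a threshold of 0.5 in this toy covariance matrix;
# the largest block size controls the per-block SPCA cost.
A = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.0, 0.2],
              [0.1, 0.0, 1.0, 0.8],
              [0.0, 0.2, 0.8, 1.0]])
print(block_sizes(A, 0.5))
```

Raising the threshold shrinks the blocks (and speeds up the sub-problems) at the price of a larger approximation error, which is the trade-off visible throughout these tables.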
From the data in Table 8, it is clear that our framework consistently obtains significant computational speedups across a range of settings. Notably, it achieves an average speedup factor of 13.52 for k = 3, with this factor increasing dramatically to over 153 for k = 5. An observation from the analysis is the relationship between k and the performance metrics: although the speedup factor increases with larger k, there is a corresponding rise in the average approximation errors. Specifically, for k = 3 and k = 5 the average errors are exceptionally low (all zero), but they escalate to 1.47% for k = 10 and 1.05% for k = 15. This indicates a consistent pattern of increased approximation errors as k grows, which is aligned with Theorem 3, despite the significant gains in speed.

Table 7: Additional statistics regarding the best solution obtained by Algorithm 5 in Table 6. Denote by $\varepsilon^*$ the threshold found that yields the best solution in Algorithm 5, by $A$ the input covariance matrix, and by $A_{\varepsilon^*}$ the thresholded matrix obtained by Algorithm 1 with input $(A, \varepsilon^*)$.

| Dataset | k | Error (%) | $\varepsilon^*/\|A\|$ | Zeros in $A_{\varepsilon^*}$ (%) | Jaccard Index |
|---|---|---|---|---|---|
| Cov Colon Cov | 3 | 0 | 0.75 | 99.988 | 1.00 |
| Cov Colon Cov | 5 | 0 | 0.59 | 99.941 | 1.00 |
| Cov Colon Cov | 10 | 0 | 0.58 | 99.926 | 1.00 |
| Cov Colon Cov | 15 | 0 | 0.59 | 99.941 | 1.00 |
| Lymphoma Cov1 | 3 | 0 | 0.50 | 99.982 | 1.00 |
| Lymphoma Cov1 | 5 | 0 | 0.50 | 99.982 | 1.00 |
| Lymphoma Cov1 | 10 | 0 | 0.38 | 99.951 | 1.00 |
| Lymphoma Cov1 | 15 | 0 | 0.35 | 99.940 | 1.00 |
| Reddit1500 | 3 | 0 | 0.13 | 99.999 | 1.00 |
| Reddit1500 | 5 | 0 | 0.06 | 99.997 | 1.00 |
| Reddit1500 | 10 | 0 | 0.06 | 99.997 | 1.00 |
| Reddit1500 | 15 | 0 | 0.06 | 99.997 | 1.00 |
| Reddit2000 | 3 | 0 | 0.13 | 99.999 | 1.00 |
| Reddit2000 | 5 | 0 | 0.09 | 99.998 | 1.00 |
| Reddit2000 | 10 | 0 | 0.09 | 99.998 | 1.00 |
| Reddit2000 | 15 | 0.26 | 0.09 | 99.998 | 0.88 |
| Leukemia Cov | 3 | 0 | 0.41 | 99.998 | 1.00 |
| Leukemia Cov | 5 | 0 | 0.50 | 99.999 | 1.00 |
| Leukemia Cov | 10 | 0 | 0.50 | 99.999 | 1.00 |
| Leukemia Cov | 15 | 0 | 0.44 | 99.999 | 1.00 |
| Lymphoma Cov2 | 3 | 0 | 0.50 | 99.999 | 1.00 |
| Lymphoma Cov2 | 5 | 0 | 0.50 | 99.999 | 1.00 |
| Lymphoma Cov2 | 10 | 11.52 | 0.42 | 99.999 | 0.42 |
| Lymphoma Cov2 | 15 | 7.77 | 0.44 | 99.999 | 0.30 |
| Prostate Cov | 3 | 0 | 0.75 | 99.999 | 1.00 |
| Prostate Cov | 5 | 0 | 0.68 | 99.999 | 1.00 |
| Prostate Cov | 10 | 0 | 0.69 | 99.999 | 1.00 |
| Prostate Cov | 15 | 0 | 0.69 | 99.999 | 1.00 |
| Arcene Cov | 3 | 0 | 0.75 | 99.999 | 1.00 |
| Arcene Cov | 5 | 0 | 0.75 | 99.999 | 1.00 |
| Arcene Cov | 10 | 0 | 0.75 | 99.999 | 1.00 |
| Arcene Cov | 15 | 0 | 0.75 | 99.999 | 1.00 |

Table 8: Summary of average approximation errors and average speedup factors for each k, when integrating our framework with the Branch-and-Bound algorithm. Standard deviations are reported in parentheses.

| k | Avg. Error (Std. Dev) | Avg. Speedup (Std. Dev) |
|---|---|---|
| 3 | 0.00 (0.00) | 13.52 (15.12) |
| 5 | 0.00 (0.00) | 153.28 (203.17) |
| 10 | 1.47 (4.06) | 111.70 (141.81) |
| 15 | 1.05 (2.72) | 123.51 (210.55) |

C.3 EMPIRICAL RESULTS WHEN INTEGRATED WITH CHAN'S ALGORITHM

In this section, we provide in Table 9 the detailed numerical test results of our framework when integrated with Chan's algorithm (previously summarized in Table 4). We provide statistics related to the best solution in Table 10, including the ratio between the corresponding $\varepsilon^*$ and $\|A\|$, the percentage of zeros in the matrix $A_{\varepsilon^*}$, and the Jaccard index. We also summarize the average approximation errors and average speedup factors with respect to k in Table 11.

In Table 9, we once again demonstrate the substantial speedup improvements our framework achieves when integrated with Chan's algorithm.
The results show a median approximation error of -0.38%, a maximum approximation error not exceeding 0.28%, and a median speedup factor of 3.96. We observe that the speedup factor grows (linearly) as the dimension of the dataset goes up, highlighting the efficacy of our framework, especially on large datasets. Notably, in all instances of the Dorothea Cov dataset, in the instances k = 500, 1000, 2000 of the GLI85Cov and GLABRA180Cov datasets, and in the instances k = 1000, 2000 of the Arcene Cov dataset, our framework attains solutions with negative approximation errors, indicating that it consistently finds better solutions than Chan's algorithm alone. These instances provide tangible proof of our framework's potential to find better solutions, as highlighted in Remark 2.

In Table 10, the Jaccard index does not vary significantly with k, nor do its changes align with the changes in the approximation error. A possible reason is that Chan's algorithm is an approximation algorithm, and the Jaccard index may not fully capture the similarity between the solutions obtained and the true optimal solutions. As expected, since $d_0 = 2k$, the relative threshold value $\varepsilon^*/\|A\|$ and the percentage of zeros decrease as k increases.

In Table 11, we do not observe an increase in speedup factors as k increases. This phenomenon can be attributed to the runtime of Chan's algorithm, which is $O(d^3)$ and independent of k. Consequently, as predicted by Proposition 1, the speedup factors do not escalate with larger values of k. Additionally, unlike the trends noted in Table 8, the average errors do not increase with k. This deviation is likely due to Chan's algorithm only providing an approximate solution up to a certain multiplicative factor: as shown in Theorem 3, our algorithm has the capability to improve this multiplicative factor at the expense of some additive error.
When both k and d are large, the benefits from the improved multiplicative factor tend to balance out the impact of the additive error. Consequently, our framework consistently achieves high-quality solutions that are comparable to those obtained solely through Chan's algorithm, maintaining an average error of less than 0.55% for each k. Finally, we note that Chan's algorithm is considerably more scalable than the Branch-and-Bound algorithm. The statistics underscore our framework's ability not only to enhance the speed of scalable algorithms like Chan's, but also to maintain impressively low approximation errors in the process.

Table 9: Comparison of runtime and objective values on real-world datasets, between Chan's algorithm and Algorithm 5 integrated with Chan's algorithm. d is the dimension of the Sparse PCA problem, and k is the sparsity constant. We note that the objective value for GLI85Cov is scaled by $10^9$ and that for GLABRA180Cov is scaled by $10^8$. A negative error means that our framework finds a better solution than the standalone algorithm.

| Dataset | d | k | Chan's Time (s) | Chan's Objective | Ours Time (s) | Ours Objective | Error (%) | Speedup |
|---|---|---|---|---|---|---|---|---|
| Arcene Cov | 10000 | 200 | 228 | 91.7 | 288 | 91.7 | 0.00 | 0.79 |
| Arcene Cov | 10000 | 500 | 239 | 140.4 | 188 | 140.4 | 0.00 | 1.27 |
| Arcene Cov | 10000 | 1000 | 249 | 190.4 | 349 | 190.8 | -0.21 | 0.71 |
| Arcene Cov | 10000 | 2000 | 256 | 233.3 | 377 | 233.5 | -0.09 | 0.68 |
| GLI85Cov | 22283 | 200 | 2836 | 35.8 | 1660 | 35.7 | 0.28 | 1.71 |
| GLI85Cov | 22283 | 500 | 2728 | 38.8 | 1528 | 39.4 | -1.55 | 1.79 |
| GLI85Cov | 22283 | 1000 | 2869 | 39.9 | 1942 | 40.9 | -2.51 | 1.48 |
| GLI85Cov | 22283 | 2000 | 2680 | 42.3 | 2034 | 42.5 | -0.47 | 1.32 |
| GLABRA180Cov | 49151 | 200 | 35564 | 44.7 | 4890 | 44.6 | 0.11 | 7.27 |
| GLABRA180Cov | 49151 | 500 | 33225 | 60.4 | 4475 | 60.7 | -0.50 | 7.42 |
| GLABRA180Cov | 49151 | 1000 | 30864 | 70.5 | 4142 | 70.7 | -0.28 | 7.45 |
| GLABRA180Cov | 49151 | 2000 | 29675 | 78.1 | 4830 | 78.2 | -0.10 | 6.14 |
| Dorothea | 100000 | 200 | 246348 | 3.25 | 16194 | 3.34 | -2.77 | 15.21 |
| Dorothea | 100000 | 500 | 246539 | 4.94 | 17554 | 5.07 | -2.63 | 14.04 |
| Dorothea | 100000 | 1000 | 244196 | 6.23 | 16934 | 6.38 | -2.41 | 14.42 |
| Dorothea | 100000 | 2000 | 255281 | 7.18 | 17774 | 7.28 | -1.39 | 14.36 |

Table 10: Additional statistics regarding the best solution obtained by Algorithm 5 in Table 9. Denote by $\varepsilon^*$ the threshold found that yields the best solution in Algorithm 5, by $A$ the input covariance matrix, and by $A_{\varepsilon^*}$ the thresholded matrix obtained by Algorithm 1 with input $(A, \varepsilon^*)$.

| Dataset | k | Error (%) | $\varepsilon^*/\|A\|$ | Zeros in $A_{\varepsilon^*}$ (%) | Jaccard Index |
|---|---|---|---|---|---|
| Arcene Cov | 200 | 0.00 | 0.38 | 99.974 | 0.82 |
| Arcene Cov | 500 | 0.00 | 0.26 | 99.868 | 0.82 |
| Arcene Cov | 1000 | -0.21 | 0.16 | 99.469 | 0.79 |
| Arcene Cov | 2000 | -0.09 | 0.05 | 95.891 | 0.94 |
| GLI85Cov | 200 | 0.28 | 0.09 | 99.997 | 0.67 |
| GLI85Cov | 500 | -1.55 | 0.04 | 99.985 | 0.49 |
| GLI85Cov | 1000 | -2.51 | 0.02 | 99.962 | 0.62 |
| GLI85Cov | 2000 | -0.47 | 0.01 | 99.775 | 0.73 |
| GLABRA180Cov | 200 | 0.11 | 0.18 | 99.999 | 0.82 |
| GLABRA180Cov | 500 | -0.50 | 0.10 | 99.997 | 0.81 |
| GLABRA180Cov | 1000 | -0.28 | 0.05 | 99.990 | 0.82 |
| GLABRA180Cov | 2000 | -0.10 | 0.04 | 99.980 | 0.80 |
| Dorothea Cov | 200 | -2.77 | 0.12 | 99.999 | 0.78 |
| Dorothea Cov | 500 | -2.63 | 0.09 | 99.999 | 0.79 |
| Dorothea Cov | 1000 | -2.41 | 0.06 | 99.999 | 0.76 |
| Dorothea Cov | 2000 | -1.39 | 0.05 | 99.998 | 0.67 |

Table 11: Summary of average approximation errors and average speedup factors for each k, when integrating our framework with Chan's algorithm. Standard deviations are reported in parentheses.

| k | Avg. Error (Std. Dev) | Avg. Speedup (Std. Dev) |
|---|---|---|
| 200 | -0.60 (1.45) | 6.25 (6.63) |
| 500 | -1.17 (1.17) | 6.13 (5.96) |
| 1000 | -1.35 (1.28) | 6.02 (6.36) |
| 2000 | -0.51 (0.61) | 5.63 (6.31) |

C.4 EMPIRICAL RESULTS UNDER MODEL 1

In this section, we report the numerical results in Model 1 using Algorithm 3 with a threshold obtained by Algorithm 4. In Model 1, we set $E$ to have i.i.d. centered Gaussian variables with standard deviation $\sigma = 0.1$ in its lower-triangular entries, i.e., $E_{ij}$ for $1 \le j \le i \le d$, and set $E_{ij} = E_{ji}$ for $i \ne j$ to make it symmetric. We generate 30 independent random blocks in $\tilde{A}$, each of size 20, with each block defined as $M^\top M/100$, where $M \in \mathbb{R}^{100 \times 20}$ has i.i.d. standard Gaussian entries (which implies that $u = 1$ in Model 1). We run Algorithm 4 with $C = 1$, $\alpha = 0.7$, and $u = 1$, obtaining the threshold $\varepsilon$ as the output. Subsequently, we execute Algorithm 3 with $\tilde{A}$, $k \in \{2, 3, 5, 7, 10\}$, the Branch-and-Bound algorithm, and $\varepsilon$. We denote the solution output by Algorithm 3 as $x_{\text{Ours}}$.

In Table 12, we compare Algorithm 3 integrated with the Branch-and-Bound algorithm against the vanilla Branch-and-Bound algorithm. We report the optimality gap, the speedup factor, and the value of $\varepsilon$ outputted by Algorithm 4. The optimality gap is defined as $\text{Gap} := \text{Obj}_{\text{BB}} - \text{Obj}_{\text{Ours}}$, where $\text{Obj}_{\text{BB}} := x_{\text{BB}}^\top \tilde{A} x_{\text{BB}}$, $x_{\text{BB}}$ is the output of the Branch-and-Bound algorithm with input $(\tilde{A}, k)$, and $\text{Obj}_{\text{Ours}}$ is the objective value $y^\top \tilde{A} y$ in Proposition 2.
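The Model 1 instance described above (30 diagonal blocks of size 20, each $M^\top M/100$ with $M$ a $100 \times 20$ standard Gaussian matrix, plus symmetric Gaussian noise with $\sigma = 0.1$) takes only a few lines to generate. A hedged sketch assuming numpy; the function name and the seeding convention are our own:

```python
import numpy as np

def generate_model1(n_blocks=30, block=20, m=100, sigma=0.1, seed=42):
    """Block-diagonal signal plus symmetric Gaussian noise, following the
    Model 1 setup described above (illustrative, not the paper's code)."""
    rng = np.random.default_rng(seed)
    d = n_blocks * block
    A = np.zeros((d, d))
    for i in range(n_blocks):
        M = rng.standard_normal((m, block))
        s = slice(i * block, (i + 1) * block)
        A[s, s] = M.T @ M / m            # one 20x20 block: M^T M / 100
    E = rng.standard_normal((d, d)) * sigma
    E = np.tril(E) + np.tril(E, -1).T    # i.i.d. lower triangle, mirrored up
    return A + E

A_tilde = generate_model1()              # a 600 x 600 symmetric matrix
```

The noise matrix is drawn in its lower triangle and mirrored, so the result is exactly symmetric, as required by the model.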
To ensure reproducibility, we set the random seed to 42 and run the experiments ten times for each k. From Table 12, we observe that the average optimality gap increases as k grows. Additionally, $\varepsilon$ remains relatively stable across different values of k, as the calculation of $\varepsilon$ does not depend on k at all. The optimality gap is much smaller than the predicted bound $3k\varepsilon$, providing computational verification of the bound proposed in Proposition 2. The speedup factor is exceptionally high, often exceeding a thousand when $k \ge 5$.

Table 12: Summary of average optimality gaps, average speedup factors, and average threshold $\varepsilon$ for each k, when integrating our framework with the Branch-and-Bound algorithm. Standard deviations are reported in parentheses. The time limit for Branch-and-Bound is set to 600 seconds.

| k | Avg. Gap (Std. Dev) | Avg. Speedup (Std. Dev) | Avg. $\varepsilon$ (Std. Dev) |
|---|---|---|---|
| 2 | 0.29 (0.08) | 16.52 (12.18) | 0.96 (0.0016) |
| 3 | 0.35 (0.13) | 94.78 (78.52) | 0.96 (0.0030) |
| 5 | 0.48 (0.12) | 2010.02 (625.55) | 0.96 (0.0028) |
| 7 | 0.59 (0.14) | 1629.38 (869.91) | 0.96 (0.0040) |
| 10 | 0.64 (0.14) | 1900.05 (598.14) | 0.96 (0.0036) |

C.5 IMPACTS OF PARAMETERS d0 AND δ IN ALGORITHM 5

In this section, we discuss the impacts of the parameters $d_0$ and $\delta$ in Algorithm 5. We showcase these impacts by comparing the approximation errors and speedups on certain datasets when integrating Algorithm 5 with the Branch-and-Bound algorithm.

C.5.1 IMPACT OF d0

In Table 13, we summarize the numerical results when increasing $d_0$ from 30 to 45 and 55 on the Lymphoma Cov2 dataset. We observe a significant improvement in the approximation error, going from an average of 4.82% to 1.94% and 0.03%, at the cost of lowering the average speedup factor from 6.95 to around 4.

Table 13: Comparison of errors and speedups for different $d_0$ values on the Lymphoma Cov2 dataset.

| Dataset | d | k | $d_0=30$ Error | $d_0=30$ Speedup | $d_0=45$ Error | $d_0=45$ Speedup | $d_0=55$ Error | $d_0=55$ Speedup |
|---|---|---|---|---|---|---|---|---|
| Lymphoma Cov2 | 4026 | 3 | 0.0 | 8.16 | 0.0 | 8.29 | 0.0 | 8.62 |
| Lymphoma Cov2 | 4026 | 5 | 0.0 | 5.38 | 0.0 | 5.35 | 0.0 | 5.95 |
| Lymphoma Cov2 | 4026 | 10 | 11.52 | 12.27 | 0.0 | 1.84 | 0.0 | 1.00 |
| Lymphoma Cov2 | 4026 | 15 | 7.77 | 1.99 | 7.77 | 0.92 | 0.13 | 1.00 |
| Overall | | | 4.82 | 6.95 | 1.94 | 4.10 | 0.03 | 4.14 |

C.5.2 IMPACT OF δ

In Table 14, we summarize the numerical results when increasing $\delta$ from $0.01\|A\|$ to $0.1\|A\|$ and $0.15\|A\|$ on the Prostate Cov dataset. We observe that, by increasing $\delta$, the speedup factors improve dramatically, from an average of 19.90 to 38.41 and 99.49, at the potential cost of increasing the approximation error from 0 to 6.07%.

Table 14: Comparison of errors and speedups for different $\delta$ values on the Prostate Cov dataset.

| Dataset | d | k | $\delta=0.01\|A\|$ Error | $\delta=0.01\|A\|$ Speedup | $\delta=0.1\|A\|$ Error | $\delta=0.1\|A\|$ Speedup | $\delta=0.15\|A\|$ Error | $\delta=0.15\|A\|$ Speedup |
|---|---|---|---|---|---|---|---|---|
| Prostate Cov | 6033 | 3 | 0.0 | 1.78 | 0.0 | 2.79 | 0.0 | 3.62 |
| Prostate Cov | 6033 | 5 | 0.0 | 65.90 | 0.0 | 106.32 | 0.0 | 128.16 |
| Prostate Cov | 6033 | 10 | 0.0 | 8.84 | 0.0 | 29.56 | 0.0 | 128.25 |
| Prostate Cov | 6033 | 15 | 0.0 | 3.06 | 0.0 | 14.99 | 24.27 | 137.93 |
| Overall | | | 0.0 | 19.90 | 0.0 | 38.41 | 6.07 | 99.49 |

D EXTENSION TO NON-POSITIVE SEMIDEFINITE INPUT MATRIX

In this paper, we focus on solving SPCA where the input matrix $A$ is symmetric and positive semidefinite, following conventions in the Sparse PCA literature (Chan et al., 2015; Amini & Wainwright, 2008; Berk & Bertsimas, 2019; Dey et al., 2022a;b). We remark that our framework remains effective for symmetric inputs that are not positive semidefinite, provided that the algorithm $\mathcal{A}$ for (approximately) solving SPCA supports such inputs.² In other words, if $\mathcal{A}$ is compatible with symmetric, non-positive-semidefinite inputs, the assumption of $A$ being positive semidefinite can be removed from Theorems 3 and 4 and from Proposition 3 (stated in Model 1), and all theoretical guarantees still hold.
²Such algorithms do exist; for example, a naive exact algorithm that finds an eigenvector corresponding to the largest eigenvalue of the submatrix $A_{S,S}$, over all subsets $S$ with $|S| \le k$.
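The naive exact algorithm in the footnote can be sketched directly; this is our own illustrative implementation (not the paper's code), and its exhaustive enumeration over supports is exactly the exponential cost that the block-diagonal framework avoids, so it is feasible only for tiny d and k. Assuming numpy:

```python
import numpy as np
from itertools import combinations

def spca_bruteforce(A, k):
    """Naive exact SPCA for any symmetric A (PSD or not): enumerate every
    support S with |S| <= k, take the top eigenvector of A[S, S], and keep
    the support attaining the largest eigenvalue."""
    A = np.asarray(A, dtype=float)
    d = A.shape[0]
    best_val, best_x = -np.inf, None
    for size in range(1, k + 1):
        for S in combinations(range(d), size):
            w, V = np.linalg.eigh(A[np.ix_(S, S)])
            if w[-1] > best_val:            # eigh sorts eigenvalues ascending
                best_val = w[-1]
                best_x = np.zeros(d)
                best_x[list(S)] = V[:, -1]  # embed the top eigenvector
    return best_val, best_x
```

With k equal to the dimension, this recovers the leading eigenvalue of A, which is a convenient sanity check.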