# Parametric Simplex Method for Sparse Learning

Haotian Pang (Princeton University), Robert Vanderbei (Princeton University), Han Liu (Tencent AI Lab and Northwestern University), Tuo Zhao (Georgia Tech)

Correspondence to Tuo Zhao: tuo.zhao@isye.gatech.edu. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

**Abstract.** High dimensional sparse learning poses a great computational challenge to large scale data analysis. In this paper, we are interested in a broad class of sparse learning approaches formulated as linear programs parametrized by a regularization factor, and we solve them by the parametric simplex method (PSM). Our parametric simplex method offers significant advantages over competing methods: (1) PSM naturally obtains the complete solution path for all values of the regularization parameter; (2) PSM provides a high precision dual certificate stopping criterion; (3) PSM yields sparse solutions through very few iterations, and the solution sparsity significantly reduces the computational cost per iteration. In particular, we demonstrate the superiority of PSM on various sparse learning problems, including the Dantzig selector for sparse linear regression, LAD-Lasso for sparse robust linear regression, CLIME for sparse precision matrix estimation, sparse differential network estimation, and sparse Linear Programming Discriminant (LPD) analysis. We then provide sufficient conditions under which PSM always outputs sparse solutions, so that its computational performance can be significantly boosted. Thorough numerical experiments are provided to demonstrate the outstanding performance of the PSM method.

## 1 Introduction

A broad class of sparse learning approaches can be formulated as high dimensional optimization problems. A well known example is the Dantzig Selector, which minimizes a sparsity-inducing $\ell_1$ norm subject to an $\ell_\infty$ norm constraint. Specifically, let $X \in \mathbb{R}^{n\times d}$ be a design matrix, $y \in \mathbb{R}^n$ be a response vector, and $\theta \in \mathbb{R}^d$ be the model parameter. The Dantzig Selector can be formulated as the solution to the following convex program:

$$\min_{\theta}\ \|\theta\|_1 \quad \text{s.t.}\quad \|X^\top(y - X\theta)\|_\infty \le \lambda, \tag{1.1}$$

where $\|\cdot\|_1$ and $\|\cdot\|_\infty$ denote the $\ell_1$ and $\ell_\infty$ norms respectively, and $\lambda > 0$ is a regularization factor. Candes and Tao (2007) suggest rewriting (1.1) as a linear program and solving it with standard linear program solvers. The Dantzig Selector motivates many other sparse learning approaches, which also apply a regularization factor to tune the desired solution. Many of them can be written as a linear program in the following generic form, with either equality constraints

$$\min_{x}\ (c + \lambda \bar{c})^\top x \quad \text{s.t.}\quad Ax = b + \lambda \bar{b},\ x \ge 0, \tag{1.2}$$

or inequality constraints

$$\min_{x}\ (c + \lambda \bar{c})^\top x \quad \text{s.t.}\quad Ax \le b + \lambda \bar{b},\ x \ge 0. \tag{1.3}$$

Existing literature usually suggests the popular interior point method (IPM) for solving (1.2) and (1.3). The interior point method is famous for solving linear programs in polynomial time. Specifically, it uses a log barrier to handle the constraints and rewrites (1.2) or (1.3) as an unconstrained program, which is then solved by Newton's method. Since the log barrier requires Newton's method to iterate only within the interior of the feasible region, IPM cannot yield exactly sparse iterates and cannot take advantage of sparsity to speed up the computation.

An alternative approach is the simplex method. From a geometric perspective, the classical simplex method iterates over the vertices of a polytope.
Algebraically, the algorithm moves from one partition of the variables into basic and nonbasic sets to another. Each partition differs from the previous one in that one basic variable is swapped with one nonbasic variable, a process called pivoting. Different variants of the simplex method are defined by different pivoting rules. The simplex method has been shown to work well in practice, even though its worst-case iteration complexity scales exponentially with the problem size.

More recently, some researchers have proposed to use the alternating direction method of multipliers (ADMM) to solve (1.1) directly, without reparametrizing it as a linear program. Although these methods enjoy $O(1/T)$ convergence rates based on variational inequality criteria, where $T$ is the number of iterations, ADMM can be viewed as an exterior point method and always returns infeasible solutions within any finite number of iterations. We often observe that even after ADMM has taken a large number of iterations, the solutions still suffer from significant feasibility violation. These methods work well only for moderate scale problems (e.g., $d < 1000$); for larger $d$, ADMM becomes less competitive.

These methods, though popular, are usually designed to solve (1.2) and (1.3) for one single regularization factor. This is not satisfactory, since an appropriate choice of $\lambda$ is usually unknown. Thus, one usually expects an algorithm to obtain multiple solutions tuned over a reasonable range of values of $\lambda$. For each value of $\lambda$, we then need to solve a linear program from scratch, which is often very inefficient for high dimensional problems.

To overcome the above drawbacks, we propose to solve both (1.2) and (1.3) by a variant of the parametric simplex method (PSM) in a principled manner (Murty, 1983; Vanderbei, 1995). Specifically, the parametric simplex method treats the unknown regularization factor in (1.2) and (1.3) as a parameter. This eventually yields a piecewise linear solution path over a sequence of regularization factors. Such a varying parameter scheme is also called homotopy optimization in existing literature. PSM relies on special rules for iteratively choosing the pair of variables to swap, and algebraically computes the corresponding segment of the solution path at each pivot. PSM terminates at a value of the parameter at which the full solution path of the original problem has been obtained. Although in the worst case PSM can take an exponential number of pivots to find an optimal solution path, our empirical results suggest that, for large regularization factors with sparse optima, the number of iterations is roughly linear in the number of nonzero variables. This means that the desired sparse solutions can often be found using very few pivots.

Several optimization methods for solving (1.1) are closely related to PSM, but they lack a generic design. For example, the simplex method proposed by Yao and Lee (2014) can be viewed as a special case of our proposed PSM in which the perturbation is only applied to the right-hand side of the inequality constraints. The DASSO algorithm computes the entire coefficient path of the Dantzig selector by a simplex-like algorithm. Zhu et al. (2004) propose a similar algorithm that exploits the piecewise linearity of the problem and computes the whole solution path of the 1-norm SVM.
These methods can be viewed as algorithms derived from PSM but applied only to special cases: the entire solution path is computed, but an accurate dual certificate stopping criterion is not provided.

**Notations**: We denote the all-zero and all-one vectors by $0$ and $1$ respectively. Given a vector $a = (a_1, \ldots, a_d)^\top \in \mathbb{R}^d$, we define the number of nonzero entries $\|a\|_0 = \sum_j \mathbb{1}(a_j \neq 0)$, $\|a\|_1 = \sum_j |a_j|$, $\|a\|_2^2 = \sum_j a_j^2$, and $\|a\|_\infty = \max_j |a_j|$. When comparing vectors, $\ge$ and $\le$ denote component-wise comparison. Given a matrix $A \in \mathbb{R}^{d\times d}$ with entries $a_{jk}$, we use $|||A|||$ to denote entry-wise norms and $\|A\|$ to denote matrix norms. Accordingly, $|||A|||_0 = \sum_{j,k} \mathbb{1}(a_{jk}\neq 0)$, $|||A|||_1 = \sum_{j,k}|a_{jk}|$, $|||A|||_\infty = \max_{j,k}|a_{jk}|$, $\|A\|_1 = \max_k \sum_j |a_{jk}|$, $\|A\|_\infty = \max_j \sum_k |a_{jk}|$, $\|A\|_2 = \max_{\|a\|_2 \le 1}\|Aa\|_2$, and $\|A\|_F^2 = \sum_{j,k} a_{jk}^2$. We denote $A_{\backslash i \backslash j}$ as the submatrix of $A$ with the $i$-th row and $j$-th column removed, $A_{i\backslash j}$ as the $i$-th row of $A$ with its $j$-th entry removed, and $A_{\backslash i j}$ as the $j$-th column of $A$ with its $i$-th entry removed. For any subset $G$ of $\{1, 2, \ldots, d\}$, we let $A_G$ denote the submatrix of $A \in \mathbb{R}^{p\times d}$ consisting of the corresponding columns of $A$. The notation $A \ge 0$ means that all of $A$'s entries are nonnegative. Similarly, for a vector $a \in \mathbb{R}^d$, we let $a_G$ denote the subvector of $a$ associated with the indices in $G$. Finally, $I_d$ denotes the $d$-dimensional identity matrix and $e_i$ denotes the vector with a one in its $i$-th entry and zeros elsewhere. In a large matrix, we leave a submatrix blank when all of its entries are zero.

## 2 Background

Many sparse learning approaches are formulated as convex programs of the generic form

$$\min_\theta\ \mathcal{L}(\theta) + \lambda\|\theta\|_1, \tag{2.1}$$

where $\mathcal{L}(\theta)$ is a convex loss function and $\lambda > 0$ is a regularization factor controlling bias and variance. Moreover, if $\mathcal{L}(\theta)$ is smooth, we can also consider an alternative formulation:

$$\min_\theta\ \|\theta\|_1 \quad \text{s.t.}\quad \|\nabla\mathcal{L}(\theta)\|_\infty \le \lambda, \tag{2.2}$$

where $\nabla\mathcal{L}(\theta)$ is the gradient of $\mathcal{L}(\theta)$ and $\lambda > 0$ is a regularization factor. As will be shown later, both (2.2) and (2.1) are naturally suited for our algorithm when $\mathcal{L}(\theta)$ is piecewise linear or quadratic, respectively. Our algorithm yields a piecewise-linear solution path as a function of $\lambda$ by varying $\lambda$ from large to small.

Before we proceed with our proposed algorithm, we first introduce the sparse learning problems of interest, including sparse linear regression, sparse linear classification, and undirected graph estimation. Due to the space limit, we only present three examples and defer the others to the appendix.

**Dantzig Selector**: The first problem is sparse linear regression. Let $y \in \mathbb{R}^n$ be a response vector and $X \in \mathbb{R}^{n\times d}$ be the design matrix. We consider a linear model $y = X\theta^0 + \epsilon$, where $\theta^0 \in \mathbb{R}^d$ is the unknown regression coefficient vector and $\epsilon$ is the observational noise vector. Here we are interested in the high dimensional regime: $d$ is much larger than $n$, i.e., $d \gg n$, and many entries of $\theta^0$ are zero, i.e., $\|\theta^0\|_0 = s^* \ll n$. To obtain a sparse estimator of $\theta^0$, machine learning researchers and statisticians have proposed numerous approaches, including Lasso (Tibshirani, 1996), the Dantzig Selector (Candes and Tao, 2007) and LAD-Lasso (Wang et al., 2007). The Dantzig selector is formulated as the solution to the following convex program:

$$\min_\theta\ \|\theta\|_1 \quad \text{subject to}\quad \|X^\top(y - X\theta)\|_\infty \le \lambda. \tag{2.3}$$

By setting $\theta = \theta^+ - \theta^-$ with $\theta^+_j = \theta_j \mathbb{1}(\theta_j > 0)$ and $\theta^-_j = -\theta_j \mathbb{1}(\theta_j < 0)$, we rewrite (2.3) as a linear program:

$$\min_{\theta^+,\theta^-}\ 1^\top(\theta^+ + \theta^-)\quad \text{s.t.}\quad \begin{bmatrix} X^\top X & -X^\top X \\ -X^\top X & X^\top X\end{bmatrix}\begin{bmatrix}\theta^+\\ \theta^-\end{bmatrix} \le \begin{bmatrix}\lambda 1 + X^\top y\\ \lambda 1 - X^\top y\end{bmatrix},\quad \theta^+,\theta^- \ge 0. \tag{2.4}$$

By complementary slackness, we can guarantee that the optimal $\theta^+_j$'s and $\theta^-_j$'s are nonnegative and complementary to each other.
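Concretely, the data of this linear program can be assembled as in the following minimal Python sketch (the function and variable names are ours, not part of any released package); the identification with the generic form (1.3) is made explicit right after.

```python
import numpy as np

def dantzig_lp_data(X, y):
    """Assemble (A, b, b_bar, c, c_bar) of the parametric LP (1.3) for the
    Dantzig selector, using the split theta = theta_plus - theta_minus.
    The constraints read  A x <= b + lam * b_bar  with  x >= 0."""
    G = X.T @ X                       # Gram matrix X^T X
    r = X.T @ y                       # X^T y
    d = X.shape[1]
    A = np.block([[G, -G],
                  [-G, G]])           # coefficient matrix of (2.4)
    b = np.concatenate([r, -r])       # constant part of the right-hand side
    b_bar = np.ones(2 * d)            # part of the right-hand side multiplying lambda
    c = np.ones(2 * d)                # objective 1^T (theta_plus + theta_minus)
    c_bar = np.zeros(2 * d)           # the objective is not perturbed
    return A, b, b_bar, c, c_bar
```

For a single fixed $\lambda$, these data could be handed to any off-the-shelf LP solver; PSM instead sweeps $\lambda$ from large to small over the same data and returns the whole path.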
Note that (2.4) fits into our parametric linear program (1.3) with

$$A = \begin{bmatrix} X^\top X & -X^\top X \\ -X^\top X & X^\top X\end{bmatrix},\quad c = 1,\quad \bar{b} = 1,\quad \bar{c} = 0,\quad x = \begin{bmatrix}\theta^+\\ \theta^-\end{bmatrix},\quad b = \begin{bmatrix}X^\top y\\ -X^\top y\end{bmatrix}.$$

**Sparse Support Vector Machine**: The second problem is the sparse SVM (support vector machine), a model-free discriminative modeling approach (Zhu et al., 2004). Given $n$ independent and identically distributed samples $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i \in \mathbb{R}^d$ is the feature vector and $y_i \in \{1, -1\}$ is the binary label, we are again interested in the high dimensional regime. To obtain a sparse SVM classifier, we solve the following convex program:

$$\min_{\beta_0,\beta}\ \sum_{i=1}^n \big[1 - y_i(\beta_0 + \beta^\top x_i)\big]_+ \quad \text{s.t.}\quad \|\beta\|_1 \le \lambda, \tag{2.5}$$

where $\beta_0 \in \mathbb{R}$ and $\beta \in \mathbb{R}^d$. Given a new sample $z \in \mathbb{R}^d$, the sparse SVM classifier predicts its label by $\mathrm{sign}(\beta_0 + \beta^\top z)$.

Let $t_i = 1 - y_i(\beta_0 + \beta^\top x_i)$ for $i = 1, \ldots, n$. Then $t_i$ can be expressed as $t_i = t_i^+ - t_i^-$, and $[1 - y_i(\beta_0 + \beta^\top x_i)]_+$ can be represented by $t_i^+$. We split $\beta$ and $\beta_0$ into positive and negative parts as well, $\beta = \beta^+ - \beta^-$ and $\beta_0 = \beta_0^+ - \beta_0^-$, and add slack variables $w \ge 0$ so that the norm constraint becomes an equality: $1^\top\beta^+ + 1^\top\beta^- + 1^\top w = \lambda$. Now we can cast the problem into the equality parametric simplex form (1.2). We identify each component of (1.2) as follows: the variable is $x = \big((t^+)^\top, (t^-)^\top, (\beta^+)^\top, (\beta^-)^\top, \beta_0^+, \beta_0^-, w^\top\big)^\top \in \mathbb{R}^{2n+3d+2}$, the cost vectors are $c = \big(1^\top, 0^\top, 0^\top, 0^\top, 0, 0, 0^\top\big)^\top \in \mathbb{R}^{2n+3d+2}$ and $\bar{c} = 0 \in \mathbb{R}^{2n+3d+2}$, the right-hand sides are $b = (1^\top, 0)^\top \in \mathbb{R}^{n+1}$ and $\bar{b} = (0^\top, 1)^\top \in \mathbb{R}^{n+1}$, and the constraint matrix is

$$A = \begin{bmatrix} I_n & -I_n & Z & -Z & y & -y & \\ & & 1^\top & 1^\top & & & 1^\top \end{bmatrix} \in \mathbb{R}^{(n+1)\times(2n+3d+2)},$$

where $Z = (y_1x_1, \ldots, y_nx_n)^\top \in \mathbb{R}^{n\times d}$.

**Differential Graph Estimation**: The third problem is differential graph estimation, which aims to identify the difference between two undirected graphs (Zhao et al., 2013; Danaher et al., 2013). Related applications in biological and medical research can be found in existing literature (Hudson et al., 2009; Bandyopadhyaya et al., 2010; Ideker and Krogan, 2012). Specifically, given $n_1$ i.i.d. samples $x_1, \ldots, x_{n_1}$ from $N_d(\mu^0_X, \Sigma^0_X)$ and $n_2$ i.i.d. samples $y_1, \ldots, y_{n_2}$ from $N_d(\mu^0_Y, \Sigma^0_Y)$, we are interested in estimating the difference of the precision matrices of the two distributions. We define the empirical covariance matrices as $S_X = \frac{1}{n_1}\sum_{j=1}^{n_1}(x_j - \bar{x})(x_j - \bar{x})^\top$ and $S_Y = \frac{1}{n_2}\sum_{j=1}^{n_2}(y_j - \bar{y})(y_j - \bar{y})^\top$, where $\bar{x} = \frac{1}{n_1}\sum_{j=1}^{n_1}x_j$ and $\bar{y} = \frac{1}{n_2}\sum_{j=1}^{n_2}y_j$. Zhao et al. (2013) propose to estimate the difference by solving the following problem:

$$\min_\Delta\ |||\Delta|||_1 \quad \text{s.t.}\quad |||S_X \Delta S_Y - S_X + S_Y|||_\infty \le \lambda, \tag{2.6}$$

where $S_X$ and $S_Y$ are the empirical covariance matrices. As can be seen, (2.6) is essentially a special example of the following more general parametric linear program:

$$\min_D\ |||D|||_1 \quad \text{s.t.}\quad |||XDZ - Y|||_\infty \le \lambda, \tag{2.7}$$

where $D \in \mathbb{R}^{d_1\times d_2}$, $X \in \mathbb{R}^{m_1\times d_1}$, $Z \in \mathbb{R}^{d_2\times m_2}$ and $Y \in \mathbb{R}^{m_1\times m_2}$ are given data matrices. Instead of directly solving (2.7), we consider a reparametrization introducing an auxiliary variable $C = XD$. Similar to CLIME, we decompose $D = D^+ - D^-$ and eventually rewrite (2.7) as

$$\min_{D^+,D^-}\ 1^\top(D^+ + D^-)1 \quad \text{s.t.}\quad |||CZ - Y|||_\infty \le \lambda,\quad X(D^+ - D^-) = C,\quad D^+, D^- \ge 0. \tag{2.8}$$

Let $\mathrm{vec}(D^+)$, $\mathrm{vec}(D^-)$, $\mathrm{vec}(C)$ and $\mathrm{vec}(Y)$ be the vectors obtained by stacking the columns of the matrices $D^+$, $D^-$, $C$ and $Y$, respectively. We can then write (2.8) as a parametric linear program with constraint matrix

$$A = \begin{bmatrix} X_0 & -X_0 & -I_{m_1d_2} & & \\ & & Z_0 & I_{m_1m_2} & \\ & & -Z_0 & & I_{m_1m_2}\end{bmatrix},\quad X_0 = \begin{bmatrix} X & & \\ & \ddots & \\ & & X\end{bmatrix} \in \mathbb{R}^{m_1d_2\times d_1d_2},\quad Z_0 = \begin{bmatrix} z_{11}I_{m_1} & \cdots & z_{d_21}I_{m_1}\\ \vdots & \ddots & \vdots \\ z_{1m_2}I_{m_1} & \cdots & z_{d_2m_2}I_{m_1}\end{bmatrix} \in \mathbb{R}^{m_1m_2\times m_1d_2},$$

where $z_{ij}$ denotes the $(i,j)$ entry of the matrix $Z$. The variable is $x = \big(\mathrm{vec}(D^+)^\top, \mathrm{vec}(D^-)^\top, \mathrm{vec}(C)^\top, w^\top\big)^\top \in \mathbb{R}^{2d_1d_2+m_1d_2+2m_1m_2}$, where $w \in \mathbb{R}^{2m_1m_2}$ is a vector of nonnegative slack variables used to turn the inequality constraints into equalities.
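The block matrices $X_0$ and $Z_0$ above are simply the matrices representing $D \mapsto XD$ and $C \mapsto CZ$ after column-stacking, i.e., the standard identities $\mathrm{vec}(XD) = (I_{d_2}\otimes X)\,\mathrm{vec}(D)$ and $\mathrm{vec}(CZ) = (Z^\top\otimes I_{m_1})\,\mathrm{vec}(C)$. The short numerical check below illustrates this (an illustration only; the dimensions and random data are arbitrary, and the Kronecker-product reading of $X_0$ and $Z_0$ is ours).

```python
import numpy as np

rng = np.random.default_rng(0)
m1, d1, d2, m2 = 3, 4, 5, 2
X = rng.standard_normal((m1, d1))
D = rng.standard_normal((d1, d2))
Z = rng.standard_normal((d2, m2))

vec = lambda M: M.reshape(-1, order="F")    # column-stacking vec(.)

X0 = np.kron(np.eye(d2), X)    # block-diagonal copies of X: maps vec(D) to vec(XD)
Z0 = np.kron(Z.T, np.eye(m1))  # blocks z_ij * I_{m1}:      maps vec(C) to vec(CZ)

C = X @ D
assert np.allclose(X0 @ vec(D), vec(C))        # equality constraint block X(D+ - D-) = C
assert np.allclose(Z0 @ vec(C), vec(C @ Z))    # inequality constraint blocks on CZ - Y
```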
Moreover, $b = \big(0^\top, \mathrm{vec}(Y)^\top, -\mathrm{vec}(Y)^\top\big)^\top \in \mathbb{R}^{m_1d_2+2m_1m_2}$, where the first $m_1d_2$ components are $0$ and the remaining components come from the matrix $Y$; $\bar{b} = \big(0^\top, 1^\top, 1^\top\big)^\top \in \mathbb{R}^{m_1d_2+2m_1m_2}$, where the first $m_1d_2$ components are $0$ and the remaining $2m_1m_2$ components are $1$; and $c = \big(1^\top, 1^\top, 0^\top, 0^\top\big)^\top \in \mathbb{R}^{2d_1d_2+m_1d_2+2m_1m_2}$, where the first $2d_1d_2$ components are $1$ and the remaining $m_1d_2 + 2m_1m_2$ components are $0$.

## 3 Homotopy Parametric Simplex Method

We first briefly review the primal simplex method for linear programming, and then derive the proposed algorithm.

**Preliminaries**: We consider a standard linear program in the following maximization form (a minimization problem such as (1.2) is put in this form by negating $c$):

$$\max_x\ c^\top x \quad \text{s.t.}\quad Ax = b,\ x \ge 0,\ x \in \mathbb{R}^n, \tag{3.1}$$

where $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$ are given. Without loss of generality, we assume that $m \le n$ and that the matrix $A$ has full row rank $m$. Throughout our analysis, we assume that an optimal solution exists (it need not be unique). The primal simplex method starts from a basic feasible solution (to be defined shortly, but geometrically it can be thought of as any vertex of the feasible polytope) and proceeds step by step (vertex by vertex) to the optimal solution. Various techniques exist to find the first feasible solution, often referred to as the Phase I method; see Vanderbei (1995); Murty (1983); Dantzig (1951).

Algebraically, a basic solution corresponds to a partition of the indices $\{1, \ldots, n\}$ into $m$ basic indices, denoted $B$, and $n - m$ nonbasic indices, denoted $N$. Note that not all partitions are allowed: the submatrix of $A$ consisting of the columns of $A$ associated with the basic indices, denoted $A_B$, must be invertible. The submatrix of $A$ corresponding to the nonbasic indices is denoted $A_N$. Suppressing the fact that the columns have been rearranged, we can write $A = [A_N, A_B]$. If we rearrange the rows of $x$ and $c$ in the same way, we can introduce the corresponding partitions $x = (x_N^\top, x_B^\top)^\top$ and $c = (c_N^\top, c_B^\top)^\top$. The constraint can then be rewritten as $A_N x_N + A_B x_B = b$. Since the matrix $A_B$ is assumed to be invertible, we can express $x_B$ in terms of $x_N$ as

$$x_B = x_B^* - A_B^{-1}A_N x_N, \tag{3.2}$$

where we have written $x_B^*$ as an abbreviation for $A_B^{-1}b$. This rearrangement of the equality constraints is called a dictionary, because the basic variables are defined as functions of the nonbasic variables. Denoting the objective $c^\top x$ by $\zeta$, we can also write

$$\zeta = \zeta^* - (z_N^*)^\top x_N, \tag{3.3}$$

where $\zeta^* = c_B^\top A_B^{-1}b$ and $z_N^* = (A_B^{-1}A_N)^\top c_B - c_N$. We call equations (3.2) and (3.3) the primal dictionary associated with the current basis $B$. Corresponding to each dictionary, there is a basic solution (also called a dictionary solution) obtained by setting the nonbasic variables to zero and reading off the values of the basic variables: $x_N = 0$, $x_B = x_B^*$. This particular solution satisfies the equality constraints of the problem by construction. To be a feasible solution, one only needs to check that the values of the basic variables are nonnegative. Therefore, we say that a basic solution is a basic feasible solution if $x_B^* \ge 0$.

The dual of (3.1) is given by

$$\min_y\ b^\top y \quad \text{s.t.}\quad A^\top y - z = c,\ z \ge 0,\ z \in \mathbb{R}^n,\ y \in \mathbb{R}^m. \tag{3.4}$$

In this case, we separate the variable $z$ into basic and nonbasic parts as before: $z = (z_N^\top, z_B^\top)^\top$. The corresponding dual dictionary is given by

$$z_N = z_N^* + (A_B^{-1}A_N)^\top z_B, \qquad -\xi = -\zeta^* - (x_B^*)^\top z_B, \tag{3.5}$$

where $\xi$ denotes the objective function of (3.4), $\zeta^* = c_B^\top A_B^{-1}b$, and $z_N^* = (A_B^{-1}A_N)^\top c_B - c_N$. For each dictionary, we set $x_N$ and $z_B$ to $0$ (complementarity) and read off the solutions $x_B^*$ and $z_N^*$ according to (3.2) and (3.5). Next, we remove one basic index, replace it with a nonbasic index, and obtain an updated dictionary.
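As a concrete illustration of the quantities above, the following sketch computes the dictionary solution $x_B^* = A_B^{-1}b$ and $z_N^* = (A_B^{-1}A_N)^\top c_B - c_N$ for a given basis and checks primal and dual feasibility. It uses dense linear algebra only; a practical implementation would maintain a factorization of $A_B$ across pivots rather than re-solving from scratch.

```python
import numpy as np

def dictionary_solution(A, b, c, basis):
    """Return x_B^*, z_N^* and the primal/dual feasibility flags for a basic index set."""
    n = A.shape[1]
    nonbasis = np.setdiff1d(np.arange(n), basis)
    A_B, A_N = A[:, basis], A[:, nonbasis]
    x_B = np.linalg.solve(A_B, b)                               # x_B^* = A_B^{-1} b
    z_N = np.linalg.solve(A_B, A_N).T @ c[basis] - c[nonbasis]  # (A_B^{-1} A_N)^T c_B - c_N
    return x_B, z_N, bool(np.all(x_B >= 0)), bool(np.all(z_N >= 0))
```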
The simplex method produces a sequence of pivots to adjacent bases such that the value of the objective function is always increasing at each step. Primal feasibility requires that $x_B \ge 0$, so while we update the dictionary, primal feasibility must always be preserved. This process stops when $z_N \ge 0$ (dual feasibility), since primal feasibility, dual feasibility and complementarity together constitute the optimality condition.

**Parametric Simplex Method**: We now derive the parametric simplex method, which finds the full solution path while solving the parametric linear programming problem only once. A few variants of the simplex method have been proposed, with different rules for choosing the pair of variables to swap at each iteration. Here we describe the rule used by the parametric simplex method: we add nonnegative perturbation vectors ($\bar{b}$ and $\bar{c}$), multiplied by a positive parameter $\lambda$, to the right-hand side and the objective function of the primal problem. The purpose of doing this is to guarantee primal and dual feasibility when $\lambda$ is large. Since the problem is then both primal and dual feasible, no Phase I stage is required for the parametric simplex method. Furthermore, if the $i$-th entry of $b$ or the $j$-th entry of $c$ already satisfies the corresponding feasibility condition, then the perturbation $\bar{b}_i$ or $\bar{c}_j$ for that entry is allowed to be $0$. With these perturbations, (3.1) becomes

$$\max_x\ (c + \lambda\bar{c})^\top x \quad \text{s.t.}\quad Ax = b + \lambda\bar{b},\ x \ge 0,\ x \in \mathbb{R}^n. \tag{3.6}$$

We separate the perturbation vectors into basic and nonbasic parts as well and write down the dictionaries with perturbations corresponding to (3.2), (3.3), and (3.5):

$$x_B = (x_B^* + \lambda\bar{x}_B) - A_B^{-1}A_N x_N, \qquad \zeta = \zeta^* - (z_N^* + \lambda\bar{z}_N)^\top x_N, \tag{3.7}$$

$$z_N = (z_N^* + \lambda\bar{z}_N) + (A_B^{-1}A_N)^\top z_B, \qquad -\xi = -\zeta^* - (x_B^* + \lambda\bar{x}_B)^\top z_B, \tag{3.8}$$

where $z_N^* = (A_B^{-1}A_N)^\top c_B - c_N$, $x_B^* = A_B^{-1}b$, $\bar{x}_B = A_B^{-1}\bar{b}$ and $\bar{z}_N = (A_B^{-1}A_N)^\top\bar{c}_B - \bar{c}_N$. When $\lambda$ is large, the dictionary is both primal and dual feasible ($x_B^* + \lambda\bar{x}_B \ge 0$ and $z_N^* + \lambda\bar{z}_N \ge 0$). The corresponding primal solution is simple: $x_B = x_B^* + \lambda\bar{x}_B$ and $x_N = 0$. This solution is valid until $\lambda$ hits a lower bound below which feasibility breaks. The smallest value of $\lambda$ that does not break any feasibility is given by

$$\lambda^* = \min\{\lambda : z_N^* + \lambda\bar{z}_N \ge 0 \ \text{and}\ x_B^* + \lambda\bar{x}_B \ge 0\}. \tag{3.9}$$

In other words, the dictionary and its corresponding solution $x_B = x_B^* + \lambda\bar{x}_B$, $x_N = 0$ are optimal for $\lambda \in [\lambda^*, \lambda^{\max}]$, where

$$\lambda^* = \max\Big\{\max_{j\in N,\ \bar{z}_{N_j} > 0} \Big(-\frac{z_{N_j}^*}{\bar{z}_{N_j}}\Big),\ \max_{i\in B,\ \bar{x}_{B_i} > 0} \Big(-\frac{x_{B_i}^*}{\bar{x}_{B_i}}\Big)\Big\}, \tag{3.10}$$

$$\lambda^{\max} = \min\Big\{\min_{j\in N,\ \bar{z}_{N_j} < 0} \Big(-\frac{z_{N_j}^*}{\bar{z}_{N_j}}\Big),\ \min_{i\in B,\ \bar{x}_{B_i} < 0} \Big(-\frac{x_{B_i}^*}{\bar{x}_{B_i}}\Big)\Big\}. \tag{3.11}$$

Note that although the perturbations are initially nonnegative, as the dictionary gets updated the perturbations do not necessarily remain nonnegative. For each dictionary, there is a corresponding interval of $\lambda$ given by (3.10) and (3.11). We have characterized the optimal solution on this interval, and these intervals together give us the solution path of the original parametric linear programming problem.

Next, we show how the dictionary gets updated as the leaving variable and the entering variable swap. We expect that after swapping the entering variable $j$ and the leaving variable $i$, the solution in the dictionaries (3.7) and (3.8) changes to

$$x_B^* \leftarrow x_B^* - t\,\Delta x_B,\quad \bar{x}_B \leftarrow \bar{x}_B - \bar{t}\,\Delta x_B,\quad z_N^* \leftarrow z_N^* - s\,\Delta z_N,\quad \bar{z}_N \leftarrow \bar{z}_N - \bar{s}\,\Delta z_N,$$

where $t$ and $\bar{t}$ are the primal step lengths for the primal basic variables and their perturbations, $s$ and $\bar{s}$ are the dual step lengths for the dual nonbasic variables and their perturbations, and $\Delta x_B$ and $\Delta z_N$ are the primal and dual step directions, respectively. We now explain how to find these values in detail. At $\lambda = \lambda^*$ in (3.9), there is either a $j \in N$ for which $z_{N_j}^* + \lambda\bar{z}_{N_j} = 0$ or an $i \in B$ for which $x_{B_i}^* + \lambda\bar{x}_{B_i} = 0$.
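In code, the breakpoint (3.10) is a simple ratio test over the perturbed primal and dual values. The sketch below also reports which constraint binds at $\lambda^*$, which is exactly the information used to decide between a primal and a dual pivot (the names and the tie-breaking behaviour are ours; degenerate ties are not handled).

```python
import numpy as np

def next_breakpoint(x_B, x_bar_B, z_N, z_bar_N):
    """Return (lambda_star, kind, index): the smallest lambda keeping the current
    dictionary optimal, together with the binding constraint -- ('nonbasic', j)
    if a dual value z_N[j] hits zero, or ('basic', i) if a primal value x_B[i] does."""
    lam, kind, idx = 0.0, None, None
    for j in range(len(z_N)):
        if z_bar_N[j] > 0 and -z_N[j] / z_bar_N[j] > lam:
            lam, kind, idx = -z_N[j] / z_bar_N[j], "nonbasic", j
    for i in range(len(x_B)):
        if x_bar_B[i] > 0 and -x_B[i] / x_bar_B[i] > lam:
            lam, kind, idx = -x_B[i] / x_bar_B[i], "basic", i
    return lam, kind, idx
```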
If the binding constraint corresponds to a nonbasic index $j$, we perform one step of the primal simplex method. In this case, we declare $j$ the entering variable, and we need to find the primal step direction $\Delta x_B$. After the entering variable $j$ has been selected, $x_N$ changes from $0$ to $t e_j$, where $t$ is the primal step length. According to (3.7), we then have $x_B = (x_B^* + \lambda\bar{x}_B) - A_B^{-1}A_N\, t e_j$, so the step direction is $\Delta x_B = A_B^{-1}A_N e_j$. We next select the leaving variable. In order to maintain primal feasibility, we need to keep $x_B \ge 0$; therefore, the leaving variable $i$ is selected as the index $i \in B$ that achieves the maximal value of $\Delta x_i / (x_i^* + \lambda\bar{x}_i)$. It only remains to show how $z_N$ changes. Since $i$ is the leaving variable, according to (3.8) we have $\Delta z_N = -(A_B^{-1}A_N)^\top e_i$. Once the entering and leaving variables and the primal and dual step directions are known, the primal and dual step lengths are given by

$$t = \frac{x_i^*}{\Delta x_i},\qquad \bar{t} = \frac{\bar{x}_i}{\Delta x_i},\qquad s = \frac{z_j^*}{\Delta z_j},\qquad \bar{s} = \frac{\bar{z}_j}{\Delta z_j}.$$

If, on the other hand, the constraint in (3.9) corresponds to a basic index $i$, we declare $i$ the leaving variable, and a similar calculation can be made based on the dual simplex method (the primal simplex method applied to the dual problem). Since it is very similar to the primal simplex method, we omit the detailed description.

The algorithm terminates whenever $\lambda^* \le 0$. The corresponding solution is optimal, since our dictionary always satisfies primal feasibility, dual feasibility and the complementary slackness condition. The only concern during the entire process of the parametric simplex method is whether $\lambda$ can be driven to zero; as soon as $\lambda$ can be set to zero, we have the optimal solution to the original problem. We summarize the parametric simplex method in Algorithm 1. The following theorem shows that the updated basic and nonbasic partition gives the optimal solution.

**Theorem 3.1.** For a given dictionary with parameter $\lambda$ in the form of (3.7) and (3.8), let $B$ be a basic index set and $N$ a nonbasic index set. Assume this dictionary is optimal for $\lambda \in [\lambda^*, \lambda^{\max}]$, where $\lambda^*$ and $\lambda^{\max}$ are given by (3.10) and (3.11), respectively. Then the dictionary obtained after the swap, with the updated basic and nonbasic index sets, is still optimal at $\lambda = \lambda^*$.

**Algorithm 1: The parametric simplex method**

Write down the dictionary as in (3.7) and (3.8); find $\lambda^*$ given by (3.10);
**while** $\lambda^* > 0$ **do**
 **if** the constraint in (3.10) corresponds to an index $j \in N$ **then**
  Declare $x_j$ as the entering variable;
  Compute the primal step direction $\Delta x_B = A_B^{-1}A_N e_j$;
  Select the leaving variable: find $i \in B$ that achieves the maximal value of $\Delta x_i/(x_i^* + \lambda^*\bar{x}_i)$;
  Compute the dual step direction $\Delta z_N = -(A_B^{-1}A_N)^\top e_i$;
 **else if** the constraint in (3.10) corresponds to an index $i \in B$ **then**
  Declare $z_i$ as the leaving variable;
  Compute the dual step direction $\Delta z_N = -(A_B^{-1}A_N)^\top e_i$;
  Select the entering variable: find $j \in N$ that achieves the maximal value of $\Delta z_j/(z_j^* + \lambda^*\bar{z}_j)$;
  Compute the primal step direction $\Delta x_B = A_B^{-1}A_N e_j$;
 **end if**
 Compute the dual and primal step lengths for both the variables and the perturbations: $t = x_i^*/\Delta x_i$, $\bar{t} = \bar{x}_i/\Delta x_i$, $s = z_j^*/\Delta z_j$, $\bar{s} = \bar{z}_j/\Delta z_j$;
 Update the primal and dual solutions: $x_j^* \leftarrow t$, $\bar{x}_j \leftarrow \bar{t}$, $z_i^* \leftarrow s$, $\bar{z}_i \leftarrow \bar{s}$, $x_B^* \leftarrow x_B^* - t\,\Delta x_B$, $\bar{x}_B \leftarrow \bar{x}_B - \bar{t}\,\Delta x_B$, $z_N^* \leftarrow z_N^* - s\,\Delta z_N$, $\bar{z}_N \leftarrow \bar{z}_N - \bar{s}\,\Delta z_N$;
 Update the basic and nonbasic index sets $B \leftarrow (B \setminus \{i\}) \cup \{j\}$ and $N \leftarrow (N \setminus \{j\}) \cup \{i\}$;
 Write down the new dictionary and compute $\lambda^*$ given by (3.10);
**end while**
Set the nonbasic variables to $0$ and read off the values of the basic variables.

During each iteration, there is an optimal solution corresponding to $\lambda \in [\lambda^*, \lambda^{\max}]$.
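To make Algorithm 1 concrete, here is a compact, dense sketch of the whole pivot loop for the equality form (3.6) (maximization, as in (3.1)). It recomputes the dictionary from scratch at every pivot rather than applying the update formulas, ignores degeneracy and numerical safeguards, and assumes the caller supplies an initial basis whose dictionary is primal and dual feasible for large $\lambda$. It is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def parametric_simplex(A, b, b_bar, c, c_bar, basis, max_pivots=10000):
    """Sketch of Algorithm 1 for  max (c + lam*c_bar)'x  s.t.  Ax = b + lam*b_bar, x >= 0.
    Returns the breakpoints [(lam_star, x), ...] of the piecewise-linear solution path."""
    n = A.shape[1]
    B = list(basis)
    N = [j for j in range(n) if j not in B]
    path = []
    for _ in range(max_pivots):
        A_B_inv = np.linalg.inv(A[:, B])
        M = A_B_inv @ A[:, N]                       # A_B^{-1} A_N
        x_star, x_bar = A_B_inv @ b, A_B_inv @ b_bar
        z_star = M.T @ c[B] - c[N]
        z_bar = M.T @ c_bar[B] - c_bar[N]
        # Ratio test (3.10): smallest lambda for which this dictionary stays optimal.
        lam, pivot = 0.0, None
        for k in range(len(N)):
            if z_bar[k] > 0 and -z_star[k] / z_bar[k] > lam:
                lam, pivot = -z_star[k] / z_bar[k], ("nonbasic", k)
        for k in range(len(B)):
            if x_bar[k] > 0 and -x_star[k] / x_bar[k] > lam:
                lam, pivot = -x_star[k] / x_bar[k], ("basic", k)
        x = np.zeros(n)
        x[B] = x_star + lam * x_bar                 # optimal solution at lambda = lam
        path.append((lam, x))
        if pivot is None:                           # dictionary optimal down to lambda = 0
            return path
        if pivot[0] == "nonbasic":                  # primal pivot: x_{N[j]} enters
            j = pivot[1]
            dx = M[:, j]                            # primal step direction
            if not np.any(dx > 1e-12):
                raise RuntimeError("problem unbounded along the path")
            ratios = np.where(dx > 1e-12, dx / np.maximum(x_star + lam * x_bar, 1e-12), -np.inf)
            i = int(np.argmax(ratios))              # leaving variable x_{B[i]}
        else:                                       # dual pivot: x_{B[i]} leaves
            i = pivot[1]
            dz = -M[i, :]                           # dual step direction
            if not np.any(dz > 1e-12):
                raise RuntimeError("problem infeasible along the path")
            ratios = np.where(dz > 1e-12, dz / np.maximum(z_star + lam * z_bar, 1e-12), -np.inf)
            j = int(np.argmax(ratios))              # entering variable x_{N[j]}
        B[i], N[j] = N[j], B[i]                     # pivot: swap the pair of indices
    raise RuntimeError("maximum number of pivots exceeded")
```

For a minimization problem such as (2.4), one would pass $-c$ and $-\bar{c}$; the returned breakpoints are then the left endpoints $\lambda^*$ of the optimality intervals described above.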
Each of these $\lambda$ intervals is determined by a partition of the variables into basic and nonbasic sets, and the number of such partitions is finite. Thus, after finitely many steps, we must have found the optimal solutions corresponding to all values of $\lambda$.

**Theory**: We present our theoretical analysis of solving the Dantzig selector with PSM. Specifically, given $X \in \mathbb{R}^{n\times d}$ and $y \in \mathbb{R}^n$, we consider a linear model $y = X\theta^* + \epsilon$, where $\theta^*$ is the unknown sparse regression coefficient vector with $\|\theta^*\|_0 = s^*$, and $\epsilon \sim N(0, \sigma^2 I_n)$. We show that PSM always maintains a pair of sparse primal and dual solutions. Therefore, the computational cost within each iteration of PSM can be significantly reduced. Before we proceed with our main result, we introduce two assumptions. The first assumption requires the regularization factor to be sufficiently large.

**Assumption 3.2.** Suppose that PSM solves (2.3) for a regularization sequence $\{\lambda_K\}_{K=0}^N$. The smallest regularization factor $\lambda_N$ satisfies $\lambda_N = C\sigma\sqrt{n\log d} \ge 4\|X^\top\epsilon\|_\infty$ for some generic constant $C$.

Existing literature has extensively studied Assumption 3.2 for high dimensional statistical theories. Such an assumption enforces all regularization parameters to be sufficiently large in order to eliminate irrelevant coordinates along the regularization path. Note that Assumption 3.2 is deterministic for any given $\lambda_N$. Existing literature has verified that, for sparse linear regression models with $\epsilon \sim N(0, \sigma^2 I_n)$, Assumption 3.2 holds with overwhelming probability.

Before we present the second assumption, we define the largest and smallest $s$-sparse eigenvalues of $n^{-1}X^\top X$ as follows.

**Definition 3.3.** Given an integer $s \ge 1$, the largest and smallest $s$-sparse eigenvalues of $n^{-1}X^\top X$ are

$$\rho_+(s) = \sup_{\|\delta\|_0 \le s} \frac{\delta^\top X^\top X\delta}{n\|\delta\|_2^2} \quad \text{and} \quad \rho_-(s) = \inf_{\|\delta\|_0 \le s} \frac{\delta^\top X^\top X\delta}{n\|\delta\|_2^2}.$$

**Assumption 3.4.** Given $\|\theta^*\|_0 \le s^*$, there exists an integer $\tilde{s}$ such that $\tilde{s} \ge 100\kappa s^*$, $\rho_+(s^* + \tilde{s}) < +\infty$, and $\rho_-(s^* + \tilde{s}) > 0$, where $\kappa$ is defined as $\kappa = \rho_+(s^* + \tilde{s})/\rho_-(s^* + \tilde{s})$.

Assumption 3.4 guarantees that $n^{-1}X^\top X$ satisfies the sparse eigenvalue conditions as long as the number of active irrelevant coordinates never exceeds $\tilde{s}$ along the solution path. This is closely related to the restricted isometry property (RIP) and restricted eigenvalue (RE) conditions, which have been extensively studied in existing literature. We then characterize the sparsity of the primal and dual solutions within each iteration.

**Theorem 3.5** (Primal and Dual Sparsity). Suppose that Assumptions 3.2 and 3.4 hold. We consider an alternative formulation of the Dantzig selector,

$$\hat{\theta}^{\lambda'} = \mathop{\mathrm{argmin}}_\theta\ \|\theta\|_1 \quad \text{subject to}\quad \nabla_j\mathcal{L}(\theta) \le \lambda',\ \ -\nabla_j\mathcal{L}(\theta) \le \lambda' \ \text{for all } j. \tag{3.12}$$

Let $\hat{\mu}^{\lambda'} = [\hat{\mu}^{\lambda'}_1, \ldots, \hat{\mu}^{\lambda'}_d]^\top$ and $\hat{\gamma}^{\lambda'} = [\hat{\gamma}^{\lambda'}_{d+1}, \ldots, \hat{\gamma}^{\lambda'}_{2d}]^\top$ denote the optimal dual variables of (3.12). For any $\lambda' \ge \lambda_N$, we have $\|\hat{\mu}^{\lambda'}\|_0 + \|\hat{\gamma}^{\lambda'}\|_0 \le s^* + \tilde{s}$. Moreover, if the design matrix satisfies $\|X_{\bar{S}}^\top X_S(X_S^\top X_S)^{-1}\|_\infty \le 1 - \eta$, where $\eta > 0$ is a generic constant, $S = \{j \mid \theta^*_j \neq 0\}$ and $\bar{S} = \{j \mid \theta^*_j = 0\}$, then we have $\|\hat{\theta}^{\lambda'}\|_0 \le s^*$.

The proof of Theorem 3.5 is provided in Appendix B. Theorem 3.5 shows that within each iteration both the primal and dual variables are sparse, i.e., the number of nonzero entries is far smaller than $d$. Therefore, the computational cost within each iteration of PSM can be reduced by a factor of $O(d/s^*)$. This partially justifies the superior performance of PSM in sparse learning.

## 4 Numerical Experiments

In this section, we present numerical experiments and give some insight into how the parametric simplex method solves different linear programming problems. We verify the following assertions: (1) The parametric simplex method requires very few iterations to identify the nonzero components if the original problem is sparse.
(2) The parametric simplex method is able to find the full solution path with high precision by solving the problem only once, in an efficient and scalable manner. (3) The parametric simplex method maintains feasibility up to machine precision along the solution path.

[Figure 1: Dantzig selector method: (a) the solution path of the parametric simplex method (estimated path versus the true nonzero entries of the response vector); (b) the parameter path of the parametric simplex method (rescaled by $n$); (c) feasibility violation along the solution path.]

**Solution path of the Dantzig selector**: We start with a simple example that illustrates how the recovered solution path of the Dantzig selector model changes as the parametric simplex method iterates. We adopt the example used in Candes and Tao (2007). The design matrix $X$ has $n = 100$ rows and $d = 250$ columns. The entries of $X$ are generated from an array of independent Gaussian random variables and are then normalized so that each column has the same given norm. We randomly select $s = 8$ entries of the coefficient vector $\theta^0$ and set them as $\theta^0_i = s_i(1 + a_i)$, where $s_i = 1$ or $-1$ with probability $1/2$ and $a_i \sim N(0, 1)$. The other entries of $\theta^0$ are set to zero. We form $y = X\theta^0 + \epsilon$, where $\epsilon_i \sim N(0, \sigma^2)$ with $\sigma = 1$. We stop the parametric simplex method when $\lambda \le \sigma\sqrt{n\log d}/n$. The solution path of the result is shown in Figure 1(a). We see that our method correctly identifies all nonzero entries of $\theta$ in fewer than 10 iterations. Some small overestimations occur in a few iterations after all nonzero entries have been identified. We also show how the parameter $\lambda$ evolves as the parametric simplex method iterates in Figure 1(b): $\lambda$ decreases sharply to less than 5 after all nonzero components have been identified. This agrees with the theory we developed. The algorithm itself only requires a very small number of iterations to correctly identify the nonzero entries of $\theta$; in our example, each iteration of the parametric simplex method identifies one or two nonzero entries of $\theta$.

**Feasibility of the Dantzig Selector**: Another advantage of the parametric simplex method is that the solution is always feasible along the path, whereas other estimation methods usually generate infeasible solutions along the path. We compare our algorithm with the flare package (Li et al., 2015), which uses the alternating direction method of multipliers (ADMM), on the same example described above. We compute the values of $\|X^\top X\hat{\theta}^i - X^\top y\|_\infty - \lambda^i$ along the solution path, where $\hat{\theta}^i$ is the $i$-th basic solution (with corresponding $\lambda^i$) obtained while the parametric simplex method iterates. As expected, we always obtain $0$ at every iteration. We plug the same list of $\lambda^i$ values into flare and compute the solution path for this list as well. As shown in Table 1, the parametric simplex method is always feasible along the path, since it solves each iteration up to machine precision, while the solution path of ADMM almost always violates feasibility by a large amount, especially in the first few iterations, which correspond to large $\lambda$ values. Each experiment is repeated 100 times.
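For reference, the synthetic design used in the two Dantzig selector experiments above can be generated along the following lines (a sketch of our reading of the setup; the random seed and the unit-norm column normalization are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, sigma = 100, 250, 8, 1.0

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=0)                 # normalize every column to the same norm

theta0 = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
signs = rng.choice([-1.0, 1.0], size=s)
theta0[support] = signs * (1.0 + rng.standard_normal(s))   # theta_i = s_i * (1 + a_i)

y = X @ theta0 + sigma * rng.standard_normal(n)

lam_stop = sigma * np.sqrt(n * np.log(d)) / n  # stop sweeping lambda below this level
```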
Table 1: Average feasibility violation (with standard errors) along the solution path

|      | Maximum violation | Minimum violation |
|------|-------------------|-------------------|
| ADMM | 498(122)          | 143(73.2)         |
| PSM  | 0(0)              | 0(0)              |

**Performance benchmark of the Dantzig selector**: In this part, we compare the timing performance of our algorithm with the R package flare. We fix the sample size $n$ at 200 and vary the data dimension $d$ from 100 to 5000. Again, each entry of $X$ is an independent Gaussian, and the matrix is normalized so that each column has the same norm. We randomly select 2% of the entries of the vector $\theta$ to be nonzero, each drawn from $N(0, 1)$. We compute $y = X\theta + \epsilon$ with $\epsilon_i \sim N(0, 1)$ and try to recover the vector $\theta$ given $X$ and $y$. Our method stops when $\lambda$ is less than $2\sigma\sqrt{\log d/n}$, so that the full solution path for all values of $\lambda$ down to this value is computed by the parametric simplex method. In flare, we estimate $\theta$ only at this single value of $\lambda$ in the Dantzig selector model, so flare has a much smaller computational task than the parametric simplex method. As shown in Table 2, our method nevertheless performs much better than flare in terms of speed. We report the timing performance of the two algorithms in seconds, and each experiment is repeated 100 times. In practice, only very few iterations are required when the response vector is sparse.

Table 2: Average timing performance (in seconds) with standard errors in parentheses on the Dantzig selector

| $d$   | 500         | 1000       | 2000       | 5000      |
|-------|-------------|------------|------------|-----------|
| Flare | 19.5(2.72)  | 44.4(2.54) | 142(11.5)  | 1500(231) |
| PSM   | 2.40(0.220) | 29.7(1.39) | 47.5(2.27) | 649(89.8) |

**Performance benchmark of differential network estimation**: We now apply this optimization method to the differential network model. We need the difference between the two inverse covariance matrices to be sparse. We generate $\Sigma^0_X = U^\top\Lambda U$, where $\Lambda \in \mathbb{R}^{d\times d}$ is a diagonal matrix whose entries are i.i.d. uniform on $[1, 2]$, and $U \in \mathbb{R}^{d\times d}$ is a random matrix with i.i.d. entries from $N(0, 1)$. Let $D_1 \in \mathbb{R}^{d\times d}$ be a random sparse symmetric matrix with a certain sparsity level, whose nonzero entries are i.i.d. from $N(0, 1)$. We set $D = D_1 + 2|\lambda_{\min}(D_1)|I_d$ in order to guarantee the positive definiteness of $D$, where $\lambda_{\min}(D_1)$ is the smallest eigenvalue of $D_1$. Finally, we let $(\Sigma^0_Y)^{-1} = (\Sigma^0_X)^{-1} + D$, so that the difference between the two precision matrices is the sparse matrix $D$.

We then generate data of sample size $n = 100$ and compute the corresponding sample covariance matrices $S_X$ and $S_Y$. We were not able to find other software that can efficiently solve this problem, so we only list the timing performance of our algorithm as the dimension $d$ varies from 25 to 200 in Table 3. We stop our algorithm when the solution achieves the desired sparsity level. When $d = 25$, 50 and 100, the sparsity level of $D_1$ is set to 0.02, and when $d = 150$ and 200, the sparsity level of $D_1$ is set to 0.002. Each experiment is repeated 100 times.

Table 3: Average timing performance (in seconds) and iteration numbers with standard errors in parentheses on differential network estimation

| $d$              | 25              | 50           | 100        | 150         | 200         |
|------------------|-----------------|--------------|------------|-------------|-------------|
| Timing           | 0.0185(0.00689) | 0.376(0.124) | 6.81(2.38) | 13.41(1.26) | 46.88(7.24) |
| Iteration number | 15.5(7.00)      | 55.3(18.8)   | 164(58.2)  | 85.8(16.7)  | 140(26.2)   |

## References

Bandyopadhyaya, S., Mehta, M., Kuo, D., Sung, M.-K., Chuang, R., Jaehnig, E. J., Bodenmiller, B., Licon, K., Copeland, W., Shales, M., Fiedler, D., Dutkowski, J., Guénolé, A., van Attikum, H., Shokat, K. M., Kolodner, R. D., Huh, W.-K., Aebersold, R., Keogh, M.-C. and Krogan, N. J. (2010). Rewiring of genetic networks in response to DNA damage. Science Signaling 330 1385–1389.

Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications.
Springer Science & Business Media.

Cai, T. and Liu, W. (2011). A direct estimation approach to sparse linear discriminant analysis. Journal of the American Statistical Association 106 1566–1578.

Cai, T., Liu, W. and Luo, X. (2011). A constrained $\ell_1$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 594–607.

Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics 35 2313–2351.

Danaher, P., Wang, P. and Witten, D. M. (2013). The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society, Series B 7 373–397.

Dantzig, G. (1951). Linear Programming and Extensions. Princeton University Press.

Dempster, A. (1972). Covariance selection. Biometrics 28 157–175.

Gai, Y., Zhu, L. and Lin, L. (2013). Model selection consistency of the Dantzig selector. Statistica Sinica 615–634.

Hudson, N. J., Reverter, A. and Dalrymple, B. P. (2009). A differential wiring analysis of expression data correctly identifies the gene containing the causal mutation. PLoS Computational Biology 5.

Ideker, T. and Krogan, N. (2012). Differential network biology. Molecular Systems Biology 5.

Li, X., Zhao, T., Yuan, X. and Liu, H. (2015). The flare package for high dimensional linear regression and precision matrix estimation in R. Journal of Machine Learning Research 16 553–557.

Murty, K. (1983). Linear Programming. Wiley, New York, NY.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society 101 267–288.

Vanderbei, R. (1995). Linear Programming, Foundations and Extensions. Kluwer.

Wang, H., Li, G. and Jiang, G. (2007). Robust regression shrinkage and consistent variable selection through the LAD-Lasso. Journal of Business & Economic Statistics 25 347–355.

Yao, Y. and Lee, Y. (2014). Another look at linear programming for feature selection via methods of regularization. Statistics and Computing 24 885–905.

Zhao, S. D., Cai, T. and Li, H. (2013). Direct estimation of differential networks. Biometrika 58.

Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society 67 301–320.

Zhu, J., Rosset, S., Hastie, T. and Tibshirani, R. (2004). 1-norm support vector machines. Advances in Neural Information Processing Systems 16.