Journal of Machine Learning Research 16 (2015) 1063-1101. Submitted 1/14; Revised 9/14; Published 5/15.

Lasso Screening Rules via Dual Polytope Projection

Jie Wang (jwangumi@umich.edu), Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109-2218, USA

Peter Wonka (peter.wonka@asu.edu), Department of Computer Science and Engineering, Arizona State University, Tempe, AZ 85287-8809, USA

Jieping Ye (jpye@umich.edu), Department of Computational Medicine and Bioinformatics, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109-2218, USA

Editor: Hui Zou

Abstract

Lasso is a widely used regression technique to find sparse representations. When the dimension of the feature space and the number of samples are extremely large, solving the Lasso problem remains challenging. To improve the efficiency of solving large-scale Lasso problems, El Ghaoui and his colleagues have proposed the SAFE rules, which are able to quickly identify the inactive predictors, i.e., predictors that have 0 components in the solution vector. The inactive predictors or features can then be removed from the optimization problem to reduce its scale. By transforming the standard Lasso to its dual form, it can be shown that the inactive predictors correspond to inactive constraints on the optimal dual solution. In this paper, we propose an efficient and effective screening rule via Dual Polytope Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual space is a closed and convex polytope. Moreover, we show that our screening rule can be extended to identify inactive groups in group Lasso. To the best of our knowledge, there is currently no exact screening rule for group Lasso. We have evaluated our screening rule using synthetic and real data sets. Results show that our rule is more effective in identifying inactive predictors than existing state-of-the-art screening rules for Lasso.

Keywords: lasso, safe screening, sparse regularization, polytope projection, dual formulation, large-scale optimization

1. Introduction

Data with various structures and scales comes from almost every aspect of daily life. It is always desirable to effectively extract patterns in the data and build interpretable models with high prediction accuracy. One popular technique to identify important explanatory features is sparse regularization. For instance, consider the widely used $\ell_1$-regularized least squares regression problem known as Lasso (Tibshirani, 1996). The most appealing property of Lasso is the sparsity of the solutions, which is equivalent to feature selection. Suppose we have $N$ observations and $p$ features. Let $y$ denote the $N$-dimensional response vector and $X = [x_1, x_2, \ldots, x_p]$ be the $N \times p$ feature matrix. Let $\lambda \geq 0$ be the regularization parameter. The Lasso problem is formulated as the following optimization problem:

$$\inf_{\beta \in \mathbb{R}^p} \frac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1. \quad (1)$$
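For readers who want to experiment, the following is a minimal sketch of solving (1) with an off-the-shelf solver. We assume scikit-learn is available; its Lasso estimator minimizes $\frac{1}{2N}\|y - X\beta\|_2^2 + \alpha\|\beta\|_1$, so setting $\alpha = \lambda/N$ matches the objective in (1).

```python
# A minimal sketch of solving the Lasso problem (1), assuming scikit-learn.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 100, 500
X = rng.standard_normal((N, p))
y = rng.standard_normal(N)

# The largest useful value of lambda is max_i |x_i^T y| (introduced as
# lambda_max later in the paper); pick some lambda below it.
lam = 0.5 * np.max(np.abs(X.T @ y))
# scikit-learn scales the quadratic loss by 1/(2N), hence alpha = lam / N.
model = Lasso(alpha=lam / N, fit_intercept=False, max_iter=100000)
beta = model.fit(X, y).coef_
print("nonzero coefficients:", np.count_nonzero(beta))
```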
Lasso has achieved great success in a wide range of applications (Chen et al., 2001; Candès, 2006; Zhao and Yu, 2006; Bruckstein et al., 2009; Wright et al., 2010), and in recent years many algorithms have been developed to efficiently solve the Lasso problem (Efron et al., 2004; Kim et al., 2007; Park and Hastie, 2007; Donoho and Tsaig, 2008; Friedman et al., 2007; Becker et al., 2010; Friedman et al., 2010). However, when the dimension of the feature space and the number of samples are very large, solving the Lasso problem remains challenging because we may not even be able to load the data matrix into main memory. The idea of screening has been shown to be very promising for solving large-scale Lasso problems. Essentially, screening aims to quickly identify the inactive features that have 0 components in the solution and then remove them from the optimization. We can therefore work on a reduced feature matrix to solve the Lasso problem, which may lead to substantial savings in computational cost and memory usage.

Existing screening methods for Lasso can be roughly divided into two categories: heuristic screening methods and safe screening methods. As the name indicates, heuristic screening methods cannot guarantee that the discarded features have zero coefficients in the solution vector. In other words, they may mistakenly discard active features which have nonzero coefficients in the sparse representations. Well-known heuristic screening methods for Lasso include SIS (Fan and Lv, 2008) and the strong rules (Tibshirani et al., 2012). SIS is based on the associations between features and the prediction task, but not on an optimization point of view. The strong rules rely on the assumption that the absolute values of the inner products between the features and the residual are nonexpansive (Bauschke and Combettes, 2011) with respect to the parameter values. Notice that, in real applications, this assumption is not always true. To ensure the correctness of the solutions, the strong rules check the KKT conditions for violations; in case of violations, they weaken the screened set and repeat this process. In contrast to the heuristic screening methods, the safe screening methods for Lasso guarantee that the discarded features are absent from the resulting sparse models. Existing safe screening methods for Lasso include SAFE (El Ghaoui et al., 2012) and DOME (Xiang et al., 2011), which are based on an estimation of the dual optimal solution. The key challenge in searching for effective safe screening rules is how to accurately estimate the dual optimal solution: the more accurate the estimation is, the more effective the resulting screening rule is in discarding inactive features. Moreover, Xiang et al. (2011) have shown that the SAFE rule for Lasso can be read as a special case of their testing rules.

In this paper, we develop novel efficient and effective screening rules for the Lasso problem; our screening rules are safe in the sense that no active features will be discarded. As the name Dual Polytope Projections (DPP) indicates, the proposed approaches heavily rely on the geometric properties of the Lasso problem. Indeed, the dual problem of problem (1) can be formulated as a projection problem: the dual optimal solution of the Lasso problem is the projection of the scaled response vector onto a nonempty closed and convex polytope (the feasible set of the dual problem).
This nice property provides us with many elegant approaches to accurately estimate the dual optimal solution, e.g., nonexpansiveness and firm nonexpansiveness (Bauschke and Combettes, 2011). In fact, the estimation of the dual optimal solution in DPP is a direct application of the nonexpansiveness of projection operators. Moreover, by further exploiting the properties of projection operators, we can significantly improve the estimation of the dual optimal solution. Based on this improved estimation, we develop the so-called enhanced DPP (EDPP) rules, which are able to detect far more inactive features than DPP. Therefore, the speedup gained by EDPP is much higher than that of DPP.

In real applications, the optimal value of the parameter $\lambda$ is generally unknown and needs to be estimated. To determine an appropriate value of $\lambda$, commonly used approaches such as cross validation and stability selection involve solving the Lasso problem over a grid of tuning parameters $\lambda_1 > \lambda_2 > \ldots > \lambda_K$, which can be very time consuming. To address this challenge, we develop sequential versions of the DPP family. Briefly speaking, for the Lasso problem, suppose we are given the solution $\beta^*(\lambda_{k-1})$ at $\lambda_{k-1}$. We then apply the screening rules to identify the inactive features of problem (1) at $\lambda_k$ by making use of $\beta^*(\lambda_{k-1})$. The idea of sequential screening rules was proposed by El Ghaoui et al. (2012) and Tibshirani et al. (2012) and has been shown to be very effective for the aforementioned scenario. In Tibshirani et al. (2012), the authors demonstrate that the sequential strong rules are very effective in discarding inactive features, especially for very small parameter values, and achieve state-of-the-art performance. However, in contrast to the recursive SAFE rules (the sequential version of SAFE) and the sequential version of the DPP rules, it is worthwhile to mention that the sequential strong rules may mistakenly discard active features because they are heuristic methods. Moreover, for the existing screening rules including SAFE and the strong rules, the basic versions are usually special cases of their sequential versions, and the same applies to our DPP and EDPP rules. For the DOME rule (Xiang et al., 2011), it is unclear whether a sequential version exists.

The rest of this paper is organized as follows. We present the family of DPP screening rules, i.e., DPP and EDPP, in detail for the Lasso problem in Section 2. Section 3 extends the idea of the DPP screening rules to identify inactive groups in group Lasso (Yuan and Lin, 2006). We evaluate our screening rules on synthetic and real data sets from many different applications. In Section 4, the experimental results demonstrate that our rules are more effective in discarding inactive features than existing state-of-the-art screening rules. We show that the efficiency of the solver can be improved by several orders of magnitude with the enhanced DPP rules, especially for high-dimensional data sets (notice that the screening methods can be integrated with any existing solver for the Lasso problem). Some concluding remarks are given in Section 5.

2. Screening Rules for Lasso via Dual Polytope Projections

In this section, we present the details of the proposed DPP and EDPP screening rules for the Lasso problem.
We first review some basics of the dual problem of Lasso, including its geometric properties, in Section 2.1; we also briefly discuss some basic guidelines for developing safe screening rules for Lasso. Based on the geometric properties discussed in Section 2.1, we then develop the basic DPP screening rule in Section 2.2. As a straightforward extension for dealing with model selection problems, we also develop the sequential version of the DPP rules. In Section 2.3, by exploiting more geometric properties of the dual problem of Lasso, we further improve the DPP rules by developing the so-called enhanced DPP (EDPP) rules. The EDPP screening rules significantly outperform the DPP rules in identifying the inactive features for the Lasso problem.

2.1 Basics and Motivations

Different from Xiang et al. (2011), we do not assume $y$ and all $x_i$ have unit length. The dual problem of problem (1) takes the following form (to make the paper self-contained, we provide the detailed derivation of the dual form in the appendix):

$$\sup_{\theta}\left\{\frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2 : |x_i^T\theta| \leq 1,\ i = 1, 2, \ldots, p\right\}, \quad (2)$$

where $\theta$ is the dual variable. For notational convenience, let the optimal solution of problem (2) be $\theta^*(\lambda)$ [recall that the optimal solution of problem (1) with parameter $\lambda$ is denoted by $\beta^*(\lambda)$]. Then, the KKT conditions are given by:

$$y = X\beta^*(\lambda) + \lambda\theta^*(\lambda), \quad (3)$$

$$x_i^T\theta^*(\lambda) \in \begin{cases} \operatorname{sign}([\beta^*(\lambda)]_i), & \text{if } [\beta^*(\lambda)]_i \neq 0, \\ [-1, 1], & \text{if } [\beta^*(\lambda)]_i = 0, \end{cases} \quad i = 1, \ldots, p, \quad (4)$$

where $[\cdot]_k$ denotes the $k$th component. In view of the KKT condition in (4), we have

$$|x_i^T\theta^*(\lambda)| < 1 \Rightarrow [\beta^*(\lambda)]_i = 0 \Rightarrow x_i \text{ is an inactive feature.} \quad (R1)$$

In other words, we can potentially make use of (R1) to identify the inactive features for the Lasso problem. However, since $\theta^*(\lambda)$ is generally unknown, we cannot directly apply (R1) to identify the inactive features. Inspired by the SAFE rules (El Ghaoui et al., 2012), we can first estimate a region $\Theta$ which contains $\theta^*(\lambda)$. Then, (R1) can be relaxed as follows:

$$\sup_{\theta \in \Theta}|x_i^T\theta| < 1 \Rightarrow [\beta^*(\lambda)]_i = 0 \Rightarrow x_i \text{ is an inactive feature.} \quad (R1')$$

Clearly, as long as we can find a region $\Theta$ which contains $\theta^*(\lambda)$, (R1') will lead to a screening rule to detect the inactive features for the Lasso problem. Moreover, in view of (R1) and (R1'), we can see that the smaller the region $\Theta$ is, the more accurate the estimation of $\theta^*(\lambda)$ is and, as a result, the more inactive features can be identified by the resulting screening rule.

The dual problem has interesting geometric interpretations. By a closer look at the dual problem (2), we can observe that the dual optimal solution is the feasible point which is closest to $y/\lambda$. For notational convenience, let the feasible set of problem (2) be $F$. Clearly, $F$ is the intersection of $2p$ closed half-spaces, and thus a closed and convex polytope. (Notice that $F$ is also nonempty since $0 \in F$.) In other words, $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the polytope $F$. Mathematically, for an arbitrary vector $w$ and a convex set $C$ in a Hilbert space $H$, let us define the projection operator as

$$P_C(w) = \operatorname{argmin}_{u \in C}\|u - w\|_2. \quad (5)$$

Then, the dual optimal solution $\theta^*(\lambda)$ can be expressed by

$$\theta^*(\lambda) = P_F\left(\frac{y}{\lambda}\right) = \operatorname{argmin}_{\theta \in F}\left\|\theta - \frac{y}{\lambda}\right\|_2. \quad (6)$$
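As a quick numerical illustration of (3), (6) and (R1), the sketch below (reusing `X`, `y`, `lam`, and `beta` from the scikit-learn snippet in Section 1) forms the dual point $\theta = (y - X\beta)/\lambda$ from the KKT condition (3) and verifies both its dual feasibility and the screening implication in (R1):

```python
# Sanity check of (3), (R1), and the feasibility of theta (a sketch).
theta = (y - X @ beta) / lam              # dual optimal solution by (3)
corr = np.abs(X.T @ theta)
print("dual feasible:", np.all(corr <= 1 + 1e-6))   # theta lies in F
inactive = corr < 1 - 1e-6                # strict inequality, as in (R1)
print("(R1) holds:", np.all(beta[inactive] == 0))
```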
Indeed, the nice property of problem (2) illustrated by (6) leads to many interesting results. For example, it is easy to see that $y/\lambda$ is an interior point of $F$ when $\lambda$ is large enough. If this is the case, we immediately have the following assertions: 1) $y/\lambda$ being an interior point of $F$ implies that none of the constraints of problem (2) is active at $y/\lambda$, i.e., $|x_i^T(y/\lambda)| < 1$ for all $i = 1, \ldots, p$; 2) $\theta^*(\lambda)$ is an interior point of $F$ as well, since $\theta^*(\lambda) = P_F(y/\lambda) = y/\lambda$ by (6) and the fact that $y/\lambda \in F$. Combining the results in 1) and 2), it is easy to see that $|x_i^T\theta^*(\lambda)| < 1$ for all $i = 1, \ldots, p$. By (R1), we can conclude that $\beta^*(\lambda) = 0$ under the assumption that $\lambda$ is large enough.

The above analysis naturally leads to a question: does there exist a specific parameter value $\lambda_{\max}$ such that the optimal solution of problem (1) is 0 whenever $\lambda > \lambda_{\max}$? The answer is affirmative. Indeed, let us define

$$\lambda_{\max} = \max_i |x_i^Ty|. \quad (7)$$

It is well known (Tibshirani et al., 2012) that $\lambda_{\max}$ defined by (7) is the smallest parameter value such that problem (1) has a trivial solution, i.e.,

$$\beta^*(\lambda) = 0, \quad \forall \lambda \in [\lambda_{\max}, \infty). \quad (8)$$

Combining the results in (8) and (3), we immediately have

$$\theta^*(\lambda) = \frac{y}{\lambda}, \quad \forall \lambda \in [\lambda_{\max}, \infty). \quad (9)$$

Therefore, throughout the rest of this paper, we will focus on the cases with $\lambda \in (0, \lambda_{\max})$. In the subsequent sections, we will follow (R1') to develop our screening rules. More specifically, the derivation of the proposed screening rules can be divided into the following three steps:

1. We first estimate a region $\Theta$ which contains the dual optimal solution $\theta^*(\lambda)$.
2. We solve the maximization problem in (R1'), i.e., $\sup_{\theta \in \Theta}|x_i^T\theta|$.
3. By plugging in the upper bound found in step 2, it is straightforward to develop a screening rule based on (R1').

The geometric property of the dual problem illustrated by (6) plays a fundamentally important role in developing our DPP and EDPP screening rules.

2.2 Fundamental Screening Rules via Dual Polytope Projections (DPP)

In this section, we propose the so-called DPP screening rules for discarding the inactive features for Lasso. As the name indicates, the idea of DPP heavily relies on the properties of projection operators, e.g., the nonexpansiveness (Bertsekas, 2003). We will follow the three steps stated in Section 2.1 to develop the DPP screening rules.

First, we need to find a region $\Theta$ which contains the dual optimal solution $\theta^*(\lambda)$. Indeed, the result in (9) provides an important clue: we may be able to estimate a possible region for $\theta^*(\lambda)$ in terms of a known $\theta^*(\lambda_0)$ with $\lambda < \lambda_0$. Notice that we can always set $\lambda_0 = \lambda_{\max}$ and make use of the fact that $\theta^*(\lambda_{\max}) = y/\lambda_{\max}$ implied by (9). Another key ingredient comes from (6), i.e., the dual optimal solution $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the feasible set $F$, which is nonempty, closed and convex. A nice property of projection operators defined in a Hilbert space with respect to a nonempty closed and convex set is the so-called nonexpansiveness. For convenience, we restate the definition of nonexpansiveness in the following theorem.

Theorem 1 Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Then the projection operator defined in (5) is continuous and nonexpansive, i.e.,

$$\|P_C(w_2) - P_C(w_1)\|_2 \leq \|w_2 - w_1\|_2, \quad \forall w_2, w_1 \in H. \quad (10)$$

In view of (6), a direct application of Theorem 1 leads to the following result:

Theorem 2 For the Lasso problem, let $\lambda, \lambda_0 > 0$ be two regularization parameters. Then,

$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2 \leq \left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2. \quad (11)$$

For notational convenience, let a ball centered at $c$ with radius $\rho$ be denoted by $B(c, \rho)$. Theorem 2 implies that the dual optimal solution must be inside a ball centered at $\theta^*(\lambda_0)$ with radius $|1/\lambda - 1/\lambda_0|\|y\|_2$, i.e.,

$$\theta^*(\lambda) \in B\left(\theta^*(\lambda_0), \left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2\right). \quad (12)$$

We have thus completed the first step for developing DPP. Because it is easy to find the upper bound of a linear functional over a ball, we combine the remaining two steps as follows.
Theorem 3 For the Lasso problem, assume that the dual optimal solution at $\lambda_0$, i.e., $\theta^*(\lambda_0)$, is known. Let $\lambda$ be a positive value different from $\lambda_0$. Then $[\beta^*(\lambda)]_i = 0$ if

$$|x_i^T\theta^*(\lambda_0)| < 1 - \left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2\|x_i\|_2.$$

Proof The dual optimal solution $\theta^*(\lambda)$ is estimated to be inside the ball given by (12). To simplify notation, let $c = \theta^*(\lambda_0)$ and $\rho = |1/\lambda - 1/\lambda_0|\|y\|_2$. To develop a screening rule based on (R1'), we need to solve the optimization problem $\sup_{\theta \in B(c,\rho)}|x_i^T\theta|$. Indeed, any $\theta \in B(c, \rho)$ can be expressed as $\theta = \theta^*(\lambda_0) + v$ with $\|v\|_2 \leq \rho$. Therefore, the optimization problem can be easily solved as follows:

$$\sup_{\theta \in B(c,\rho)}|x_i^T\theta| = \sup_{\|v\|_2 \leq \rho}|x_i^T(\theta^*(\lambda_0) + v)| = |x_i^T\theta^*(\lambda_0)| + \rho\|x_i\|_2. \quad (13)$$

By plugging the upper bound in (13) into (R1'), we obtain the statement in Theorem 3, which completes the proof.

Theorem 3 implies that we can develop applicable screening rules for Lasso as long as the dual optimal solution $\theta^*(\cdot)$ is known for a certain parameter value $\lambda_0$. By simply setting $\lambda_0 = \lambda_{\max}$ and noting that $\theta^*(\lambda_{\max}) = y/\lambda_{\max}$ from (9), Theorem 3 immediately leads to the following result.

Corollary 4 (Basic DPP) For the Lasso problem (1), let $\lambda_{\max} = \max_i |x_i^Ty|$. If $\lambda \geq \lambda_{\max}$, then $\beta^*(\lambda) = 0$. Otherwise, $[\beta^*(\lambda)]_i = 0$ if

$$\left|x_i^T\frac{y}{\lambda_{\max}}\right| < 1 - \left(\frac{1}{\lambda} - \frac{1}{\lambda_{\max}}\right)\|y\|_2\|x_i\|_2.$$

Remark 5 Notice that DPP is not the same as ST1 (Xiang et al., 2011) and SAFE (El Ghaoui et al., 2012), which discard the $i$th feature if

$$|x_i^Ty| < \lambda - \|x_i\|_2\|y\|_2\frac{\lambda_{\max} - \lambda}{\lambda_{\max}}. \quad (14)$$

From the perspective of the sphere test, the radii of ST1/SAFE and DPP are the same, but the centers of ST1 and DPP are $y/\lambda$ and $y/\lambda_{\max}$ respectively, which leads to different formulas, i.e., (14) and Corollary 4.

In real applications, the optimal parameter value of $\lambda$ is generally unknown and needs to be estimated. To determine an appropriate value of $\lambda$, commonly used approaches such as cross validation and stability selection involve solving the Lasso problem over a grid of tuning parameters $\lambda_1 > \lambda_2 > \ldots > \lambda_K$, which is very time consuming. Motivated by the ideas of Tibshirani et al. (2012) and El Ghaoui et al. (2012), we develop a sequential version of the DPP rules. We first apply the DPP screening rule in Corollary 4 to discard inactive features for the Lasso problem (1) with parameter $\lambda_1$. After solving the reduced optimization problem for $\lambda_1$, we obtain the exact solution $\beta^*(\lambda_1)$ and hence, by (3), $\theta^*(\lambda_1)$. According to Theorem 3, once we know the optimal dual solution $\theta^*(\lambda_1)$, we can construct a new screening rule by setting $\lambda_0 = \lambda_1$ to identify inactive features for problem (1) with parameter $\lambda_2$. By repeating the above process, we obtain the sequential version of the DPP rule in the following corollary.

Corollary 6 (Sequential DPP) For the Lasso problem (1), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_m$. Then for any integer $0 \leq k < m$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:

$$\left|x_i^T\frac{y - X\beta^*(\lambda_k)}{\lambda_k}\right| < 1 - \left(\frac{1}{\lambda_{k+1}} - \frac{1}{\lambda_k}\right)\|y\|_2\|x_i\|_2.$$
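To make the sequential rule concrete, here is a short numpy sketch of Corollary 6. The helper `solve_lasso` is hypothetical and stands in for any Lasso solver applied to the reduced problem; it is assumed to return the coefficients for the columns it is given.

```python
# A sketch of the sequential DPP rule (Corollary 6).
import numpy as np

def dpp_sequential(X, y, lambdas, solve_lasso):
    """lambdas: decreasing sequence with lambdas[0] = lambda_max = max_i |x_i^T y|."""
    norm_y = np.linalg.norm(y)
    norm_x = np.linalg.norm(X, axis=0)        # ||x_i||_2 for each feature
    beta = np.zeros(X.shape[1])               # beta*(lambda_max) = 0 by (8)
    solutions = []
    for k in range(1, len(lambdas)):
        lam_k, lam = lambdas[k - 1], lambdas[k]
        theta_k = (y - X @ beta) / lam_k      # dual optimal at lambda_k by (3)
        # Keep feature i unless the strict inequality in Corollary 6 holds.
        keep = np.abs(X.T @ theta_k) >= 1 - (1 / lam - 1 / lam_k) * norm_y * norm_x
        beta = np.zeros(X.shape[1])
        beta[keep] = solve_lasso(X[:, keep], y, lam)   # reduced problem
        solutions.append(beta.copy())
    return solutions
```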
Remark 7 From Corollaries 4 and 6, we can see that both the DPP and sequential DPP rules discard the inactive features for the Lasso problem at a smaller parameter value by assuming a known dual optimal solution at a larger parameter value. This is in fact a standard way to construct screening rules for Lasso (Tibshirani et al., 2012; El Ghaoui et al., 2012; Xiang et al., 2011).

Remark 8 For illustration purposes, we present both the basic and sequential versions of the DPP screening rules. However, it is easy to see that the basic DPP rule can be derived from its sequential version by simply setting $\lambda_k = \lambda_{\max}$ and $\lambda_{k+1} = \lambda$. Therefore, in this paper, we will focus on the development and evaluation of the sequential versions of the proposed screening rules. To avoid confusion, DPP and EDPP all refer to the corresponding sequential versions.

2.3 Enhanced DPP Rules for Lasso

In this section, we further improve the DPP rules presented in Section 2.2 by a more careful analysis of the projection operators. Indeed, from the three steps by which we develop the DPP rules, we can see that the first step is key. In other words, the estimation of the dual optimal solution plays a fundamentally important role in developing the DPP rules, and (R1') implies that the more accurate the estimation is, the more effective the resulting screening rule is in discarding the inactive features. The estimation of the dual optimal solution in the DPP rules is in fact a direct consequence of the nonexpansiveness of the projection operators. Therefore, in order to improve the performance of the DPP rules in discarding the inactive features, we propose two different approaches to find more accurate estimations of the dual optimal solution. These two approaches are presented in detail in Sections 2.3.1 and 2.3.2 respectively. By combining the ideas of these two approaches, we can further improve the estimation of the dual optimal solution. Based on this estimation, we develop the enhanced DPP rules (EDPP) in Section 2.3.3. Again, we will follow the three steps in Section 2.1 to develop the proposed screening rules.

2.3.1 Improving the DPP Rules via Projections of Rays

In the DPP screening rules, the dual optimal solution $\theta^*(\lambda)$ is estimated to be inside the ball $B(\theta^*(\lambda_0), |1/\lambda - 1/\lambda_0|\|y\|_2)$ with $\theta^*(\lambda_0)$ given. In this section, we show that $\theta^*(\lambda)$ lies inside a ball centered at $\theta^*(\lambda_0)$ with a smaller radius. Indeed, it is well known that the projection of an arbitrary point onto a nonempty closed convex set $C$ in a Hilbert space $H$ always exists and is unique (Bauschke and Combettes, 2011). However, the converse is not true, i.e., there may exist $w_1, w_2 \in H$ such that $w_1 \neq w_2$ and $P_C(w_1) = P_C(w_2)$. In fact, the following result holds:

Lemma 9 (Bauschke and Combettes, 2011) Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. For a point $w \in H$, let $w(t) = P_C(w) + t(w - P_C(w))$. Then, the projection of the point $w(t)$ is $P_C(w)$ for all $t \geq 0$, i.e.,

$$P_C(w(t)) = P_C(w), \quad \forall t \geq 0.$$

Clearly, when $w \neq P_C(w)$, i.e., $w \notin C$, $w(t)$ with $t \geq 0$ is the ray starting from $P_C(w)$ and pointing in the same direction as $w - P_C(w)$. By Lemma 9, we know that the projection of the ray $w(t)$, $t \geq 0$, onto the set $C$ is the single point $P_C(w)$. [When $w = P_C(w)$, i.e., $w \in C$, $w(t)$ with $t \geq 0$ becomes a single point and the statement in Lemma 9 is trivial.]

By making use of Lemma 9 and the nonexpansiveness of the projection operators, we can improve the estimation of the dual optimal solution in DPP [please refer to Theorem 2 and (12)]. More specifically, we have the following result:

Theorem 10 For the Lasso problem, suppose that the dual optimal solution $\theta^*(\cdot)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known. For any $\lambda \in (0, \lambda_0]$, let us define

$$v_1(\lambda_0) = \begin{cases} \dfrac{y}{\lambda_0} - \theta^*(\lambda_0), & \text{if } \lambda_0 \in (0, \lambda_{\max}), \\ \operatorname{sign}(x_*^Ty)\,x_*, & \text{if } \lambda_0 = \lambda_{\max}, \end{cases} \quad \text{where } x_* = \operatorname{argmax}_{x_i}|x_i^Ty|, \quad (15)$$

$$v_2(\lambda, \lambda_0) = \frac{y}{\lambda} - \theta^*(\lambda_0), \quad (16)$$

$$v_2^\perp(\lambda, \lambda_0) = v_2(\lambda, \lambda_0) - \frac{\langle v_1(\lambda_0), v_2(\lambda, \lambda_0)\rangle}{\|v_1(\lambda_0)\|_2^2}v_1(\lambda_0). \quad (17)$$
Then, the dual optimal solution $\theta^*(\lambda)$ can be estimated as follows:

$$\theta^*(\lambda) \in B\left(\theta^*(\lambda_0), \|v_2^\perp(\lambda, \lambda_0)\|_2\right) \subseteq B\left(\theta^*(\lambda_0), \left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2\right).$$

Proof By making use of Lemma 9, we present the proof of the statement for the case with $\lambda_0 \in (0, \lambda_{\max})$. We postpone the proof for the case with $\lambda_0 = \lambda_{\max}$ until after we introduce more general technical results. In view of the assumption $\lambda_0 \in (0, \lambda_{\max})$, it is easy to see that

$$\frac{y}{\lambda_0} - \theta^*(\lambda_0) \neq 0. \quad (18)$$

For each $\lambda_0 \in (0, \lambda_{\max})$, let us define

$$\theta_{\lambda_0}(t) = \theta^*(\lambda_0) + tv_1(\lambda_0) = \theta^*(\lambda_0) + t\left(\frac{y}{\lambda_0} - \theta^*(\lambda_0)\right), \quad t \geq 0. \quad (19)$$

By the result in (18), we can see that $\theta_{\lambda_0}(\cdot)$ defined by (19) is a ray which starts at $\theta^*(\lambda_0)$ and points in the same direction as $y/\lambda_0 - \theta^*(\lambda_0)$. In view of (6), a direct application of Lemma 9 yields

$$P_F(\theta_{\lambda_0}(t)) = \theta^*(\lambda_0), \quad \forall t \geq 0. \quad (20)$$

By applying Theorem 1 again, we have

$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2 = \left\|P_F\left(\frac{y}{\lambda}\right) - P_F(\theta_{\lambda_0}(t))\right\|_2 \leq \left\|\frac{y}{\lambda} - \theta_{\lambda_0}(t)\right\|_2 = \|v_2(\lambda, \lambda_0) - tv_1(\lambda_0)\|_2, \quad \forall t \geq 0. \quad (21)$$

Because the inequality in (21) holds for all $t \geq 0$, it is easy to see that

$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2 \leq \min_{t \geq 0}\|v_2(\lambda, \lambda_0) - tv_1(\lambda_0)\|_2 = \begin{cases} \|v_2(\lambda, \lambda_0)\|_2, & \text{if } \langle v_1(\lambda_0), v_2(\lambda, \lambda_0)\rangle < 0, \\ \|v_2^\perp(\lambda, \lambda_0)\|_2, & \text{otherwise.} \end{cases} \quad (22)$$

The inequality in (22) implies that, to prove the first half of the statement, i.e., $\theta^*(\lambda) \in B(\theta^*(\lambda_0), \|v_2^\perp(\lambda, \lambda_0)\|_2)$, we only need to show that $\langle v_1(\lambda_0), v_2(\lambda, \lambda_0)\rangle \geq 0$. Indeed, it is easy to see that $0 \in F$. Therefore, in view of (20), the distance between $\theta_{\lambda_0}(t)$ and $\theta^*(\lambda_0)$ must be no longer than that between $\theta_{\lambda_0}(t)$ and $0$ for all $t \geq 0$, i.e.,

$$\|\theta_{\lambda_0}(t) - \theta^*(\lambda_0)\|_2^2 \leq \|\theta_{\lambda_0}(t) - 0\|_2^2 \;\Rightarrow\; 0 \leq \|\theta^*(\lambda_0)\|_2^2 + 2t\left\langle\theta^*(\lambda_0), \frac{y}{\lambda_0} - \theta^*(\lambda_0)\right\rangle. \quad (23)$$

Since the inequality in (23) holds for all $t \geq 0$, we can conclude that

$$\left\langle\theta^*(\lambda_0), \frac{y}{\lambda_0} - \theta^*(\lambda_0)\right\rangle \geq 0 \;\Rightarrow\; \|y\|_2 \geq \lambda_0\|\theta^*(\lambda_0)\|_2. \quad (24)$$

Therefore, we can see that

$$\langle v_1(\lambda_0), v_2(\lambda, \lambda_0)\rangle = \left\langle\frac{y}{\lambda_0} - \theta^*(\lambda_0), \frac{y}{\lambda} - \frac{y}{\lambda_0}\right\rangle + \left\|\frac{y}{\lambda_0} - \theta^*(\lambda_0)\right\|_2^2 \quad (25)$$
$$\geq \left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)\left\langle\frac{y}{\lambda_0} - \theta^*(\lambda_0), y\right\rangle = \left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)\left(\frac{\|y\|_2^2}{\lambda_0} - \langle\theta^*(\lambda_0), y\rangle\right)$$
$$\geq \left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)\left(\frac{\|y\|_2^2}{\lambda_0} - \|\theta^*(\lambda_0)\|_2\|y\|_2\right) \geq 0.$$

The last inequality in (25) is due to the result in (24). Clearly, in view of (22) and (25), the first half of the statement holds, i.e., $\theta^*(\lambda) \in B(\theta^*(\lambda_0), \|v_2^\perp(\lambda, \lambda_0)\|_2)$. The second half of the statement, i.e., $B(\theta^*(\lambda_0), \|v_2^\perp(\lambda, \lambda_0)\|_2) \subseteq B(\theta^*(\lambda_0), |1/\lambda - 1/\lambda_0|\|y\|_2)$, can be easily obtained by noting that the inequality in (21) reduces to the one in (12) when $t = 1$. This completes the proof of the statement with $\lambda_0 \in (0, \lambda_{\max})$.

Before we present the proof of Theorem 10 for the case with $\lambda_0 = \lambda_{\max}$, let us briefly review some technical results from convex analysis.

Definition 11 (Ruszczyński, 2006) Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and $w \in C$. The set

$$N_C(w) := \{v : \langle v, u - w\rangle \leq 0,\ \forall u \in C\}$$

is called the normal cone to $C$ at $w$.

In terms of normal cones, the following theorem provides an elegant and useful characterization of the projections onto nonempty closed convex subsets of a Hilbert space.

Theorem 12 (Bauschke and Combettes, 2011) Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Then, for every $w \in H$ and $w_0 \in C$, $w_0$ is the projection of $w$ onto $C$ if and only if $w - w_0 \in N_C(w_0)$, i.e.,

$$w_0 = P_C(w) \Leftrightarrow \langle w - w_0, u - w_0\rangle \leq 0,\ \forall u \in C.$$

In view of the proof of Theorem 10, we can see that (20) is a key step. When $\lambda_0 = \lambda_{\max}$, similar to (19), let us define

$$\theta_{\lambda_{\max}}(t) = \theta^*(\lambda_{\max}) + tv_1(\lambda_{\max}), \quad t \geq 0. \quad (26)$$

By Theorem 12, the following lemma shows that (20) also holds for $\lambda_0 = \lambda_{\max}$.

Lemma 13 For the Lasso problem, let $v_1(\cdot)$ and $\theta_{\lambda_{\max}}(\cdot)$ be given by (15) and (26); then the following result holds:

$$P_F(\theta_{\lambda_{\max}}(t)) = \theta^*(\lambda_{\max}), \quad \forall t \geq 0. \quad (27)$$

Proof To prove the statement, Theorem 12 implies that we only need to show

$$\langle v_1(\lambda_{\max}), \theta - \theta^*(\lambda_{\max})\rangle \leq 0, \quad \forall\theta \in F. \quad (28)$$
Recall that $v_1(\lambda_{\max}) = \operatorname{sign}(x_*^Ty)x_*$ with $x_* = \operatorname{argmax}_{x_i}|x_i^Ty|$ from (15), and $\theta^*(\lambda_{\max}) = y/\lambda_{\max}$ from (9). It is easy to see that

$$\langle v_1(\lambda_{\max}), \theta^*(\lambda_{\max})\rangle = \left\langle\operatorname{sign}(x_*^Ty)x_*, \frac{y}{\lambda_{\max}}\right\rangle = \frac{|x_*^Ty|}{\lambda_{\max}} = 1. \quad (29)$$

Moreover, let $\theta$ be an arbitrary point of $F$. Then $|\langle x_*, \theta\rangle| \leq 1$, and thus

$$\langle v_1(\lambda_{\max}), \theta\rangle = \langle\operatorname{sign}(x_*^Ty)x_*, \theta\rangle \leq |\langle x_*, \theta\rangle| \leq 1. \quad (30)$$

Therefore, the inequality in (28) follows by combining the results in (29) and (30), which completes the proof.

We are now ready to give the proof of Theorem 10 for the case with $\lambda_0 = \lambda_{\max}$.

Proof In view of Theorem 1 and Lemma 13, we have

$$\|\theta^*(\lambda) - \theta^*(\lambda_{\max})\|_2 = \left\|P_F\left(\frac{y}{\lambda}\right) - P_F(\theta_{\lambda_{\max}}(t))\right\|_2 \leq \left\|\frac{y}{\lambda} - \theta_{\lambda_{\max}}(t)\right\|_2 = \|v_2(\lambda, \lambda_{\max}) - tv_1(\lambda_{\max})\|_2, \quad \forall t \geq 0. \quad (31)$$

Because the inequality in (31) holds for all $t \geq 0$, we can see that

$$\|\theta^*(\lambda) - \theta^*(\lambda_{\max})\|_2 \leq \min_{t \geq 0}\|v_2(\lambda, \lambda_{\max}) - tv_1(\lambda_{\max})\|_2 = \begin{cases} \|v_2(\lambda, \lambda_{\max})\|_2, & \text{if } \langle v_1(\lambda_{\max}), v_2(\lambda, \lambda_{\max})\rangle < 0, \\ \|v_2^\perp(\lambda, \lambda_{\max})\|_2, & \text{otherwise.} \end{cases} \quad (32)$$

Clearly, we only need to show that $\langle v_1(\lambda_{\max}), v_2(\lambda, \lambda_{\max})\rangle \geq 0$. Indeed, Lemma 13 implies that $v_1(\lambda_{\max}) \in N_F(\theta^*(\lambda_{\max}))$ [please refer to the inequality in (28)]. By noting that $0 \in F$, we have

$$\left\langle v_1(\lambda_{\max}), 0 - \frac{y}{\lambda_{\max}}\right\rangle \leq 0 \;\Rightarrow\; \langle v_1(\lambda_{\max}), y\rangle \geq 0.$$

Moreover, because $y/\lambda_{\max} = \theta^*(\lambda_{\max})$, it is easy to see that

$$\langle v_1(\lambda_{\max}), v_2(\lambda, \lambda_{\max})\rangle = \left\langle v_1(\lambda_{\max}), \frac{y}{\lambda} - \frac{y}{\lambda_{\max}}\right\rangle = \left(\frac{1}{\lambda} - \frac{1}{\lambda_{\max}}\right)\langle v_1(\lambda_{\max}), y\rangle \geq 0. \quad (33)$$

Therefore, in view of (32) and (33), we can see that the first half of the statement holds, i.e., $\theta^*(\lambda) \in B(\theta^*(\lambda_{\max}), \|v_2^\perp(\lambda, \lambda_{\max})\|_2)$. The second half of the statement, i.e., $B(\theta^*(\lambda_{\max}), \|v_2^\perp(\lambda, \lambda_{\max})\|_2) \subseteq B(\theta^*(\lambda_{\max}), |1/\lambda - 1/\lambda_{\max}|\|y\|_2)$, can be easily obtained by noting that the inequality in (32) reduces to the one in (12) when $t = 0$. This completes the proof of the statement with $\lambda_0 = \lambda_{\max}$, and thus the proof of Theorem 10.

Theorem 10 in fact provides a more accurate estimation of the dual optimal solution than the one in DPP, i.e., $\theta^*(\lambda)$ lies inside a ball centered at $\theta^*(\lambda_0)$ with radius $\|v_2^\perp(\lambda, \lambda_0)\|_2$. Based on this improved estimation and (R1'), we can develop the following screening rule to discard the inactive features for Lasso.

Theorem 14 For the Lasso problem, assume the dual optimal solution $\theta^*(\cdot)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known. Then, for each $\lambda \in (0, \lambda_0)$, we have $[\beta^*(\lambda)]_i = 0$ if

$$|x_i^T\theta^*(\lambda_0)| < 1 - \|v_2^\perp(\lambda, \lambda_0)\|_2\|x_i\|_2.$$

We omit the proof of Theorem 14 since it is very similar to that of Theorem 3. By Theorem 14, we can easily develop the following sequential screening rule.

Improvement 1: For the Lasso problem (1), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. Then for any integer $0 \leq k < K$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:

$$\left|x_i^T\frac{y - X\beta^*(\lambda_k)}{\lambda_k}\right| < 1 - \|v_2^\perp(\lambda_{k+1}, \lambda_k)\|_2\|x_i\|_2.$$

The screening rule in Improvement 1 is developed based on (R1') and the estimation of the dual optimal solution in Theorem 10, which is more accurate than the one in DPP. Therefore, in view of (R1'), the screening rule in Improvement 1 is more effective in discarding the inactive features than the DPP rule.
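The quantities (15)-(17) are what an implementation actually computes. The following small helper (a sketch; the function names are ours) returns $v_2^\perp(\lambda, \lambda_0)$, whose norm is the radius used by Improvement 1 and, later, by EDPP:

```python
import numpy as np

def v1_vec(X, y, theta0, lam0, lam_max):
    """v1(lambda_0) as in (15)."""
    if lam0 < lam_max:
        return y / lam0 - theta0
    i_star = np.argmax(np.abs(X.T @ y))       # x_* = argmax_i |x_i^T y|
    return np.sign(X[:, i_star] @ y) * X[:, i_star]

def v2_perp(X, y, theta0, lam, lam0, lam_max):
    """v2_perp(lambda, lambda_0) as in (16)-(17)."""
    v1 = v1_vec(X, y, theta0, lam0, lam_max)
    v2 = y / lam - theta0                     # (16)
    return v2 - (v1 @ v2) / (v1 @ v1) * v1    # (17): remove component along v1
```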
2.3.2 Improving the DPP Rules via Firm Nonexpansiveness

In Section 2.3.1, we improved the estimation of the dual optimal solution in DPP by making use of the projections of properly chosen rays; (R1') implies that the resulting screening rule stated in Improvement 1 is more effective in discarding the inactive features than DPP. In this section, we present another approach to improve the estimation of the dual optimal solution in DPP, by making use of the so-called firm nonexpansiveness of projections onto nonempty closed convex subsets of a Hilbert space.

Theorem 15 (Bauschke and Combettes, 2011) Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Then the projection operator defined in (5) is continuous and firmly nonexpansive, i.e., for any $w_1, w_2 \in H$, we have

$$\|P_C(w_1) - P_C(w_2)\|_2^2 + \|(\mathrm{Id} - P_C)(w_1) - (\mathrm{Id} - P_C)(w_2)\|_2^2 \leq \|w_1 - w_2\|_2^2, \quad (34)$$

where $\mathrm{Id}$ is the identity operator.

In view of the inequalities in (34) and (10), it is easy to see that firm nonexpansiveness implies nonexpansiveness, but the converse is not true. Therefore, firm nonexpansiveness of the projection operators is a stronger property than nonexpansiveness. A direct application of Theorem 15 leads to the following result.

Theorem 16 For the Lasso problem, let $\lambda, \lambda_0 > 0$ be two parameter values. Then

$$\theta^*(\lambda) \in B\left(\theta^*(\lambda_0) + \frac{1}{2}\left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)y,\ \frac{1}{2}\left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2\right) \subseteq B\left(\theta^*(\lambda_0), \left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2\right). \quad (35)$$

Proof In view of (6) and the firm nonexpansiveness in (34), we have

$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2^2 + \left\|\frac{y}{\lambda} - \theta^*(\lambda) - \left(\frac{y}{\lambda_0} - \theta^*(\lambda_0)\right)\right\|_2^2 \leq \left\|\frac{y}{\lambda} - \frac{y}{\lambda_0}\right\|_2^2 \quad (36)$$
$$\Rightarrow\; \|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2^2 \leq \left\langle\theta^*(\lambda) - \theta^*(\lambda_0), \left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)y\right\rangle$$
$$\Rightarrow\; \left\|\theta^*(\lambda) - \theta^*(\lambda_0) - \frac{1}{2}\left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)y\right\|_2 \leq \frac{1}{2}\left|\frac{1}{\lambda} - \frac{1}{\lambda_0}\right|\|y\|_2,$$

which completes the proof of the first half of the statement. The second half of the statement follows by noting that the first inequality in (36) (firm nonexpansiveness) implies the inequality in (11) (nonexpansiveness), but not vice versa. Indeed, it is easy to see that the first ball in (35) is inside the second one and has only half the radius.

Clearly, Theorem 16 provides a more accurate estimation of the dual optimal solution than the one in DPP, i.e., the dual optimal solution must be inside a ball which is a subset of the one in DPP and has only half the radius. Again, based on the estimation in Theorem 16 and (R1'), we have the following result.

Theorem 17 For the Lasso problem, assume that the dual optimal solution $\theta^*(\cdot)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known. Then, for each $\lambda \in (0, \lambda_0)$, we have $[\beta^*(\lambda)]_i = 0$ if

$$\left|x_i^T\left(\theta^*(\lambda_0) + \frac{1}{2}\left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)y\right)\right| < 1 - \frac{1}{2}\left(\frac{1}{\lambda} - \frac{1}{\lambda_0}\right)\|y\|_2\|x_i\|_2.$$

We omit the proof of Theorem 17 since it is very similar to the proof of Theorem 3. A direct application of Theorem 17 leads to the following sequential screening rule.

Improvement 2: For the Lasso problem (1), suppose that we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. Then for any integer $0 \leq k < K$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:

$$\left|x_i^T\left(\frac{y - X\beta^*(\lambda_k)}{\lambda_k} + \frac{1}{2}\left(\frac{1}{\lambda_{k+1}} - \frac{1}{\lambda_k}\right)y\right)\right| < 1 - \frac{1}{2}\left(\frac{1}{\lambda_{k+1}} - \frac{1}{\lambda_k}\right)\|y\|_2\|x_i\|_2.$$

Because the screening rule in Improvement 2 is developed based on (R1') and the estimation in Theorem 16, it is easy to see that Improvement 2 is more effective in discarding the inactive features than DPP.

2.3.3 The Proposed Enhanced DPP Rules

In Sections 2.3.1 and 2.3.2, we presented two different approaches to improve the estimation of the dual optimal solution in DPP. In view of (R1'), we can see that the resulting screening rules, i.e., Improvements 1 and 2, are more effective in discarding the inactive features than DPP. In this section, we give a more accurate estimation of the dual optimal solution than the ones in Theorems 10 and 16 by combining the aforementioned two approaches. The resulting screening rule for Lasso is the so-called enhanced DPP rule (EDPP). Again, (R1') implies that EDPP is more effective in discarding the inactive features than the screening rules in Improvements 1 and 2. We also present several experiments to demonstrate that EDPP is able to identify more inactive features than the screening rules in Improvements 1 and 2. Therefore, in the subsequent sections, we will focus on the generalizations and evaluations of EDPP. To develop the EDPP rules, we still follow the three steps in Section 2.1.
Indeed, by combining the two approaches proposed in Sections 2.3.1 and 2.3.2, we can further improve the estimation of the dual optimal solution, as stated in the following theorem.

Theorem 18 For the Lasso problem, suppose that the dual optimal solution $\theta^*(\cdot)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known; for any $\lambda \in (0, \lambda_0]$, let $v_2^\perp(\lambda, \lambda_0)$ be given by (17). Then, we have

$$\left\|\theta^*(\lambda) - \left(\theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0)\right)\right\|_2 \leq \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2.$$

Proof Recall that $\theta_{\lambda_0}(t)$ is defined by (19) and (26). In view of (34), we have

$$\left\|P_F\left(\frac{y}{\lambda}\right) - P_F(\theta_{\lambda_0}(t))\right\|_2^2 + \left\|(\mathrm{Id} - P_F)\left(\frac{y}{\lambda}\right) - (\mathrm{Id} - P_F)(\theta_{\lambda_0}(t))\right\|_2^2 \leq \left\|\frac{y}{\lambda} - \theta_{\lambda_0}(t)\right\|_2^2. \quad (37)$$

By expanding the second term on the left hand side of (37) and rearranging terms, we obtain the equivalent form

$$\left\|P_F\left(\frac{y}{\lambda}\right) - P_F(\theta_{\lambda_0}(t))\right\|_2^2 \leq \left\langle\frac{y}{\lambda} - \theta_{\lambda_0}(t),\ P_F\left(\frac{y}{\lambda}\right) - P_F(\theta_{\lambda_0}(t))\right\rangle. \quad (38)$$

In view of (6), (20) and (27), the inequality in (38) can be rewritten as

$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2^2 \leq \left\langle\frac{y}{\lambda} - \theta_{\lambda_0}(t),\ \theta^*(\lambda) - \theta^*(\lambda_0)\right\rangle = \langle v_2(\lambda, \lambda_0) - tv_1(\lambda_0),\ \theta^*(\lambda) - \theta^*(\lambda_0)\rangle, \quad \forall t \geq 0. \quad (39)$$

[Recall that $v_1(\lambda_0)$ and $v_2(\lambda, \lambda_0)$ are defined by (15) and (16) respectively.] Clearly, the inequality in (39) is equivalent to

$$\left\|\theta^*(\lambda) - \left(\theta^*(\lambda_0) + \frac{1}{2}(v_2(\lambda, \lambda_0) - tv_1(\lambda_0))\right)\right\|_2^2 \leq \frac{1}{4}\|v_2(\lambda, \lambda_0) - tv_1(\lambda_0)\|_2^2, \quad \forall t \geq 0. \quad (40)$$

The statement follows easily by minimizing the right hand side of the inequality in (40) over $t \geq 0$, which has been done in the proof of Theorem 10.

Indeed, Theorem 18 is equivalent to bounding $\theta^*(\lambda)$ in a ball as follows:

$$\theta^*(\lambda) \in B\left(\theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0),\ \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2\right). \quad (41)$$

Based on this estimation and (R1'), we immediately have the following result.

Theorem 19 For the Lasso problem, assume that the dual optimal solution $\theta^*(\cdot)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known, and $\lambda \in (0, \lambda_0]$. Then $[\beta^*(\lambda)]_i = 0$ if the following holds:

$$\left|x_i^T\left(\theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0)\right)\right| < 1 - \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2\|x_i\|_2.$$

We omit the proof of Theorem 19 since it is very similar to that of Theorem 3. Based on Theorem 19, we can develop the EDPP rules as follows.

Corollary 20 (EDPP) For the Lasso problem, suppose that we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. Then for any integer $0 \leq k < K$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:

$$\left|x_i^T\left(\frac{y - X\beta^*(\lambda_k)}{\lambda_k} + \frac{1}{2}v_2^\perp(\lambda_{k+1}, \lambda_k)\right)\right| < 1 - \frac{1}{2}\|v_2^\perp(\lambda_{k+1}, \lambda_k)\|_2\|x_i\|_2. \quad (42)$$

It is easy to see that the ball in (41) has the smallest radius compared to the ones in Theorems 10 and 16, and thus it provides the most accurate estimation of the dual optimal solution. According to (R1'), EDPP is more effective in discarding the inactive features than DPP and Improvements 1 and 2.
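Putting the pieces together, a sketch of the sequential EDPP rule (42) looks as follows; it reuses `v2_perp` from the sketch in Section 2.3.1, and `solve_lasso` is again a hypothetical solver for the reduced problem.

```python
def edpp_sequential(X, y, lambdas, solve_lasso):
    """A sketch of the sequential EDPP rule (Corollary 20)."""
    lam_max = np.max(np.abs(X.T @ y))
    norm_x = np.linalg.norm(X, axis=0)
    beta = np.zeros(X.shape[1])               # beta*(lambda_max) = 0
    solutions = []
    for k in range(1, len(lambdas)):
        lam_k, lam = lambdas[k - 1], lambdas[k]
        theta_k = (y - X @ beta) / lam_k      # dual optimal at lambda_k by (3)
        v2p = v2_perp(X, y, theta_k, lam, lam_k, lam_max)
        center = theta_k + 0.5 * v2p          # ball center in (41)
        radius = 0.5 * np.linalg.norm(v2p)    # ball radius in (41)
        keep = np.abs(X.T @ center) >= 1 - radius * norm_x   # rule (42)
        beta = np.zeros(X.shape[1])
        beta[keep] = solve_lasso(X[:, keep], y, lam)
        solutions.append(beta.copy())
    return solutions
```

Note that the screening test itself costs only a few matrix-vector products per parameter value, which is consistent with the negligible overhead reported in Table 1 below.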
We evaluate the performance of the family of DPP screening rules, i.e., DPP, Improvement 1, Improvement 2 and EDPP, on three real data sets: a) the Prostate Cancer data set (Petricoin et al., 2002); b) the PIE face image data set (Sim et al., 2003); c) the MNIST handwritten digit data set (Lecun et al., 1998). To measure the performance of the screening rules, we compute the following two quantities:

1. the rejection ratio, i.e., the ratio of the number of features discarded by the screening rules to the actual number of zero features in the ground truth;
2. the speedup, i.e., the ratio of the running time of the solver without screening to the running time of the solver with the screening rules.

For each data set, we run the solver with and without the screening rules to solve the Lasso problem along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. Figure 1 presents the rejection ratios and speedup of the family of DPP screening rules. Table 1 reports the running time of the solver with and without the screening rules for solving the 100 Lasso problems, as well as the time for running the screening rules themselves.

[Figure 1: Comparison of the family of DPP rules on three real data sets: (a) Prostate Cancer, $X \in \mathbb{R}^{132\times15154}$; (b) PIE, $X \in \mathbb{R}^{1024\times11553}$; (c) MNIST, $X \in \mathbb{R}^{784\times50000}$. The first row shows the rejection ratios of DPP, Improvement 1, Improvement 2 and EDPP; the second row presents the speedup gained by these four methods.]

The Prostate Cancer data set (Petricoin et al., 2002) is obtained by protein mass spectrometry. The features are indexed by time-of-flight values, which are related to the mass over charge ratios of the constituent proteins in the blood. The data set has 15154 measurements of 132 patients; 69 of the patients have prostate cancer and the rest are healthy. Therefore, the data matrix $X$ is of size $132 \times 15154$, and the response vector $y \in \{1, -1\}^{132}$ contains the binary labels of the patients.

| Data | solver | DPP+solver | Imp.1+solver | Imp.2+solver | EDPP+solver | DPP | Imp.1 | Imp.2 | EDPP |
|---|---|---|---|---|---|---|---|---|---|
| Prostate Cancer | 121.41 | 23.36 | 6.39 | 17.00 | 3.70 | 0.30 | 0.27 | 0.28 | 0.23 |
| PIE | 629.94 | 74.66 | 11.15 | 55.45 | 4.13 | 1.63 | 1.34 | 1.54 | 1.33 |
| MNIST | 2566.26 | 332.87 | 37.80 | 226.02 | 11.12 | 5.28 | 4.36 | 4.94 | 4.19 |

Table 1: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1 by (a) the solver (Liu et al., 2009) without screening (second column); (b) the solver combined with different screening methods (3rd to 6th columns). The last four columns report the total running time (in seconds) of the screening methods.

The PIE face image data set used in this experiment¹ (Cai et al., 2007) contains 11554 gray face images of 68 people, taken under different poses, illumination conditions and expressions. Each image has $32 \times 32$ pixels. Therefore, in each trial, we first randomly pick an image as the response $y \in \mathbb{R}^{1024}$ and then use the remaining images to form the data matrix $X \in \mathbb{R}^{1024\times11553}$. We run 100 trials and report the average performance of the screening rules.

The MNIST data set contains gray images of scanned handwritten digits, including 60,000 for training and 10,000 for testing. The dimension of each image is $28 \times 28$. We first randomly select 5000 images for each digit from the training set (in total 50000 images) and get a data matrix $X \in \mathbb{R}^{784\times50000}$. Then in each trial, we randomly select an image from the testing set as the response $y \in \mathbb{R}^{784}$. We run 100 trials and report the average performance of the screening rules.

From Figure 1, we can see that both Improvements 1 and 2 are able to discard more inactive features than DPP, and thus lead to a higher speedup. We can also observe that Improvement 1 is more effective in discarding the inactive features than Improvement 2. For the three data sets, the second row of Figure 1 shows that Improvement 1 leads to about 20, 60 and 70 times speedup respectively, which is much higher than the speedup gained by Improvement 2 (roughly 10 times in all three cases).
Moreover, the EDPP rule, which combines the ideas of both Improvements 1 and 2, is even more effective in discarding the inactive features than Improvement 1. We can see that, for all three data sets and most of the 100 parameter values, the rejection ratios of EDPP are very close to 100%; in other words, EDPP is able to discard almost all of the inactive features. Thus, the resulting speedup of EDPP is significantly better than the ones gained by the other three DPP rules. For the PIE and MNIST data sets, the speedup gained by EDPP is about 150 and 230 times, i.e., two orders of magnitude. In view of Table 1, for the MNIST data set, the solver without screening needs about 2566.26 seconds to solve the 100 Lasso problems; in contrast, the solver with EDPP only needs 11.12 seconds, leading to substantial savings in computational cost. Moreover, from the last four columns of Table 1, we can also observe that the computational cost of the family of DPP rules is very low; compared to that of the solver without screening, it is negligible. In Section 4, we will only compare the performance of EDPP against several other state-of-the-art screening rules.

¹ http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html

3. Extensions to Group Lasso

To demonstrate the flexibility of the family of DPP rules, we extend the idea of EDPP to the group Lasso problem (Yuan and Lin, 2006) in this section. Although the Lasso and group Lasso problems are very different from each other, we will see that their dual problems share many similarities. For example, both dual problems can be formulated as looking for projections onto nonempty closed convex subsets of a Hilbert space. Recall that the EDPP rule for the Lasso problem is entirely based on the properties of projection operators. Therefore, the framework of the EDPP screening rule we developed for Lasso is also applicable to the group Lasso problem. In Section 3.1, we briefly review some basics of the group Lasso problem and explore the geometric properties of its dual problem. In Section 3.2, we develop the EDPP rule for the group Lasso problem.

3.1 Basics

With the group information available, the group Lasso problem takes the form of:

$$\inf_{\beta \in \mathbb{R}^p} \frac{1}{2}\left\|y - \sum_{g=1}^{G}X_g\beta_g\right\|_2^2 + \lambda\sum_{g=1}^{G}\sqrt{n_g}\|\beta_g\|_2, \quad (43)$$

where $X_g \in \mathbb{R}^{N\times n_g}$ is the data matrix for the $g$th group and $p = \sum_{g=1}^{G}n_g$. The dual problem of (43) is (see the detailed derivation in the appendix):

$$\sup_{\theta}\left\{\frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2 : \|X_g^T\theta\|_2 \leq \sqrt{n_g},\ g = 1, 2, \ldots, G\right\}. \quad (44)$$

The KKT conditions are given by

$$y = \sum_{g=1}^{G}X_g\beta_g^*(\lambda) + \lambda\theta^*(\lambda), \quad (45)$$

$$(\theta^*(\lambda))^TX_g \in \begin{cases} \sqrt{n_g}\dfrac{(\beta_g^*(\lambda))^T}{\|\beta_g^*(\lambda)\|_2}, & \text{if } \beta_g^*(\lambda) \neq 0, \\ \sqrt{n_g}\,u^T,\ \|u\|_2 \leq 1, & \text{if } \beta_g^*(\lambda) = 0, \end{cases} \quad g = 1, 2, \ldots, G. \quad (46)$$

Clearly, in view of (46), we can see that

$$\|(\theta^*(\lambda))^TX_g\|_2 < \sqrt{n_g} \Rightarrow \beta_g^*(\lambda) = 0. \quad (R2)$$

However, since $\theta^*(\lambda)$ is generally unknown, (R2) is not applicable to identify the inactive groups, i.e., the groups which have 0 coefficients in the solution vector, for the group Lasso problem. Therefore, similar to the Lasso problem, we can first find a region $\Theta$ which contains $\theta^*(\lambda)$, and then (R2) can be relaxed as follows:

$$\sup_{\theta \in \Theta}\|\theta^TX_g\|_2 < \sqrt{n_g} \Rightarrow \beta_g^*(\lambda) = 0. \quad (R2')$$

Therefore, to develop screening rules for the group Lasso problem, we only need to estimate the region $\Theta$ which contains $\theta^*(\lambda)$, solve the maximization problem in (R2'), and plug the resulting bound into (R2'). In other words, the three steps proposed in Section 2.1 can also be applied to develop screening rules for the group Lasso problem. Moreover, (R2') also implies that the smaller the region $\Theta$ is, the more accurate the estimation of the dual optimal solution is and, as a result, the more effective the resulting screening rule is in discarding the inactive groups.
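Before developing the rule itself, it may help to see (45), (46) and (R2) numerically. The sketch below assumes a group Lasso solution `betas` produced by some hypothetical solver; it forms $\theta^*(\lambda)$ from (45) and checks the implication in (R2):

```python
import numpy as np

def check_group_kkt(Xs, y, betas, lam, tol=1e-6):
    """Xs: list of N x n_g group matrices; betas: list of n_g coefficient vectors."""
    residual = y - sum(Xg @ bg for Xg, bg in zip(Xs, betas))
    theta = residual / lam                    # dual optimal solution by (45)
    for Xg, bg in zip(Xs, betas):
        if np.linalg.norm(Xg.T @ theta) < np.sqrt(Xg.shape[1]) - tol:
            # (R2): a strictly inactive constraint forces the whole group to 0.
            assert np.allclose(bg, 0), "(R2) violated: solution not optimal"
```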
The dual problem of group Lasso has geometric interpretations similar to those of Lasso. For notational convenience, let $\bar{F}$ be the feasible set of problem (44). Similar to the case of Lasso, problem (44) implies that the dual optimal solution $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the feasible set $\bar{F}$, i.e.,

$$\theta^*(\lambda) = P_{\bar{F}}\left(\frac{y}{\lambda}\right), \quad \forall\lambda > 0. \quad (47)$$

Compared to (6), the only difference in (47) is that the feasible set $\bar{F}$ is the intersection of a set of ellipsoids, and thus not a polytope. However, similar to $F$, $\bar{F}$ is also nonempty, closed and convex (notice that $0$ is a feasible point). Therefore, we can make use of all the aforementioned properties of the projection operators, e.g., Lemmas 9 and 13 and Theorems 12 and 15, to develop screening rules for the group Lasso problem. Moreover, similar to the case of Lasso, there is also a specific parameter value (Tibshirani et al., 2012) for the group Lasso problem, namely

$$\lambda_{\max} = \max_g \frac{\|X_g^Ty\|_2}{\sqrt{n_g}}. \quad (48)$$

Indeed, $\lambda_{\max}$ is the smallest parameter value such that the optimal solution of problem (43) is 0. More specifically, we have

$$\beta^*(\lambda) = 0, \quad \forall\lambda \in [\lambda_{\max}, \infty). \quad (49)$$

Combining the result in (49) and (45), we immediately have

$$\theta^*(\lambda) = \frac{y}{\lambda}, \quad \forall\lambda \in [\lambda_{\max}, \infty). \quad (50)$$

Therefore, throughout the subsequent sections, we will focus on the cases with $\lambda \in (0, \lambda_{\max})$.

3.2 Enhanced DPP Rule for Group Lasso

In view of (R2'), we can see that the estimation of the dual optimal solution is the key step in developing a screening rule for the group Lasso problem. Because $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the nonempty closed convex set $\bar{F}$ [please refer to (47)], we can make use of all the properties of projection operators, e.g., Lemmas 9 and 13 and Theorems 12 and 15, to estimate the dual optimal solution. First, let us develop a useful technical result.

Lemma 21 For the group Lasso problem, let $\lambda_{\max}$ be given by (48) and

$$X_* := \operatorname{argmax}_{X_g}\frac{\|X_g^Ty\|_2}{\sqrt{n_g}}. \quad (51)$$

Suppose that the dual optimal solution $\theta^*(\cdot)$ is known at $\lambda_0 \in (0, \lambda_{\max}]$, and let us define

$$v_1(\lambda_0) = \begin{cases} \dfrac{y}{\lambda_0} - \theta^*(\lambda_0), & \text{if } \lambda_0 \in (0, \lambda_{\max}), \\ X_*X_*^Ty, & \text{if } \lambda_0 = \lambda_{\max}, \end{cases} \quad (52)$$

$$\theta_{\lambda_0}(t) = \theta^*(\lambda_0) + tv_1(\lambda_0), \quad t \geq 0.$$

Then, the following result holds:

$$P_{\bar{F}}(\theta_{\lambda_0}(t)) = \theta^*(\lambda_0), \quad \forall t \geq 0. \quad (53)$$

Proof Let us first consider the cases with $\lambda_0 \in (0, \lambda_{\max})$. In view of the definition of $\lambda_{\max}$, it is easy to see that $y/\lambda_0 \notin \bar{F}$. Therefore, in view of (47) and Lemma 9, the statement in (53) follows immediately.

We next consider the case with $\lambda_0 = \lambda_{\max}$. By Theorem 12, we only need to check that

$$v_1(\lambda_{\max}) \in N_{\bar{F}}(\theta^*(\lambda_{\max})) \Leftrightarrow \langle v_1(\lambda_{\max}), \theta - \theta^*(\lambda_{\max})\rangle \leq 0, \quad \forall\theta \in \bar{F}. \quad (54)$$

Indeed, in view of (48) and (50), we can see that

$$\langle v_1(\lambda_{\max}), \theta^*(\lambda_{\max})\rangle = \left\langle X_*X_*^Ty, \frac{y}{\lambda_{\max}}\right\rangle = \frac{\|X_*^Ty\|_2^2}{\lambda_{\max}}. \quad (55)$$

On the other hand, by (48) and (51), we can see that

$$\|X_*^Ty\|_2 = \lambda_{\max}\sqrt{n_*}, \quad (56)$$

where $n_*$ is the number of columns of $X_*$. By plugging (56) into (55), we have $\langle v_1(\lambda_{\max}), \theta^*(\lambda_{\max})\rangle = \lambda_{\max}n_*$. Moreover, for any feasible point $\theta \in \bar{F}$, we have

$$\|X_*^T\theta\|_2 \leq \sqrt{n_*}. \quad (57)$$

In view of the results in (57) and (56), it is easy to see that

$$\langle v_1(\lambda_{\max}), \theta\rangle = \langle X_*X_*^Ty, \theta\rangle = \langle X_*^Ty, X_*^T\theta\rangle \leq \|X_*^Ty\|_2\|X_*^T\theta\|_2 \leq \lambda_{\max}n_*. \quad (58)$$

Combining the results in (55) and (58), it is easy to see that the inequality in (54) holds for all $\theta \in \bar{F}$, which completes the proof.

By Lemma 21, we can accurately estimate the dual optimal solution of the group Lasso problem, as stated in the following theorem. It is easy to see that the result in Theorem 22 is very similar to the one in Theorem 18 for the Lasso problem.
Theorem 22 For the group Lasso problem, suppose that the dual optimal solution $\theta^*(\cdot)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known, and $v_1(\lambda_0)$ is given by (52). For any $\lambda \in (0, \lambda_0]$, let us define

$$v_2(\lambda, \lambda_0) = \frac{y}{\lambda} - \theta^*(\lambda_0),$$

$$v_2^\perp(\lambda, \lambda_0) = v_2(\lambda, \lambda_0) - \frac{\langle v_1(\lambda_0), v_2(\lambda, \lambda_0)\rangle}{\|v_1(\lambda_0)\|_2^2}v_1(\lambda_0).$$

Then, the dual optimal solution $\theta^*(\lambda)$ can be estimated as follows:

$$\left\|\theta^*(\lambda) - \left(\theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0)\right)\right\|_2 \leq \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2.$$

We omit the proof of Theorem 22 since it is exactly the same as that of Theorem 18. Indeed, Theorem 22 is equivalent to estimating $\theta^*(\lambda)$ in a ball:

$$\theta^*(\lambda) \in B\left(\theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0),\ \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2\right).$$

Based on this estimation and (R2'), we immediately have the following result.

Theorem 23 For the group Lasso problem, assume the dual optimal solution $\theta^*(\cdot)$ is known at $\lambda_0 \in (0, \lambda_{\max}]$, and $\lambda \in (0, \lambda_0]$. Then $\beta_g^*(\lambda) = 0$ if the following holds:

$$\left\|X_g^T\left(\theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0)\right)\right\|_2 < \sqrt{n_g} - \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2\|X_g\|_2. \quad (60)$$

Proof In view of (R2'), we only need to check that $\|X_g^T\theta^*(\lambda)\|_2 < \sqrt{n_g}$. To simplify notation, let

$$o = \theta^*(\lambda_0) + \frac{1}{2}v_2^\perp(\lambda, \lambda_0), \quad r = \frac{1}{2}\|v_2^\perp(\lambda, \lambda_0)\|_2.$$

It is easy to see that

$$\|X_g^T\theta^*(\lambda)\|_2 \leq \|X_g^T(\theta^*(\lambda) - o)\|_2 + \|X_g^To\|_2 \quad (61)$$
$$< \|X_g\|_2\|\theta^*(\lambda) - o\|_2 + \sqrt{n_g} - r\|X_g\|_2 \leq r\|X_g\|_2 + \sqrt{n_g} - r\|X_g\|_2 = \sqrt{n_g},$$

which completes the proof. The second and third inequalities in (61) are due to (60) and Theorem 22, respectively. [Here $\|X_g\|_2$ denotes the spectral norm of $X_g$.]

In view of (45) and Theorem 23, we can derive the EDPP rule to discard the inactive groups for the group Lasso problem as follows.

Corollary 24 (EDPP) For the group Lasso problem (43), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. For any integer $0 \leq k < K$, we have $\beta_g^*(\lambda_{k+1}) = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:

$$\left\|X_g^T\left(\frac{y - \sum_{g=1}^{G}X_g\beta_g^*(\lambda_k)}{\lambda_k} + \frac{1}{2}v_2^\perp(\lambda_{k+1}, \lambda_k)\right)\right\|_2 < \sqrt{n_g} - \frac{1}{2}\|v_2^\perp(\lambda_{k+1}, \lambda_k)\|_2\|X_g\|_2.$$
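As with the Lasso case, the rule in Corollary 24 reduces to a few matrix-vector products per group. Below is a sketch of one sequential step under the definitions above (all function and variable names are ours; $\|X_g\|_2$ is the spectral norm):

```python
def group_edpp_step(Xs, y, betas_k, lam_k, lam, lam_max, X_star):
    """One step of the sequential EDPP rule for group Lasso (Corollary 24).
    Returns the indices of the groups that survive screening at lambda_{k+1}."""
    residual = y - sum(Xg @ bg for Xg, bg in zip(Xs, betas_k))
    theta_k = residual / lam_k                # dual optimal at lambda_k by (45)
    if lam_k < lam_max:
        v1 = y / lam_k - theta_k              # (52), case lambda_0 < lambda_max
    else:
        v1 = X_star @ (X_star.T @ y)          # (52), case lambda_0 = lambda_max
    v2 = y / lam - theta_k
    v2p = v2 - (v1 @ v2) / (v1 @ v1) * v1
    center, radius = theta_k + 0.5 * v2p, 0.5 * np.linalg.norm(v2p)
    keep = []
    for g, Xg in enumerate(Xs):
        spec = np.linalg.norm(Xg, 2)          # spectral norm ||X_g||_2
        if np.linalg.norm(Xg.T @ center) >= np.sqrt(Xg.shape[1]) - radius * spec:
            keep.append(g)                    # group g survives screening
    return keep
```

The surviving groups are then passed to the group Lasso solver at $\lambda_{k+1}$, and the process repeats down the parameter sequence.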
4. Experiments

In this section, we evaluate the proposed EDPP rules for Lasso and group Lasso on both synthetic and real data sets. To measure the performance of our screening rules, we compute the rejection ratio and speedup (please refer to Section 2.3.3 for details). Because the EDPP rule is safe, i.e., no active features/groups will be mistakenly discarded, the rejection ratio will be at most one. In Section 4.1, we conduct two sets of experiments to compare the performance of EDPP against several state-of-the-art screening methods. We first compare the performance of the basic versions of EDPP, DOME, SAFE and the strong rule; we then focus on the sequential versions of EDPP, SAFE and the strong rule. Notice that SAFE and EDPP are safe, whereas the strong rule may mistakenly discard features with nonzero coefficients in the solution. Although DOME is also safe for the Lasso problem, it is unclear whether a sequential version of DOME exists. Recall that real applications usually favor sequential screening rules because we need to solve a sequence of Lasso problems to determine an appropriate parameter value (Tibshirani et al., 2012). Moreover, DOME assumes a special structure on the data, i.e., each feature and the response vector should be normalized to have unit length.

In Section 4.2, we compare EDPP with the strong rule for the group Lasso problem on synthetic data sets. We are not aware of any other safe screening rules for the group Lasso problem at this point; for SAFE and DOME, it is not straightforward to extend them to the group Lasso problem. An efficient MATLAB implementation of the EDPP screening rules combined with the solvers from the SLEP package (Liu et al., 2009), for both Lasso and group Lasso, is available at http://dpc-screening.github.io/.

4.1 EDPP for the Lasso Problem

For the Lasso problem, we first compare the performance of the basic versions of EDPP, DOME, SAFE and the strong rule in Section 4.1.1. Then, we compare the performance of the sequential versions of EDPP, SAFE and the strong rule in Section 4.1.2.

4.1.1 Evaluation of the Basic EDPP Rule

In this section, we perform experiments on six real data sets to compare the performance of the basic versions of SAFE, DOME, the strong rule and EDPP. Briefly speaking, suppose that we are given a parameter value $\lambda$. The basic versions of the aforementioned screening rules always make use of $\beta^*(\lambda_{\max})$ to identify the zero components of $\beta^*(\lambda)$. Take EDPP as an example: the basic version of EDPP can be obtained by replacing $\beta^*(\lambda_k)$ and $v_2^\perp(\lambda_{k+1}, \lambda_k)$ with $\beta^*(\lambda_0)$ and $v_2^\perp(\lambda_k, \lambda_0)$, respectively, in (42) for all $k = 1, \ldots, K$.

In this experiment, we report the rejection ratios of the basic SAFE, DOME, strong rule and EDPP along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. We note that DOME requires all features of the data sets to have unit length. Therefore, to compare the performance of DOME with SAFE, the strong rule and EDPP, we normalize the features of all the data sets used in this section. However, it is worthwhile to mention that SAFE, the strong rule and EDPP do not assume any specific structure on the data set.

[Figure 2: Comparison of the basic versions of SAFE, DOME, Strong Rule and EDPP on six real data sets: (a) Colon Cancer, $X \in \mathbb{R}^{62\times2000}$; (b) Lung Cancer, $X \in \mathbb{R}^{203\times12600}$; (c) Prostate Cancer, $X \in \mathbb{R}^{132\times15154}$; (d) PIE, $X \in \mathbb{R}^{1024\times11553}$; (e) MNIST, $X \in \mathbb{R}^{784\times50000}$; (f) COIL-100, $X \in \mathbb{R}^{1024\times7199}$.]

The data sets used in this section are listed as follows: a) the Colon Cancer data set (Alon et al., 1999); b) the Lung Cancer data set (Bhattacharjee et al., 2001); c) the Prostate Cancer data set (Petricoin et al., 2002); d) the PIE face image data set (Sim et al., 2003; Cai et al., 2007); e) the MNIST handwritten digit data set (Lecun et al., 1998); f) the COIL-100 image data set (Nene et al., 1996; Cai et al., 2011).

The Colon Cancer data set contains gene expression information of 22 normal tissues and 40 colon cancer tissues, each with 2000 gene expression values. The Lung Cancer data set contains gene expression information of 186 lung tumors and 17 normal lung specimens; each specimen has 12600 expression values. The COIL-100 image data set consists of images of 100 objects. The images of each object are taken every 5 degrees by rotating the object, yielding 72 images per object. The dimension of each image is $32 \times 32$. In each trial, we randomly select one image as the response vector and use the remaining ones as the data matrix. We run 100 trials and report the average performance of the screening rules. The descriptions and the experimental settings for the Prostate Cancer data set, the PIE face image data set and the MNIST handwritten digit data set are given in Section 2.3.3.
Figure 2 reports the rejection ratios of the basic versions of SAFE, DOME, the strong rule and EDPP. We can see that EDPP significantly outperforms the other three screening methods on five of the six data sets, i.e., the Colon Cancer, Lung Cancer, Prostate Cancer, MNIST and COIL-100 data sets. On the PIE face image data set, EDPP and DOME provide similar performance, and both significantly outperform SAFE and the strong rule. However, as pointed out by Tibshirani et al. (2012), the real strength of screening methods stems from their sequential versions. The reason is that the optimal parameter value is unknown in real applications, and typical approaches for model selection usually involve solving the Lasso problem many times along a sequence of parameter values. Thus, the sequential screening methods are more suitable for the aforementioned scenario and more useful than their basic-version counterparts in practice (Tibshirani et al., 2012).

4.1.2 Evaluation of the Sequential EDPP Rule

In this section, we compare the performance of the sequential versions of SAFE, the strong rule and EDPP by the rejection ratio and speedup. We first perform experiments on two synthetic data sets, and then apply the three screening rules to six real data sets.

The synthetic problems have been commonly used in the sparse learning literature (Bondell and Reich, 2008; Zou and Hastie, 2005; Tibshirani, 1996). We simulate data from the true model

$$y = X\beta^* + \sigma\epsilon, \quad \epsilon \sim N(0, 1). \quad (62)$$

We generate two data sets with $250 \times 10000$ entries: Synthetic 1 and Synthetic 2. For Synthetic 1, the entries of the data matrix $X$ are i.i.d. standard Gaussian with pairwise correlation zero, i.e., $\operatorname{corr}(x_i, x_j) = 0$. For Synthetic 2, the entries of the data matrix $X$ are drawn from i.i.d. standard Gaussian with pairwise correlation $0.5^{|i-j|}$, i.e., $\operatorname{corr}(x_i, x_j) = 0.5^{|i-j|}$. To generate the response vector $y \in \mathbb{R}^{250}$ by the model in (62), we need to set the parameter $\sigma$ and construct the ground truth $\beta^* \in \mathbb{R}^{10000}$. Throughout this section, $\sigma$ is set to 0.1. To construct $\beta^*$, we randomly select $\bar{p}$ components, which are populated from a uniform $[-1, 1]$ distribution, and set the remaining ones to 0. After we generate the data matrix $X$ and the response vector $y$, we run the solver with and without the screening rules to solve the Lasso problems along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. We then run 100 trials and report the average performance.

We first apply the screening rules, i.e., SAFE, the strong rule and EDPP, to Synthetic 1 with $\bar{p} = 100, 1000, 5000$ respectively. Figures 3(a), 3(b) and 3(c) present the corresponding rejection ratios and speedup of SAFE, the strong rule and EDPP. We can see that the rejection ratios of the strong rule and EDPP are comparable to each other, and both of them are more effective in discarding inactive features than SAFE. In terms of the speedup, EDPP provides better performance than the strong rule.
After we generate the data matrix $X$ and the response vector $y$, we run the solver, with and without screening rules, to solve the Lasso problems along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. We run 100 trials and report the average performance.

We first apply the screening rules (SAFE, the strong rule and EDPP) to Synthetic 1 with $\bar{p} = 100$, $1000$ and $5000$. Figures 3(a), 3(b) and 3(c) present the corresponding rejection ratios and speedup. We can see that the rejection ratios of the strong rule and EDPP are comparable to each other, and both are more effective in discarding inactive features than SAFE. In terms of speedup, EDPP performs better than the strong rule. The reason is that the strong rule is a heuristic screening method: it may mistakenly discard active features that have nonzero components in the solution, and therefore needs to check the KKT conditions to ensure the correctness of the screening result. In contrast, the EDPP rule does not need to check the KKT conditions, since the discarded features are guaranteed to be absent from the resulting sparse representation. From the last two columns of Table 2, we can observe that the running time of the strong rule is about twice that of EDPP.

[Figure 3: Comparison of SAFE, Strong Rule and EDPP on two synthetic data sets with different numbers of nonzero components $\bar{p}$ of the ground truth; each panel plots the rejection ratio against $\lambda/\lambda_{\max}$ together with the speedup. Panels: (a) Synthetic 1, $\bar{p} = 100$; (b) Synthetic 1, $\bar{p} = 1000$; (c) Synthetic 1, $\bar{p} = 5000$; (d) Synthetic 2, $\bar{p} = 100$; (e) Synthetic 2, $\bar{p} = 1000$; (f) Synthetic 2, $\bar{p} = 5000$.]

Data         p-bar  solver   SAFE+solver  Strong Rule+solver  EDPP+solver  SAFE   Strong Rule  EDPP
Synthetic 1  100    109.01   100.09       2.67                2.47         4.60   0.65         0.36
Synthetic 1  1000   123.60   111.32       2.97                2.71         4.59   0.66         0.37
Synthetic 1  5000   124.92   113.09       3.00                2.72         4.57   0.65         0.36
Synthetic 2  100    107.50   96.94        2.62                2.49         4.61   0.67         0.37
Synthetic 2  1000   113.59   104.29       2.84                2.67         4.57   0.63         0.35
Synthetic 2  5000   125.25   113.35       3.02                2.81         4.62   0.65         0.36

Table 2: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a) the solver (Liu et al., 2009) without screening (third column); (b) the solver combined with different screening methods (fourth to sixth columns). The last three columns report the total running time (in seconds) of the screening methods.

Figures 3(d), 3(e) and 3(f) present the rejection ratios and speedup of SAFE, the strong rule and EDPP on Synthetic 2 with $\bar{p} = 100$, $1000$ and $5000$, respectively. We observe patterns similar to those on Synthetic 1. Clearly, our method, EDPP, is very robust to variations in the intrinsic structure of the data sets and in the sparsity of the ground truth.

We next compare the performance of the EDPP rule with SAFE and the strong rule on six real data sets along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. The data sets are listed as follows: a) the Breast Cancer data set (West et al., 2001; Shevade and Keerthi, 2003); b) the Leukemia data set (Armstrong et al., 2002); c) the Prostate Cancer data set (Petricoin et al., 2002); d) the PIE face image data set (Sim et al., 2003; Cai et al., 2007); e) the MNIST handwritten digit data set (Lecun et al., 1998); f) the Street View House Numbers (SVHN) data set (Netzer et al., 2011). We present the rejection ratios and speedup of EDPP, SAFE and the strong rule in Figure 4.
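All of the sequential experiments in this section share the same structure: screen at each parameter value using the solution from the previous one, solve the reduced problem, and, for the heuristic strong rule only, repair any KKT violations. The following sketch shows this skeleton; the function names and signatures (`screen`, `solve`, `check_kkt`) are ours and stand in for a screening rule, a Lasso solver such as the one from SLEP (Liu et al., 2009), and a KKT-violation test.

```python
import numpy as np

def lasso_path_with_screening(X, y, lams, screen, solve, check_kkt=None):
    """Sequential screening sketch: at each lambda on a decreasing grid,
    screen using the solution at the previous lambda, then solve the
    reduced problem. Safe rules (e.g., EDPP) pass check_kkt=None;
    heuristic rules (e.g., the strong rule) must detect KKT violations
    and restore the violating features."""
    p = X.shape[1]
    lam_prev = np.max(np.abs(X.T @ y))   # lambda_max, where beta* = 0
    beta_prev = np.zeros(p)
    path = []
    for lam in lams:
        inactive = screen(X, y, lam, lam_prev, beta_prev)
        while True:
            beta = np.zeros(p)
            beta[~inactive] = solve(X[:, ~inactive], y, lam)  # reduced problem
            if check_kkt is None:
                break                     # safe rule: no check needed
            violated = check_kkt(X, y, lam, beta) & inactive
            if not violated.any():
                break
            inactive &= ~violated         # un-screen the violators, re-solve
        path.append(beta)
        lam_prev, beta_prev = lam, beta
    return path
```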
Table 3 reports the running time of the solver, with and without screening, for solving the 100 Lasso problems, as well as the running time of the screening rules themselves.

The Breast Cancer data set contains 44 tumor samples, each represented by 7129 genes; the data matrix $X$ is therefore of size $44 \times 7129$, and the response vector $y \in \{-1, 1\}^{44}$ contains the binary label of each sample. The Leukemia data set is a DNA microarray data set containing 52 samples and 11225 genes; the data matrix $X$ is of size $52 \times 11225$, and the response vector $y \in \{-1, 1\}^{52}$ contains the binary label of each sample. The SVHN data set contains color images of street view house numbers, including 73257 images for training and 26032 for testing; the dimension of each image is $32 \times 32$. In each trial, we first randomly select an image as the response $y \in \mathbb{R}^{3072}$ and then use the remaining ones to form the data matrix $X \in \mathbb{R}^{3072 \times 99288}$. We run 100 trials and report the average performance. The description and the experiment settings for the Prostate Cancer data set, the PIE face image data set and the MNIST handwritten digit data set are given in Section 2.3.3.

[Figure 4: Comparison of SAFE, Strong Rule and EDPP on six real data sets; each panel plots the rejection ratio against $\lambda/\lambda_{\max}$ together with the speedup. Panels: (a) Breast Cancer, $X \in \mathbb{R}^{44 \times 7129}$; (b) Leukemia, $X \in \mathbb{R}^{52 \times 11225}$; (c) Prostate Cancer, $X \in \mathbb{R}^{132 \times 15154}$; (d) PIE, $X \in \mathbb{R}^{1024 \times 11553}$; (e) MNIST, $X \in \mathbb{R}^{784 \times 50000}$; (f) SVHN, $X \in \mathbb{R}^{3072 \times 99288}$.]

Data             solver    SAFE+solver  Strong Rule+solver  EDPP+solver  SAFE    Strong Rule  EDPP
Breast Cancer    12.70     7.20         1.31                1.24         0.44    0.06         0.05
Leukemia         16.99     9.22         1.15                1.03         0.91    0.09         0.07
Prostate Cancer  121.41    47.17        4.83                3.70         3.60    0.46         0.23
PIE              629.94    138.33       4.84                4.13         19.93   2.54         1.33
MNIST            2566.26   702.21       15.15               11.12        64.81   8.14         4.19
SVHN             11023.30  5220.88      90.65               59.71        583.12  61.02        31.64

Table 3: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a) the solver (Liu et al., 2009) without screening (second column); (b) the solver combined with different screening methods (third to fifth columns). The last three columns report the total running time (in seconds) of the screening methods.

From Figure 4, we can see that the rejection ratios of the strong rule and EDPP are comparable to each other. Compared to SAFE, both the strong rule and EDPP identify far more inactive features, leading to a much higher speedup. However, because the strong rule needs to check the KKT conditions to ensure the correctness of the screening results, the speedup gained by EDPP is higher than that gained by the strong rule. When the size of the data matrix is not very large, e.g., on the Breast Cancer and Leukemia data sets, the speedup gained by EDPP is slightly higher than that of the strong rule.
However, when the size of the data matrix is large, e.g., on the MNIST and SVHN data sets, the speedup gained by EDPP is significantly higher than that of the strong rule. Moreover, we can also observe from Figure 4 that the larger the data matrix is, the higher the speedup gained by EDPP. More specifically, for the smaller data sets, i.e., the Breast Cancer, Leukemia and Prostate Cancer data sets, the speedup gained by EDPP (the ratio of the solver's running time without screening to its running time with screening) is about 10, 17 and 30 times, respectively. In contrast, for the large data sets, i.e., the PIE, MNIST and SVHN data sets, the speedup gained by EDPP is two orders of magnitude. Take the SVHN data set for example: the solver without screening needs about 3 hours to solve the 100 Lasso problems, whereas combined with the EDPP rule it completes the task in less than 1 minute. Clearly, the proposed EDPP screening rule is very effective in accelerating the computation of Lasso, especially for large-scale problems, and outperforms state-of-the-art approaches such as SAFE and the strong rule. Notice that the EDPP method is safe in the sense that the discarded features are guaranteed to have zero coefficients in the solution.

4.1.3 EDPP with Least-Angle Regression (LARS)

As mentioned in the introduction, we can combine EDPP with any existing solver. In this experiment, we integrate EDPP and the strong rule with another state-of-the-art solver for Lasso, Least-Angle Regression (LARS) (Efron et al., 2004). We perform experiments on the same real data sets used in the last section with the same experiment settings. Because the rejection ratios of screening methods do not depend on the solver, we only report the speedup. Table 4 reports the running time of LARS, with and without screening, for solving the 100 Lasso problems, as well as the running time of the screening methods. Figure 5 shows the speedup of these two methods. We can still observe a substantial speedup gained by EDPP. The reason is that EDPP has a very low computational cost (see Table 4) and is very effective in discarding inactive features (see Figure 4).

Data             LARS     Strong Rule+LARS  EDPP+LARS  Strong Rule  EDPP
Breast Cancer    1.30     0.06              0.04       0.04         0.03
Leukemia         1.46     0.09              0.05       0.07         0.04
Prostate Cancer  5.76     1.04              0.37       0.42         0.24
PIE              22.52    2.42              1.31       2.30         1.21
MNIST            92.53    8.53              4.75       8.36         4.34
SVHN             1017.20  65.83             35.73      62.53        32.00

Table 4: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a) LARS (Efron et al., 2004; Mairal et al., 2010) without screening (second column); (b) LARS combined with different screening methods (third and fourth columns). The last two columns report the total running time (in seconds) of the screening methods.

[Figure 5: The speedup gained by Strong Rule and EDPP combined with LARS on six real data sets: (a) Breast Cancer, $X \in \mathbb{R}^{44 \times 7129}$; (b) Leukemia, $X \in \mathbb{R}^{52 \times 11225}$; (c) Prostate Cancer, $X \in \mathbb{R}^{132 \times 15154}$; (d) PIE, $X \in \mathbb{R}^{1024 \times 11553}$; (e) MNIST, $X \in \mathbb{R}^{784 \times 50000}$; (f) SVHN, $X \in \mathbb{R}^{3072 \times 99288}$.]

4.2 EDPP for the Group Lasso Problem

In this experiment, we evaluate the performance of EDPP and the strong rule for group Lasso with different numbers of groups. The data matrix $X$ is fixed to be of size $250 \times 200000$; the entries of the response vector $y$ and the data matrix $X$ are generated i.i.d. from a standard Gaussian distribution. For each experiment, we repeat the computation 20 times and report the average results. Moreover, let $n_g$ denote the number of groups and $s_g$ the average group size; for example, if $n_g = 10000$, then $s_g = p/n_g = 20$. A sketch of a group-level screening test is given below.
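To illustrate the group-level analogue of the screening tests above, the following sketch applies the same projection-nonexpansiveness argument to the group Lasso dual derived in Appendix B: a group $g$ with $\|X_g^T \theta^*(\lambda)\|_2 < \sqrt{n_g}$ (here $n_g$ denotes the size of group $g$, as in Appendix B) is guaranteed to be inactive. This is our own minimal illustration, not the exact EDPP rule for group Lasso.

```python
import numpy as np

def basic_group_screen(X, y, groups, lam):
    """Group-level screening sketch: True marks groups guaranteed inactive
    at lam, using theta*(lambda_max) = y / lambda_max as the reference."""
    lam_max = max(np.linalg.norm(X[:, g].T @ y) / np.sqrt(len(g)) for g in groups)
    theta_max = y / lam_max                  # dual optimum at lambda_max
    radius = np.linalg.norm(y) * (1.0 / lam - 1.0 / lam_max)
    flags = []
    for g in groups:
        Xg = X[:, g]
        # ||X_g^T theta*(lam)||_2 <= ||X_g^T theta_max||_2 + ||X_g||_2 * radius,
        # where ||X_g||_2 is the spectral norm.
        bound = np.linalg.norm(Xg.T @ theta_max) + np.linalg.norm(Xg, 2) * radius
        flags.append(bound < np.sqrt(len(g)))
    return np.array(flags)

# Example: 250 x 2000 random data split into 100 groups of size 20.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((250, 2000)), rng.standard_normal(250)
groups = np.array_split(np.arange(2000), 100)
lam_max = max(np.linalg.norm(X[:, g].T @ y) / np.sqrt(len(g)) for g in groups)
print(basic_group_screen(X, y, groups, 0.95 * lam_max).mean())
```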
[Figure 6: Comparison of EDPP and the strong rule for group Lasso with different numbers of groups; each panel plots the rejection ratio against $\lambda/\lambda_{\max}$ together with the speedup. Panels: (a) $n_g = 10000$; (b) $n_g = 20000$; (c) $n_g = 40000$.]

n_g    solver   Strong Rule+solver  EDPP+solver  Strong Rule  EDPP
10000  4535.54  296.60              53.81        13.99        8.32
20000  5536.18  179.48              46.13        14.16        8.61
40000  6144.48  104.50              37.78        13.13        8.37

Table 5: Running time (in seconds) for solving the group Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1.0 by (a) the solver from SLEP without screening (second column); (b) the solver combined with different screening methods (third and fourth columns). The last two columns report the total running time (in seconds) of the screening methods. The data matrix $X$ is of size $250 \times 200000$.

From Figure 6, we can see that EDPP and the strong rule are able to discard more inactive groups as the number of groups $n_g$ increases. The intuition behind this observation is that the estimation of the dual optimal solution is more accurate for smaller group sizes, and a large $n_g$ implies a small average group size. Figure 6 also shows that, compared to the strong rule, EDPP discards more inactive groups and is more robust with respect to different values of $n_g$. Table 5 further demonstrates the effectiveness of EDPP in improving the efficiency of the solver. When $n_g = 10000$, the efficiency of the solver is improved by about 80 times. When $n_g = 20000$ and $40000$, the efficiency of the solver is boosted by about 120 and 160 times with EDPP, respectively.

5. Conclusion

In this paper, we develop new screening rules for the Lasso problem by making use of the properties of projection operators with respect to a closed convex set. Our proposed methods, the DPP screening rules, are able to effectively identify inactive predictors of the Lasso problem and thus greatly reduce the size of the optimization problem. Moreover, we further improve the DPP rule and propose the enhanced DPP (EDPP) rule, which is more effective in discarding inactive features than DPP. The idea behind the family of DPP rules can easily be generalized to identify the inactive groups of the group Lasso problem. Extensive numerical experiments on both synthetic and real data demonstrate the effectiveness of the proposed rules. It is worthwhile to mention that the family of DPP rules can be combined with any Lasso solver as a speedup tool. In the future, we plan to generalize our ideas to other sparse formulations with more general structured sparse penalties, e.g., tree/graph Lasso and fused Lasso.

Acknowledgments

We would like to acknowledge support for this project from the National Science Foundation (IIS-0953662, IIS-1421057, and IIS-1421100) and the National Institutes of Health (R01 LM010730 and U54 EB020403).

Appendix A. The Dual Problem of Lasso

In this appendix, we give the detailed derivation of the dual problem of Lasso.
A.1 Dual Formulation

Assuming the data matrix is $X \in \mathbb{R}^{N \times p}$, the standard Lasso problem is given by:

$$\inf_{\beta \in \mathbb{R}^p} \; \frac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1. \tag{63}$$

For completeness, we give a detailed derivation of the dual formulation of (63) in this section. Note that problem (63) has no constraints; therefore, its dual problem is trivial and useless. A common trick (Boyd and Vandenberghe, 2004) is to introduce a new set of variables $z = y - X\beta$, so that problem (63) becomes:

$$\inf_{\beta, z} \; \frac{1}{2}\|z\|_2^2 + \lambda\|\beta\|_1, \quad \text{subject to } z = y - X\beta. \tag{64}$$

By introducing the dual variables $\eta \in \mathbb{R}^N$, we get the Lagrangian of problem (64):

$$L(\beta, z, \eta) = \frac{1}{2}\|z\|_2^2 + \lambda\|\beta\|_1 + \eta^T(y - X\beta - z).$$

For this Lagrangian, the primal variables are $\beta$ and $z$, and the dual function $g(\eta)$ is:

$$g(\eta) = \inf_{\beta, z} L(\beta, z, \eta) = \eta^T y + \inf_\beta\left(-\eta^T X\beta + \lambda\|\beta\|_1\right) + \inf_z\left(\frac{1}{2}\|z\|_2^2 - \eta^T z\right).$$

In order to get $g(\eta)$, we need to solve the following two optimization problems:

$$\inf_\beta \; -\eta^T X\beta + \lambda\|\beta\|_1, \tag{65}$$

and

$$\inf_z \; \frac{1}{2}\|z\|_2^2 - \eta^T z. \tag{66}$$

Let us first consider problem (65). Denote its objective function by

$$f_1(\beta) = -\eta^T X\beta + \lambda\|\beta\|_1. \tag{67}$$

$f_1(\beta)$ is convex but not smooth, so we consider its subdifferential

$$\partial f_1(\beta) = -X^T\eta + \lambda\mathbf{v},$$

where $\mathbf{v} \in \partial\|\beta\|_1$, i.e., $\|\mathbf{v}\|_\infty \le 1$ and $\mathbf{v}^T\beta = \|\beta\|_1$. The necessary condition for $f_1$ to attain an optimum at $\beta^*$ is $0 \in \partial f_1(\beta^*)$, i.e., $X^T\eta = \lambda\mathbf{v}^*$ for some $\mathbf{v}^* \in \partial\|\beta^*\|_1$. Since $\|\mathbf{v}^*\|_\infty \le 1$, this requires

$$|\mathbf{x}_i^T\eta| \le \lambda, \quad i = 1, 2, \ldots, p.$$

Plugging $\mathbf{v}^* = \frac{X^T\eta}{\lambda}$ and $(\mathbf{v}^*)^T\beta^* = \|\beta^*\|_1$ into (67), we get

$$f_1(\beta^*) = \inf_\beta f_1(\beta) = -\lambda(\mathbf{v}^*)^T\beta^* + \lambda\|\beta^*\|_1 = 0.$$

Therefore, the optimal value of problem (65) is 0 whenever $|\mathbf{x}_i^T\eta| \le \lambda$ for all $i$.

Next, let us consider problem (66). Denote its objective function by $f_2(z)$ and rewrite it as

$$f_2(z) = \frac{1}{2}\left(\|z - \eta\|_2^2 - \|\eta\|_2^2\right).$$

Clearly, $z^* = \operatorname{argmin}_z f_2(z) = \eta$, and $\inf_z f_2(z) = -\frac{1}{2}\|\eta\|_2^2$.

Combining everything above, we get the dual problem:

$$\sup_\eta \; g(\eta) = \eta^T y - \frac{1}{2}\|\eta\|_2^2, \quad \text{subject to } |\mathbf{x}_i^T\eta| \le \lambda, \; i = 1, 2, \ldots, p,$$

which is equivalent to

$$\sup_\eta \; g(\eta) = \frac{1}{2}\|y\|_2^2 - \frac{1}{2}\|\eta - y\|_2^2, \quad \text{subject to } |\mathbf{x}_i^T\eta| \le \lambda, \; i = 1, 2, \ldots, p. \tag{68}$$

By a simple rescaling of the dual variables, i.e., letting $\theta = \frac{\eta}{\lambda}$, problem (68) transforms to:

$$\sup_\theta \; g(\theta) = \frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2, \quad \text{subject to } |\mathbf{x}_i^T\theta| \le 1, \; i = 1, 2, \ldots, p.$$

A.2 The KKT Conditions

Problem (64) is clearly convex and its constraints are all affine. By Slater's condition, as long as problem (64) is feasible, strong duality holds. Denote by $\beta^*$, $z^*$ and $\theta^*$ the optimal primal and dual variables. The Lagrangian, in terms of the rescaled dual variable, is

$$L(\beta, z, \theta) = \frac{1}{2}\|z\|_2^2 + \lambda\|\beta\|_1 + \lambda\theta^T(y - X\beta - z).$$

From the KKT conditions, we have

$$0 \in \partial_\beta L(\beta^*, z^*, \theta^*) = -\lambda X^T\theta^* + \lambda\mathbf{v}, \quad \text{where } \mathbf{v} \in \partial\|\beta^*\|_1, \tag{69}$$
$$\nabla_z L(\beta^*, z^*, \theta^*) = z^* - \lambda\theta^* = 0, \tag{70}$$
$$\nabla_\theta L(\beta^*, z^*, \theta^*) = \lambda(y - X\beta^* - z^*) = 0. \tag{71}$$

From (70) and (71), we have

$$y = X\beta^* + \lambda\theta^*.$$

From (69), we know there exists $\mathbf{v}^* \in \partial\|\beta^*\|_1$ such that $X^T\theta^* = \mathbf{v}^*$, $\|\mathbf{v}^*\|_\infty \le 1$ and $(\mathbf{v}^*)^T\beta^* = \|\beta^*\|_1$, which is equivalent to

$$|\mathbf{x}_i^T\theta^*| \le 1, \; i = 1, 2, \ldots, p, \quad \text{and} \quad (\theta^*)^T X\beta^* = \|\beta^*\|_1. \tag{72}$$

From (72), it is easy to conclude:

$$\mathbf{x}_i^T\theta^* \in \begin{cases} \{\operatorname{sign}(\beta_i^*)\} & \text{if } \beta_i^* \neq 0, \\ [-1, 1] & \text{if } \beta_i^* = 0. \end{cases}$$
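These optimality conditions are easy to check numerically. The sketch below is our own illustration, assuming scikit-learn is available: it solves a small Lasso instance and verifies the relation $y = X\beta^* + \lambda\theta^*$ together with dual feasibility. Note that scikit-learn's `Lasso` minimizes $\frac{1}{2N}\|y - X\beta\|_2^2 + \alpha\|\beta\|_1$, so the $\lambda$ in (63) corresponds to $\alpha N$.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 50, 200
X, y = rng.standard_normal((N, p)), rng.standard_normal(N)
lam = 0.3 * np.max(np.abs(X.T @ y))          # some lambda below lambda_max

# alpha = lam / N matches the objective in (63) up to the 1/N factor.
beta = Lasso(alpha=lam / N, fit_intercept=False,
             max_iter=100000, tol=1e-12).fit(X, y).coef_

theta = (y - X @ beta) / lam                 # dual optimum, from y = X beta* + lam theta*
corr = X.T @ theta
print(np.max(np.abs(corr)))                  # dual feasibility: <= 1 up to solver tolerance
nz = beta != 0                               # on the active set: x_i^T theta* = sign(beta*_i)
print(np.max(np.abs(corr[nz] - np.sign(beta[nz]))))  # ~ 0 up to solver tolerance
```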
Appendix B. The Dual Problem of Group Lasso

In this appendix, we present the detailed derivation of the dual problem of group Lasso.

B.1 Dual Formulation

Assuming the data matrix of the $g$-th group is $X_g \in \mathbb{R}^{N \times n_g}$ and $p = \sum_{g=1}^G n_g$, the group Lasso problem is given by:

$$\inf_{\beta \in \mathbb{R}^p} \; \frac{1}{2}\Big\|y - \sum_{g=1}^G X_g\beta_g\Big\|_2^2 + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2. \tag{73}$$

Let $z = y - \sum_{g=1}^G X_g\beta_g$; problem (73) then becomes:

$$\inf_{\beta, z} \; \frac{1}{2}\|z\|_2^2 + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2, \quad \text{subject to } z = y - \sum_{g=1}^G X_g\beta_g. \tag{74}$$

By introducing the dual variables $\eta \in \mathbb{R}^N$, the Lagrangian of problem (74) is:

$$L(\beta, z, \eta) = \frac{1}{2}\|z\|_2^2 + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2 + \eta^T\Big(y - \sum_{g=1}^G X_g\beta_g - z\Big),$$

and the dual function $g(\eta)$ is:

$$g(\eta) = \inf_{\beta, z} L(\beta, z, \eta) = \eta^T y + \inf_\beta\Big(-\eta^T\sum_{g=1}^G X_g\beta_g + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2\Big) + \inf_z\Big(\frac{1}{2}\|z\|_2^2 - \eta^T z\Big).$$

In order to get $g(\eta)$, let us solve the following two optimization problems:

$$\inf_\beta \; -\eta^T\sum_{g=1}^G X_g\beta_g + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2, \tag{75}$$

and

$$\inf_z \; \frac{1}{2}\|z\|_2^2 - \eta^T z. \tag{76}$$

Let us first consider problem (75). Denote its objective function by

$$\hat{f}(\beta) = -\eta^T\sum_{g=1}^G X_g\beta_g + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2,$$

and let

$$\hat{f}_g(\beta_g) = -\eta^T X_g\beta_g + \lambda\sqrt{n_g}\,\|\beta_g\|_2, \quad g = 1, 2, \ldots, G.$$

Then we can split problem (75) into a set of subproblems. Clearly, $\hat{f}_g(\beta_g)$ is convex but not smooth because it has a singular point at 0. Consider the subdifferential of $\hat{f}_g$:

$$\partial\hat{f}_g(\beta_g) = -X_g^T\eta + \lambda\sqrt{n_g}\,\mathbf{v}_g, \quad g = 1, 2, \ldots, G,$$

where $\mathbf{v}_g$ is a subgradient of $\|\beta_g\|_2$:

$$\mathbf{v}_g \in \begin{cases} \left\{\frac{\beta_g}{\|\beta_g\|_2}\right\} & \text{if } \beta_g \neq 0, \\ \{\mathbf{u} : \|\mathbf{u}\|_2 \le 1\} & \text{if } \beta_g = 0. \end{cases} \tag{77}$$

Let $\beta_g^*$ be the optimal solution of $\hat{f}_g$; then there exists $\mathbf{v}_g^* \in \partial\|\beta_g^*\|_2$ such that

$$-X_g^T\eta + \lambda\sqrt{n_g}\,\mathbf{v}_g^* = 0.$$

If $\beta_g^* = 0$, clearly $\hat{f}_g(\beta_g^*) = 0$. Otherwise, since $\lambda\sqrt{n_g}\,\mathbf{v}_g^* = X_g^T\eta$ and $\mathbf{v}_g^* = \frac{\beta_g^*}{\|\beta_g^*\|_2}$, we have

$$\hat{f}_g(\beta_g^*) = -\lambda\sqrt{n_g}\,\frac{(\beta_g^*)^T\beta_g^*}{\|\beta_g^*\|_2} + \lambda\sqrt{n_g}\,\|\beta_g^*\|_2 = 0.$$

Altogether, we conclude that

$$\inf_{\beta_g}\hat{f}_g(\beta_g) = 0, \quad g = 1, 2, \ldots, G, \qquad \text{and thus} \qquad \inf_\beta\hat{f}(\beta) = \inf_\beta\sum_{g=1}^G\hat{f}_g(\beta_g) = \sum_{g=1}^G\inf_{\beta_g}\hat{f}_g(\beta_g) = 0.$$

The second equality is due to the fact that the $\beta_g$'s are independent. Note that, from (77), it is easy to see that $\|\mathbf{v}_g\|_2 \le 1$. Since $\lambda\sqrt{n_g}\,\mathbf{v}_g^* = X_g^T\eta$, we get a constraint on $\eta$; that is, $\eta$ should satisfy:

$$\|X_g^T\eta\|_2 \le \lambda\sqrt{n_g}, \quad g = 1, 2, \ldots, G.$$

Next, let us consider problem (76). Since problem (76) is exactly the same as problem (66), we conclude:

$$z^* = \operatorname{argmin}_z \frac{1}{2}\|z\|_2^2 - \eta^T z = \eta, \qquad \inf_z \frac{1}{2}\|z\|_2^2 - \eta^T z = -\frac{1}{2}\|\eta\|_2^2.$$

Therefore, the dual function is $g(\eta) = \eta^T y - \frac{1}{2}\|\eta\|_2^2$. Combining everything above, we get the dual formulation of group Lasso:

$$\sup_\eta \; g(\eta) = \eta^T y - \frac{1}{2}\|\eta\|_2^2, \quad \text{subject to } \|X_g^T\eta\|_2 \le \lambda\sqrt{n_g}, \; g = 1, 2, \ldots, G,$$

which is equivalent to

$$\sup_\eta \; g(\eta) = \frac{1}{2}\|y\|_2^2 - \frac{1}{2}\|\eta - y\|_2^2, \quad \text{subject to } \|X_g^T\eta\|_2 \le \lambda\sqrt{n_g}, \; g = 1, 2, \ldots, G. \tag{78}$$

By a simple rescaling of the dual variables, i.e., letting $\theta = \frac{\eta}{\lambda}$, problem (78) transforms to:

$$\sup_\theta \; g(\theta) = \frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2, \quad \text{subject to } \|X_g^T\theta\|_2 \le \sqrt{n_g}, \; g = 1, 2, \ldots, G.$$

B.2 The KKT Conditions

Clearly, problem (74) is convex and its constraints are all affine. By Slater's condition, as long as problem (74) is feasible, strong duality holds. Denote by $\beta^*$, $z^*$ and $\theta^*$ the optimal primal and dual variables. The Lagrangian is

$$L(\beta, z, \theta) = \frac{1}{2}\|z\|_2^2 + \lambda\sum_{g=1}^G \sqrt{n_g}\,\|\beta_g\|_2 + \lambda\theta^T\Big(y - \sum_{g=1}^G X_g\beta_g - z\Big).$$

From the KKT conditions, we have

$$0 \in \partial_{\beta_g} L(\beta^*, z^*, \theta^*) = -\lambda X_g^T\theta^* + \lambda\sqrt{n_g}\,\mathbf{v}_g, \quad \mathbf{v}_g \in \partial\|\beta_g^*\|_2, \; g = 1, 2, \ldots, G, \tag{79}$$
$$\nabla_z L(\beta^*, z^*, \theta^*) = z^* - \lambda\theta^* = 0, \tag{80}$$
$$\nabla_\theta L(\beta^*, z^*, \theta^*) = \lambda\Big(y - \sum_{g=1}^G X_g\beta_g^* - z^*\Big) = 0. \tag{81}$$

From (80) and (81), we have:

$$y = \sum_{g=1}^G X_g\beta_g^* + \lambda\theta^*.$$

From (79), we know there exists $\mathbf{v}_g^* \in \partial\|\beta_g^*\|_2$ such that $X_g^T\theta^* = \sqrt{n_g}\,\mathbf{v}_g^*$, where

$$\mathbf{v}_g^* \in \begin{cases} \left\{\frac{\beta_g^*}{\|\beta_g^*\|_2}\right\} & \text{if } \beta_g^* \neq 0, \\ \{\mathbf{u} : \|\mathbf{u}\|_2 \le 1\} & \text{if } \beta_g^* = 0. \end{cases}$$

Then the following holds for $g = 1, 2, \ldots, G$:

$$X_g^T\theta^* \in \begin{cases} \left\{\sqrt{n_g}\,\frac{\beta_g^*}{\|\beta_g^*\|_2}\right\} & \text{if } \beta_g^* \neq 0, \\ \{\sqrt{n_g}\,\mathbf{u} : \|\mathbf{u}\|_2 \le 1\} & \text{if } \beta_g^* = 0. \end{cases}$$

Clearly, if $\|X_g^T\theta^*\|_2 < \sqrt{n_g}$, we can conclude that $\beta_g^* = 0$.
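As in Appendix A, these conditions can be verified numerically. The sketch below uses a minimal proximal-gradient (ISTA) solver of our own for problem (73), not the SLEP solver used in the experiments, and checks dual feasibility together with the screening implication above.

```python
import numpy as np

def group_lasso_ista(X, y, groups, lam, n_iter=20000):
    """Minimal proximal-gradient solver for problem (73); each prox step
    is block soft-thresholding, which follows from the subdifferential (77)."""
    beta = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        b = beta - X.T @ (X @ beta - y) / L  # gradient step
        for g in groups:                     # prox of lam * sqrt(n_g) * ||.||_2
            t = lam * np.sqrt(len(g)) / L
            nrm = np.linalg.norm(b[g])
            b[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * b[g]
        beta = b
    return beta

rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 40)), rng.standard_normal(50)
groups = np.array_split(np.arange(40), 10)
lam = 2.0
beta = group_lasso_ista(X, y, groups, lam)
theta = (y - X @ beta) / lam                 # dual optimum: y = sum_g X_g beta*_g + lam theta*
for g in groups:
    slack = np.sqrt(len(g)) - np.linalg.norm(X[:, g].T @ theta)
    # Dual feasibility: slack >= 0 (up to solver tolerance); a strictly
    # positive slack implies beta*_g = 0.
    print(f"slack = {slack:+.4f}, ||beta_g||_2 = {np.linalg.norm(beta[g]):.4f}")
```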
References

U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Cell Biology, 96:6745-6750, 1999.

S. Armstrong, J. Staunton, L. Silverman, R. Pieters, M. den Boer, M. Minden, S. Sallan, E. Lander, T. Golub, and S. Korsmeyer. MLL translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nature Genetics, 30:41-47, 2002.

H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.

S. R. Becker, E. Candès, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Technical report, Stanford University, 2010.

D. P. Bertsekas. Convex Analysis and Optimization. Athena Scientific, 2003.

A. Bhattacharjee, W. Richards, J. Staunton, C. Li, S. Monti, P. Vasa, C. Ladd, J. Beheshti, R. Bueno, M. Gillette, M. Loda, G. Weber, E. Mark, E. Lander, W. Wong, B. Johnson, T. Golub, D. Sugarbaker, and M. Meyerson. Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of Sciences, 98:13790-13795, 2001.

H. Bondell and B. Reich. Simultaneous regression shrinkage, variable selection and clustering of predictors with OSCAR. Biometrics, 64:115-123, 2008.

S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

A. Bruckstein, D. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51:34-81, 2009.

D. Cai, X. He, and J. Han. Efficient kernel discriminant analysis via spectral regression. In ICDM, 2007.

D. Cai, X. He, J. Han, and T. Huang. Graph regularized non-negative matrix factorization for data representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:1548-1560, 2011.

E. Candès. Compressive sampling. In Proceedings of the International Congress of Mathematicians, 2006.

S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43:129-159, 2001.

D. L. Donoho and Y. Tsaig. Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Transactions on Information Theory, 54:4789-4812, 2008.

B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407-499, 2004.

L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific Journal of Optimization, 8:667-698, 2012.

J. Fan and J. Lv. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society Series B, 70:849-911, 2008.

J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1:302-332, 2007.

J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33:1-22, 2010.

S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1:606-617, 2007.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.

J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11:19-60, 2010.

S. Nene, S. Nayar, and H. Murase. Columbia Object Image Library (COIL-100). Technical Report CUCS-006-96, Columbia University, 1996.

Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

M. Y. Park and T. Hastie. L1-regularization path algorithm for generalized linear models. Journal of the Royal Statistical Society Series B, 69:659-677, 2007.

E. Petricoin, D. Ornstein, C. Paweletz, A. Ardekani, P. Hackett, B. Hitt, A. Velassco, C. Trucco, L. Wiegand, K. Wood, C. Simone, P. Levine, W. Linehan, M. Emmert-Buck, S. Steinberg, E. Kohn, and L. Liotta. Serum proteomic patterns for detection of prostate cancer. Journal of the National Cancer Institute, 94:1576-1578, 2002.
A. Ruszczyński. Nonlinear Optimization. Princeton University Press, 2006.

S. Shevade and S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19:2246-2253, 2003.

T. Sim, B. Baker, and M. Bsat. The CMU pose, illumination, and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25:1615-1618, 2003.

R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58:267-288, 1996.

R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society Series B, 74:245-266, 2012.

M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R. Spang, H. Zuzan, J. Olson, J. Marks, and J. Nevins. Predicting the clinical status of human breast cancer by using gene expression profiles. Proceedings of the National Academy of Sciences, 98:11462-11467, 2001.

J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 2010.

Z. J. Xiang, H. Xu, and P. J. Ramadge. Learning sparse representations of high dimensional data on large scale dictionaries. In NIPS, 2011.

M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68:49-67, 2006.

P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541-2563, 2006.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B, 67:301-320, 2005.