# Primal-Dual Rates and Certificates

Celestine Dünner (CDU@ZURICH.IBM.COM), IBM Research, Zürich, Switzerland; Simone Forte (FORTESIMONE90@GMAIL.COM), ETH Zürich, Switzerland; Martin Takáč (TAKAC.MT@GMAIL.COM), Lehigh University, USA; Martin Jaggi (JAGGIM@INF.ETHZ.CH), ETH Zürich, Switzerland

Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).

Abstract. We propose an algorithm-independent framework to equip existing optimization methods with primal-dual certificates. Such certificates and corresponding rate of convergence guarantees are important for practitioners to diagnose progress, in particular in machine learning applications. We obtain new primal-dual convergence rates, e.g., for the Lasso as well as many L1, Elastic Net, group Lasso and TV-regularized problems. The theory applies to any norm-regularized generalized linear model. Our approach provides efficiently computable duality gaps which are globally defined, without modifying the original problems in the region of interest.

1. Introduction

The massive growth of available data has moved data analysis and machine learning to center stage in many industrial as well as scientific fields, ranging from web and sensor data to astronomy, health science, and countless other applications. With the increasing size of datasets, machine learning methods are limited by the scalability of the underlying optimization algorithms used to train these models, which has spurred significant research interest in recent years. However, practitioners face a significant problem arising from the larger model complexity in large-scale machine learning, and in particular deep-learning methods: it is increasingly hard to diagnose whether the optimization algorithm used for training works well or not.
With the optimization algorithms also becoming more complex (e.g., in a distributed setting), it can often be very hard to pin down whether bad performance of a predictive model comes from slow optimization or from poor modeling choices. In this light, easily verifiable guarantees for the quality of an optimization algorithm are very useful; note that the optimum solution of the problem is unknown in most cases. For convex optimization problems, a primal-dual gap can serve as such a certificate. If available, the gap also serves as a useful stopping criterion for the optimizer.

So far, the majority of popular optimization algorithms for learning applications comes without a notion of a primal-dual gap. In this paper, we aim to change this for a relevant class of machine learning problems. We propose a primal-dual framework which is algorithm-independent, and allows equipping existing algorithms with additional primal-dual certificates as an add-on. Our approach is motivated by the recent analysis of SDCA (Shalev-Shwartz & Zhang, 2013). We extend their setting to the significantly larger class of convex optimization problems of the form

min_{α ∈ R^n} f(Aα) + g(α)

for a given matrix A ∈ R^{d×n}, f being smooth, and g being a general convex function. This problem class includes the most prominent regression and classification methods as well as generalized linear models. We formalize the setting in more detail in Section 3, and highlight the associated dual problem, which has the same structure. An overview of some popular examples that can be formulated in this setting, either as primal or dual problems, is given in Table 1.

Contributions. The main contributions of this work can be summarized as follows:

- Our new primal-dual framework is algorithm-independent; that is, it allows users to equip existing algorithms with primal-dual certificates and convergence rates.
- We introduce a new Lipschitzing trick allowing duality gaps (and thus accuracy certificates) which are globally defined, for a large class of convex problems which did not have this property so far. Our approach does not modify the original problems in the region of interest. In contrast to existing methods which add a small strongly convex (L2) term, as, e.g., in (Shalev-Shwartz & Zhang, 2014; Zhang & Lin, 2015), our approach leaves both the algorithms and the optima unaffected.

- Compared with the well-known duality setting of SDCA (Shalev-Shwartz & Zhang, 2013; 2014), which is restricted to strongly convex regularizers and finite-sum optimization problems, our framework encompasses a significantly larger class of problems.

- We obtain new primal-dual convergence rates, e.g., for the Lasso as well as many L1, Elastic Net, group Lasso and TV-regularized problems. The theory applies to any norm-regularized generalized linear model.

- Existing primal-dual guarantees for the class of ERM problems with Lipschitz loss from (Shalev-Shwartz & Zhang, 2013) (e.g., SVM) are valid only for an averaged iterate. We show that the same rate of convergence can be achieved, e.g., for accelerated SDCA, but without the necessity of computing an average iterate.

- Our primal-dual theory captures a more precise notion of data-dependency compared with existing results (which relied on per-coordinate information only). To be precise, our convergence rate for general algorithms depends on the spectral norm of the data; see also (Takáč et al., 2013; 2015).

2. Related Work

Linearized ADMM solvers. For the problem structure of interest here, one of the most natural algorithms is the splitting method known as the Chambolle-Pock algorithm (also known as Primal-Dual Hybrid Gradient, the Arrow-Hurwicz method, or linearized ADMM) (Chambolle & Pock, 2010). While this algorithm can give rise to a duality gap, it is significantly less general compared to our framework.
In each iteration, it requires a complete solution of the proximal operators of both f and g, which can be computationally expensive, and its convergence rate is sensitive to the used step-size (Goldstein et al., 2015). Our framework is not algorithm-specific, and holds for arbitrary iterate sequences. More recently, the SPDC method (Zhang & Lin, 2015) was proposed as a coordinate-wise variant of (Chambolle & Pock, 2010), but only for strongly convex g.

Stochastic coordinate solvers. Coordinate descent/ascent methods have become state-of-the-art for many machine learning problems (Hsieh et al., 2008; Friedman et al., 2010). In recent years, theoretical convergence rate guarantees have been developed for the primal-only setting, e.g., by (Nesterov, 2012; Richtárik & Takáč, 2014), as well as, more recently, also for primal-dual guarantees; see, e.g., (Lacoste-Julien et al., 2013; Shalev-Shwartz & Zhang, 2013; 2014). The influential Stochastic¹ Dual Coordinate Ascent (SDCA) framework (Shalev-Shwartz & Zhang, 2013) was motivated by the L2-regularized SVM, where coordinate descent is very efficient on the dual SVM formulation, with every iteration only requiring access to a single datapoint (i.e., a column of the matrix A in our setup). In contrast to primal stochastic gradient descent (SGD) methods, the SDCA algorithm family is often preferred as it is free of learning-rate parameters and has a fast linear (geometric) convergence rate. SDCA and recent extensions (Takáč et al., 2013; Shalev-Shwartz & Zhang, 2014; Qu et al., 2014; Shalev-Shwartz, 2015; Zhang & Lin, 2015) require g to be strongly convex. Under the weaker assumption of weak strong convexity (Necoara, 2015), a linear rate for the primal-dual convergence of SDCA was recently shown by (Ma et al., 2015b).

In the Lasso literature, a similar trend in terms of solvers has been observed recently, but with the roles of the primal and dual problems reversed.
For those problems, coordinate descent algorithms on the primal formulation have become the state-of-the-art, as in GLMNET (Friedman et al., 2010) and extensions (Shalev-Shwartz & Tewari, 2011; Yuan et al., 2012; 2010). Despite the prototypes of both problem types, the SVM for the L2-regularized case and the Lasso for the L1-regularized case, being closely related (Jaggi, 2014), we are not aware of existing primal-dual guarantees for coordinate descent on unmodified L1-regularized problems.

Comparison to smoothing techniques. Existing techniques for bringing L1-regularized and related problems into a primal-dual setting compatible with SDCA rely on the classical Nesterov smoothing approach: by adding a small amount of L2 to the part g, the objective becomes strongly convex; see, e.g., (Nesterov, 2005; 2012; Tran-Dinh & Cevher, 2014; Richtárik & Takáč, 2014; Shalev-Shwartz & Zhang, 2014). However, the appropriate strength of smoothing is difficult to tune. It depends on the accuracy level, and it influences the chosen algorithm, as it changes the iterates, the resulting convergence rate, as well as the tightness of the resulting duality gaps. The line of work of Tran-Dinh & Cevher (2014; 2015) provides duality gaps for smoothed problems and rates for proximally tractable objectives (Tran-Dinh & Cevher, 2015) and

¹ Here, stochastic refers to the randomized selection of the active coordinate.

Table 1. Primal-dual convergence rates of general algorithms applied to (A), for some machine learning and signal processing problems which are examples of our optimization problem formulations (A) and (B). Note that several can be mapped to either (A) or (B). λ > 0 is a regularization parameter specified by the user. We discuss the most prominent examples in more detail in Sections 5.2 and 6.
| Regularizer / Problem | | f / f* | g / g* | Convergence |
|---|---|---|---|---|
| L1 | (A) | f = ℓ(Aα) | g = λ||α||_1 | Thm 6, Cor 10 |
| Elastic Net | (A) | f = ℓ(Aα) | g = λ(η/2 ||α||_2² + (1−η)||α||_1) | Thm 2, 8, Cor 11 |
| | (B) | f* = λ(η/2 ||w||_2² + (1−η)||w||_1) | g* = ℓ(−Aᵀw) | Thm 2, 8, Cor 11 |
| L2 | (A) | f = ℓ(Aα) | g = λ/2 ||α||_2² | Thm 2, 8 |
| | (B) | f* = λ/2 ||w||_2² | g* = ℓ(−Aᵀw) | Thm 2, 8 |
| Fused Lasso | (A) | f = ℓ(Aα) | g = λ||Mα||_1 | Thm 6 |
| Group Lasso | (A) | f = ℓ(Aα) | g = λ Σ_k ||α_{G_k}||_2, {G_k} a partition of {1..n} | Thm 6 |
| SVM (hinge loss) | (A) | f = λ/2 ||Aα||_2² | g = −Σ_i y_i α_i s.t. y_i α_i ∈ [0, 1] | Thm 4, 9 |

Loss ℓ: logistic regression ℓ_LR(z) := (1/m) Σ_{j=1}^m log(1 + exp(−y_j z_j)), and least squares ℓ_LS(z) := 1/2 ||z − b||_2², for z = Aα or z = −Aᵀw.

also objectives with an efficient Fenchel-type operator (Yurtsever et al., 2015). In contrast, our approach preserves all solutions of the original L1-optimization: it leaves the iterate sequences of existing algorithms unchanged, which is desirable in practice and allows the reuse of existing solvers. We do not assume proximal or Fenchel tractability.

Distributed algorithms. For L1-problems exceeding the memory capacity of a single computer, a communication-efficient distributed scheme leveraging this Lipschitzing trick is presented in (Smith et al., 2015; Forte, 2015).

3. Setup and Primal-Dual Structure

In this paper, we consider optimization problems of the following primal-dual structure. As we will see, the relationship between the primal and dual objectives has many benefits, including computation of the duality gap, which gives us certificates for the approximation quality. We consider the following pair of optimization problems, which are dual² to each other:

min_{α ∈ R^n} [ D(α) := f(Aα) + g(α) ],    (A)
min_{w ∈ R^d} [ P(w) := f*(w) + g*(−Aᵀw) ].    (B)

The two problems are associated with a given data matrix A ∈ R^{d×n}, and the functions f : R^d → R and g : R^n → R are allowed to be arbitrary closed convex functions. Here α ∈ R^n and w ∈ R^d are the respective variable vectors.
The relation of (A) and (B) is called Fenchel-Rockafellar duality, where the functions f*, g* in formulation (B) are defined as the convex conjugates³ of their corresponding counterparts f, g in (A). The two main powerful features of this general duality structure are, first, that it includes many more machine learning methods than more traditional duality notions, and second, that the two problems are fully symmetric when the respective roles of f and g are swapped. In typical machine learning problems, the two parts play the roles of a data-fit (or loss) term and a regularization term. As we will see later, these two roles can be swapped, depending on the application.

² For a self-contained derivation see Appendix C.
³ The conjugate is defined as h*(v) := sup_{u ∈ R^d} vᵀu − h(u).

Optimality Conditions. The first-order optimality conditions for our pair of vectors w ∈ R^d, α ∈ R^n in problems (A) and (B) are given as

w ∈ ∂f(Aα),      (1a)
Aα ∈ ∂f*(w),     (1b)
−Aᵀw ∈ ∂g(α),    (2a)
α ∈ ∂g*(−Aᵀw),   (2b)

see, e.g., (Bauschke & Combettes, 2011, Proposition 19.18). The stated optimality conditions are equivalent to (α, w) being a saddle point of the Lagrangian, which is given as L(α, w) = −f*(w) + ⟨Aα, w⟩ + g(α) if α ∈ dom(g) and w ∈ dom(f*).

Duality Gap. From the definition of the dual problems in terms of the convex conjugates, we always have P(w) ≥ P(w*) = −D(α*) ≥ −D(α), giving rise to the definition of the general duality gap G(w, α) := P(w) − (−D(α)). For differentiable f, the duality gap can be used more conveniently: given α ∈ R^n such that Aα ∈ dom(f) in the context of (A), a corresponding variable vector w ∈ R^d for problem (B) is given by the first-order optimality condition (1a) as

w = w(α) := ∇f(Aα).    (3)

Under strong duality, we have P(w*) = −D(α*) and w(α*) = w*, where α* is an optimal solution of (A).
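For a concrete instance with closed-form conjugates, consider f(z) = ½||z − b||₂² and g(α) = (λ/2)||α||₂². The following minimal sketch (synthetic data and an illustrative λ, not code from the paper) evaluates the certificate G(α) = P(w(α)) − (−D(α)) via the mapping (3), and checks that it vanishes at the optimum:

```python
import numpy as np

# Sketch of the duality gap certificate for the pair
#   f(z) = 0.5||z - b||^2,  g(alpha) = (lam/2)||alpha||^2,
# whose conjugates are f*(w) = 0.5||w||^2 + b^T w and g*(u) = ||u||^2/(2 lam).
rng = np.random.default_rng(0)
d, n = 40, 25
A = rng.standard_normal((d, n))   # illustrative synthetic data
b = rng.standard_normal(d)
lam = 0.5

D_obj = lambda a: 0.5 * np.sum((A @ a - b) ** 2) + 0.5 * lam * np.sum(a ** 2)
P_obj = lambda w: 0.5 * np.sum(w ** 2) + b @ w + np.sum((A.T @ w) ** 2) / (2 * lam)

def gap(a):
    w = A @ a - b                  # w(alpha) = grad f(A alpha), Eq. (3)
    return P_obj(w) + D_obj(a)     # = P(w(alpha)) - (-D(alpha)) >= 0

alpha_star = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(gap(np.zeros(n)))            # strictly positive away from the optimum
print(gap(alpha_star))             # ~0 at the optimum (strong duality)
```

By weak duality the printed gap is nonnegative for any α, and it upper-bounds the unknown suboptimality, which is exactly what makes it usable as a stopping criterion.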
This implies that the suboptimality P(w(α)) − P(w*) is always bounded above by the simpler duality gap function

G(α) := P(w(α)) − (−D(α)) ≥ P(w(α)) − P(w*),    (4)

which hence acts as a certificate of the approximation quality of the variable vector α.

4. Primal-Dual Guarantees for Any Algorithm Solving (A)

In this section we state an important lemma, which will later allow us to transform a suboptimality guarantee of any algorithm into a duality gap guarantee, for optimization problems of the form specified in the previous section.

Lemma 1. Consider an optimization problem of the form (A). Let f be 1/β-smooth w.r.t. a norm ||·||_f and let g be µ-strongly convex with convexity parameter µ ≥ 0 w.r.t. a norm ||·||_g. The general convex case µ = 0 is explicitly allowed, but only if g has bounded support. Then, for any α ∈ dom(D) and any s ∈ [0, 1], it holds that

D(α) − D(α*) ≥ s G(α) + (µ(1−s)s/2) ||u − α||_g² − (s²/(2β)) ||A(u − α)||_f²,    (5)

where G(α) is the gap function defined in (4) and

u ∈ ∂g*(−Aᵀw(α)).    (6)

We note that the improvement bound here bears similarity to the proof of (Bach, 2015, Prop. 4.2) for the case of an extended Frank-Wolfe algorithm. In contrast, our result here is algorithm-independent, and leads to tighter results due to the more careful choice of u, as we'll see in the following.

4.1. Linear Convergence Rates

In this section we assume that we are using an arbitrary optimization algorithm applied to problem (A). It is assumed that the algorithm produces a sequence of (possibly random) iterates {α(t)}_{t=0}^∞ such that there exist C ∈ (0, 1] and D ≥ 0 with

E[D(α(t)) − D(α*)] ≤ (1 − C)^t D.    (7)

In the next two theorems, we define σ := max_{α≠0} (||Aα||_f / ||α||_g)², i.e., the squared spectral norm of the matrix A in the Euclidean-norm case.

4.1.1. Case I: Strongly Convex g

Let us assume g is µ-strongly convex (µ > 0); equivalently, its conjugate g* has Lipschitz continuous gradient with constant 1/µ.
The following theorem provides a linear convergence guarantee for any algorithm with a given linear convergence rate for the suboptimality D(α) − D(α*).

Theorem 2. Assume the function f is 1/β-smooth w.r.t. a norm ||·||_f and g is µ-strongly convex w.r.t. a norm ||·||_g. Suppose we are using a linearly convergent algorithm as specified in (7). Then, for any

t ≥ (1/C) log( (σ + µβ) D / (µβ ϵ) ),    (8)

it holds that E[G(α(t))] ≤ ϵ.

From (7) we obtain that after (1/C) log(D/ϵ) iterations we would have a point α(t) such that E[D(α(t)) − D(α*)] ≤ ϵ. Hence, comparing with (8), only a few more iterations are needed to get the guarantee for the duality gap. The rate (7) is achieved by most first-order algorithms, including proximal gradient descent (Nesterov, 2013) and SDCA (Richtárik & Takáč, 2014) with C scaling with µ, or accelerated SDCA (Lin et al., 2014) with C scaling with √µ.

4.1.2. Case II: General Convex g (of Bounded Support)

In this section we will assume that g* is Lipschitz (in contrast to smooth as in Theorem 2) and show that the linear convergence rate is preserved.

Theorem 3. Assume that the function f is 1/β-smooth w.r.t. a norm ||·||, g* is L-Lipschitz continuous w.r.t. the dual norm ||·||*, and we are using a linearly convergent algorithm (7). Then, for any

t ≥ (1/C) log( (2D/ϵ) · max{1, 2σL²/(βϵ)} ),    (9)

it holds that E[G(α(t))] ≤ ϵ.

In (Wang & Lin, 2014), it was proven that feasible descent methods, when applied to the dual of an SVM, do improve the objective geometrically as in (7). Later, (Ma et al., 2015b) extended this to stochastic coordinate feasible descent algorithms (including SDCA). Using our new Theorem 3, we can therefore extend their results to linear convergence of the duality gap for the SVM application.

4.2. Sub-Linear Convergence Rates

In this case we will focus only on general L-Lipschitz continuous functions g* (if g is strongly convex, then many existing algorithms are available and converge with a linear rate).
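The observation above, that the linear rate (7) yields a point with suboptimality ϵ after (1/C) log(D/ϵ) iterations, can be sketched numerically; the constants C, D, ϵ below are illustrative choices, not values from the paper:

```python
import math

# Turning the linear rate (7), E[D(a_t) - D(a*)] <= (1 - C)^t * D0,
# into an iteration count for target accuracy eps: since (1-C)^t <= exp(-C t),
# t >= (1/C) * log(D0 / eps) suffices.
def iterations_for_suboptimality(C, D0, eps):
    return math.ceil(math.log(D0 / eps) / C)

C, D0, eps = 0.01, 10.0, 1e-6      # illustrative constants
t = iterations_for_suboptimality(C, D0, eps)
assert (1 - C) ** t * D0 <= eps    # the bound indeed holds at iteration t
print(t)                           # -> 1612
```

As the text notes, moving from suboptimality to a duality-gap guarantee only changes the argument of the logarithm, so it costs just a few additional iterations.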
We will assume that we are applying some (possibly randomized) algorithm to optimization problem (A) which produces a sequence of (possibly random) iterates {α(t)}_{t=0}^∞ such that

E[D(α(t)) − D(α*)] ≤ C / D(t),    (10)

where D(t) is a function which usually has linear or quadratic growth (i.e., D(t) ∈ O(t) or D(t) ∈ O(t²)). The following theorem allows us to equip existing algorithms having sub-linear convergence in suboptimality, as specified in (10), with duality gap convergence guarantees.

Theorem 4. Assume the function f is 1/β-smooth w.r.t. the norm ||·||, g* is L-Lipschitz continuous w.r.t. the dual norm ||·||*, and we are using a sub-linearly convergent algorithm as quantified by (10). Then, for any t ≥ 0 such that

D(t) ≥ max{ 2Cβ/(σL²), 2CσL²/(βϵ²) },    (11)

it holds that E[G(α(t))] ≤ ϵ.

Let us comment on Theorem 4. If D(t) ∈ O(t), then this shows a rate of O(ϵ⁻²). We note two important facts: 1. The guarantee holds for the duality gap of the iterate α(t) itself, and not for some averaged solution. 2. For the SVM case, this rate is consistent with the result of (Hush et al., 2006). Our result is much more general, as it holds for any L-Lipschitz continuous convex function g* and any β-strongly convex f*.

Let us make one more important remark. In (Shalev-Shwartz & Zhang, 2013) the authors showed that SDCA (when applied with L-Lipschitz continuous g*) has D(t) ∈ O(t), and they also showed that an averaged solution (over the few last iterates) needs only O(ϵ⁻¹) iterations to reach duality gap ϵ. However, as a direct consequence of our Theorem 4, we can show, e.g., that FISTA (Beck & Teboulle, 2009) (a.k.a. the accelerated gradient descent algorithm) or APPROX (Fercoq & Richtárik, 2015)⁴ (a.k.a. the accelerated coordinate descent algorithm) needs O(ϵ⁻¹) iterations to produce an iterate α such that G(α) ≤ ϵ. Indeed, e.g., for APPROX, the function is D(t) = ((t−1)τ + 2n)², where τ is the size of a mini-batch and C is a constant which depends on α(0), α*, and τ.
Hence, to obtain an iterate α(t) such that E[G(α(t))] ≤ ϵ, it is sufficient to choose t large enough that condition (11) is satisfied.

5. Extending Duality to Non-Lipschitz Functions

5.1. Lipschitzing Trick

In this section we present a trick that allows us to generalize the results of the previous section from Lipschitz functions g* to non-Lipschitz functions. The approach we propose, which we call the Lipschitzing trick, makes a convex function Lipschitz on the entire space by modifying its conjugate dual to have bounded support. Formally, the modification is as follows: let B be some closed, bounded convex set. We modify g : R^n → R by restricting its support to the set B, i.e.,

ḡ(α) := { g(α) if α ∈ B;  +∞ otherwise }.    (12)

By definition, this function has bounded support, and hence, by Lemma 5, its conjugate function ḡ* is Lipschitz continuous.

Motivation. We will apply this trick to the part g of optimization problems of the form (A) (so that g* becomes Lipschitz). We want this modification to have no impact on the outcome of any optimization algorithm running on (A). Instead, this trick should only affect the convergence theory, in that it allows us to present a strong primal-dual rate. In the following, we discuss how this can indeed be achieved by imposing only very weak assumptions on the original problem (A).

⁴ APPROX requires g to be separable.

Figure 1. Illustration of the Lipschitzing trick for the scalar function g(α) = |α|, with B = {x : |x| ≤ B}.

The modification is based on the following duality of Lipschitzness and bounded support, as given in Lemma 5 below, which is a generalization of (Rockafellar, 1997, Corollary 13.3.3). We need the following definition:

Definition 1 (B-Bounded Support). A function g : R^d → R has B-bounded support if its effective domain is bounded by B w.r.t. a norm ||·||, i.e.,

g(u) < +∞  ⟹  ||u|| ≤ B.    (13)

Lemma 5 (Duality between Lipschitzness and L-Bounded Support). Given a proper convex function g, it holds that g has L-bounded support w.r.t.
the norm ||·|| if and only if g* is L-Lipschitz w.r.t. the dual norm ||·||*.

The following Theorem 6 generalizes our previous convergence results, which were restricted to Lipschitz g*.

Theorem 6. For an arbitrary optimization algorithm running on problem (A), let α(t) denote its iterate sequence. Assume there is some closed convex set B containing all these iterates. Then the same optimization algorithm run on the modified problem, given by Lipschitzing of g using B, produces exactly the same iterate sequence. Furthermore, Theorems 3 and 4 give primal-dual convergence guarantees for this algorithm (for L such that B is L-bounded).

Corollary 7. Assume the objective of optimization problem (A) has bounded level sets. For α(t) being the iterate sequence of a monotone optimization algorithm on problem (A), denote δ_t := D(α(t)) and let B_t be the δ_t-level set of D. Write B_t > 0 for a value such that the set B_t is B_t-bounded w.r.t. the norm ||·||. Then, at any state t of the algorithm, the set B_t contains all future iterates, and Theorem 6 applies with L := B_t.

5.2. Norm-Regularized Problems

We now focus on some applications. First, we demonstrate how the Lipschitzing trick can be applied to find primal-dual convergence rates for problems regularized by an arbitrary norm. We discuss in particular the Lasso problem and show how the suboptimality bound can be evaluated in practice. In a second part, we discuss the Elastic Net regularizer and show how it fits into our framework. We consider a special structure of problem (A), namely

min_{α ∈ R^n} f(Aα) + λ||α||,    (14)

where f is some convex, non-negative, smooth loss function, regularized by an arbitrary norm ||·||. We choose B to be the ||·||-norm ball of radius B, such that ||α|| < B for every iterate. The size B of this ball can be controlled by the amount of regularization λ. Using the Lipschitzing trick with this B, the convergence guarantees of Theorem 6 apply to the general class of norm-regularized problems.
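The duality of Lemma 5 can be checked numerically in the scalar setting of Figure 1. The sketch below (with illustrative λ and B, not values from the paper) brute-forces the conjugate of the modified function ḡ from (12) over a grid, compares it with the closed form B·[|u| − λ]₊, and verifies that its slope never exceeds B, i.e., that ḡ* is B-Lipschitz:

```python
import numpy as np

# Lipschitzing trick (12) for g(alpha) = lam*|alpha| restricted to |alpha| <= B:
# the conjugate becomes gbar*(u) = B * max(|u| - lam, 0), which is B-Lipschitz
# (Lemma 5), whereas the unmodified g* is +infinity for |u| > lam.
lam, B = 0.5, 2.0                            # illustrative constants
alphas = np.linspace(-B, B, 100001)          # grid over the support set B

def gbar_conj(u):
    # conjugate of the modified function, by brute-force maximization
    return np.max(u * alphas - lam * np.abs(alphas))

us = np.linspace(-3, 3, 61)
vals = np.array([gbar_conj(u) for u in us])
closed_form = B * np.maximum(np.abs(us) - lam, 0.0)
print(np.max(np.abs(vals - closed_form)))    # grid max matches the closed form

# B-Lipschitz: slopes between successive grid points never exceed B
slopes = np.abs(np.diff(vals)) / (us[1] - us[0])
print(slopes.max())                          # <= B (attained for |u| > lam)
```

The same computation with B = ∞ would return +∞ for |u| > λ, which is exactly why the unmodified duality gap can be undefined away from the feasible region.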
Note that for any monotone algorithm (initialized at 0) applied to (14), we have ||α|| ≤ (1/λ) f(0) for every iterate. Furthermore, at any iterate α we can bound ||α⁺|| ≤ (1/λ)(f(Aα) + λ||α||) for every future iterate α⁺. Hence, B := (1/λ) f(0) is a safe choice; e.g., B := (1/(2λ))||b||_2² for the least squares loss and B := (m/λ) log(2) for the logistic regression loss.

Duality Gap. For any problem of the form (14) we can now determine the duality gap. We apply the Lipschitzing trick to g(α) := λ||α|| as in (12); the convex conjugate of ḡ is then

ḡ*(u) = { 0 if ||u||* ≤ λ;  max_{α : ||α|| ≤ B} (uᵀα − λ||α||) otherwise },    (15)

where ||·||* denotes the dual norm of ||·||. Hence, using the primal-dual mapping w(α) := ∇f(Aα), we can write the duality gap of the modified problem as

Ḡ(α) = ⟨w(α), Aα⟩ + λ||α|| + ḡ*(−Aᵀw(α)).    (16)

Note that the computation of the modified duality gap Ḡ(α) is not harder than the computation of the original gap G(α): it requires one pass over the data, assuming the choice of a suitable set B in (15). Furthermore, note that in contrast to the unmodified duality gap G(α), which is only defined on the set {α : ||Aᵀw(α)||* ≤ λ}, our new gap Ḡ(α) is defined on the entire space. As an alternative, (Mairal, 2010, Sec. 1.4.1 and App. D.2) defines a different duality gap by shrinking the dual variable w until it becomes feasible in ||Aᵀw||* ≤ λ.

5.2.1. L1-Regularized Problems

The results from the previous section can be ported to the important special case of L1-regularization:

min_{α ∈ R^n} f(Aα) + λ||α||_1.    (17)

We choose B to be the L1-norm ball of radius B and then apply the Lipschitzing trick with B to the regularization term in (17). This modification, as well as its impact on the dual, is illustrated in Figure 1. Hence, our theory (Theorem 6) gives primal-dual convergence guarantees for any algorithm applied to the Lasso problem (17). Furthermore, if the algorithm is monotone (initialized at α := 0), we know that B is (1/λ)f(0)-bounded.

Duality Gap.
The duality gap (16) for the modified Lasso problem can be computed at every iterate α as

Ḡ(α) = ⟨w, Aα⟩ + B [ ||Aᵀw||_∞ − λ ]₊ + λ||α||_1    (18)

for w = w(α). For the derivation, see Appendix I.1.

5.2.2. Group Lasso, Fused Lasso and TV-Norm

Our algorithm-independent primal-dual convergence guarantees and certificates presented so far are not restricted to Lp-regularized problems, but in fact directly apply to many more general structured regularizers, some of which are shown in Table 1. This includes the group Lasso (L1/L2) and other norms inducing structured sparsity (Bach et al., 2012), as well as other penalties such as, e.g., the fused Lasso g(α) := λ||Mα||_1. The total variation denoising problem is obtained for a suitable choice of the matrix M.

5.2.3. Elastic Net Regularized Problems

The second application we discuss is Elastic Net regularization,

min_{α ∈ R^n} ℓ(Aα) + λ( (η/2)||α||_2² + (1−η)||α||_1 ),    (19)

for a fixed trade-off parameter η ∈ (0, 1]. Our framework allows two different ways to solve problem (19): either mapping it to formulation (A) (for ℓ = f smooth) as in row 2 of Table 1, or to (B) (for general ℓ) as in row 3 of Table 1. In both scenarios, Theorem 2 gives us a fast linear convergence guarantee for the duality gap if ℓ is smooth. The other theorems apply accordingly for general ℓ when the problem is mapped to (B). Whether the choice of a dual or primal optimizer is more beneficial in practice depends on the case, and will be discussed in more detail for coordinate descent methods in Section 6.

Duality Gap. For the Elastic Net problem (19) mapped to (A), we can compute the duality gap (4) as

G(α) = ⟨w, Aα⟩ + (1/(2ηλ)) Σ_i ( [ |A_{:,i}ᵀ w| − (1−η)λ ]₊ )² + λ( (η/2)||α||_2² + (1−η)||α||_1 )

with w = w(α) = ∇ℓ(Aα); see Appendix I.2.

Remark 1. As η → 0 we approach the pure L1 case and this gap blows up, G(α) → ∞. Comparing this to (18), we see that the Lipschitzing trick yields certificates even in cases where the duality gap of the unmodified problem is infinite.
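A minimal sketch of evaluating the modified Lasso gap (18) for the least-squares loss, using the safe radius B = ||b||₂²/(2λ) from Section 5.2; the data and λ are illustrative synthetic choices:

```python
import numpy as np

# Modified Lasso duality gap (18):
#   Gbar(alpha) = <w, A alpha> + B*[||A^T w||_inf - lam]_+ + lam*||alpha||_1,
# with w = w(alpha) = A alpha - b for the least-squares loss.
rng = np.random.default_rng(1)
d, n = 30, 50
A = rng.standard_normal((d, n))   # illustrative synthetic data
b = rng.standard_normal(d)
lam = 0.5
B_safe = np.sum(b ** 2) / (2 * lam)            # safe radius (1/lam) f(0)

def lasso_gap(alpha):
    w = A @ alpha - b                           # w(alpha) = grad f(A alpha)
    over = max(np.max(np.abs(A.T @ w)) - lam, 0.0)   # [||A^T w||_inf - lam]_+
    return w @ (A @ alpha) + B_safe * over + lam * np.sum(np.abs(alpha))

# Finite and nonnegative even where the unmodified gap would be +infinity
# (i.e., where ||A^T w||_inf > lam, as at alpha = 0 for generic data):
print(lasso_gap(np.zeros(n)))
```

Note that a single pass over the data (one product with A and one with Aᵀ) suffices, matching the claim after (16) that the modified gap is no harder to compute than the original one.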
6. Coordinate Descent Algorithms

We now focus on a very important class of algorithms, that is, coordinate descent methods. In this section, we show how our theory implies much more general primal-dual convergence guarantees for coordinate descent algorithms.

Partially Separable Problems. A widely used subclass of optimization problems arises when one part of the objective becomes separable. Formally, this is expressed as g(α) = Σ_{i=1}^n g_i(α_i) for univariate functions g_i : R → R for i ∈ [n]. Nicely, in this case the conjugate of g also separates, as g*(y) = Σ_i g_i*(y_i). Therefore, the two optimization problems (A) and (B) can be written as

min_α  D(α) := f(Aα) + Σ_i g_i(α_i),    (SA)
min_w  P(w) := f*(w) + Σ_i g_i*(−A_{:,i}ᵀ w),    (SB)

where A_{:,i} ∈ R^d denotes the i-th column of A.

The Algorithm. We consider the coordinate descent algorithm described in Algorithm 1. Initialize α(0) = 0 and then, at each iteration, sample and update a random coordinate i ∈ [n] of the parameter vector α to iteratively minimize (SA). Finally, after T iterations, output ᾱ, the average vector over the latest T − T₀ iterates. The parameter T₀ is some positive number smaller than T.

Algorithm 1: Coordinate Descent on D(α)
1: Input: data matrix A; starting point α(0) := 0 ∈ R^n, w(0) := w(α(0)).
2: for t = 1, 2, ..., T do
3:   pick i ∈ [n] at random
4:   find Δα_i minimizing D(α(t−1) + Δα_i e_i)
5:   α(t) ← α(t−1) + Δα_i e_i
6:   w(t) ← w(α(t))
7: end for
8: Output: ᾱ := (1/(T − T₀)) Σ_{t=T₀}^{T−1} α(t)

As we show in the following section, coordinate descent on D(α) is not only an efficient optimizer of the objective D(α), but also provably reduces the duality gap. Therefore, the same algorithm simultaneously optimizes the dual objective P(w).

6.1. Primal-Dual Analysis for Coordinate Descent

We first show a linear primal-dual convergence rate of Algorithm 1 applied to (SA) for strongly convex g_i. Later, we generalize this result to also apply to the setting of general Lipschitz g_i*.
This generalization, together with the Lipschitzing trick, will allow us to derive primal-dual convergence guarantees of coordinate descent for a much broader class of problems, including the Lasso problem. For the following theorems we assume that the columns of the data matrix A are scaled such that ||A_{:,i}|| ≤ R for all i ∈ [n] and ||A_{j,:}|| ≤ P for all j ∈ [d], for some norm ||·||.

Theorem 8. Consider Algorithm 1 applied to (SA). Assume f is a 1/β-smooth function w.r.t. the norm ||·||. Then, if g_i is µ-strongly convex for all i, it suffices to have a total number of iterations of

T ≥ (n + nR²/(µβ)) log( [n + nR²/(µβ)] · ϵ_D^(0) / ϵ )

to get E[G(α(T))] ≤ ϵ. Moreover, to obtain an expected duality gap of E[G(ᾱ)] ≤ ϵ, it suffices to have T > T₀ with

T₀ ≥ (n + nR²/(µβ)) log( [n + nR²/(µβ)] · ϵ_D^(0) / ((T − T₀)ϵ) ),

where ϵ_D^(0) is the initial suboptimality in D(α).

Theorem 8 allows us to upper bound the duality gap, and hence the suboptimality, for every iterate α(T), as well as for the average ᾱ returned by Algorithm 1. In the following we generalize this result to apply to L-Lipschitz functions g_i*.

Theorem 9. Consider Algorithm 1 applied to (SA). Assume f is a 1/β-smooth function w.r.t. the norm ||·||. Then, if g_i* is L-Lipschitz for all i, it suffices to have a total number of iterations of

T ≥ max{0, n log( βϵ_D^(0)/(2L²R²n) )} + n + 20n²L²R²/(βϵ)

to get E[G(ᾱ)] ≤ ϵ. Moreover, when t ≥ T₀ with

T₀ = max{0, n log( βϵ_D^(0)/(2L²R²n) )},

we have the suboptimality bound E[D(α(t)) − D(α*)] ≤ ϵ/2, where ϵ_D^(0) is the initial suboptimality.

Remark 2. Theorem 9 shows that for Lipschitz g_i*, Algorithm 1 has O(ϵ⁻¹) convergence in the suboptimality and O(ϵ⁻¹) convergence in G(ᾱ). Comparing this result to Theorem 4, which suggests O(ϵ⁻²) convergence in G(α) for O(ϵ⁻¹)-convergent algorithms, we see that averaging the parameter vector crucially improves convergence in the non-smooth case.

Remark 3. Note that our Algorithm 1 recovers the widely used SDCA setting (Shalev-Shwartz & Zhang, 2013) as a special case when we choose f* := (λ/2)||·||_2² in (SB).
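Algorithm 1 can be sketched for a smooth, strongly convex instance, f(z) = ½||z − b||₂² and gᵢ(αᵢ) = (λ/2)αᵢ², where the single-coordinate minimizer in step 4 is closed-form. This is an illustrative instance with synthetic data, not the paper's experimental setup; the maintained residual plays the role of w(α) = ∇f(Aα) = Aα − b:

```python
import numpy as np

# Coordinate descent on D(alpha) = 0.5||A alpha - b||^2 + (lam/2)||alpha||^2.
# Step 4 has the closed-form minimizer
#   delta = -(A_i^T r + lam*alpha_i) / (||A_i||^2 + lam),  r = A alpha - b.
rng = np.random.default_rng(0)
d, n = 40, 25
A = rng.standard_normal((d, n))   # illustrative synthetic data
b = rng.standard_normal(d)
lam = 1.0

D_obj = lambda a: 0.5 * np.sum((A @ a - b) ** 2) + 0.5 * lam * np.sum(a ** 2)
col_sq = np.sum(A ** 2, axis=0)   # precomputed ||A_{:,i}||^2

alpha = np.zeros(n)
r = A @ alpha - b                 # residual, maintained in O(d) per step
for t in range(5000):
    i = rng.integers(n)           # step 3: pick i in [n] at random
    delta = -(A[:, i] @ r + lam * alpha[i]) / (col_sq[i] + lam)   # step 4
    alpha[i] += delta             # step 5
    r += A[:, i] * delta          # step 6: update w(alpha) incrementally

alpha_star = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(D_obj(alpha) - D_obj(alpha_star))   # suboptimality after 5000 updates
```

Each update touches a single column of A, mirroring the single-datapoint access pattern of SDCA mentioned in Section 2; since the gᵢ here are λ-strongly convex, Theorem 8 applies and the suboptimality (and duality gap) decreases at a linear rate.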
Furthermore, their convergence results for SDCA are consistent with our results and can be recovered as a special case of our analysis; see Corollaries 16, 18, and 19 in Appendix J.

6.2. Application to L1 and Elastic Net Regularized Problems

We now apply Algorithm 1 to L1-regularized problems, as well as Elastic Net regularized problems. We state improved primal-dual convergence rates which are more tailored to the coordinate-wise setting.

Coordinate Descent on L1-Regularized Problems. In contrast to the general analysis of L1-regularized problems in Section 5.2.1, we can now exploit the separability of g(α) := λ||α||_1 and apply the Lipschitzing trick coordinate-wise, choosing B := {α : |α| ≤ B} ⊂ R. This results in the following stronger convergence result:

Corollary 10. We can use the Lipschitzing trick together with Theorem 9 to derive a primal-dual convergence result for the Lasso problem (17). We find that the g_i* are B-Lipschitz after applying the Lipschitzing trick to every g_i, and hence the total number of iterations needed on the Lasso problem to get a duality gap of E[G(ᾱ)] ≤ ϵ is

T ≥ max{0, n log( βϵ_D^(0)/(2B²R²n) )} + n + 20n²B²R²/(βϵ).

Remark 4. We specify the different parameters of Corollary 10 for the least squares loss as well as the logistic regression loss (defined in Table 1). Both are 1-smooth (f* is 1-strongly convex), and we have β := 1. The initial suboptimality ϵ_D^(0) can be upper bounded by (1/2)||b||_2² for the former and by m log(2) for the latter. For B we choose the safe radius (1/λ)f(0) derived in Section 5.2.

Coordinate Descent on Elastic Net Regularized Problems. In Section 5.2.3 we discussed how the Elastic Net problem in (19) can be mapped to our setup. In the first scenario (row 2, Table 1), we note that the resulting problem is partially separable and an instance of (SA). In the second scenario we map (19) to (B) (row 3, Table 1); assuming that the loss function ℓ is separable, this problem is an instance of (SB).
The convergence guarantees for Algorithm 1 applied either to the primal or to the dual formulation are summarized in Corollary 11. Corollary 11. Consider Algorithm 1 for an Elastic Net regularized problem (19), running on either the primal or the dual formulation. Then, to obtain a duality gap of E[G(α(T))] ≤ ε, it suffices to have a total of
$$T \ge \Big(n + \frac{nR^2}{\lambda\eta\zeta}\Big)\log\Big(\Big[n + \frac{nR^2}{\lambda\eta\zeta}\Big]\frac{\epsilon_D^{(0)}}{\epsilon}\Big)$$
iterations for coordinate descent on the primal (19), and
$$T \ge \Big(d + \frac{dP^2}{\lambda\eta\zeta}\Big)\log\Big(\Big[d + \frac{dP^2}{\lambda\eta\zeta}\Big]\frac{\epsilon_D^{(0)}}{\epsilon}\Big)$$
for coordinate descent on the dual of (19). According to Corollary 11, the convergence rate is comparable in both scenarios; the constants, however, depend on the data matrix A: for d ≫ n the primal version is beneficial, whereas for n ≫ d the dual version is preferable. 7. Numerical Experiments Here we illustrate the usefulness of our framework by showcasing it for two important applications, each with two example algorithms for optimizing (A). Figure 2. Comparison of CD with its accelerated variant APPROX on Lasso and SVM problems (duality gap versus iterations; top row: news20 and mushrooms for the Lasso, bottom row: news20 and rcv1.test for the dual SVM). Lasso. The top row of Figure 2 shows the primal-dual convergence of Algorithm 1 (CD) as well as the accelerated variant of CD (APPROX, Fercoq & Richtárik (2015)), both applied to the Lasso problem (A). We have applied the Lipschitzing trick as described in Section 5.1. This ensures that w(α) is always feasible for the modified dual (B), and hence that the duality gap can be evaluated. SVM. It was shown in (Shalev-Shwartz & Zhang, 2013) that if CD (SDCA) is run on the dual SVM formulation, and we consider an averaged solution (over the last few iterates), then the duality gap evaluated at the averaged iterates has a sub-linear convergence rate of O(1/t).
As a consequence of Theorem 4, the APPROX algorithm (Fercoq & Richtárik, 2015) provides the same sub-linear convergence in the duality gap, but holding for the iterates themselves, not only for an average. In the bottom row of Figure 2 we compare CD with its accelerated variant on two benchmark datasets.5 We have chosen λ = 1/n. 8. Conclusions We have presented a general framework that equips existing optimization algorithms with primal-dual certificates. For future research, it will be interesting to study more applications and algorithms fitting into the studied problem structure, including more cases of structured sparsity, as well as generalizations to matrix problems. Acknowledgments. We thank Francis Bach, Michael P. Friedlander, Ching-pei Lee, Dmytro Perekrestenko, Aaditya Ramdas, Virginia Smith and an anonymous reviewer for fruitful discussions. 5Available from csie.ntu.edu.tw/~cjlin/libsvmtools/datasets.

References

Bach, Francis. Duality Between Subgradient and Conditional Gradient Methods. SIAM Journal on Optimization, 25(1):115–129, 2015.

Bach, Francis, Jenatton, Rodolphe, Mairal, Julien, and Obozinski, Guillaume. Optimization with Sparsity-Inducing Penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.

Bauschke, Heinz H and Combettes, Patrick L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, 2011.

Beck, Amir and Teboulle, Marc. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

Borwein, J M and Zhu, Q. Techniques of Variational Analysis and Nonlinear Optimization. Canadian Mathematical Society Books in Math, Springer, 2005.

Boyd, Stephen P and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, 2004.

Chambolle, Antonin and Pock, Thomas. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging.
Journal of Mathematical Imaging and Vision, 40(1):120–145, 2010.

Fercoq, Olivier and Richtárik, Peter. Accelerated, Parallel, and Proximal Coordinate Descent. SIAM Journal on Optimization, 25(4):1997–2023, 2015.

Forte, Simone. Distributed Optimization for Non-Strongly Convex Regularizers. Master's thesis, ETH Zürich, 2015.

Friedman, Jerome, Hastie, Trevor, and Tibshirani, Robert. Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33(1):1–22, 2010.

Goldstein, Tom, Li, Min, and Yuan, Xiaoming. Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing. In NIPS 2015 - Advances in Neural Information Processing Systems 28, pp. 2080–2088, 2015.

Hsieh, Cho-Jui, Chang, Kai-Wei, Lin, Chih-Jen, Keerthi, S Sathiya, and Sundararajan, Sellamanickam. A dual coordinate descent method for large-scale linear SVM. In ICML 2008 - Proceedings of the 25th International Conference on Machine Learning, pp. 408–415, 2008.

Hush, Don, Kelly, Patrick, Scovel, Clint, and Steinwart, Ingo. QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7:733–769, 2006.

Jaggi, Martin. An Equivalence between the Lasso and Support Vector Machines. In Regularization, Optimization, Kernels, and Support Vector Machines, pp. 1–26. Chapman and Hall/CRC, 2014.

Kakade, Sham M, Shalev-Shwartz, Shai, and Tewari, Ambuj. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization. Technical report, Toyota Technological Institute, Chicago, USA, 2009.

Lacoste-Julien, Simon and Jaggi, Martin. On the Global Linear Convergence of Frank-Wolfe Optimization Variants. In NIPS 2015 - Advances in Neural Information Processing Systems 28, 2015.

Lacoste-Julien, Simon, Jaggi, Martin, Schmidt, Mark, and Pletscher, Patrick. Block-Coordinate Frank-Wolfe Optimization for Structural SVMs.
In ICML 2013 - Proceedings of the 30th International Conference on Machine Learning, 2013.

Lin, Qihang, Lu, Zhaosong, and Xiao, Lin. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. arXiv:1407.1296, 2014.

Ma, Chenxin, Smith, Virginia, Jaggi, Martin, Jordan, Michael I, Richtárik, Peter, and Takáč, Martin. Adding vs. averaging in distributed primal-dual optimization. In ICML 2015 - 32nd International Conference on Machine Learning, 2015a.

Ma, Chenxin, Tappenden, Rachael, and Takáč, Martin. Linear convergence of the randomized feasible descent method under the weak strong convexity assumption. arXiv:1506.02530, 2015b.

Mairal, Julien. Sparse coding for machine learning, image processing and computer vision. PhD thesis, ENS Cachan, 2010.

Necoara, Ion. Linear convergence of first order methods under weak nondegeneracy assumptions for convex programming. arXiv:1504.06298, 2015.

Nesterov, Yurii. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.

Nesterov, Yurii. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.

Nesterov, Yurii. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.

Qu, Zheng, Richtárik, Peter, and Zhang, Tong. Randomized Dual Coordinate Ascent with Arbitrary Sampling. arXiv, 2014.

Richtárik, Peter and Takáč, Martin. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014.

Rockafellar, R Tyrrell. Convex Analysis. Princeton University Press, 1997.

Shalev-Shwartz, Shai. Online Learning and Online Convex Optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.

Shalev-Shwartz, Shai. SDCA without duality. arXiv:1502.06177, 2015.

Shalev-Shwartz, Shai and Tewari, Ambuj.
Stochastic Methods for l1-regularized Loss Minimization. Journal of Machine Learning Research, 12:1865–1892, 2011.

Shalev-Shwartz, Shai and Zhang, Tong. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. Journal of Machine Learning Research, 14:567–599, 2013.

Shalev-Shwartz, Shai and Zhang, Tong. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, Series A:1–41, 2014.

Smith, Virginia, Forte, Simone, Jordan, Michael I, and Jaggi, Martin. L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework. arXiv, 2015.

Takáč, Martin, Bijral, Avleen, Richtárik, Peter, and Srebro, Nathan. Mini-batch primal and dual methods for SVMs. In ICML 2013 - 30th International Conference on Machine Learning, 2013.

Takáč, Martin, Richtárik, Peter, and Srebro, Nathan. Distributed mini-batch SDCA. arXiv:1507.08322, 2015.

Tran-Dinh, Quoc and Cevher, Volkan. Constrained convex minimization via model-based excessive gap. In NIPS 2014 - Advances in Neural Information Processing Systems 27, 2014.

Tran-Dinh, Quoc and Cevher, Volkan. Splitting the Smoothed Primal-Dual Gap: Optimal Alternating Direction Methods. arXiv, 2015.

Wang, Po-Wei and Lin, Chih-Jen. Iteration complexity of feasible descent methods for convex optimization. Journal of Machine Learning Research, 15(1):1523–1548, 2014.

Yuan, Guo-Xun, Chang, Kai-Wei, Hsieh, Cho-Jui, and Lin, Chih-Jen. A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification. Journal of Machine Learning Research, 11:3183–3234, 2010.

Yuan, Guo-Xun, Ho, Chia-Hua, and Lin, Chih-Jen. An Improved GLMNET for L1-regularized Logistic Regression. Journal of Machine Learning Research, 13:1999–2030, 2012.

Yurtsever, Alp, Tran-Dinh, Quoc, and Cevher, Volkan. A Universal Primal-Dual Convex Optimization Framework. In NIPS 2015 - Advances in Neural Information Processing Systems 28, pp. 3132–3140, 2015.

Zhang, Yuchen and Lin, Xiao.
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. In ICML 2015 - Proceedings of the 32nd International Conference on Machine Learning, pp. 353–361, 2015.