# Variance-Reduced and Projection-Free Stochastic Optimization

Elad Hazan (EHAZAN@CS.PRINCETON.EDU), Princeton University, Princeton, NJ 08540, USA
Haipeng Luo (HAIPENGL@CS.PRINCETON.EDU), Princeton University, Princeton, NJ 08540, USA

Abstract. The Frank-Wolfe optimization algorithm has recently regained popularity for machine learning applications due to its projection-free property and its ability to handle structured constraints. However, in the stochastic learning setting, it is still relatively understudied compared to the gradient descent counterpart. In this work, leveraging a recent variance reduction technique, we propose two stochastic Frank-Wolfe variants which substantially improve previous results in terms of the number of stochastic gradient evaluations needed to achieve ϵ accuracy. For example, we improve from O(1/ϵ) to O(ln(1/ϵ)) if the objective function is smooth and strongly convex, and from O(1/ϵ^2) to O(1/ϵ^1.5) if the objective function is smooth and Lipschitz. The theoretical improvement is also observed in experiments on real-world datasets for a multiclass classification application.

1. Introduction

We consider the following optimization problem

$$\min_{w \in \Omega} f(w) = \min_{w \in \Omega} \frac{1}{n} \sum_{i=1}^{n} f_i(w),$$

which is an extremely common objective in machine learning. We are interested in the case where 1) n, usually corresponding to the number of training examples, is very large and therefore stochastic optimization is much more efficient; and 2) the domain Ω admits fast linear optimization, while projecting onto it is much slower, necessitating projection-free optimization algorithms. Examples of such problems include multiclass classification, multitask learning, recommendation systems, matrix learning and many more (see for example (Hazan & Kale, 2012; Hazan et al., 2012; Jaggi, 2013; Dudik et al., 2012; Zhang et al., 2012; Harchaoui et al., 2015)).

The Frank-Wolfe algorithm (Frank & Wolfe, 1956) (also known as conditional gradient) and its variants are natural candidates for solving these problems, due to its projection-free property and its ability to handle structured constraints. However, despite gaining more popularity recently, its applicability and efficiency in the stochastic learning setting, where computing stochastic gradients is much faster than computing exact gradients, is still relatively understudied compared to variants of projected gradient descent methods. In this work, we thus try to answer the following question: what running time can a projection-free algorithm achieve, in terms of the number of stochastic gradient evaluations and the number of linear optimizations needed to achieve a certain accuracy? Utilizing Nesterov's acceleration technique (Nesterov, 1983) and the recent variance reduction idea (Johnson & Zhang, 2013; Mahdavi et al., 2013), we propose two new algorithms that are substantially faster than previous work.
Specifically, to achieve ϵ accuracy, while the number of linear optimizations is the same as in previous work, the improvement in the number of stochastic gradient evaluations is summarized in Table 1.

|                 | previous work | this work   |
|-----------------|---------------|-------------|
| Smooth          | O(1/ϵ^2)      | O(1/ϵ^1.5)  |
| Strongly Convex | O(1/ϵ)        | O(ln(1/ϵ))  |

Table 1: Comparisons of the number of stochastic gradients.

The extra overhead of our algorithms is computing at most O(ln(1/ϵ)) exact gradients, which is computationally insignificant compared to the other operations. A more detailed comparison to previous work is included in Table 2, which will be further explained in Section 2.

While the idea of our algorithms is quite straightforward, we emphasize that our analysis is non-trivial, especially for the second algorithm, where the convergence of a sequence of auxiliary points in Nesterov's algorithm needs to be shown. To support our theoretical results, we also conducted experiments on three large real-world datasets for a multiclass classification application. These experiments show significant improvement over both previous projection-free algorithms and algorithms such as projected stochastic gradient descent and its variance-reduced version.

The rest of the paper is organized as follows: Section 2 sets up the problem more formally and discusses related work. Our two new algorithms are presented and analyzed in Sections 3 and 4, followed by experimental details in Section 5.

2. Preliminary and Related Work

We assume each function $f_i$ is convex and L-smooth, that is, for any $w, v \in \Omega$,

$$\nabla f_i(v)^\top (w - v) \le f_i(w) - f_i(v) \le \nabla f_i(v)^\top (w - v) + \frac{L}{2}\|w - v\|^2.$$

We will use two more important properties of smoothness. The first one is

$$\|\nabla f_i(w) - \nabla f_i(v)\|^2 \le 2L\big(f_i(w) - f_i(v) - \nabla f_i(v)^\top (w - v)\big) \tag{1}$$

(proven in Appendix A for completeness), and the second one is

$$f_i(\lambda w + (1-\lambda) v) \ge \lambda f_i(w) + (1-\lambda) f_i(v) - \frac{L}{2}\lambda(1-\lambda)\|w - v\|^2 \tag{2}$$

for any $w, v \in \Omega$ and $\lambda \in [0, 1]$. Notice that $f = \frac{1}{n}\sum_{i=1}^{n} f_i$ is also L-smooth since smoothness is preserved under convex combinations.

For some cases, we also assume each $f_i$ is G-Lipschitz, that is, $\|\nabla f_i(w)\| \le G$ for any $w \in \Omega$, and that f (although not necessarily each $f_i$) is α-strongly convex, that is,

$$f(w) - f(v) \le \nabla f(w)^\top (w - v) - \frac{\alpha}{2}\|w - v\|^2$$

for any $w, v \in \Omega$. As usual, $\mu = L/\alpha$ is called the condition number of f.

We assume the domain $\Omega \subseteq \mathbb{R}^d$ is a compact convex set with diameter D. We are interested in the case where linear optimization on Ω, formally $\operatorname{argmin}_{v \in \Omega} w^\top v$ for any $w \in \mathbb{R}^d$, is much faster than projection onto Ω, formally $\operatorname{argmin}_{v \in \Omega} \|w - v\|^2$. Examples of such domains include the set of all bounded trace norm matrices, the convex hull of all rotation matrices, the flow polytope and many more (see for instance (Hazan & Kale, 2012)).

2.1. Example Application: Multiclass Classification

Consider a multiclass classification problem where a set of training examples $(e_i, y_i)_{i=1,\dots,n}$ is given beforehand. Here $e_i \in \mathbb{R}^m$ is a feature vector and $y_i \in \{1, \dots, h\}$ is the label. Our goal is to find an accurate linear predictor, a matrix $w = [w_1^\top; \dots; w_h^\top] \in \mathbb{R}^{h \times m}$ that predicts $\operatorname{argmax}_\ell w_\ell^\top e$ for any example e. Note that here the dimensionality d is hm.

Previous work (Dudik et al., 2012; Zhang et al., 2012) found that finding w by minimizing a regularized multivariate logistic loss gives a very accurate predictor in general. Specifically, the objective can be written in our notation with

$$f_i(w) = \log\Big(1 + \sum_{\ell \ne y_i} \exp(w_\ell^\top e_i - w_{y_i}^\top e_i)\Big)$$

and $\Omega = \{w \in \mathbb{R}^{h \times m} : \|w\|_* \le \tau\}$, where $\|\cdot\|_*$ denotes the matrix trace norm.
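To make this running example concrete, the loss and its gradient can be computed as in the following minimal numpy sketch; the function name and the dense matrix representation are illustrative choices, not prescribed by the text.

```python
import numpy as np

def multiclass_logistic_loss_and_grad(w, e_i, y_i):
    """Multivariate logistic loss f_i(w) and its gradient.

    w   : (h, m) weight matrix, one row per class
    e_i : (m,)   feature vector of the i-th example
    y_i : int    label in {0, ..., h-1}
    """
    scores = w @ e_i                  # (h,) class scores w_l^T e_i
    margins = scores - scores[y_i]    # w_l^T e_i - w_{y_i}^T e_i, zero at y_i
    # f_i(w) = log(1 + sum_{l != y_i} exp(margins[l])); since margins[y_i] = 0,
    # summing exp over all classes gives exactly 1 + the desired sum.
    shift = margins.max()             # subtract the max for numerical stability
    z = np.exp(margins - shift)
    loss = shift + np.log(z.sum())
    # Gradient: row l receives p_l * e_i, and row y_i additionally receives -e_i,
    # where p is the softmax of the margins.
    p = z / z.sum()
    grad = np.outer(p, e_i)
    grad[y_i] -= e_i
    return loss, grad
```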
In this case, projecting onto Ω is equivalent to performing an SVD, which takes O(hm min{h, m}) time, while linear optimization on Ω amounts to finding the top singular vector, which can be done in time linear in the number of non-zeros of the corresponding h by m matrix, and is thus much faster. One can also verify that each $f_i$ is smooth. The number of examples n can be prohibitively large for non-stochastic methods (for instance, tens of millions for the ImageNet dataset (Deng et al., 2009)), which makes stochastic optimization necessary.

2.2. Detailed Efficiency Comparisons

We call $\nabla f_i(w)$ a stochastic gradient of f at some w, where i is picked from {1, . . . , n} uniformly at random. Note that a stochastic gradient $\nabla f_i(w)$ is an unbiased estimator of the exact gradient $\nabla f(w)$. The efficiency of a projection-free algorithm is measured by how many exact gradient evaluations, stochastic gradient evaluations and linear optimizations, respectively, are needed to achieve ϵ accuracy, that is, to output a point $w \in \Omega$ such that $\mathbb{E}[f(w) - f(w^*)] \le \epsilon$, where $w^* \in \operatorname{argmin}_{w \in \Omega} f(w)$ is any optimum.

In Table 2, we summarize the efficiency (and the extra assumptions needed besides convexity and smoothness) of existing algorithms in the literature as well as of the two new algorithms we propose. (In general, the condition G-Lipschitz in Table 2 means that each $f_i$ is G-Lipschitz, except for our STORC algorithm, which only requires f to be G-Lipschitz.) Below we briefly explain these results from top to bottom.

| Algorithm | Extra Conditions | #Exact Gradients | #Stochastic Gradients | #Linear Optimizations |
|---|---|---|---|---|
| Frank-Wolfe | — | O(LD^2/ϵ) | 0 | O(LD^2/ϵ) |
| (Garber & Hazan, 2013) | α-strongly convex, Ω is a polytope | O(dµρ ln(LD^2/ϵ)) | 0 | O(dµρ ln(LD^2/ϵ)) |
| SFW | G-Lipschitz | 0 | O(G^2LD^4/ϵ^3) | O(LD^2/ϵ) |
| Online-FW (Hazan & Kale, 2012) | G-Lipschitz | 0 | O(d^2(LD^2+GD)^4/ϵ^4) | O(d(LD^2+GD)^2/ϵ^2) |
| Online-FW (Hazan & Kale, 2012) | G-Lipschitz (L = ∞ allowed) | 0 | O(G^4D^4/ϵ^4) | O(G^4D^4/ϵ^4) |
| SCGS (Lan & Zhou, 2014) | G-Lipschitz | 0 | O(G^2D^2/ϵ^2) | O(LD^2/ϵ) |
| SCGS (Lan & Zhou, 2014) | G-Lipschitz, α-strongly convex | 0 | O(G^2/(αϵ)) | O(LD^2/ϵ) |
| SVRF (this work) | — | O(ln(LD^2/ϵ)) | O(L^2D^4/ϵ^2) | O(LD^2/ϵ) |
| STORC (this work) | G-Lipschitz | O(ln(LD^2/ϵ)) | O(LD^2/ϵ^1.5) | O(LD^2/ϵ) |
| STORC (this work) | ∇f(w*) = 0 | O(ln(LD^2/ϵ)) | O(LD^2/ϵ) | O(LD^2/ϵ) |
| STORC (this work) | α-strongly convex | O(ln(LD^2/ϵ)) | O(µ^2 ln(LD^2/ϵ)) | O(LD^2/ϵ) |

Table 2: Comparisons of different Frank-Wolfe variants (see Section 2.2 for further explanations).

The standard Frank-Wolfe algorithm,

$$v_k = \operatorname{argmin}_{v \in \Omega} \nabla f(w_{k-1})^\top v, \qquad w_k = (1 - \gamma_k) w_{k-1} + \gamma_k v_k, \tag{3}$$

for some appropriately chosen $\gamma_k$, requires O(1/ϵ) iterations without additional conditions (Frank & Wolfe, 1956; Jaggi, 2013). In a recent paper, Garber & Hazan (2013) give a variant that requires O(dµρ ln(1/ϵ)) iterations when f is strongly convex and smooth and Ω is a polytope (see also the recent follow-up work of Lacoste-Julien & Jaggi, 2015). Although the dependence on ϵ is much better, the geometric constant ρ depends on the polyhedral set and can be very large. Moreover, each iteration of the algorithm requires further computation besides the linear optimization step.

The most obvious way to obtain a stochastic Frank-Wolfe variant is to replace $\nabla f(w_{k-1})$ by some $\nabla f_i(w_{k-1})$, or more generally by the average of some number of iid samples of $\nabla f_i(w_{k-1})$ (a mini-batch approach). We call this method SFW and include its analysis in Appendix B, since we did not find it explicitly analyzed before. SFW needs O(1/ϵ^3) stochastic gradients and O(1/ϵ) linear optimization steps to reach an ϵ-approximate optimum; a sketch of the underlying update appears below.
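As a concrete reference point, here is a minimal sketch of the standard update (3), assuming user-supplied callables for the gradient and for the linear optimization oracle over Ω; replacing the exact gradient with an averaged mini-batch of stochastic gradients gives the SFW variant just described. All names are illustrative.

```python
import numpy as np

def frank_wolfe(grad_f, lmo, w0, num_iters):
    """Standard Frank-Wolfe (Eq. 3) with the step size gamma_k = 2 / (k + 1).

    grad_f : callable returning the (exact) gradient at a point
    lmo    : callable returning argmin_{v in Omega} <g, v> for a gradient g
    w0     : feasible starting point (must lie in Omega)
    """
    w = w0
    for k in range(1, num_iters + 1):
        g = grad_f(w)           # for SFW: the average of a mini-batch of
                                # stochastic gradients would be used instead
        v = lmo(g)              # linear optimization step over Omega
        gamma = 2.0 / (k + 1)
        w = (1 - gamma) * w + gamma * v   # convex combination stays in Omega
    return w
```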
The work of Hazan & Kale (2012) focuses on an online learning setting. One can extract two results from this work for the setting studied here: the first comes from the setting where the online loss functions are stochastic, and the second from a completely online setting combined with the standard online-to-batch conversion. In either case, the result is worse than SFW in terms of both the number of stochastic gradients and the number of linear optimizations.

Stochastic Conditional Gradient Sliding (SCGS), recently proposed by Lan & Zhou (2014), uses Nesterov's acceleration technique to speed up Frank-Wolfe. Without strong convexity, SCGS needs O(1/ϵ^2) stochastic gradients, improving over SFW. With strong convexity, this number can even be improved to O(1/ϵ). In both cases, the number of linear optimization steps is O(1/ϵ).

The key idea of our algorithms is to combine the variance reduction technique proposed in (Johnson & Zhang, 2013; Mahdavi et al., 2013) with some of the above-mentioned algorithms. For example, our algorithm SVRF combines this technique with SFW, improving the number of stochastic gradients from O(1/ϵ^3) to O(1/ϵ^2), but without any extra conditions (such as the Lipschitzness required for SCGS). More importantly, despite having a seemingly identical convergence rate, SVRF substantially outperforms SCGS empirically (see Section 5). On the other hand, our second algorithm STORC combines variance reduction with SCGS, providing even further improvements. Specifically, the number of stochastic gradients is improved to: O(1/ϵ^1.5) when f is Lipschitz; O(1/ϵ) when ∇f(w*) = 0; and finally O(ln(1/ϵ)) when f is strongly convex. Note that the condition ∇f(w*) = 0 essentially means that w* is in the interior of Ω, but it is still an interesting case when the optimum is not unique and doing unconstrained optimization would not necessarily return a point in Ω.

Both of our algorithms require O(1/ϵ) linear optimization steps as in previous work, and overall require computing O(ln(LD^2/ϵ)) exact gradients. However, we emphasize that this extra overhead is much more affordable compared to non-stochastic Frank-Wolfe (that is, computing exact gradients at every iteration) since it does not have any polynomial dependence on parameters such as d, L or µ.

2.3. Variance-Reduced Stochastic Gradients

Originally proposed in (Johnson & Zhang, 2013) and independently in (Mahdavi et al., 2013), the idea of variance-reduced stochastic gradients has proven to be highly useful and has been extended to various different algorithms (such as (Frostig et al., 2015; Moritz et al., 2016)). A variance-reduced stochastic gradient at some point $w \in \Omega$ with some snapshot $w_0 \in \Omega$ is defined as

$$\tilde{\nabla} f(w; w_0) = \nabla f_i(w) - (\nabla f_i(w_0) - \nabla f(w_0)),$$

where i is again picked from {1, . . . , n} uniformly at random. The snapshot $w_0$ is usually a decision point from some previous iteration of the algorithm whose exact gradient $\nabla f(w_0)$ has been pre-computed, so that computing $\tilde{\nabla} f(w; w_0)$ only requires two standard stochastic gradient evaluations: $\nabla f_i(w)$ and $\nabla f_i(w_0)$. A variance-reduced stochastic gradient is clearly also unbiased, that is, $\mathbb{E}[\tilde{\nabla} f(w; w_0)] = \nabla f(w)$. More importantly, the term $\nabla f_i(w_0) - \nabla f(w_0)$ serves as a correction term that reduces the variance of the stochastic gradient.
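This construction translates directly into code. Below is a minimal sketch of computing one such gradient, and of averaging a mini-batch of them as the algorithms in this paper do; the helper names (`grad_fi`, `full_grad_w0`) are illustrative assumptions.

```python
import numpy as np

def variance_reduced_grad(grad_fi, n, w, w0, full_grad_w0, rng):
    """One variance-reduced stochastic gradient grad f_i(w) - (grad f_i(w0) - grad f(w0)).

    grad_fi      : callable (i, w) -> gradient of f_i at w
    n            : number of component functions f_1, ..., f_n
    w0           : snapshot point
    full_grad_w0 : exact gradient of f at w0, pre-computed once per snapshot
    rng          : numpy random Generator
    """
    i = rng.integers(n)   # the same uniformly random index i is used at both w and w0
    return grad_fi(i, w) - (grad_fi(i, w0) - full_grad_w0)


def averaged_vr_grad(grad_fi, n, w, w0, full_grad_w0, batch_size, rng=None):
    """Average of a mini-batch of iid variance-reduced stochastic gradients,
    as used in the inner loops of SVRF and STORC."""
    rng = rng or np.random.default_rng()
    samples = [variance_reduced_grad(grad_fi, n, w, w0, full_grad_w0, rng)
               for _ in range(batch_size)]
    return np.mean(samples, axis=0)
```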
Formally, one can prove the following lemma.

Lemma 1. For any $w, w_0 \in \Omega$, we have

$$\mathbb{E}[\|\tilde{\nabla} f(w; w_0) - \nabla f(w)\|^2] \le 6L\big(2\,\mathbb{E}[f(w) - f(w^*)] + \mathbb{E}[f(w_0) - f(w^*)]\big).$$

In words, the variance of the variance-reduced stochastic gradient is bounded by how close the current point and the snapshot are to the optimum. The original work proves a bound on $\mathbb{E}[\|\tilde{\nabla} f(w; w_0)\|^2]$ under the assumption $\nabla f(w^*) = 0$, which we do not require here. However, the main idea of the proof is similar, and we defer it to Section 6.

3. Stochastic Variance-Reduced Frank-Wolfe

With the previous discussion, our first algorithm is pretty straightforward: compared to the standard Frank-Wolfe algorithm, we simply replace the exact gradient with the average of a mini-batch of variance-reduced stochastic gradients, and take snapshots every once in a while. We call this algorithm Stochastic Variance-Reduced Frank-Wolfe (SVRF), whose pseudocode is presented in Algorithm 1. The convergence rate of this algorithm is shown in the following theorem.

Algorithm 1 Stochastic Variance-Reduced Frank-Wolfe (SVRF)
1: Input: Objective function $f = \frac{1}{n}\sum_{i=1}^{n} f_i$.
2: Input: Parameters $\gamma_k$, $m_k$ and $N_t$.
3: Initialize: $w_0 = \operatorname{argmin}_{w \in \Omega} \nabla f(x)^\top w$ for some arbitrary $x \in \Omega$.
4: for t = 1, 2, . . . , T do
5:   Take snapshot: $x_0 = w_{t-1}$ and compute $\nabla f(x_0)$.
6:   for k = 1 to $N_t$ do
7:     Compute $\tilde{\nabla}_k$, the average of $m_k$ iid samples of $\tilde{\nabla} f(x_{k-1}; x_0)$.
8:     Compute $v_k = \operatorname{argmin}_{v \in \Omega} \tilde{\nabla}_k^\top v$.
9:     Compute $x_k = (1 - \gamma_k) x_{k-1} + \gamma_k v_k$.
10:  end for
11:  Set $w_t = x_{N_t}$.
12: end for

Theorem 1. With the parameters

$$\gamma_k = \frac{2}{k+1}, \qquad m_k = 96(k+1), \qquad N_t = 2^{t+3} - 2,$$

Algorithm 1 ensures $\mathbb{E}[f(w_t) - f(w^*)] \le \frac{LD^2}{2^{t+1}}$ for any t.

Before proving this theorem, we first show a direct implication of this convergence result.

Corollary 1. To achieve ϵ accuracy, Algorithm 1 requires O(ln(LD^2/ϵ)) exact gradient evaluations, O(L^2D^4/ϵ^2) stochastic gradient evaluations and O(LD^2/ϵ) linear optimizations.

Proof. According to the algorithm and the choice of parameters, it is clear that these three numbers are $T+1$, $\sum_{t=1}^{T}\sum_{k=1}^{N_t} m_k = O(4^T)$ and $\sum_{t=1}^{T} N_t = O(2^T)$ respectively. Theorem 1 implies that T should be of order $\Theta(\log_2 \frac{LD^2}{\epsilon})$. Plugging in all parameters concludes the proof.

To prove Theorem 1, we first consider a fixed iteration t and prove the following lemma.

Lemma 2. For any k, we have $\mathbb{E}[f(x_k) - f(w^*)] \le \frac{4LD^2}{k+2}$ if $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(x_{s-1})\|^2] \le \frac{L^2D^2}{(s+1)^2}$ for all $s \le k$.

We defer the proof of this lemma to Section 6 for coherence. With the help of Lemma 2, we are now ready to prove the main convergence result.

Proof of Theorem 1. We prove the statement by induction. For t = 0, by smoothness, the optimality of $w_0$ and convexity, we have

$$f(w_0) \le f(x) + \nabla f(x)^\top (w_0 - x) + \frac{L}{2}\|w_0 - x\|^2 \le f(x) + \nabla f(x)^\top (w^* - x) + \frac{LD^2}{2} \le f(w^*) + \frac{LD^2}{2}.$$

Now assume $\mathbb{E}[f(w_{t-1}) - f(w^*)] \le \frac{LD^2}{2^t}$; we consider iteration t of the algorithm and use another induction to show $\mathbb{E}[f(x_k) - f(w^*)] \le \frac{4LD^2}{k+2}$ for any $k \le N_t$. The base case is trivial since $x_0 = w_{t-1}$. Suppose $\mathbb{E}[f(x_{s-1}) - f(w^*)] \le \frac{4LD^2}{s+1}$ for any $s \le k$. Because $\tilde{\nabla}_s$ is the average of $m_s$ iid samples of $\tilde{\nabla} f(x_{s-1}; x_0)$, its variance is reduced by a factor of $m_s$. That is, with Lemma 1 we have

$$\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(x_{s-1})\|^2] \le \frac{6L}{m_s}\big(2\,\mathbb{E}[f(x_{s-1}) - f(w^*)] + \mathbb{E}[f(x_0) - f(w^*)]\big) \le \frac{6L}{m_s}\Big(\frac{8LD^2}{s+1} + \frac{LD^2}{2^t}\Big) \le \frac{6L}{m_s}\Big(\frac{8LD^2}{s+1} + \frac{8LD^2}{s+1}\Big) = \frac{L^2D^2}{(s+1)^2},$$

where the last inequality is by the fact $s \le N_t = 2^{t+3} - 2$ and the last equality is by plugging in the choice of $m_s$. Therefore the condition of Lemma 2 is satisfied and the induction is complete. Finally, with the choice of $N_t$ we thus prove $\mathbb{E}[f(w_t) - f(w^*)] = \mathbb{E}[f(x_{N_t}) - f(w^*)] \le \frac{4LD^2}{N_t+2} = \frac{LD^2}{2^{t+1}}$.

We remark that in Algorithm 1, we essentially restart the algorithm (that is, reset k to 1) after taking a new snapshot. Another option is to continue increasing k and never reset it. Although one can show that this only leads to a constant speedup in the convergence, it provides more stable updates and is thus what we implement in the experiments.
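For concreteness, here is a minimal Python sketch of Algorithm 1 with the parameter choices of Theorem 1; the callables `full_grad`, `vr_grad_batch` (returning the average of a mini-batch of variance-reduced stochastic gradients, e.g. as sketched in Section 2.3) and `lmo` (the linear optimization oracle over Ω) are assumed to be supplied by the user.

```python
def svrf(full_grad, vr_grad_batch, lmo, x_init, T):
    """Sketch of Algorithm 1 (SVRF) with the parameters of Theorem 1:
    gamma_k = 2/(k+1), m_k = 96(k+1), N_t = 2**(t+3) - 2."""
    # Line 3: initialize with one linear optimization at an arbitrary feasible point.
    w = lmo(full_grad(x_init))
    for t in range(1, T + 1):
        x0 = w                      # take snapshot
        g0 = full_grad(x0)          # exact gradient at the snapshot
        x = x0
        N_t = 2 ** (t + 3) - 2
        for k in range(1, N_t + 1):
            m_k = 96 * (k + 1)
            # average of m_k iid variance-reduced stochastic gradients at x
            g = vr_grad_batch(x, x0, g0, m_k)
            v = lmo(g)              # linear optimization step
            gamma = 2.0 / (k + 1)
            x = (1 - gamma) * x + gamma * v
        w = x                       # w_t = x_{N_t}
    return w
```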
4. Stochastic Variance-Reduced Conditional Gradient Sliding

Our second algorithm applies variance reduction to the SCGS algorithm (Lan & Zhou, 2014). Again, the key difference is that we replace the stochastic gradients with the average of a mini-batch of variance-reduced stochastic gradients, and take snapshots every once in a while. See the pseudocode in Algorithm 2 for details.

Algorithm 2 STOchastic variance-Reduced Conditional gradient sliding (STORC)
1: Input: Objective function $f = \frac{1}{n}\sum_{i=1}^{n} f_i$.
2: Input: Parameters $\gamma_k$, $\beta_k$, $\eta_{t,k}$, $m_{t,k}$ and $N_t$.
3: Initialize: $w_0 = \operatorname{argmin}_{w \in \Omega} \nabla f(x)^\top w$ for some arbitrary $x \in \Omega$.
4: for t = 1, 2, . . . do
5:   Take snapshot: $y_0 = w_{t-1}$ and compute $\nabla f(y_0)$.
6:   Initialize $x_0 = y_0$.
7:   for k = 1 to $N_t$ do
8:     Compute $z_k = (1 - \gamma_k) y_{k-1} + \gamma_k x_{k-1}$.
9:     Compute $\tilde{\nabla}_k$, the average of $m_{t,k}$ iid samples of $\tilde{\nabla} f(z_k; y_0)$.
10:    Let $g(x) = \frac{\beta_k}{2}\|x - x_{k-1}\|^2 + \tilde{\nabla}_k^\top x$.
11:    Compute $x_k$, the output of using the standard Frank-Wolfe algorithm to solve $\min_{x \in \Omega} g(x)$ until the duality gap is at most $\eta_{t,k}$, that is, $\max_{x \in \Omega} \nabla g(x_k)^\top (x_k - x) \le \eta_{t,k}$.   (4)
12:    Compute $y_k = (1 - \gamma_k) y_{k-1} + \gamma_k x_k$.
13:  end for
14:  Set $w_t = y_{N_t}$.
15: end for

The algorithm makes use of two auxiliary sequences $x_k$ and $z_k$ (Lines 8 and 12), which is standard for Nesterov's algorithm. $x_k$ is obtained by approximately solving a squared-norm regularized linear optimization so that it is close to $x_{k-1}$ (Line 11). Note that this step does not require computing any extra gradients of f or $f_i$, and is done by performing the standard Frank-Wolfe algorithm (Eq. (3)) until the duality gap is at most a certain value $\eta_{t,k}$. The duality gap is a certificate of approximate optimality (see (Jaggi, 2013)), and is a side product of the linear optimization performed at each step, requiring no extra cost. Also note that the stochastic gradients are computed at the sequence $z_k$ instead of $y_k$, which is also standard in Nesterov's algorithm. However, according to Lemma 1, we thus need to show the convergence rate of the auxiliary sequence $z_k$, which appears to be rarely studied previously, to the best of our knowledge. This is one of the key steps in our analysis.

The main convergence result for STORC is the following.

Theorem 2. With the parameters

$$\gamma_k = \frac{2}{k+1}, \qquad \beta_k = \frac{3L}{k}, \qquad \eta_{t,k} = \frac{2LD_t^2}{N_t k},$$

where $D_t$ is defined below for each case, Algorithm 2 ensures $\mathbb{E}[f(w_t) - f(w^*)] \le \frac{LD^2}{2^{t+1}}$ for any t if any of the following three cases holds:

(a) $\nabla f(w^*) = 0$, and $D_t = D$, $N_t = 2^{t/2+2}$, $m_{t,k} = 900 N_t$;

(b) f is G-Lipschitz, and $D_t = D$, $N_t = 2^{t/2+2}$, $m_{t,k} = 700 N_t + 24 N_t G(k+1)/(LD)$;

(c) f is α-strongly convex, and $D_t^2 = \frac{\mu D^2}{2^{t-1}}$, $N_t = \sqrt{32\mu}$, $m_{t,k} = 5600 N_t \mu$, where $\mu = L/\alpha$.

Again, we first give a direct implication of the above result.

Corollary 2. To achieve ϵ accuracy, Algorithm 2 requires O(ln(LD^2/ϵ)) exact gradient evaluations and O(LD^2/ϵ) linear optimizations. The numbers of stochastic gradient evaluations for Cases (a), (b) and (c) are respectively O(LD^2/ϵ), O(LD^2/ϵ^1.5) and O(µ^2 ln(LD^2/ϵ)).

Proof. Line 11 requires $O(\frac{\beta_k D^2}{\eta_{t,k}})$ iterations of the standard Frank-Wolfe algorithm since g(x) is $\beta_k$-smooth (see e.g. (Jaggi, 2013, Theorem 2)). So the numbers of exact gradient evaluations, stochastic gradient evaluations and linear optimizations are respectively $T+1$, $\sum_{t=1}^{T}\sum_{k=1}^{N_t} m_{t,k}$ and $O(\sum_{t=1}^{T}\sum_{k=1}^{N_t} \frac{\beta_k D^2}{\eta_{t,k}})$. Theorem 2 implies that T should be of order $\Theta(\log_2 \frac{LD^2}{\epsilon})$. Plugging in all parameters proves the corollary.
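To illustrate the sliding step, the following is a rough sketch of Line 11: minimizing $g(x) = \frac{\beta_k}{2}\|x - x_{k-1}\|^2 + \tilde{\nabla}_k^\top x$ with standard Frank-Wolfe until the duality gap (4) falls below the threshold. The oracle `lmo`, the flat-vector representation (so that `@` is an inner product), and the warm start at $x_{k-1}$ are illustrative assumptions.

```python
def sliding_subproblem(lmo, x_prev, g_hat, beta, eta, max_iters=10_000):
    """Approximately minimize g(x) = (beta/2)*||x - x_prev||^2 + <g_hat, x>
    over Omega with Frank-Wolfe, until the duality gap is at most eta."""
    x = x_prev  # feasible warm start
    for k in range(1, max_iters + 1):
        grad_g = beta * (x - x_prev) + g_hat   # gradient of g at x
        v = lmo(grad_g)                        # argmin_{v in Omega} <grad_g, v>
        gap = grad_g @ (x - v)                 # duality gap max_{u in Omega} <grad_g, x - u>
        if gap <= eta:                         # stopping criterion (4)
            return x
        gamma = 2.0 / (k + 1)
        x = (1 - gamma) * x + gamma * v
    return x
```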
To prove Theorem 2, we again first consider a fixed iteration t and use the following lemma, which is essentially proven in (Lan & Zhou, 2014). We include a distilled proof in Appendix C for completeness.

Lemma 3. Suppose $\mathbb{E}[\|y_0 - w^*\|^2] \le D_t^2$ holds for some positive constant $D_t \le D$. Then for any k, we have $\mathbb{E}[f(y_k) - f(w^*)] \le \frac{8LD_t^2}{k(k+1)}$ if $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(z_s)\|^2] \le \frac{L^2D_t^2}{N_t(s+1)^2}$ for all $s \le k$.

Proof of Theorem 2. We prove the statement by induction. The base case t = 0 holds by the exact same argument as in the proof of Theorem 1. Suppose $\mathbb{E}[f(w_{t-1}) - f(w^*)] \le \frac{LD^2}{2^t}$ and consider iteration t. Below we use another induction to prove $\mathbb{E}[f(y_k) - f(w^*)] \le \frac{8LD_t^2}{k(k+1)}$ for any $1 \le k \le N_t$, which will conclude the proof since for any of the three cases we have $\mathbb{E}[f(w_t) - f(w^*)] = \mathbb{E}[f(y_{N_t}) - f(w^*)]$, which is at most $\frac{8LD_t^2}{N_t^2} \le \frac{LD^2}{2^{t+1}}$.

We first show that the condition $\mathbb{E}[\|y_0 - w^*\|^2] \le D_t^2$ holds. This is trivial for Cases (a) and (b), where $D_t = D$. For Case (c), by strong convexity and the inductive assumption, we have $\mathbb{E}[\|y_0 - w^*\|^2] \le \frac{2}{\alpha}\mathbb{E}[f(y_0) - f(w^*)] \le \frac{LD^2}{\alpha 2^{t-1}} = D_t^2$.

Next note that Lemma 1 implies that $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(z_s)\|^2]$ is at most $\frac{6L}{m_{t,s}}\big(2\,\mathbb{E}[f(z_s) - f(w^*)] + \mathbb{E}[f(y_0) - f(w^*)]\big)$. So the key is to bound $\mathbb{E}[f(z_s) - f(w^*)]$. With $z_1 = y_0$, one can verify that $\mathbb{E}[\|\tilde{\nabla}_1 - \nabla f(z_1)\|^2]$ is at most $\frac{18L}{m_{t,1}}\mathbb{E}[f(y_0) - f(w^*)] \le \frac{18L^2D^2}{m_{t,1} 2^t} \le \frac{L^2D_t^2}{4N_t}$ for all three cases, and thus $\mathbb{E}[f(y_s) - f(w^*)] \le \frac{8LD_t^2}{s(s+1)}$ holds for s = 1 by Lemma 3. Now suppose it holds for any s < k; below we discuss the three cases separately to show that it also holds for s = k.

Case (a). By smoothness, the condition $\nabla f(w^*) = 0$, the construction of $z_s$, and the Cauchy-Schwarz inequality, we have for any $1 < s \le k$,

$$\begin{aligned}
f(z_s) &\le f(y_{s-1}) + (\nabla f(y_{s-1}) - \nabla f(w^*))^\top (z_s - y_{s-1}) + \frac{L}{2}\|z_s - y_{s-1}\|^2 \\
&= f(y_{s-1}) + \gamma_s (\nabla f(y_{s-1}) - \nabla f(w^*))^\top (x_{s-1} - y_{s-1}) + \frac{L\gamma_s^2}{2}\|x_{s-1} - y_{s-1}\|^2 \\
&\le f(y_{s-1}) + \gamma_s D \|\nabla f(y_{s-1}) - \nabla f(w^*)\| + \frac{LD^2\gamma_s^2}{2}.
\end{aligned}$$

Property (1) and the optimality of $w^*$ imply

$$\|\nabla f(y_{s-1}) - \nabla f(w^*)\|^2 \le 2L\big(f(y_{s-1}) - f(w^*) - \nabla f(w^*)^\top (y_{s-1} - w^*)\big) \le 2L(f(y_{s-1}) - f(w^*)).$$

So subtracting $f(w^*)$ and taking expectations on both sides, and applying Jensen's inequality and the inductive assumption, we have

$$\mathbb{E}[f(z_s) - f(w^*)] \le \mathbb{E}[f(y_{s-1}) - f(w^*)] + \gamma_s D\sqrt{2L\,\mathbb{E}[f(y_{s-1}) - f(w^*)]} + \frac{LD^2\gamma_s^2}{2} \le \frac{8LD^2}{(s-1)s} + \frac{8LD^2}{(s+1)\sqrt{(s-1)s}} + \frac{2LD^2}{(s+1)^2} < \frac{55LD^2}{(s+1)^2}.$$

On the other hand, we have $\mathbb{E}[f(y_0) - f(w^*)] \le \frac{LD^2}{2^t} = \frac{16LD^2}{N_t^2} < \frac{40LD^2}{(N_t+1)^2} \le \frac{40LD^2}{(s+1)^2}$. So $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(z_s)\|^2]$ is at most $\frac{900L^2D^2}{m_{t,s}(s+1)^2}$, and the choice of $m_{t,s}$ ensures that this bound is at most $\frac{L^2D^2}{N_t(s+1)^2}$, satisfying the condition of Lemma 3 and thus completing the induction.

Case (b). With the G-Lipschitz condition we proceed similarly and bound $f(z_s)$ by

$$f(y_{s-1}) + \nabla f(y_{s-1})^\top (z_s - y_{s-1}) + \frac{L}{2}\|z_s - y_{s-1}\|^2 = f(y_{s-1}) + \gamma_s \nabla f(y_{s-1})^\top (x_{s-1} - y_{s-1}) + \frac{L\gamma_s^2}{2}\|x_{s-1} - y_{s-1}\|^2 \le f(y_{s-1}) + \gamma_s GD + \frac{LD^2\gamma_s^2}{2}.$$

So using the bounds derived previously and the choice of $m_{t,s}$, we bound $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(z_s)\|^2]$ as follows:

$$\frac{6L}{m_{t,s}}\Big(\frac{16LD^2}{(s-1)s} + \frac{4GD}{s+1} + \frac{4LD^2}{(s+1)^2} + \frac{40LD^2}{(s+1)^2}\Big) \le \frac{6L}{m_{t,s}}\Big(\frac{4GD}{s+1} + \frac{116LD^2}{(s+1)^2}\Big) \le \frac{L^2D^2}{N_t(s+1)^2},$$

again completing the induction.

Case (c). Using the definitions of $z_s$ and $y_s$ and a direct calculation, one can remove the dependence on $x_{s-1}$ and verify that

$$y_{s-1} = \frac{s+1}{2s-1} z_s + \frac{s-2}{2s-1} y_{s-2}$$

for any $s \ge 2$. Now we apply Property (2) with $\lambda = \frac{s+1}{2s-1}$:

$$\begin{aligned}
f(y_{s-1}) &\ge \frac{s+1}{2s-1} f(z_s) + \frac{s-2}{2s-1} f(y_{s-2}) - \frac{L}{2}\,\frac{(s+1)(s-2)}{(2s-1)^2}\|z_s - y_{s-2}\|^2 \\
&= f(w^*) + \frac{s+1}{2s-1}(f(z_s) - f(w^*)) + \frac{s-2}{2s-1}(f(y_{s-2}) - f(w^*)) - \frac{L(s-2)}{2(s+1)}\|y_{s-1} - y_{s-2}\|^2 \\
&\ge f(w^*) + \frac{1}{2}(f(z_s) - f(w^*)) - \frac{L}{2}\|y_{s-1} - y_{s-2}\|^2,
\end{aligned}$$

where the equality is by adding and subtracting $f(w^*)$ and the fact $y_{s-1} - y_{s-2} = \frac{s+1}{2s-1}(z_s - y_{s-2})$, and the last inequality is by $f(y_{s-2}) \ge f(w^*)$ and trivial relaxations. Rearranging gives $f(z_s) - f(w^*) \le 2(f(y_{s-1}) - f(w^*)) + L\|y_{s-1} - y_{s-2}\|^2$.
Applying the Cauchy-Schwarz inequality, strong convexity and the fact $\mu \ge 1$, we continue with

$$\begin{aligned}
f(z_s) - f(w^*) &\le 2(f(y_{s-1}) - f(w^*)) + 2L\big(\|y_{s-1} - w^*\|^2 + \|y_{s-2} - w^*\|^2\big) \\
&\le 2(f(y_{s-1}) - f(w^*)) + 4\mu\big(f(y_{s-1}) - f(w^*) + f(y_{s-2}) - f(w^*)\big) \\
&\le 6\mu(f(y_{s-1}) - f(w^*)) + 4\mu(f(y_{s-2}) - f(w^*)).
\end{aligned}$$

For $s \ge 3$, we use the inductive assumption to show

$$\mathbb{E}[f(z_s) - f(w^*)] \le \frac{48\mu LD_t^2}{(s-1)s} + \frac{32\mu LD_t^2}{(s-2)(s-1)} \le \frac{448\mu LD_t^2}{(s+1)^2}.$$

The case s = 2 can be verified similarly using the bounds on $\mathbb{E}[f(y_0) - f(w^*)]$ and $\mathbb{E}[f(y_1) - f(w^*)]$ (the base case). Finally, we bound the term $\mathbb{E}[f(y_0) - f(w^*)] \le \frac{LD^2}{2^t} = \frac{LD_t^2}{2\mu} \le \frac{32LD_t^2}{(N_t+1)^2} \le \frac{32LD_t^2}{(s+1)^2}$, and conclude that the variance $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(z_s)\|^2]$ is at most $\frac{6L}{m_{t,s}}\big(\frac{896\mu LD_t^2}{(s+1)^2} + \frac{32LD_t^2}{(s+1)^2}\big) \le \frac{L^2D_t^2}{N_t(s+1)^2}$, completing the induction by Lemma 3.

5. Experiments

To support our theory, we conduct experiments on the multiclass classification problem mentioned in Section 2.1. Three datasets are selected from the LIBSVM repository (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/) with relatively large numbers of features, categories and examples, summarized in Table 3.

| dataset | #features | #categories | #examples |
|---------|-----------|-------------|-----------|
| news20  | 62,061    | 20          | 15,935    |
| rcv1    | 47,236    | 53          | 15,564    |
| aloi    | 128       | 1,000       | 108,000   |

Table 3: Summary of datasets.

Recall that the loss function is the multivariate logistic loss and Ω is the set of matrices with trace norm bounded by τ. We focus on how fast the loss decreases instead of on the final test error rate, so that the tuning of τ is less important; it is fixed to 50 throughout.

We compare six algorithms. Four of them (SFW, SCGS, SVRF, STORC) are projection-free as discussed, and the other two are standard projected stochastic gradient descent (SGD) and its variance-reduced version (SVRG (Johnson & Zhang, 2013)), both of which require expensive projections. For most of the parameters in these algorithms, we roughly follow what the theory suggests. For example, the size of the mini-batch of stochastic gradients at round k is set to k^2, k^3 and k respectively for SFW, SCGS and SVRF, and is fixed to 100 for the other three. The number of iterations between taking two snapshots for the variance-reduced methods (SVRG, SVRF and STORC) is fixed to 50. The learning rate is set to the typical decaying sequence c/√k for SGD and to a constant c′ for SVRG, as the original work suggests, for some best-tuned c and c′.

Since the complexities of computing gradients, performing linear optimization and projecting are very different, we measure the actual running time of the algorithms and see how fast the loss decreases. Results can be found in Figure 1, where one can clearly observe that for all datasets, SGD and SVRG are significantly slower compared to the others, due to the expensive projection step, highlighting the usefulness of projection-free algorithms. Moreover, we also observe a large improvement gained from the variance reduction technique, especially when comparing SCGS and STORC, as well as SFW and SVRF on the aloi dataset. Interestingly, even though the STORC algorithm gives the best theoretical results, empirically the simpler algorithms SFW and SVRF tend to have consistently better performance.

[Figure 1: Comparison of six algorithms on three multiclass datasets (best viewed in color).]
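For completeness, the following is a hedged sketch of the linear optimization oracle over the trace-norm ball used in this application, based on the standard fact that a linear function over {W : ||W||_* ≤ τ} is minimized at −τ times the outer product of the top singular vectors of the gradient; the use of scipy's sparse SVD and the function name are our own choices, not specified in the text.

```python
import numpy as np
from scipy.sparse.linalg import svds

def trace_norm_lmo(grad, tau):
    """Linear optimization over the trace-norm ball {W : ||W||_* <= tau}:
    returns argmin_{||V||_* <= tau} <grad, V> = -tau * u1 v1^T,
    where (u1, v1) are the top singular vectors of grad."""
    # svds computes only the leading singular triplet, which is typically far
    # cheaper than the full SVD needed for projection onto the trace-norm ball.
    u, s, vt = svds(grad, k=1)
    return -tau * np.outer(u[:, 0], vt[0, :])
```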
6. Omitted Proofs

Proof of Lemma 1. Let $\mathbb{E}_i$ denote the conditional expectation given all the past except the realization of i. We have

$$\begin{aligned}
\mathbb{E}_i[\|\tilde{\nabla} f(w; w_0) - \nabla f(w)\|^2] &= \mathbb{E}_i[\|\nabla f_i(w) - \nabla f_i(w_0) + \nabla f(w_0) - \nabla f(w)\|^2] \\
&= \mathbb{E}_i[\|(\nabla f_i(w) - \nabla f_i(w^*)) - (\nabla f_i(w_0) - \nabla f_i(w^*)) + (\nabla f(w_0) - \nabla f(w^*)) - (\nabla f(w) - \nabla f(w^*))\|^2] \\
&\le 3\,\mathbb{E}_i\big[\|\nabla f_i(w) - \nabla f_i(w^*)\|^2 + \|(\nabla f_i(w_0) - \nabla f_i(w^*)) - (\nabla f(w_0) - \nabla f(w^*))\|^2 + \|\nabla f(w) - \nabla f(w^*)\|^2\big] \\
&\le 3\,\mathbb{E}_i\big[\|\nabla f_i(w) - \nabla f_i(w^*)\|^2 + \|\nabla f_i(w_0) - \nabla f_i(w^*)\|^2 + \|\nabla f(w) - \nabla f(w^*)\|^2\big],
\end{aligned}$$

where the first inequality is the Cauchy-Schwarz inequality, and the second one is by the fact $\mathbb{E}_i[\nabla f_i(w_0) - \nabla f_i(w^*)] = \nabla f(w_0) - \nabla f(w^*)$ and that the variance of a random variable is bounded by its second moment. We now apply Property (1) to bound each of the three terms above. For example,

$$\mathbb{E}_i\|\nabla f_i(w) - \nabla f_i(w^*)\|^2 \le 2L\,\mathbb{E}_i\big[f_i(w) - f_i(w^*) - \nabla f_i(w^*)^\top (w - w^*)\big] = 2L\big(f(w) - f(w^*) - \nabla f(w^*)^\top (w - w^*)\big),$$

which is at most $2L(f(w) - f(w^*))$ by the optimality of $w^*$. Proceeding similarly for the other two terms concludes the proof.

Proof of Lemma 2. For any $s \le k$, by smoothness we have

$$f(x_s) \le f(x_{s-1}) + \nabla f(x_{s-1})^\top (x_s - x_{s-1}) + \frac{L}{2}\|x_s - x_{s-1}\|^2.$$

Plugging in $x_s = (1-\gamma_s)x_{s-1} + \gamma_s v_s$ gives

$$f(x_s) \le f(x_{s-1}) + \gamma_s \nabla f(x_{s-1})^\top (v_s - x_{s-1}) + \frac{L\gamma_s^2}{2}\|v_s - x_{s-1}\|^2.$$

Rewriting and using the fact that $\|v_s - x_{s-1}\| \le D$ leads to

$$f(x_s) \le f(x_{s-1}) + \gamma_s \tilde{\nabla}_s^\top (v_s - x_{s-1}) + \gamma_s(\nabla f(x_{s-1}) - \tilde{\nabla}_s)^\top (v_s - x_{s-1}) + \frac{LD^2\gamma_s^2}{2}.$$

The optimality of $v_s$ implies $\tilde{\nabla}_s^\top v_s \le \tilde{\nabla}_s^\top w^*$. So with further rewriting we arrive at

$$f(x_s) \le f(x_{s-1}) + \gamma_s \nabla f(x_{s-1})^\top (w^* - x_{s-1}) + \gamma_s(\nabla f(x_{s-1}) - \tilde{\nabla}_s)^\top (v_s - w^*) + \frac{LD^2\gamma_s^2}{2}.$$

By convexity, the term $\nabla f(x_{s-1})^\top (w^* - x_{s-1})$ is bounded by $f(w^*) - f(x_{s-1})$, and by the Cauchy-Schwarz inequality, the term $(\nabla f(x_{s-1}) - \tilde{\nabla}_s)^\top (v_s - w^*)$ is bounded by $D\|\tilde{\nabla}_s - \nabla f(x_{s-1})\|$, which in expectation is at most $\frac{LD^2}{s+1}$ by the condition on $\mathbb{E}[\|\tilde{\nabla}_s - \nabla f(x_{s-1})\|^2]$ and Jensen's inequality. Therefore we can bound $\mathbb{E}[f(x_s) - f(w^*)]$ by

$$(1-\gamma_s)\,\mathbb{E}[f(x_{s-1}) - f(w^*)] + \frac{LD^2\gamma_s}{s+1} + \frac{LD^2\gamma_s^2}{2} = (1-\gamma_s)\,\mathbb{E}[f(x_{s-1}) - f(w^*)] + LD^2\gamma_s^2.$$

Finally we prove $\mathbb{E}[f(x_k) - f(w^*)] \le \frac{4LD^2}{k+2}$ by induction. The base case is trivial: $\mathbb{E}[f(x_1) - f(w^*)]$ is bounded by $(1-\gamma_1)\,\mathbb{E}[f(x_0) - f(w^*)] + LD^2\gamma_1^2 = LD^2$ since $\gamma_1 = 1$. Suppose $\mathbb{E}[f(x_{s-1}) - f(w^*)] \le \frac{4LD^2}{s+1}$; then with $\gamma_s = \frac{2}{s+1}$ we bound $\mathbb{E}[f(x_s) - f(w^*)]$ by

$$\Big(1 - \frac{2}{s+1}\Big)\frac{4LD^2}{s+1} + \frac{4LD^2}{(s+1)^2} \le \frac{4LD^2}{s+2},$$

completing the induction.

7. Conclusion and Open Problems

We conclude that the variance reduction technique, previously shown to be highly useful for gradient descent variants, can also be very helpful in speeding up projection-free algorithms. The main open question is, in the strongly convex case, whether the number of stochastic gradients for STORC can be improved from O(µ^2 ln(1/ϵ)) to O(µ ln(1/ϵ)), which is typical for gradient descent methods, and whether the number of linear optimizations can be improved from O(1/ϵ) to O(ln(1/ϵ)).

Acknowledgements

The authors acknowledge support from the National Science Foundation grant IIS-1523815 and a Google research award.

References

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.

Dudik, Miro, Harchaoui, Zaid, and Malick, Jérôme. Lifted coordinate descent for learning with trace-norm regularization. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22, pp. 327-336, 2012.

Frank, Marguerite and Wolfe, Philip. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.

Frostig, Roy, Ge, Rong, Kakade, Sham M., and Sidford, Aaron. Competing with the empirical risk minimizer in a single pass. In Proceedings of the 28th Annual Conference on Learning Theory, 2015.
Garber, Dan and Hazan, Elad. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. arXiv preprint arXiv:1301.4666, 2013.

Harchaoui, Zaid, Juditsky, Anatoli, and Nemirovski, Arkadi. Conditional gradient algorithms for norm-regularized smooth convex optimization. Mathematical Programming, 152(1-2):75-112, 2015.

Hazan, Elad and Kale, Satyen. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, 2012.

Hazan, Elad, Kale, Satyen, and Shalev-Shwartz, Shai. Near-optimal algorithms for online matrix prediction. In COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, pp. 38.1-38.13, 2012.

Jaggi, Martin. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning, pp. 427-435, 2013.

Johnson, Rie and Zhang, Tong. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 27, pp. 315-323, 2013.

Lacoste-Julien, Simon and Jaggi, Martin. On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems 29, pp. 496-504, 2015.

Lan, Guanghui and Zhou, Yi. Conditional gradient sliding for convex optimization. Optimization-Online preprint (4605), 2014.

Mahdavi, Mehrdad, Zhang, Lijun, and Jin, Rong. Mixed optimization for smooth functions. In Advances in Neural Information Processing Systems, pp. 674-682, 2013.

Moritz, Philipp, Nishihara, Robert, and Jordan, Michael I. A linearly-convergent stochastic L-BFGS algorithm. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics, 2016.

Nesterov, Yu. E. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pp. 372-376, 1983.

Zhang, Xinhua, Schuurmans, Dale, and Yu, Yao-liang. Accelerated training for matrix-norm regularization: A boosting approach. In Advances in Neural Information Processing Systems 26, pp. 2906-2914, 2012.