# SEGA: Variance Reduction via Gradient Sketching

Filip Hanzely¹  Konstantin Mishchenko¹  Peter Richtárik¹,²,³
¹King Abdullah University of Science and Technology, ²University of Edinburgh, ³Moscow Institute of Physics and Technology

32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.

We propose a randomized first-order optimization method SEGA (SkEtched GrAdient) which progressively, throughout its iterations, builds a variance-reduced estimate of the gradient from random linear measurements (sketches) of the gradient. In each iteration, SEGA updates the current estimate of the gradient through a sketch-and-project operation using the information provided by the latest sketch, and this is subsequently used to compute an unbiased estimate of the true gradient through a random relaxation procedure. This unbiased estimate is then used to perform a gradient step. Unlike standard subspace descent methods, such as coordinate descent, SEGA can be used for optimization problems with a non-separable proximal term. We provide a general convergence analysis and prove linear convergence for strongly convex objectives. In the special case of coordinate sketches, SEGA can be enhanced with various techniques such as importance sampling, minibatching and acceleration, and its rate is, up to a small constant factor, identical to the best-known rate of coordinate descent.

1 Introduction

Consider the optimization problem
$$\min_{x\in\mathbb{R}^n} F(x) \overset{\text{def}}{=} f(x) + R(x), \qquad (1)$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is smooth and $\mu$-strongly convex, and $R:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is a closed convex regularizer. In some applications, $R$ is either the indicator function of a convex set or a sparsity-inducing non-smooth penalty such as the $\ell_1$-norm. We assume that the proximal operator of $R$, defined by
$$\mathrm{prox}_{\alpha R}(x) \overset{\text{def}}{=} \arg\min_{y\in\mathbb{R}^n}\left\{\alpha R(y) + \tfrac{1}{2}\|y - x\|^2\right\},$$
is easily computable (e.g., in closed form). Above we use the weighted Euclidean norm $\|x\|_{\mathbf B} \overset{\text{def}}{=} \langle x, x\rangle_{\mathbf B}^{1/2}$, where $\langle x, y\rangle_{\mathbf B} \overset{\text{def}}{=} \langle \mathbf{B}x, y\rangle$ is a weighted inner product associated with a positive definite weight matrix $\mathbf{B}\succ 0$. Strong convexity of $f$ is defined with respect to the same product and norm.¹

¹ $f$ is $\mu$-strongly convex if $f(x) \ge f(y) + \langle\nabla f(y), x - y\rangle_{\mathbf B} + \frac{\mu}{2}\|x - y\|_{\mathbf B}^2$ for all $x, y\in\mathbb{R}^n$.

1.1 Gradient sketching

In this paper we design proximal gradient-type methods for solving (1) without assuming that the true gradient of $f$ is available. Instead, we assume that an oracle provides a random linear transformation (i.e., a sketch) of the gradient, which is the information available to drive the iterative process. In particular, given a fixed distribution $\mathcal{D}$ over matrices $\mathbf{S}\in\mathbb{R}^{n\times b}$ ($b\ge 1$ can but does not need to be fixed), and a query point $x\in\mathbb{R}^n$, our oracle provides us the random linear transformation of the gradient given by
$$\zeta(\mathbf{S}, x) \overset{\text{def}}{=} \mathbf{S}^\top \nabla f(x) \in\mathbb{R}^b, \qquad \mathbf{S}\sim\mathcal{D}. \qquad (2)$$
Information of this type is available/used in a variety of scenarios. For instance, randomized coordinate descent (CD) methods use oracle (2) with $\mathcal{D}$ corresponding to a distribution over standard basis vectors. Minibatch/parallel variants of CD methods utilize oracle (2) with $\mathcal{D}$ corresponding to a distribution over random column submatrices of the identity matrix. If one is prepared to use differences of function values to approximate directional derivatives, one can apply our oracle model to zeroth-order optimization [8]. Indeed, the directional derivative of $f$ in a random direction $\mathbf{S} = s\in\mathbb{R}^{n\times 1}$ can be approximated by $\zeta(s, x) \approx \frac{1}{\varepsilon}\left(f(x + \varepsilon s) - f(x)\right)$, where $\varepsilon > 0$ is sufficiently small.

We now illustrate this concept using two examples.

Example 1.1 (Sketches). (i) Coordinate sketch. Let $\mathcal{D}$ be the uniform distribution over the standard unit basis vectors $e_1, e_2, \dots, e_n$ of $\mathbb{R}^n$. Then $\zeta(e_i, x) = e_i^\top\nabla f(x)$, i.e., the $i$th partial derivative of $f$ at $x$. (ii) Gaussian sketch. Let $\mathcal{D}$ be the standard Gaussian distribution in $\mathbb{R}^n$. Then for $s\sim\mathcal{D}$ we have $\zeta(s, x) = s^\top\nabla f(x)$, i.e., the directional derivative of $f$ at $x$ in direction $s$.
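To make the oracle (2) concrete, the following minimal NumPy sketch (ours; the function names and the quadratic test problem are not from the paper) evaluates $\zeta(\mathbf{S},x)=\mathbf{S}^\top\nabla f(x)$ both exactly and through the finite-difference approximation used in the zeroth-order setting, for the coordinate and Gaussian sketches of Example 1.1.

```python
import numpy as np

def sketched_grad(grad_f, x, S):
    """Oracle (2): zeta(S, x) = S^T grad f(x), a vector in R^b."""
    return S.T @ grad_f(x)

def sketched_grad_zeroth_order(f, x, S, eps=1e-6):
    """Zeroth-order approximation of the same oracle: each column s of S
    is handled with the finite difference (f(x + eps*s) - f(x)) / eps."""
    fx = f(x)
    return np.array([(f(x + eps * S[:, j]) - fx) / eps for j in range(S.shape[1])])

if __name__ == "__main__":
    n = 5
    A = np.random.randn(n, n)
    A = A.T @ A + np.eye(n)                      # a PD quadratic: f(x) = 0.5 x^T A x - b^T x
    b = np.random.randn(n)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad_f = lambda x: A @ x - b
    x = np.random.randn(n)
    e3 = np.eye(n)[:, [2]]                       # coordinate sketch: 3rd partial derivative
    s = np.random.randn(n, 1)                    # Gaussian sketch: a directional derivative
    print(sketched_grad(grad_f, x, e3), sketched_grad_zeroth_order(f, x, e3))
    print(sketched_grad(grad_f, x, s), sketched_grad_zeroth_order(f, x, s))
```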
1.2 Related work

In the last decade, stochastic gradient-type methods for solving problem (1) have received unprecedented attention by theoreticians and practitioners alike. Specific examples of such methods are stochastic gradient descent (SGD) [43], variance-reduced variants of SGD such as SAG [44], SAGA [10], SVRG [22], and their accelerated counterparts [26, 1]. While these methods are specifically designed for objectives formulated as an expectation or a finite sum, we do not assume such a structure. Moreover, these methods utilize fundamentally different stochastic gradient information: they have access to an unbiased gradient estimator. In contrast, we do not assume that (2) is an unbiased estimator of $\nabla f(x)$. In fact, $\zeta(\mathbf{S}, x)\in\mathbb{R}^b$ and $\nabla f(x)\in\mathbb{R}^n$ do not even necessarily belong to the same space. Therefore, our algorithms and results are complementary to the above line of research.

While the gradient sketch $\zeta(\mathbf{S}, x)$ does not immediately lead to an unbiased estimator of the gradient, SEGA uses the information provided in the sketch to construct an unbiased estimator of the gradient via a sketch-and-project process. Sketch-and-project iterations were introduced in [15] in the context of linear feasibility problems. A dual view uncovering a direct relationship with stochastic subspace ascent methods was developed in [16]. The latest and most in-depth treatment of sketch-and-project for linear feasibility is based on the idea of stochastic reformulations [42]. Sketch-and-project can be combined with Polyak [29, 28] and Nesterov momentum [14, 47], and extended to convex feasibility problems [30], matrix inversion [18, 17, 14], and empirical risk minimization [13, 19].

The line of work most closely related to our setup is that on randomized coordinate/subspace descent methods [34, 16]. Indeed, the information available to these methods is compatible with our oracle for specific distributions $\mathcal{D}$. However, the main disadvantage of these methods is that they cannot handle non-separable regularizers $R$. In contrast, the algorithm we propose, SEGA, works for any regularizer $R$. In particular, SEGA can handle non-separable constraints even with coordinate sketches, which is out of reach of current CD methods. Hence, our work could be understood as extending the reach of coordinate and subspace descent methods from separable to arbitrary regularizers, which allows for a plethora of new applications. Our method is able to work with an arbitrary regularizer due to its ability to build an unbiased, variance-reduced estimate of the gradient of $f$ throughout the iterative process from the random sketches provided by the oracle. Moreover, and unlike coordinate descent, SEGA allows for general sketches from essentially any distribution $\mathcal{D}$.

Another stream of work on designing gradient-type methods without assuming perfect access to the gradient is represented by the inexact gradient descent methods [9, 11, 45]. However, these methods deal with deterministic estimates of the gradient and are not based on linear transformations of the gradient.
Therefore, this second line of research is also significantly different from what we do here.

1.3 Outline

We describe SEGA in Section 2. Convergence results for general sketches are described in Section 3. Refined results for coordinate sketches are presented in Section 4, where we also describe and analyze an accelerated variant of SEGA. Experimental results can be found in Section 5. Conclusions are drawn and potential extensions outlined in Appendix A. Proofs of the main results can be found in Appendices B and C. An aggressive subspace variant of SEGA is described and analyzed in Appendix D. A simplified analysis of SEGA in the case of coordinate sketches and for $R \equiv 0$ is developed in Appendix E (under standard assumptions as in the main paper) and Appendix F (under alternative assumptions). Extra experiments providing additional insights are included in Appendix G.

Notation. We introduce notation where needed. We also provide a notation table in Appendix H.

2 The SEGA Algorithm

In this section we introduce a learning process for estimating the gradient from the sketched information provided by (2); this will be used as a subroutine of SEGA. Let $x^k$ be the current iterate, and let $h^k$ be the current estimate of the gradient of $f$. The oracle is queried, and we receive new information in the form of the sketched gradient (2). We would then like to update $h^k$ based on the new information. We do this using a sketch-and-project process [15, 16, 42]: we set $h^{k+1}$ to be the closest vector to $h^k$ (in a certain Euclidean norm) consistent with (2):
$$h^{k+1} = \arg\min_{h\in\mathbb{R}^n} \|h - h^k\|_{\mathbf B}^2 \quad\text{subject to}\quad \mathbf{S}_k^\top h = \mathbf{S}_k^\top\nabla f(x^k). \qquad (3)$$
The closed-form solution of (3) is
$$h^{k+1} = h^k - \mathbf{B}^{-1}\mathbf{Z}_k\left(h^k - \nabla f(x^k)\right) = \left(\mathbf{I} - \mathbf{B}^{-1}\mathbf{Z}_k\right)h^k + \mathbf{B}^{-1}\mathbf{Z}_k\nabla f(x^k), \qquad (4)$$
where $\mathbf{Z}_k \overset{\text{def}}{=} \mathbf{S}_k\left(\mathbf{S}_k^\top\mathbf{B}^{-1}\mathbf{S}_k\right)^{\dagger}\mathbf{S}_k^\top$. Notice that $h^{k+1}$ is a biased estimator of $\nabla f(x^k)$. In order to obtain an unbiased gradient estimator, we introduce a random variable² $\theta_k = \theta(\mathbf{S}_k)$ for which
$$\mathbb{E}_{\mathcal D}\left[\theta_k\mathbf{Z}_k\right] = \mathbf{B}. \qquad (5)$$
If $\theta_k$ satisfies (5), it is straightforward to see that the random vector
$$g^k \overset{\text{def}}{=} (1 - \theta_k)h^k + \theta_k h^{k+1} \overset{(4)}{=} h^k + \theta_k\mathbf{B}^{-1}\mathbf{Z}_k\left(\nabla f(x^k) - h^k\right) \qquad (6)$$
is an unbiased estimator of the gradient:
$$\mathbb{E}_{\mathcal D}\left[g^k\right] \overset{(5)+(6)}{=} \nabla f(x^k). \qquad (7)$$
Finally, we use $g^k$ instead of the true gradient and perform a proximal step with respect to $R$. This leads to a new optimization method, which we call the SkEtched GrAdient method (SEGA), described in Algorithm 1. We stress again that the method does not need access to the full gradient.

² Such a random variable may not exist. Some sufficient conditions are provided later.

Algorithm 1: SEGA: SkEtched GrAdient Method
1: Initialize: $x^0, h^0\in\mathbb{R}^n$; $\mathbf{B}\succ 0$; distribution $\mathcal{D}$; stepsize $\alpha > 0$
2: for $k = 0, 1, 2, \dots$ do
3:   Sample $\mathbf{S}_k\sim\mathcal{D}$
4:   $g^k = h^k + \theta_k\mathbf{B}^{-1}\mathbf{Z}_k\left(\nabla f(x^k) - h^k\right)$
5:   $x^{k+1} = \mathrm{prox}_{\alpha R}\left(x^k - \alpha g^k\right)$
6:   $h^{k+1} = h^k + \mathbf{B}^{-1}\mathbf{Z}_k\left(\nabla f(x^k) - h^k\right)$

Figure 1: Iterates of SEGA and CD.

2.1 SEGA as a variance-reduced method

As we shall show, both $h^k$ and $g^k$ become better at approximating $\nabla f(x^k)$ as the iterates $x^k$ approach the optimum. Hence, the variance of $g^k$ as an estimator of the gradient tends to zero, which means that SEGA is a variance-reduced algorithm. The structure of SEGA is inspired by the JacSketch algorithm introduced in [19]. However, as JacSketch is aimed at solving a finite-sum optimization problem with many components, it does not make much sense to apply it to (1). Indeed, when applied to (1) (with $R = 0$, since JacSketch was analyzed for smooth optimization only), JacSketch reduces to gradient descent. While JacSketch performs Jacobian sketching (i.e., multiplying the Jacobian by a random matrix from the right, effectively sampling a subset of the gradients forming the finite sum), SEGA multiplies the Jacobian by a random matrix from the left. In doing so, SEGA becomes oblivious to the finite-sum structure and transforms into the gradient sketching mechanism described in (2).
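The following is a minimal, self-contained NumPy sketch of Algorithm 1 (ours, not the authors' code). It assumes $\mathbf{B} = \mathbf{I}$ by default, uses single-column coordinate sketches with uniform probabilities (so $\theta_k = n$) as the demo distribution, sets $R = 0$ so that the prox is the identity, and uses a heuristic stepsize; none of these choices are prescribed by the paper beyond what Example 2.1 and Corollary 3.5 suggest.

```python
import numpy as np

def sega(grad_f, prox, x0, sample_S, theta, alpha, n_iters, B=None):
    """Minimal sketch of Algorithm 1 (SEGA).
    sample_S() returns the sketch matrix S_k (n x b);
    theta(S) returns the bias-correcting scalar from (5);
    prox(u, a) evaluates prox_{a R}(u)."""
    n = x0.shape[0]
    B = np.eye(n) if B is None else B
    Binv = np.linalg.inv(B)
    x, h = x0.astype(float).copy(), np.zeros(n)
    for _ in range(n_iters):
        S = sample_S()
        Z = S @ np.linalg.pinv(S.T @ Binv @ S) @ S.T      # Z_k = S_k (S_k^T B^{-1} S_k)^+ S_k^T
        r = Binv @ (Z @ (grad_f(x) - h))                  # B^{-1} Z_k (grad f(x^k) - h^k)
        g = h + theta(S) * r                              # unbiased gradient estimate (6)
        x = prox(x - alpha * g, alpha)                    # proximal step of Algorithm 1
        h = h + r                                         # sketch-and-project update (4)
    return x

if __name__ == "__main__":
    n = 50
    A = np.random.randn(n, n)
    A = A.T @ A / n + np.eye(n)
    b = np.random.randn(n)
    grad_f = lambda x: A @ x - b                          # f(x) = 0.5 x^T A x - b^T x
    sample_S = lambda: np.eye(n)[:, [np.random.randint(n)]]   # uniform coordinate sketch
    theta = lambda S: float(n)                            # theta_k = 1/p_i = n (Example 2.1)
    prox = lambda u, a: u                                 # R = 0
    L = np.linalg.eigvalsh(A).max()
    x = sega(grad_f, prox, np.zeros(n), sample_S, theta,
             alpha=1.0 / (4 * L * n), n_iters=20000)      # heuristic O(1/(Ln)) stepsize
    print(np.linalg.norm(x - np.linalg.solve(A, b)))      # distance to the minimizer
```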
2.2 SEGA versus coordinate descent

We now illustrate the above general setup on the simple example when $\mathcal{D}$ corresponds to a distribution over standard unit basis vectors in $\mathbb{R}^n$.

Example 2.1. Let $\mathbf{B} = \mathrm{Diag}(b_1, \dots, b_n)\succ 0$ and let $\mathcal{D}$ be defined as follows. We choose $\mathbf{S}_k = e_i$ with probability $p_i > 0$, where $e_1, e_2, \dots, e_n$ are the unit basis vectors in $\mathbb{R}^n$. Then
$$h^{k+1} \overset{(4)}{=} h^k + e_i e_i^\top\left(\nabla f(x^k) - h^k\right), \qquad (8)$$
which can equivalently be written as $h^{k+1}_i = \nabla_i f(x^k)$ and $h^{k+1}_j = h^k_j$ for $j\neq i$. Note that $h^{k+1}$ does not depend on $\mathbf{B}$. If we choose $\theta_k = \theta(\mathbf{S}_k) = 1/p_i$, then
$$\mathbb{E}_{\mathcal D}\left[\theta_k\mathbf{Z}_k\right] = \sum_i p_i\,\frac{1}{p_i}\, e_i\left(e_i^\top\mathbf{B}^{-1}e_i\right)^{-1}e_i^\top = \mathbf{B},$$
which means that $\theta_k$ is a bias-correcting random variable. We then get
$$g^k \overset{(6)}{=} h^k + \frac{1}{p_i}\left(\nabla_i f(x^k) - h^k_i\right)e_i. \qquad (9)$$

In the setup of Example 2.1, both SEGA and CD obtain new gradient information in the form of a random partial derivative of $f$. However, the two methods perform a different update: (i) SEGA allows for an arbitrary proximal term, while CD allows for a separable one only [46, 27, 12]; (ii) while SEGA updates all coordinates in every iteration, CD updates a single coordinate only; (iii) if we force $h^k = 0$ in SEGA and use coordinate sketches, the method transforms into CD.

Based on the above observations, we conclude that SEGA can be applied in more general settings, for the price of potentially more expensive iterations³. For an intuition-building illustration of how SEGA works, Figure 1 shows the evolution of the iterates of both SEGA and CD applied to minimizing a simple quadratic function in 2 dimensions. For more figures of this type, including the composite case where CD does not work, see Appendix G.1. In Section 4 we show that SEGA enjoys, up to a small constant factor, the same theoretical iteration complexity as CD. This remains true when comparing state-of-the-art variants of CD with importance sampling, parallelism/mini-batching and acceleration with the corresponding variants of SEGA.

³ Forming the vector $g^k$ and computing the prox.

Remark 2.2. Nontrivial sketches $\mathbf{S}$ and metric $\mathbf{B}$ might, in some applications, bring a substantial speedup over the baseline choices mentioned in Example 2.1. Appendix D provides one example: there are problems where the gradient of $f$ always lies in a particular $d$-dimensional subspace of $\mathbb{R}^n$. In such a case, a suitable choice of $\mathbf{S}$ and $\mathbf{B}$ leads to $\mathcal{O}(n/d)$ times faster convergence compared to the setup of Example 2.1. In Section 5.3 we numerically demonstrate this claim.
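For the coordinate sketches of Example 2.1, property (7) can be verified directly; the short calculation below (ours, added for completeness) makes the role of the bias-correcting variable explicit:
$$\mathbb{E}_{\mathcal D}\big[g^k\big] \overset{(9)}{=} h^k + \sum_{i=1}^n p_i\,\frac{1}{p_i}\big(\nabla_i f(x^k) - h^k_i\big)e_i = h^k + \big(\nabla f(x^k) - h^k\big) = \nabla f(x^k),$$
so $g^k$ is unbiased for any proper probabilities $p_i > 0$, even though the estimate $h^{k+1}$ produced by (8) is itself biased.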
3 Convergence of SEGA for General Sketches

In this section we state a linear convergence result for SEGA (Algorithm 1) for general sketch distributions $\mathcal{D}$ under smoothness and strong convexity assumptions.

3.1 Smoothness assumptions

We will use the following general version of smoothness.

Assumption 3.1 (Q-smoothness). Function $f$ is $\mathbf{Q}$-smooth with respect to $\mathbf{B}$, where $\mathbf{Q}\succ 0$ and $\mathbf{B}\succ 0$. That is, for all $x, y$, the following inequality is satisfied:
$$f(x) - f(y) - \langle\nabla f(y), x - y\rangle_{\mathbf B} \;\ge\; \tfrac{1}{2}\left\|\nabla f(x) - \nabla f(y)\right\|_{\mathbf Q}^2.$$
Assumption 3.1 is not standard in the literature. However, as Lemma B.1 states, in the special case of $\mathbf{B} = \mathbf{I}$ and $\mathbf{Q} = \mathbf{M}^{-1}$, it reduces to $\mathbf{M}$-smoothness (see Assumption 3.2), which is a common assumption in modern analysis of CD methods.

Assumption 3.2 (M-smoothness). Function $f$ is $\mathbf{M}$-smooth for some matrix $\mathbf{M}\succ 0$. That is, for all $x, y$, the following inequality is satisfied:
$$f(x) \le f(y) + \langle\nabla f(y), x - y\rangle + \tfrac{1}{2}\left\|x - y\right\|_{\mathbf M}^2.$$
Assumption 3.2 is fairly standard in the CD literature. It appears naturally in various applications such as empirical risk minimization with linear predictors, and is a baseline in the development of minibatch CD methods [41, 38, 36, 39]. We will adopt this notion in Section 4, when comparing SEGA to coordinate descent. Until then, let us consider the more general Assumption 3.1.

3.2 Main result

Now we present one of the key theorems of the paper, stating linear convergence of SEGA.

Theorem 3.3. Assume that $f$ is $\mathbf{Q}$-smooth with respect to $\mathbf{B}$, and $\mu$-strongly convex. Fix $x^0, h^0\in\mathrm{dom}(F)$ and let $x^k, h^k$ be the random iterates produced by SEGA. Choose the stepsize $\alpha > 0$ and the Lyapunov parameter $\sigma > 0$ so that
$$\alpha\left(2(\mathbf{C} - \mathbf{B}) + \sigma\mu\mathbf{B}\right) \preceq \sigma\,\mathbb{E}_{\mathcal D}[\mathbf{Z}], \qquad \alpha\mathbf{C} \preceq \tfrac{1}{2}\left(\mathbf{Q} - \sigma\,\mathbb{E}_{\mathcal D}[\mathbf{Z}]\right), \qquad (12)$$
where $\mathbf{C}$ is a matrix satisfying $\mathbb{E}_{\mathcal D}\left[\theta_k^2\mathbf{Z}_k\mathbf{B}^{-1}\mathbf{Z}_k\right] \preceq \mathbf{C}$. Then
$$\mathbb{E}\left[\Phi^k\right] \le (1 - \alpha\mu)^k\,\Phi^0$$
for the Lyapunov function $\Phi^k \overset{\text{def}}{=} \|x^k - x^*\|_{\mathbf B}^2 + \sigma\|h^k - \nabla f(x^*)\|_{\mathbf B}^2$, where $x^*$ is a solution of (1).

Table 1: Complexity results for coordinate descent (CD) and our sketched gradient method (SEGA), specialized to coordinate sketching, for $\mathbf{M}$-smooth and $\mu$-strongly convex functions. The four rows cover the nonaccelerated method with importance sampling ($b = 1$) [34], the nonaccelerated method with arbitrary sampling, the accelerated method with importance sampling ($b = 1$), and the accelerated method with arbitrary sampling. In each row SEGA matches the corresponding CD rate up to a small constant factor; for example, the importance-sampling rates are of the form $\frac{\mathrm{Trace}(\mathbf M)}{\mu}\log\frac{1}{\epsilon}$ (nonaccelerated) and $\frac{\sum_i\sqrt{\mathbf M_{ii}}}{\sqrt{\mu}}\log\frac{1}{\epsilon}$ (accelerated), with constants such as $8.55$ and $1.62$ appearing in the nonaccelerated and accelerated rows, respectively.

Note that $\Phi^k\to 0$ implies $h^k\to\nabla f(x^*)$. Therefore SEGA is variance-reduced, in contrast to CD in the non-separable proximal setup, which does not converge to the solution. If $\sigma$ is small enough so that $\mathbf{Q} - \sigma\mathbb{E}_{\mathcal D}[\mathbf{Z}]\succeq 0$, one can always choose a stepsize $\alpha$ satisfying
$$\alpha \le \min\left\{\frac{\lambda_{\min}\left(\mathbb{E}_{\mathcal D}[\mathbf{Z}]\right)}{\lambda_{\max}\left(2\sigma^{-1}(\mathbf{C} - \mathbf{B}) + \mu\mathbf{B}\right)},\; \frac{\lambda_{\min}\left(\mathbf{Q} - \sigma\mathbb{E}_{\mathcal D}[\mathbf{Z}]\right)}{2\lambda_{\max}(\mathbf{C})}\right\}, \qquad (13)$$
and inequalities (12) will hold. Therefore, we get the next corollary.

Corollary 3.4. If $\sigma < \lambda_{\min}(\mathbf{Q})/\lambda_{\max}\left(\mathbb{E}_{\mathcal D}[\mathbf{Z}]\right)$, $\alpha$ satisfies (13), and $k \ge \frac{1}{\alpha\mu}\log\frac{\Phi^0}{\epsilon}$, then $\mathbb{E}\left[\Phi^k\right]\le\epsilon$.

As Theorem 3.3 is rather general, we also provide a simplified version thereof, complete with a simplified analysis (Theorem E.1 in Appendix E). In the simplified version we remove the proximal setting (i.e., we set $R = 0$), assume $L$-smoothness⁴, and only consider coordinate sketches with uniform probabilities. The result is provided as Corollary 3.5.

Corollary 3.5. Let $\mathbf{B} = \mathbf{I}$ and choose $\mathcal{D}$ to be the uniform distribution over unit basis vectors in $\mathbb{R}^n$. If the stepsize satisfies
$$0 < \alpha \le \min\left\{\frac{1 - L\sigma/n}{2Ln},\; \frac{1}{n\left(\mu + 2(n-1)/\sigma\right)}\right\},$$
then $\mathbb{E}_{\mathcal D}\left[\Phi^{k+1}\right] \le (1 - \alpha\mu)\Phi^k$, and therefore the iteration complexity is $\mathcal{O}(nL/\mu)$.
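The iteration counts behind Corollaries 3.4 and 3.5 follow from the per-step contraction by the standard argument, spelled out here for completeness (this derivation is ours):
$$\mathbb{E}\big[\Phi^k\big] \le (1 - \alpha\mu)^k\,\Phi^0 \le e^{-\alpha\mu k}\,\Phi^0 \le \epsilon \qquad\text{whenever}\qquad k \ge \frac{1}{\alpha\mu}\log\frac{\Phi^0}{\epsilon};$$
with a stepsize of order $\alpha = \Theta(1/(nL))$, as allowed by Corollary 3.5, this yields the stated $\mathcal{O}(nL/\mu)$ iteration complexity, up to the logarithmic factor.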
Remark 3.6. In the fully general case, one might choose $\alpha$ to be bigger than the bound (13), which depends on the eigen properties of $\mathbb{E}_{\mathcal D}[\mathbf{Z}]$, $\mathbf{C}$, $\mathbf{Q}$, $\mathbf{B}$, leading to a better overall complexity. However, in the simple case with $\mathbf{B} = \mathbf{I}$, $\mathbf{Q} = \mathbf{I}$ and $\mathbf{S}_k = e_{i_k}$ with uniform probabilities, bound (13) is tight.

4 Convergence of SEGA for Coordinate Sketches

In this section we compare SEGA with coordinate descent. We demonstrate that, specialized to a particular choice of the distribution $\mathcal{D}$ (where $\mathbf{S}$ is a random column submatrix of the identity matrix), which makes SEGA use the same random gradient information as that used in modern randomized CD methods, SEGA attains, up to a small constant factor, the same convergence rate as CD methods. Firstly, in Section 4.2 we develop SEGA in a general setup known as arbitrary sampling [41, 40, 37, 38, 6] (Theorem 4.2). Then, in Section 4.3 we develop an accelerated variant of SEGA (see Theorem C.5) for arbitrary sampling as well. Lastly, Corollary 4.3 and Corollary 4.4 provide importance sampling for both the nonaccelerated and the accelerated method, which matches, up to a constant factor, cutting-edge CD rates [41, 3] under the same oracle and assumptions⁵. Table 1 summarizes the results of this section. We provide all proofs for this section in Appendix C.

⁴ The standard $L$-smoothness assumption is a special case of $\mathbf{M}$-smoothness for $\mathbf{M} = L\mathbf{I}$, and hence is less general than both $\mathbf{M}$-smoothness and $\mathbf{Q}$-smoothness with respect to $\mathbf{B}$.
⁵ A notion of importance minibatch sampling for coordinate descent was recently introduced in [20]. We state, without a proof, that SEGA allows for the same importance sampling as developed in the mentioned paper.

We now describe the setup and technical assumptions for this section. In order to facilitate a direct comparison with CD (which does not work with a non-separable regularizer $R$), for simplicity we consider problem (1) in the simplified setting with $R\equiv 0$. Further, function $f$ is assumed to be $\mathbf{M}$-smooth (Assumption 3.2) and $\mu$-strongly convex.

4.1 Defining $\mathcal{D}$: samplings

In order to draw a direct comparison with general variants of CD methods (i.e., with those analyzed in the arbitrary sampling paradigm), we consider sketches in (3) that are column submatrices of the identity matrix: $\mathbf{S} = \mathbf{I}_S$, where $S$ is a random subset (aka sampling) of $[n]\overset{\text{def}}{=}\{1, 2, \dots, n\}$. Note that the columns of $\mathbf{I}_S$ are the standard basis vectors $e_i$ for $i\in S$, and hence $\mathrm{Range}(\mathbf{S}) = \mathrm{Range}(e_i : i\in S)$. So, the distribution $\mathcal{D}$ from which we draw matrices is uniquely determined by the distribution of the sampling $S$. Given a sampling $S$, define $p = (p_1, \dots, p_n)\in\mathbb{R}^n$ to be the vector satisfying $p_i = \mathbb{P}(e_i\in\mathrm{Range}(\mathbf{S})) = \mathbb{P}(i\in S)$, and $\mathbf{P}$ to be the matrix for which $\mathbf{P}_{ij} = \mathbb{P}(\{i, j\}\subseteq S)$. Note that $p$ and $\mathbf{P}$ are the probability vector and probability matrix of the sampling $S$, respectively [38]. We assume throughout the paper that $S$ is proper, i.e., we assume that $p_i > 0$ for all $i$.

State-of-the-art minibatch CD methods (including the ones we compare against [41, 20]) utilize large stepsizes related to the so-called Expected Separable Overapproximation (ESO) [38] parameters $v = (v_1, \dots, v_n)$. ESO parameters play a key role in SEGA as well, and are defined next.

Assumption 4.1 (ESO). There exists a vector $v$ satisfying the following inequality:
$$\mathbf{P}\circ\mathbf{M} \preceq \mathrm{Diag}(p)\,\mathrm{Diag}(v), \qquad (14)$$
where $\circ$ denotes the Hadamard (i.e., element-wise) product of matrices.

In the case of single coordinate sketches, the parameters $v$ are equal to the coordinate-wise smoothness constants of $f$. An extensive study of how to choose them in general was performed in [38]. For notational brevity, let us set $\hat{\mathbf{P}}\overset{\text{def}}{=}\mathrm{Diag}(p)$ and $\hat{\mathbf{V}}\overset{\text{def}}{=}\mathrm{Diag}(v)$ throughout this section.
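Assumption 4.1 is straightforward to check numerically for a concrete sampling. The snippet below (ours) builds the probability vector $p$ and probability matrix $\mathbf{P}$ of $\tau$-nice sampling in closed form and tests inequality (14) for a candidate $v$. The choice $v_i = \tau\mathbf{M}_{ii}$ used in the demo is a simple, valid but conservative choice (it follows from $\mathbf{M}\preceq n\,\mathrm{Diag}(\mathbf{M})$ for $\mathbf{M}\succeq 0$); tighter choices are studied in [38].

```python
import numpy as np

def eso_gap(M, p, P, v):
    """Smallest eigenvalue of Diag(p)Diag(v) - P o M, where 'o' is the Hadamard
    product; the ESO inequality (14) holds iff this value is nonnegative."""
    return np.linalg.eigvalsh(np.diag(p * v) - P * M).min()

def tau_nice(n, tau):
    """Probability vector p and probability matrix P of the tau-nice sampling
    (S is a uniformly random subset of [n] of size tau)."""
    p = np.full(n, tau / n)
    off = tau * (tau - 1) / (n * (n - 1))
    P = np.full((n, n), off)
    np.fill_diagonal(P, p)
    return p, P

if __name__ == "__main__":
    n, tau = 6, 3
    B = np.random.randn(n, n)
    M = B.T @ B + np.eye(n)                      # an M-smoothness matrix
    p, P = tau_nice(n, tau)
    print(eso_gap(M, p, P, v=tau * np.diag(M)))  # nonnegative (up to rounding): (14) holds
    print(eso_gap(M, p, P, v=0.1 * np.diag(M)))  # negative: this v is too small
```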
4.2 Non-accelerated method

We now state the convergence rate of (non-accelerated) SEGA for coordinate sketches with arbitrary sampling of subsets of coordinates. The corresponding CD method was developed in [41].

Theorem 4.2. Assume that $f$ is $\mathbf{M}$-smooth and $\mu$-strongly convex, and denote $\Psi^k \overset{\text{def}}{=} f(x^k) - f(x^*) + \sigma\|h^k\|^2_{\hat{\mathbf P}^{-1}}$. Suppose the stepsize $\alpha > 0$ and the Lyapunov parameter $\sigma > 0$ satisfy condition (15), a matrix inequality coupling $\alpha$, $\sigma$, $\mu$ and the quantity $\max_i\{v_i/p_i\}$ through the matrices $\hat{\mathbf V}\hat{\mathbf P}^{-1}$, $\mathbf{M}$ and $\hat{\mathbf P}^{-1}$ (the precise statement is given in Appendix C). Then $\Psi^k$ contracts linearly in expectation at the rate reported in Table 1.

We now give an importance sampling result for the coordinate version of SEGA. We recover, up to a constant factor, the same convergence rate as standard CD [34]. The probabilities we choose are optimal in our analysis and are proportional to the diagonal elements of the matrix $\mathbf{M}$.

Corollary 4.3. Assume that $f$ is $\mathbf{M}$-smooth and $\mu$-strongly convex. Suppose that $\mathcal{D}$ is such that at each iteration a standard unit basis vector $e_i$ is sampled with probability $p_i\propto\mathbf{M}_{ii}$. If we choose $\alpha = 0.232/\mathrm{Trace}(\mathbf{M})$ and $\sigma = 0.061/\mathrm{Trace}(\mathbf{M})$, then
$$\mathbb{E}\left[\Psi^k\right] \le \left(1 - \frac{0.117\,\mu}{\mathrm{Trace}(\mathbf{M})}\right)^k\Psi^0.$$

The iteration complexities from Theorem 4.2 and Corollary 4.3 are summarized in Table 1. We also state that $\sigma$ and $\alpha$ can be chosen so that (15) holds and the rate from Theorem 4.2 coincides with the rate from Table 1. Theorem 4.2 and Corollary 4.3 hold even under a non-convex relaxation of strong convexity, the Polyak-Łojasiewicz inequality: $\mu\left(f(x) - f(x^*)\right) \le \frac{1}{2}\|\nabla f(x)\|^2$. Thus, SEGA works for a certain class of non-convex problems. For an overview of relaxations of strong convexity, see [23].

4.3 Accelerated method

In this section, we propose an accelerated (in the sense of Nesterov's method [31, 32]) version of SEGA, which we call ASEGA. The analogous accelerated CD method, in which a single coordinate is sampled in every iteration, was developed and analyzed in [3]. The general variant utilizing arbitrary sampling was developed and analyzed in [20].

Algorithm 2: ASEGA: Accelerated SEGA
1: Initialize: $x^0 = y^0 = z^0\in\mathbb{R}^n$; $h^0\in\mathbb{R}^n$; sampling $S$; parameters $\alpha, \beta, \tau, \mu > 0$
2: for $k = 1, 2, \dots$ do
3:   $x^k = (1 - \tau)y^{k-1} + \tau z^{k-1}$
4:   Sample $\mathbf{S}_k = \mathbf{I}_{S_k}$, where $S_k\sim S$, and compute $g^k, h^{k+1}$ according to (4), (6)
5:   $y^k = x^k - \alpha\hat{\mathbf P}^{-1}g^k$
6:   $z^k = \frac{1}{1 + \beta\mu}\left(z^{k-1} + \beta\mu x^k - \beta g^k\right)$

The method and analysis are inspired by [2]. Due to space limitations and the technicality of the content, we state the main theorem of this section in Appendix C.4. Here, we provide Corollary 4.4, which shows that Algorithm 2 with single coordinate sampling enjoys, up to a constant factor, the same convergence rate as the state-of-the-art accelerated coordinate descent method NUACDM [3].

Corollary 4.4. Let the sampling be defined as follows: $S = \{i\}$ with probability $p_i\propto\sqrt{\mathbf{M}_{ii}}$, for $i\in[n]$. Then there exist acceleration parameters and a Lyapunov function $\Upsilon^k$ such that $f(y^k) - f(x^*)\le\Upsilon^k$ and $\mathbb{E}[\Upsilon^k]$ decreases linearly at the accelerated rate reported in Table 1, matching NUACDM up to a constant factor.

The iteration complexity provided by Theorem C.5 and Corollary 4.4 is summarized in Table 1.

5 Experiments

In this section we perform numerical experiments to illustrate the potential of SEGA. Firstly, in Section 5.1, we compare it to the projected gradient descent (PGD) algorithm. Then, in Section 5.2, we study the performance of zeroth-order SEGA (when sketched gradients are estimated through function value evaluations) and compare it to an analogous zeroth-order method. Lastly, in Section 5.3 we verify the claim from Remark 2.2 that in some applications, particular sketches and metrics might lead to significantly faster convergence. In the experiments where theory-supported stepsizes were used, we obtained them by precomputing the strong convexity and smoothness measures.

5.1 Comparison to projected gradient

In this experiment, we show the potential superiority of our method to PGD. We consider the $\ell_2$-ball constrained problem ($R$ is the indicator function of the unit ball) with the oracle providing the sketched gradient in a random Gaussian direction. As mentioned earlier, a method moving only along the sketched directions (the analogue of CD) will not converge, as $R$ is not separable. Therefore, we can only compare against the projected gradient. In order to obtain the full gradient for PGD, one needs to gather $n$ sketched gradients and solve the corresponding linear system. As for $f$, we choose 4 different quadratics, see Table 2 (appendix). We stress that these are synthetic problems generated for the purpose of illustrating the potential of our method against a natural baseline.
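A compact sketch of the setup just described (ours, not the authors' experimental code): SEGA with standard Gaussian sketches and $\mathbf{B} = \mathbf{I}$, where $\mathbf{Z}_k = ss^\top/\|s\|^2$ and $\theta_k = n$ satisfies (5) because $\mathbb{E}[ss^\top/\|s\|^2] = \mathbf{I}/n$, and where the prox is the Euclidean projection onto the unit ball. The quadratic data and the stepsize are placeholders, not the paper's Table 2 instances.

```python
import numpy as np

def proj_unit_ball(x):
    """prox of the indicator of the unit l2 ball = projection onto the ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def sega_gaussian_ball(grad_f, x0, alpha, n_iters, seed=0):
    """SEGA with Gaussian sketches for the l2-ball constrained problem of Section 5.1.
    For s ~ N(0, I): Z = s s^T / ||s||^2 and theta = n, so that E[theta Z] = I = B."""
    rng = np.random.default_rng(seed)
    n = x0.shape[0]
    x, h = x0.astype(float).copy(), np.zeros(n)
    for _ in range(n_iters):
        s = rng.standard_normal(n)
        r = (s @ (grad_f(x) - h)) / (s @ s) * s       # B^{-1} Z_k (grad f(x^k) - h^k)
        x = proj_unit_ball(x - alpha * (h + n * r))   # prox step with g^k = h^k + n r
        h = h + r                                     # sketch-and-project update of h^k
    return x

if __name__ == "__main__":
    n = 100
    A = np.random.randn(n, n)
    A = A.T @ A / n + np.eye(n)
    b = 3.0 * np.random.randn(n)                      # pushes the solution onto the boundary
    grad_f = lambda x: A @ x - b                      # a synthetic quadratic (ours)
    L = np.linalg.eigvalsh(A).max()
    x = sega_gaussian_ball(grad_f, np.zeros(n), alpha=1.0 / (4 * n * L), n_iters=50000)
    print(np.linalg.norm(x))                          # close to 1: the constraint is active
```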
Figure 2 compares SEGA and PGD under various relative cost scenarios of solving the linear system compared to the cost of the oracle calls. The results show that SEGA significantly outperforms PGD as soon as solving the linear system is expensive, and is as fast as PGD even if solving the linear system comes for free.

Figure 2: Convergence of SEGA and PGD on synthetic problems with $n = 500$. The indicator "Xn" in the label denotes the setting where the cost of solving the linear system is Xn times higher than that of an oracle call. Recall that a linear system is solved after every $n$ oracle calls. Stepsizes $1/\lambda_{\max}(\mathbf M)$ and $1/(n\lambda_{\max}(\mathbf M))$ were used for PGD and SEGA, respectively.

Figure 3: Comparison of SEGA and randomized direct search on various problems. Theory-supported stepsizes were chosen for both methods; 500-dimensional problem.

5.2 Comparison to zeroth-order optimization methods

In this section, we compare SEGA to the random direct search (RDS) method [5] under a zeroth-order oracle and $R = 0$. For SEGA, we estimate the sketched gradient using finite differences. Note that RDS is a randomized version of the classical direct search method [21, 24, 25]. At iteration $k$, RDS moves to $\arg\min\left\{f(x^k + \alpha_k s^k),\, f(x^k - \alpha_k s^k),\, f(x^k)\right\}$ for a random direction $s^k\sim\mathcal{D}$ and a suitable stepsize $\alpha_k$. For illustration, we choose $f$ to be a quadratic problem based on Table 2 and compare both Gaussian and coordinate sketches. Figure 3 shows that SEGA outperforms RDS.

5.3 Subspace SEGA: a more aggressive approach

As mentioned in Remark 2.2, well-designed sketches are capable of exploiting the structure of $f$ and lead to a better rate. We address this in detail in Appendix D, where we develop and analyze a subspace variant of SEGA. To illustrate this phenomenon in a simple setting, we perform experiments for problem (1) with $f(x) = \|\mathbf{A}x - b\|^2$, where $b\in\mathbb{R}^d$ and $\mathbf{A}\in\mathbb{R}^{d\times n}$ has orthogonal rows, and with $R$ being the indicator function of the unit ball in $\mathbb{R}^n$. We assume that $n\gg d$. We compare two methods: naiveSEGA, which uses coordinate sketches, and subspaceSEGA, where the sketches are chosen as rows of $\mathbf{A}$. Figure 4 indicates that subspaceSEGA outperforms naiveSEGA roughly by the factor $n/d$, as claimed in Appendix D.

Figure 4: Comparison of SEGA with sketches from the correct subspace (subspaceSEGA) versus coordinate sketches (naiveSEGA). Stepsizes chosen according to theory; 1000-dimensional problem.

References

[1] Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1200–1205. ACM, 2017.
[2] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. In Innovations in Theoretical Computer Science, 2017.
[3] Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, and Yang Yuan. Even faster accelerated coordinate descent using non-uniform sampling. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1110–1119, 2016.
[4] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] El Houcine Bergou, Peter Richtárik, and Eduard Gorbunov. Random direct search method for minimizing nonconvex, convex and strongly convex functions. Manuscript, 2018.
[6] Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik, and Carola-Bibiane Schönlieb.
Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM Journal on Optimization, 28(4):2783–2808, 2018.
[7] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.
[8] Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente. Introduction to derivative-free optimization, volume 8. SIAM, 2009.
[9] Alexandre d'Aspremont. Smooth optimization with approximate gradient. SIAM Journal on Optimization, 19(3):1171–1183, 2008.
[10] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646–1654, 2014.
[11] Olivier Devolder, François Glineur, and Yurii Nesterov. First-order methods of smooth convex optimization with inexact oracle. Mathematical Programming, 146(1-2):37–75, 2014.
[12] Olivier Fercoq and Peter Richtárik. Accelerated, parallel and proximal coordinate descent. SIAM Journal on Optimization, 25:1997–2023, 2015.
[13] Robert M. Gower, Donald Goldfarb, and Peter Richtárik. Stochastic block BFGS: squeezing more curvature out of data. In 33rd International Conference on Machine Learning, pages 1869–1878, 2016.
[14] Robert M. Gower, Filip Hanzely, Peter Richtárik, and Sebastian Stich. Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization. arXiv:1802.04079, 2018.
[15] Robert M. Gower and Peter Richtárik. Randomized iterative methods for linear systems. SIAM Journal on Matrix Analysis and Applications, 36(4):1660–1690, 2015.
[16] Robert M. Gower and Peter Richtárik. Stochastic dual ascent for solving linear systems. arXiv preprint arXiv:1512.06890, 2015.
[17] Robert M. Gower and Peter Richtárik. Linearly convergent randomized iterative methods for computing the pseudoinverse. arXiv:1612.06255, 2016.
[18] Robert M. Gower and Peter Richtárik. Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms. SIAM Journal on Matrix Analysis and Applications, 38(4):1380–1409, 2017.
[19] Robert M. Gower, Peter Richtárik, and Francis Bach. Stochastic quasi-gradient methods: Variance reduction via Jacobian sketching. arXiv preprint arXiv:1805.02632, 2018.
[20] Filip Hanzely and Peter Richtárik. Accelerated coordinate descent with arbitrary sampling and best rates for minibatches. arXiv preprint arXiv:1809.09354, 2018.
[21] Robert Hooke and Terry A. Jeeves. Direct search solution of numerical and statistical problems. Journal of the ACM (JACM), 8(2):212–229, 1961.
[22] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[23] Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795–811. Springer, 2016.
[24] Tamara G. Kolda, Robert M. Lewis, and Virginia Torczon. Optimization by direct search: New perspectives on some classical and modern methods. SIAM Review, 45(3):385–482, 2003.
[25] Jakub Konečný and Peter Richtárik. Simple complexity analysis of simplified direct search. arXiv preprint arXiv:1410.0390, 2014.
[26] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui.
A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3384–3392, 2015.
[27] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method. In Advances in Neural Information Processing Systems, pages 3059–3067, 2014.
[28] Nicolas Loizou and Peter Richtárik. Linearly convergent stochastic heavy ball method for minimizing generalization error. In NIPS Workshop on Optimization for Machine Learning, 2017.
[29] Nicolas Loizou and Peter Richtárik. Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. arXiv:1712.09677, 2017.
[30] Ion Necoara, Peter Richtárik, and Andrei Patrascu. Randomized projection methods for convex feasibility problems: conditioning and convergence rates. arXiv:1801.04873, 2018.
[31] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[32] Yurii Nesterov. Introductory lectures on convex optimization: A basic course. Kluwer Academic Publishers, 2004.
[33] Yurii Nesterov. Smooth minimization of nonsmooth functions. Mathematical Programming, 103:127–152, 2005.
[34] Yurii Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[35] Lam M. Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. SARAH: A novel method for machine learning problems using stochastic recursive gradient. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2613–2621. PMLR, 2017.
[36] Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling I: Algorithms and complexity. Optimization Methods and Software, 31(5):829–857, 2016.
[37] Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling I: Algorithms and complexity. Optimization Methods and Software, 31(5):829–857, 2016.
[38] Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling II: Expected separable overapproximation. Optimization Methods and Software, 31(5):858–884, 2016.
[39] Zheng Qu, Peter Richtárik, Martin Takáč, and Olivier Fercoq. SDNA: Stochastic dual Newton ascent for empirical risk minimization. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1823–1832. PMLR, 2016.
[40] Zheng Qu, Peter Richtárik, and Tong Zhang. Quartz: Randomized dual coordinate ascent with arbitrary sampling. In Advances in Neural Information Processing Systems, pages 865–873, 2015.
[41] Peter Richtárik and Martin Takáč. On optimal probabilities in stochastic coordinate descent methods. Optimization Letters, 10(6):1233–1243, 2016.
[42] Peter Richtárik and Martin Takáč. Stochastic reformulations of linear systems: algorithms and convergence theory. arXiv:1706.01108, 2017.
[43] Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400–407, 1951.
[44] Nicolas Le Roux, Mark Schmidt, and Francis Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pages 2663–2671, 2012.
[45] Mark Schmidt, Nicolas Le Roux, and Francis R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization.
In Advances in Neural Information Processing Systems, pages 1458–1466, 2011.
[46] Shai Shalev-Shwartz and Tong Zhang. Proximal stochastic dual coordinate ascent. arXiv preprint arXiv:1211.2717, 2012.
[47] Stephen Tu, Shivaram Venkataraman, Ashia C. Wilson, Alex Gittens, Michael I. Jordan, and Benjamin Recht. Breaking locality accelerates block Gauss–Seidel. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3482–3491. PMLR, 2017.