# Integration Methods and Optimization Algorithms

Damien Scieur, INRIA, ENS, PSL Research University, Paris, France. damien.scieur@inria.fr
Vincent Roulet, INRIA, ENS, PSL Research University, Paris, France. vincent.roulet@inria.fr
Francis Bach, INRIA, ENS, PSL Research University, Paris, France. francis.bach@inria.fr
Alexandre d'Aspremont, CNRS, ENS, PSL Research University, Paris, France. aspremon@ens.fr

**Abstract.** We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation. Compared with recent advances in this vein, the differential equation considered here is the basic gradient flow, and we derive a class of multi-step schemes which includes accelerated algorithms, using classical conditions from numerical analysis. Multi-step schemes integrate the differential equation using larger step sizes, which intuitively explains the acceleration phenomenon.

## Introduction

Applying the gradient descent algorithm to minimize a function $f$ has a simple numerical interpretation as the integration of the gradient flow equation, written

$$\dot{x}(t) = -\nabla f(x(t)), \quad x(0) = x_0, \qquad \text{(Gradient Flow)}$$

using Euler's method. This appears to be a somewhat unique connection between optimization and numerical methods, since these two fields have inherently different goals. On one hand, numerical methods aim to get a precise discrete approximation of the solution $x(t)$ on a finite time interval. On the other hand, optimization algorithms seek to find the minimizer of a function, which corresponds to the infinite time horizon of the gradient flow equation. More sophisticated methods than Euler's were developed to get better consistency with the continuous-time solution, but they still focus on a finite time horizon [see e.g. Süli and Mayers, 2003]. Similarly, structural assumptions on $f$ lead to more sophisticated optimization algorithms than the gradient method, such as the mirror gradient method [see e.g. Ben-Tal and Nemirovski, 2001; Beck and Teboulle, 2003], the proximal gradient method [Nesterov, 2007] or a combination thereof [Duchi et al., 2010; Nesterov, 2015]. Among them, Nesterov's accelerated gradient algorithm [Nesterov, 1983] is proven to be optimal on the class of smooth convex or strongly convex functions. This latter method was designed with optimal complexity in mind, but the proof relies on purely algebraic arguments and the key mechanism behind acceleration remains elusive, with various interpretations discussed in e.g. [Bubeck et al., 2015; Allen-Zhu and Orecchia, 2017; Lessard et al., 2016].

Another recent stream of papers used differential equations to model the acceleration behavior and offer another interpretation of Nesterov's algorithm [Su et al., 2014; Krichene et al., 2015; Wibisono et al., 2016; Wilson et al., 2016]. However, the differential equation is often quite complex, being reverse-engineered from Nesterov's method itself, thus losing the intuition. Moreover, integration methods for these differential equations are often ignored or are not derived from standard numerical integration schemes. Here, we take another approach. Rather than using a complicated differential equation, we use advanced multistep discretization methods on the basic gradient flow equation in (Gradient Flow).
Ensuring that these methods effectively integrate this equation for infinitesimal step sizes is essential for the continuous-time interpretation, and it leads to a family of integration methods which contains various well-known optimization algorithms. A full analysis is carried out for linear gradient flows (quadratic optimization) and provides compelling explanations for the acceleration phenomenon. In particular, Nesterov's method can be seen as a stable and consistent gradient flow discretization scheme that allows bigger step sizes in integration, leading to faster convergence.

## 1 Gradient flow

We seek to minimize an $L$-smooth, $\mu$-strongly convex function $f$ defined on $\mathbb{R}^d$. We discretize the gradient flow equation (Gradient Flow), given by the following ordinary differential equation

$$\dot{x}(t) = g(x(t)), \quad x(0) = x_0, \qquad \text{(ODE)}$$

where $g$ comes from a potential $f$, meaning $g = -\nabla f$. Smoothness of $f$ means Lipschitz continuity of $g$, i.e.

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \text{for every } x, y \in \mathbb{R}^d,$$

where $\|\cdot\|$ is the Euclidean norm. This property ensures existence and uniqueness of the solution of (ODE) (see [Süli and Mayers, 2003, Theorem 12.1]). Strong convexity of $f$ means strong monotonicity of $-g$, i.e.

$$\mu \|x - y\|^2 \le -\langle x - y,\; g(x) - g(y)\rangle, \quad \text{for every } x, y \in \mathbb{R}^d,$$

and ensures that (ODE) has a unique point $x^*$ such that $g(x^*) = 0$, called the equilibrium. This point is the minimizer of $f$ and the limit of the solution, i.e. $x(\infty) = x^*$. Finally, this assumption allows us to control the convergence rate of the potential $f$ and the solution $x(t)$ as follows.

**Proposition 1.1** *Let $f$ be an $L$-smooth, $\mu$-strongly convex function and $x_0 \in \mathrm{dom}(f)$. Writing $x^*$ the minimizer of $f$, the solution $x(t)$ of (Gradient Flow) satisfies*

$$f(x(t)) - f(x^*) \le (f(x_0) - f(x^*))\, e^{-2\mu t}, \qquad \|x(t) - x^*\| \le \|x_0 - x^*\|\, e^{-\mu t}. \tag{1}$$

A proof of this last result is recalled in the Supplementary Material. We now focus on numerical methods to integrate (ODE).

## 2 Numerical integration of differential equations

### 2.1 Discretization schemes

In general, we do not have access to an explicit solution $x(t)$ of (ODE). We thus use integration algorithms to approximate the curve $(t, x(t))$ by a grid $(t_k, x_k) \approx (t_k, x(t_k))$ on a finite interval $[0, t_{\max}]$. For simplicity, we assume here that the step size $h_k = t_k - t_{k-1}$ is constant, i.e. $h_k = h$ and $t_k = kh$. The goal is then to minimize the approximation error $\|x_k - x(t_k)\|$ for $k \in [0, t_{\max}/h]$. We first introduce Euler's method to illustrate this on a basic example.

**Euler's explicit method.** Euler's (explicit) method is one of the oldest and simplest schemes for integrating the curve $x(t)$. The idea stems from a Taylor expansion of $x(t)$, which reads

$$x(t + h) = x(t) + h\, \dot{x}(t) + O(h^2).$$

When $t = kh$, Euler's method approximates $x(t + h)$ by $x_{k+1}$, neglecting the second-order term:

$$x_{k+1} = x_k + h\, g(x_k).$$

In optimization terms, we recognize the gradient descent algorithm used to minimize $f$. Approximation errors in an integration method accumulate with iterations, and as Euler's method uses only the last point to compute the next one, it has only limited control over the accumulated error.

**Linear multistep methods.** Multistep methods use a combination of past iterates to improve convergence. Throughout the paper, we focus on linear $s$-step methods, whose recurrence can be written

$$\sum_{i=0}^{s} \rho_i x_{k+i} = h \sum_{i=0}^{s} \sigma_i g(x_{k+i}), \quad \text{for } k \ge 0,$$

where $\rho_i, \sigma_i \in \mathbb{R}$ are the parameters of the multistep method and $h$ is again the step size. Each new point $x_{k+s}$ is a function of the information given by the $s$ previous points. If $\sigma_s = 0$, each new point is given explicitly by the $s$ previous points and the method is called explicit. Otherwise, each new point requires solving an implicit equation and the method is called implicit.
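To make the recurrence concrete, here is a minimal sketch (our own illustrative code, not from the paper; the helper name `explicit_multistep` and the toy gradient are assumptions) which runs a generic explicit linear $s$-step method and recovers gradient descent as the 1-step case $\rho(z) = z - 1$, $\sigma(z) = 1$.

```python
import numpy as np

def explicit_multistep(rho, sigma, g, x_init, h, iters):
    """Run an explicit linear s-step method:
    sum_i rho_i x_{k+i} = h * sum_i sigma_i g(x_{k+i}).

    rho, sigma: coefficient lists [c_0, ..., c_s] with rho[-1] = 1 (monic)
    and sigma[-1] = 0 (explicit); x_init: starting points x_0, ..., x_{s-1}.
    """
    s = len(rho) - 1
    xs = [np.asarray(x, dtype=float) for x in x_init]
    gs = [g(x) for x in xs]
    for _ in range(iters):
        # With rho_s = 1 and sigma_s = 0, solve the recurrence for x_{k+s}:
        # x_{k+s} = -sum_{i<s} rho_i x_{k+i} + h * sum_{i<s} sigma_i g(x_{k+i})
        x_new = -sum(rho[i] * xs[i] for i in range(s)) \
                + h * sum(sigma[i] * gs[i] for i in range(s))
        xs, gs = xs[1:] + [x_new], gs[1:] + [g(x_new)]
    return xs[-1]

# The 1-step case rho(z) = z - 1, sigma(z) = 1 is Euler's method,
# i.e. gradient descent x_{k+1} = x_k + h g(x_k) with g = -grad f.
g = lambda x: -2.0 * x                     # gradient flow of f(x) = x^2
print(explicit_multistep([-1.0, 1.0], [1.0, 0.0], g, [np.array([1.0])], 0.1, 100))
```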
To simplify notation, we use the shift operator $E$, which maps $Ex_k \mapsto x_{k+1}$. Moreover, if we write $g_k = g(x_k)$, then the shift operator also maps $Eg_k \mapsto g_{k+1}$. Recall that a univariate polynomial is called monic if its leading coefficient is equal to 1. We now give the following concise definition of $s$-step linear methods.

**Definition 2.1** *Given an (ODE) defined by $g$, $x_0$, a step size $h$ and initial points $x_1, \dots, x_{s-1}$, a linear $s$-step method generates a sequence $(t_k, x_k)$ which satisfies*

$$\rho(E)\, x_k = h\, \sigma(E)\, g_k, \quad \text{for every } k \ge 0, \tag{2}$$

*where $\rho$ is a monic polynomial of degree $s$ with coefficients $\rho_i$, and $\sigma$ a polynomial of degree at most $s$ with coefficients $\sigma_i$.*

A linear $s$-step method is uniquely defined by the polynomials $(\rho, \sigma)$. The sequence generated by the method then depends on the initial points and the step size. We now recall a few results describing the performance of multistep methods.

### 2.2 Stability

Stability is a key concept for integration methods. First of all, consider two curves $x(t)$ and $y(t)$, both solutions of (ODE), but starting from different points $x(0)$ and $y(0)$. If the function $g$ is Lipschitz-continuous, it is possible to show that the distance between $x(t)$ and $y(t)$ is bounded on a finite interval, i.e.

$$\|x(t) - y(t)\| \le C\, \|x(0) - y(0)\| \quad \forall t \in [0, t_{\max}],$$

where $C$ may depend exponentially on $t_{\max}$. We would like to have a similar behavior for our sequences $x_k$ and $y_k$, approximating $x(t_k)$ and $y(t_k)$, i.e.

$$\|x_k - y_k\| \approx \|x(t_k) - y(t_k)\| \le C\, \|x(0) - y(0)\|, \quad k \in [0, t_{\max}/h], \tag{3}$$

when $h \to 0$, so $k \to \infty$. Two issues quickly arise. First, for a linear $s$-step method, we need $s$ starting values $x_0, \dots, x_{s-1}$. Condition (3) will therefore depend on all these starting values, and not only on $x_0$. Secondly, any discretization scheme introduces at each step an approximation error, called the local error, which accumulates over time. We write this error $\epsilon_{\mathrm{loc}}(x_{k+s})$ and define it as

$$\epsilon_{\mathrm{loc}}(x_{k+s}) \triangleq x_{k+s} - x(t_{k+s}),$$

where $x_{k+s}$ is computed using the exact past values $x(t_k), \dots, x(t_{k+s-1})$. In other words, the difference between $x_k$ and $y_k$ can be described as

$$x_k - y_k \approx \text{error in the initial conditions} + \text{accumulation of local errors}.$$

We now write a complete definition of stability, inspired by Definition 6.3.1 from Gautschi [2011].

**Definition 2.2 (Stability)** *A linear multistep method is stable if and only if, for two sequences $x_k$, $y_k$ generated by $(\rho, \sigma)$ using a sufficiently small step size $h > 0$, from the starting values $x_0, \dots, x_{s-1}$ and $y_0, \dots, y_{s-1}$, we have*

$$\|x_k - y_k\| \le C \left( \max_{i \in \{0,\dots,s-1\}} \|x_i - y_i\| + \sum_{i=1}^{t_{\max}/h} \big( \|\epsilon_{\mathrm{loc}}(x_{i+s})\| + \|\epsilon_{\mathrm{loc}}(y_{i+s})\| \big) \right), \tag{4}$$

*for any $k \in [0, t_{\max}/h]$. Here, the constant $C$ may depend on $t_{\max}$ but is independent of $h$.*

When $h$ tends to zero, we may recover equation (3) only if the accumulated local error also tends to zero. We thus need

$$\lim_{h \to 0} \frac{1}{h} \|\epsilon_{\mathrm{loc}}(x_{i+s})\| = 0 \quad \forall i \in [0, t_{\max}/h].$$

This condition is called consistency. The following proposition shows that there exist simple conditions to check consistency, which rely on comparing a Taylor expansion of the solution with the coefficients of the method. Its proof and further details are given in the Supplementary Material.

**Proposition 2.3 (Consistency)** *A linear multistep method defined by polynomials $(\rho, \sigma)$ is consistent if and only if*

$$\rho(1) = 0 \quad \text{and} \quad \rho'(1) = \sigma(1). \tag{5}$$

Assuming consistency, we still need to control sensitivity to initial conditions, written

$$\|x_k - y_k\| \le C \max_{i \in \{0,\dots,s-1\}} \|x_i - y_i\|. \tag{6}$$

Interestingly, analyzing the special case where $g = 0$ is completely equivalent to the general case, and this condition is therefore called zero-stability.
This reduces to standard linear algebra results, as we only need to look at the solutions of the homogeneous difference equation $\rho(E)\, x_k = 0$. This is captured in the following theorem, whose technical proof can be found in [Gautschi, 2011, Theorem 6.3.4].

**Theorem 2.4 (Root condition)** *Consider a linear multistep method $(\rho, \sigma)$. The method is zero-stable if and only if all roots of $\rho(z)$ lie in the unit disk, and the roots on the unit circle are simple.*

### 2.3 Convergence of the global error

Numerical analysis focuses on integrating an ODE on a finite interval of time $[0, t_{\max}]$. It studies the behavior of the global error, defined as $x(t_k) - x_k$, as a function of the step size $h$. If the global error converges to 0 with the step size, the method is guaranteed to approximate the ODE correctly on the time interval, for $h$ small enough. We now state Dahlquist's equivalence theorem, which shows that the global error converges to zero as $h$ does if the method is stable, i.e. when the method is consistent and zero-stable. This naturally requires the additional assumption that the starting values $x_0, \dots, x_{s-1}$ are computed such that they converge to the solution $(x(0), \dots, x(t_{s-1}))$. The proof of the theorem can be found in Gautschi [2011].

**Theorem 2.5 (Dahlquist's equivalence theorem)** *Given an (ODE) defined by $g$ and $x_0$, and a consistent linear multistep method $(\rho, \sigma)$ whose starting values are computed such that $\lim_{h \to 0} x_i = x(t_i)$ for any $i \in \{0, \dots, s-1\}$, zero-stability is necessary and sufficient for convergence, i.e. to ensure $x(t_k) - x_k \to 0$ for any $k$ when the step size $h$ goes to zero.*

### 2.4 Region of absolute stability

The results above ensure stability and global error bounds on finite time intervals. Solving optimization problems, however, requires looking at infinite time horizons. We start by finding conditions ensuring that the numerical solution does not diverge when the time interval increases, i.e. that the numerical solution is stable with a constant $C$ which does not depend on $t_{\max}$. Formally, for a fixed step size $h$, we want to ensure

$$\|x_k\| \le C \max_{i \in \{0,\dots,s-1\}} \|x_i\| \quad \text{for all } k \in [0, t_{\max}/h] \text{ and } t_{\max} > 0. \tag{7}$$

This is not possible without further assumptions on the function $g$, as in the general case the solution $x(t)$ itself may diverge. We begin with the simple scalar linear case, which, given $\lambda > 0$, reads

$$\dot{x}(t) = -\lambda x(t), \quad x(0) = x_0. \qquad \text{(Scalar Linear ODE)}$$

The recurrence of a linear multistep method with parameters $(\rho, \sigma)$ applied to (Scalar Linear ODE) then reads

$$\rho(E)\, x_k = -\lambda h\, \sigma(E)\, x_k \quad \Longleftrightarrow \quad [\rho + \lambda h \sigma](E)\, x_k = 0,$$

where we recognize a homogeneous recurrence equation. Condition (7) is then controlled by the step size $h$ and the constant $\lambda$, which must ensure that this homogeneous recurrence equation produces bounded solutions. This leads us to the definition of the region of absolute stability, also called A-stability.

**Definition 2.6 (Absolute stability)** *The region of absolute stability of a linear multistep method defined by $(\rho, \sigma)$ is the set of values $\lambda h$ such that the characteristic polynomial*

$$\pi_{\lambda h}(z) \triangleq \rho(z) + \lambda h\, \sigma(z) \tag{8}$$

*of the homogeneous recurrence equation $\pi_{\lambda h}(E)\, x_k = 0$ produces bounded solutions.*

Standard linear algebra links this condition to the roots of the characteristic polynomial, as recalled in the next proposition (see e.g. Lemma 12.1 of Süli and Mayers [2003]).

**Proposition 2.7** *Let $\pi$ be a polynomial and write $x_k$ a solution of the homogeneous recurrence equation $\pi(E)\, x_k = 0$ with arbitrary initial values. If all roots of $\pi$ are inside the unit disk and the roots on the unit circle have multiplicity exactly one, then the sequence $x_k$ is bounded.*
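The consistency conditions (5) and the root conditions of Theorem 2.4 and Proposition 2.7 are easy to verify numerically. The sketch below (our own helper names, using numpy's polynomial tools; coefficients are given in increasing degree order) checks them for a method $(\rho, \sigma)$:

```python
import numpy as np
from numpy.polynomial import Polynomial

def is_consistent(rho, sigma, tol=1e-9):
    """Check Proposition 2.3: rho(1) = 0 and rho'(1) = sigma(1)."""
    rho_p, sigma_p = Polynomial(rho), Polynomial(sigma)
    return abs(rho_p(1.0)) < tol and abs(rho_p.deriv()(1.0) - sigma_p(1.0)) < tol

def satisfies_root_condition(p, tol=1e-9):
    """All roots in the closed unit disk, roots on the unit circle simple."""
    roots = p.roots()
    if np.any(np.abs(roots) > 1 + tol):
        return False
    on_circle = roots[np.abs(np.abs(roots) - 1.0) <= tol]
    for i, r in enumerate(on_circle):  # repeated root on the circle?
        if np.any(np.abs(np.delete(on_circle, i) - r) <= tol):
            return False
    return True

def is_zero_stable(rho):
    return satisfies_root_condition(Polynomial(rho))

def in_absolute_stability_region(rho, sigma, lam_h):
    """Root condition for pi_{lambda h}(z) = rho(z) + lambda h sigma(z), eq. (8)."""
    return satisfies_root_condition(Polynomial(rho) + lam_h * Polynomial(sigma))

# Euler's method: rho(z) = z - 1, sigma(z) = 1. Its characteristic root is
# 1 - lambda h, so the region of absolute stability on lambda h > 0 is ]0, 2].
print(is_consistent([-1, 1], [1, 0]),
      is_zero_stable([-1, 1]),
      in_absolute_stability_region([-1, 1], [1, 0], 1.5))
```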
Absolute stability of a linear multistep method determines its ability to integrate a linear ODE defined by

$$\dot{x}(t) = -A x(t), \quad x(0) = x_0, \qquad \text{(Linear ODE)}$$

where $A$ is a positive symmetric matrix whose eigenvalues belong to $[\mu, L]$ for $0 < \mu \le L$. In this case, the step size $h$ must indeed be chosen such that, for any $\lambda \in [\mu, L]$, $\lambda h$ belongs to the region of absolute stability of the method. This (Linear ODE) is a special instance of (Gradient Flow) where $f$ is a quadratic function. Therefore, absolute stability gives a necessary (but not sufficient) condition to integrate (Gradient Flow) on $L$-smooth, $\mu$-strongly convex functions.

### 2.5 Convergence analysis in the linear case

By construction, absolute stability also gives hints on the convergence of $x_k$ to the equilibrium in the linear case. More precisely, it allows us to control the rate of convergence of $x_k$, which approximates the solution $x(t)$ of (Linear ODE), as shown in the following proposition, whose proof can be found in the Supplementary Material.

**Proposition 2.8** *Given a (Linear ODE) defined by $x_0$ and a positive symmetric matrix $A$ whose eigenvalues belong to $[\mu, L]$ with $0 < \mu \le L$, using a linear multistep method defined by $(\rho, \sigma)$ and applying a fixed step size $h$, we define*

$$r_{\max} = \max_{\lambda \in [\mu, L]} \; \max_{r \in \mathrm{roots}(\pi_{\lambda h})} |r|,$$

*where $\pi_{\lambda h}$ is defined in (8). If $r_{\max} < 1$ and its multiplicity is equal to $m$, then the speed of convergence of the sequence $x_k$ produced by the linear multistep method to the equilibrium $x^*$ of the differential equation is given by*

$$\|x_k - x^*\| = O(k^{m-1} r_{\max}^k). \tag{9}$$

We can now use these properties to analyze and design multistep methods.

## 3 Analysis and design of multi-step methods

As shown previously, we want to integrate (Gradient Flow), and Proposition 1.1 gives a rate of convergence in the continuous case. If the method tracks $x(t)$ with sufficient accuracy, then the rate of the method will be close to the rate of convergence of $x(kh)$. So larger values of $h$ yield faster convergence to the equilibrium $x^*$. However, $h$ cannot be too large, as the method may become too inaccurate and/or unstable as $h$ increases. Convergence rates of optimization algorithms are thus controlled by our ability to discretize the gradient flow equation using large step sizes. We recall the different conditions that proper linear multistep methods should satisfy.

- **Monic polynomial** (Section 2.1). Without loss of generality: dividing both sides of the difference equation (2) by $\rho_s$ does not change the method.
- **Explicit method** (Section 2.1). We assume that the scheme is explicit, in order to avoid solving a non-linear system at each step.
- **Consistency** (Section 2.2). If the method is not consistent, the local error does not converge to zero when the step size goes to zero.
- **Zero-stability** (Section 2.2). Zero-stability ensures convergence of the global error (Section 2.3) when the method is also consistent.
- **Region of absolute stability** (Section 2.4). If $\lambda h$ is not inside the region of absolute stability for some $\lambda \in [\mu, L]$, then the method diverges as $t_{\max}$ increases.

Using the remaining degrees of freedom, we can tune the algorithm to improve the convergence rate on (Linear ODE), which corresponds to the optimization of a quadratic function. Indeed, as shown in Proposition 2.8, the largest root of $\pi_{\lambda h}(z)$ gives us the rate of convergence on quadratic functions (when $\lambda \in [\mu, L]$). Since smooth and strongly convex functions are close to quadratic (being sandwiched between two quadratics), this will also give us a good idea of the rate of convergence on these functions. We do not derive a proof of convergence of the sequence for general smooth and (strongly) convex functions (but convergence is proved by Nesterov [2013], or using Lyapunov techniques by Wilson et al. [2016]). Still, our results provide intuition on why accelerated methods converge faster. Proposition 2.8 can also be checked numerically, as in the sketch below.
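Here is a minimal sketch (our own function name and grid-search approach; a finite grid only approximates the maximum over $[\mu, L]$) which estimates $r_{\max}$ for a given method and step size. We reuse it further below to compare methods.

```python
import numpy as np
from numpy.polynomial import Polynomial

def rate_on_quadratics(rho, sigma, h, mu, L, n_grid=500):
    """Estimate r_max of Proposition 2.8 by gridding lambda over [mu, L];
    the method converges on quadratics iff r_max < 1, at rate r_max**k."""
    r_max = 0.0
    for lam in np.linspace(mu, L, n_grid):
        pi = Polynomial(rho) + lam * h * Polynomial(sigma)   # eq. (8)
        r_max = max(r_max, np.abs(pi.roots()).max())
    return r_max

# Gradient descent (Euler with h = 1/L) on quadratics with kappa = L/mu = 100:
mu, L = 1.0, 100.0
print(rate_on_quadratics([-1, 1], [1, 0], 1 / L, mu, L))   # ~0.99 = 1 - mu/L
```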
### 3.1 Analysis of two-step methods

We now analyze the convergence of two-step methods (an analysis of Euler's method is provided in the Supplementary Material). We first translate the conditions on multistep methods, listed at the beginning of this section, into constraints on the coefficients:

$$\begin{aligned}
\rho_2 &= 1 \quad &\text{(monic polynomial)}\\
\sigma_2 &= 0 &\text{(explicit method)}\\
\rho_0 + \rho_1 + \rho_2 &= 0 &\text{(consistency)}\\
\sigma_0 + \sigma_1 + \sigma_2 &= \rho_1 + 2\rho_2 &\text{(consistency)}\\
|\mathrm{roots}(\rho)| &\le 1 &\text{(zero-stability)}.
\end{aligned}$$

The equality constraints yield three linear constraints, defining the set $\mathcal{L}$ such that

$$\mathcal{L} = \{ \rho_0, \rho_1, \sigma_0, \sigma_1 \;:\; \rho_1 = -(1 + \rho_0),\ \sigma_1 = 1 - \rho_0 - \sigma_0,\ |\rho_0| < 1 \}. \tag{10}$$

We now seek conditions on the remaining parameters to produce a stable method. Absolute stability requires that all roots of the polynomial $\pi_{\lambda h}(z)$ in (8) lie inside the unit circle, which here translates into conditions on the roots of second-order equations. The following proposition gives the values of the roots of $\pi_{\lambda h}(z)$ as a function of the parameters $\rho_i$ and $\sigma_i$.

**Proposition 3.1** *Given constants $0 < \mu \le L$, a step size $h > 0$ and a linear two-step method defined by $(\rho, \sigma)$, under the conditions*

$$(\rho_1 + \mu h \sigma_1)^2 \le 4(\rho_0 + \mu h \sigma_0), \qquad (\rho_1 + L h \sigma_1)^2 \le 4(\rho_0 + L h \sigma_0), \tag{11}$$

*the roots $r_\pm(\lambda)$ of $\pi_{\lambda h}$, defined in (8), are complex conjugate for any $\lambda \in [\mu, L]$. Moreover, the largest root modulus is equal to*

$$\max_{\lambda \in [\mu, L]} |r_\pm(\lambda)|^2 = \max\{ \rho_0 + \mu h \sigma_0,\ \rho_0 + L h \sigma_0 \}. \tag{12}$$

The proof can be found in the Supplementary Material. The next step is to minimize the largest modulus (12) over the coefficients $\rho_i$ and $\sigma_i$ to get the best rate of convergence, assuming the roots are complex (the case where the roots are real leads to weaker results).

### 3.2 Design of a family of two-step methods for quadratics

We now have all the ingredients to build a two-step method for which the sequence $x_k$ converges quickly to $x^*$ for quadratic functions. Optimizing the convergence rate means solving the following problem,

$$\begin{array}{ll}
\min & \max\{ \rho_0 + \mu h \sigma_0,\ \rho_0 + L h \sigma_0 \}\\
\text{s.t.} & (\rho_0, \rho_1, \sigma_0, \sigma_1) \in \mathcal{L}\\
& (\rho_1 + \mu h \sigma_1)^2 \le 4(\rho_0 + \mu h \sigma_0)\\
& (\rho_1 + L h \sigma_1)^2 \le 4(\rho_0 + L h \sigma_0),
\end{array}$$

in the variables $\rho_0, \rho_1, \sigma_0, \sigma_1, h > 0$, where $\mathcal{L}$ is defined in (10). If we use the equality constraints in (10) and make the change of variables

$$\hat{h} = h(1 - \rho_0), \qquad c_\mu = \rho_0 + \mu h \sigma_0, \qquad c_L = \rho_0 + L h \sigma_0, \tag{13}$$

the problem can be solved, for fixed $\hat{h}$, in the variables $c_\mu, c_L$. In that case, the optimal solution is given by

$$c_\mu = \big(1 - \sqrt{\mu \hat{h}}\big)^2, \qquad c_L = \big(1 - \sqrt{L \hat{h}}\big)^2, \tag{14}$$

obtained by tightening the two first inequalities, for $\hat{h} \in \big]0,\ (1 + \sqrt{\mu/L})^2/\mu\big[$. Now, if we fix $\hat{h}$, we can recover a two-step linear method defined by $(\rho, \sigma)$ and a step size $h$ by using the equations in (13). We define the following quantity:

$$\beta \triangleq \frac{1 - \sqrt{\mu/L}}{1 + \sqrt{\mu/L}}.$$

**A suboptimal two-step method.** Setting $\hat{h} = 1/L$ for example, the parameters of the corresponding two-step method, called method $M_1$, are

$$M_1 = \left\{ \rho(z) = \beta - (1 + \beta)z + z^2, \quad \sigma(z) = -\beta(1 - \beta) + (1 - \beta^2)z, \quad h = \frac{1}{L(1 - \beta)} \right\} \tag{15}$$

and its largest root modulus (12) is given by

$$\mathrm{rate}(M_1) = \sqrt{\max\{c_\mu, c_L\}} = \sqrt{c_\mu} = 1 - \sqrt{\mu/L}.$$

**Optimal two-step method for quadratics.** We can compute the optimal $\hat{h}$ which minimizes the maximum of the two roots $c_\mu$ and $c_L$ defined in (14). The solution simply balances the two terms in the maximum, with $\hat{h} = (1 + \beta)^2/L$. This choice of $\hat{h}$ leads to the method $M_2$, described by

$$M_2 = \left\{ \rho(z) = \beta^2 - (1 + \beta^2)z + z^2, \quad \sigma(z) = (1 - \beta^2)z, \quad h = \frac{1}{\sqrt{\mu L}} \right\}$$

with convergence rate

$$\mathrm{rate}(M_2) = \sqrt{c_\mu} = \sqrt{c_L} = \beta = \frac{1 - \sqrt{\mu/L}}{1 + \sqrt{\mu/L}} < \mathrm{rate}(M_1).$$
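These closed forms are straightforward to instantiate. The sketch below (our own transcription of the formulas above; it reuses the illustrative `rate_on_quadratics` helper from Section 2.5) builds $M_1$ and $M_2$ and checks their rates against the predictions $1 - \sqrt{\mu/L}$ and $\beta$.

```python
import numpy as np

def two_step_method(mu, L, which="M2"):
    """Coefficients (rho, sigma) and step size h of the two-step methods
    M1 (eq. (15)) and M2 above, transcribed directly from the closed forms."""
    beta = (1 - np.sqrt(mu / L)) / (1 + np.sqrt(mu / L))
    if which == "M1":        # will turn out to be Nesterov's method
        return ([beta, -(1 + beta), 1.0],
                [-beta * (1 - beta), 1 - beta**2, 0.0],
                1 / (L * (1 - beta)))
    else:                    # M2: will turn out to be Polyak's heavy ball
        return ([beta**2, -(1 + beta**2), 1.0],
                [0.0, 1 - beta**2, 0.0],
                1 / np.sqrt(mu * L))

mu, L = 1.0, 100.0
for which in ("M1", "M2"):
    rho, sigma, h = two_step_method(mu, L, which)
    print(which, rate_on_quadratics(rho, sigma, h, mu, L))
# Predicted: rate(M1) = 1 - sqrt(mu/L) = 0.9 and rate(M2) = beta ~ 0.818,
# versus 1 - mu/L = 0.99 for gradient descent with h = 1/L.
```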
We will now see that the methods $M_1$ and $M_2$ are actually related to Nesterov's accelerated gradient method and Polyak's heavy ball algorithm.

## 4 On the link between integration and optimization

In the previous section, we derived a family of linear multistep methods, parametrized by $\hat{h}$. We will now compare these methods to common optimization algorithms used to minimize $L$-smooth, $\mu$-strongly convex functions.

### 4.1 Polyak's heavy ball method

The heavy ball method was proposed by Polyak [1964]. It adds a momentum term to the gradient step,

$$x_{k+2} = x_{k+1} - c_1 \nabla f(x_{k+1}) + c_2 (x_{k+1} - x_k),$$

where $c_1 = 4/(\sqrt{L} + \sqrt{\mu})^2$ and $c_2 = \beta^2$. We can reorganize the terms in the sequence to match the general structure of linear multistep methods, to get

$$\beta^2 x_k - (1 + \beta^2) x_{k+1} + x_{k+2} = c_1 \big( -\nabla f(x_{k+1}) \big).$$

We easily identify $\rho(z) = \beta^2 - (1 + \beta^2)z + z^2$ and $h\sigma(z) = c_1 z$. To extract $h$, we assume that the method is consistent (see conditions (5)). All computations done, we can identify the corresponding linear multistep method as

$$M_{\mathrm{Polyak}} = \left\{ \rho(z) = \beta^2 - (1 + \beta^2)z + z^2, \quad \sigma(z) = (1 - \beta^2)z, \quad h = \frac{1}{\sqrt{\mu L}} \right\}.$$

This shows that $M_{\mathrm{Polyak}} = M_2$. In fact, this result was expected, since Polyak's method is known to be optimal for quadratic functions. However, it is also known that Polyak's algorithm does not converge for general smooth and strongly convex functions [Lessard et al., 2016].

### 4.2 Nesterov's accelerated gradient

Nesterov's accelerated method, in its simplest form, is described by two sequences $x_k$ and $y_k$, with

$$y_{k+1} = x_k - \frac{1}{L} \nabla f(x_k), \qquad x_{k+1} = y_{k+1} + \beta (y_{k+1} - y_k).$$

As above, we write Nesterov's accelerated gradient as a linear multistep method by expanding $y_k$ in the definition of $x_k$, to get

$$\beta x_k - (1 + \beta) x_{k+1} + x_{k+2} = \frac{1}{L} \big( \beta \nabla f(x_k) - (1 + \beta) \nabla f(x_{k+1}) \big).$$

Again, assuming as above that the method is consistent to extract $h$, we identify the linear multistep method associated with Nesterov's algorithm. After identification,

$$M_{\mathrm{Nest}} = \left\{ \rho(z) = \beta - (1 + \beta)z + z^2, \quad \sigma(z) = -\beta(1 - \beta) + (1 - \beta^2)z, \quad h = \frac{1}{L(1 - \beta)} \right\},$$

which means that $M_1 = M_{\mathrm{Nest}}$.

### 4.3 The convergence rate of Nesterov's method

Pushing the analysis a little bit further, we have a simple, intuitive argument that explains why Nesterov's algorithm is faster than the gradient method. There is of course a complete proof of its rate of convergence [Nesterov, 2013], even using differential equations arguments [Wibisono et al., 2016; Wilson et al., 2016], but we take a simpler approach here. The key parameter is the step size $h$. If we compare it with the step size in the classical gradient method, Nesterov's method uses a step size which is $(1 - \beta)^{-1} \approx \sqrt{L/\mu}$ times larger. Recall that, in continuous time, the rate of convergence of $x(t)$ to $x^*$ is given by

$$f(x(t)) - f(x^*) \le e^{-2\mu t} (f(x_0) - f(x^*)).$$

The gradient method tries to approximate $x(t)$ using an Euler scheme with step size $h = 1/L$, which means $x_k^{\mathrm{grad}} \approx x(k/L)$, so

$$f(x_k^{\mathrm{grad}}) - f(x^*) \approx f(x(k/L)) - f(x^*) \le (f(x_0) - f(x^*))\, e^{-2k\mu/L}.$$

However, Nesterov's method has a step size equal to

$$h_{\mathrm{Nest}} = \frac{1}{L(1 - \beta)} = \frac{1 + \sqrt{\mu/L}}{2\sqrt{\mu L}} \ge \frac{1}{\sqrt{4\mu L}},$$

which means $x_k^{\mathrm{Nest}} \approx x\big(k/\sqrt{4\mu L}\big)$, while maintaining stability. In that case, the estimated rate of convergence becomes

$$f(x_k^{\mathrm{Nest}}) - f(x^*) \approx f\big(x(k/\sqrt{4\mu L})\big) - f(x^*) \le (f(x_0) - f(x^*))\, e^{-k\sqrt{\mu/L}},$$

which is approximately the rate of convergence of Nesterov's algorithm in discrete time, and we recover the accelerated rate in $\sqrt{\mu/L}$ versus $\mu/L$ for gradient descent. Overall, the accelerated method is more efficient because it integrates the gradient flow faster than simple gradient descent, making longer steps. A numerical simulation in Figure 1 makes this argument more visual.
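To reproduce the flavor of this argument numerically, here is a minimal simulation in the spirit of Figure 1 (the quadratic, dimension, iteration count and random seed are our own illustrative choices, not the paper's experiment): gradient descent with $h = 1/L$ against Nesterov's method on the same quadratic.

```python
import numpy as np

# A toy quadratic f(x) = 0.5 x'Ax with spectrum in [mu, L]; the minimizer
# is x* = 0. Euler/gradient descent uses h = 1/L, while Nesterov's method
# (= multistep method M1) has effective step size 1/(L(1 - beta)).
mu, L, d, iters = 1.0, 1000.0, 50, 300
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(mu, L, d)) @ Q.T
grad = lambda x: A @ x

beta = (1 - np.sqrt(mu / L)) / (1 + np.sqrt(mu / L))
x0 = rng.standard_normal(d)
x_gd, x_nest, y = x0.copy(), x0.copy(), x0.copy()
for _ in range(iters):
    x_gd = x_gd - grad(x_gd) / L                # Euler step, h = 1/L
    y_new = x_nest - grad(x_nest) / L           # Nesterov's two-sequence form
    x_nest, y = y_new + beta * (y_new - y), y_new
print(np.linalg.norm(x_gd), np.linalg.norm(x_nest))
# Expected: ||x_gd|| decays roughly like (1 - mu/L)^k, while ||x_nest||
# decays roughly like (1 - sqrt(mu/L))^k, i.e. orders of magnitude faster.
```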
*Figure 1: Integration of a linear ODE with optimal (left) and small (right) step sizes, comparing the exact solution with Euler's method, Nesterov's method and Polyak's heavy ball.*

## 5 Generalization and Future Work

We showed that accelerated optimization methods can be seen as multistep integration schemes applied to the basic gradient flow equation. Our results give a natural interpretation of acceleration: multistep schemes allow for larger steps, which speeds up convergence. In the Supplementary Material, we detail further links between integration methods and other well-known optimization algorithms, such as proximal point methods, mirror gradient descent and proximal gradient descent, and we discuss the weakly convex case. The extra-gradient algorithm and its recent accelerated version [Diakonikolas and Orecchia, 2017] can also be linked to another family of integration methods called Runge-Kutta methods, which notably includes predictor-corrector methods.

Our stability analysis is limited to the quadratic case, the definition of A-stability being too restrictive for the class of smooth and strongly convex functions. A more appropriate condition would be G-stability, which extends A-stability to non-linear ODEs, but this condition requires strict monotonicity of the error (which does not hold for accelerated algorithms). Stability may also be tackled using recent advances in lower-bound theory by Taylor [2017], but these yield numerical rather than analytical convergence bounds. Our next objective is thus to derive a new stability condition in between A-stability and G-stability.

## Acknowledgments

The authors would like to acknowledge support from a starting grant from the European Research Council (ERC project SIPA), from the European Union's Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under grant agreement number 607290 SpaRTaN, an AMX fellowship, as well as support from the chaire Économie des nouvelles données with the data science joint research initiative with the fonds AXA pour la recherche, and a gift from Société Générale Cross Asset Quantitative Research.

## References

Allen-Zhu, Z. and Orecchia, L. [2017], 'Linear coupling: An ultimate unification of gradient and mirror descent', in 'Proceedings of the 8th Innovations in Theoretical Computer Science', ITCS '17.

Beck, A. and Teboulle, M. [2003], 'Mirror descent and nonlinear projected subgradient methods for convex optimization', Operations Research Letters 31(3), 167–175.

Ben-Tal, A. and Nemirovski, A. [2001], Lectures on modern convex optimization: analysis, algorithms, and engineering applications, SIAM.

Bubeck, S., Lee, Y. T. and Singh, M. [2015], 'A geometric alternative to Nesterov's accelerated gradient descent', arXiv e-prints.

Diakonikolas, J. and Orecchia, L. [2017], 'Accelerated extra-gradient descent: A novel accelerated first-order method', arXiv preprint arXiv:1706.04680.

Duchi, J. C., Shalev-Shwartz, S., Singer, Y. and Tewari, A. [2010], Composite objective mirror descent, in 'COLT', pp. 14–26.

Gautschi, W. [2011], Numerical analysis, Springer Science & Business Media.

Krichene, W., Bayen, A. and Bartlett, P. L. [2015], Accelerated mirror descent in continuous and discrete time, in 'Advances in Neural Information Processing Systems', pp. 2845–2853.

Lessard, L., Recht, B. and Packard, A. [2016], 'Analysis and design of optimization algorithms via integral quadratic constraints', SIAM Journal on Optimization 26(1), 57–95.
Nesterov, Y. [1983], 'A method of solving a convex programming problem with convergence rate O(1/k²)', Soviet Mathematics Doklady 27, 372–376.

Nesterov, Y. [2007], 'Gradient methods for minimizing composite objective function'.

Nesterov, Y. [2013], Introductory lectures on convex optimization: A basic course, Vol. 87, Springer Science & Business Media.

Nesterov, Y. [2015], 'Universal gradient methods for convex optimization problems', Mathematical Programming 152(1–2), 381–404.

Polyak, B. T. [1964], 'Some methods of speeding up the convergence of iteration methods', USSR Computational Mathematics and Mathematical Physics 4(5), 1–17.

Su, W., Boyd, S. and Candes, E. [2014], A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights, in 'Advances in Neural Information Processing Systems', pp. 2510–2518.

Süli, E. and Mayers, D. F. [2003], An introduction to numerical analysis, Cambridge University Press.

Taylor, A. [2017], Convex Interpolation and Performance Estimation of First-order Methods for Convex Optimization, PhD thesis, Université catholique de Louvain.

Wibisono, A., Wilson, A. C. and Jordan, M. I. [2016], 'A variational perspective on accelerated methods in optimization', Proceedings of the National Academy of Sciences, p. 201614734.

Wilson, A. C., Recht, B. and Jordan, M. I. [2016], 'A Lyapunov analysis of momentum methods in optimization', arXiv preprint arXiv:1611.02635.