Convex and Non-convex Optimization Under Generalized Smoothness

Haochuan Li (MIT) haochuan@mit.edu, Jian Qian* (MIT) jianqian@mit.edu, Yi Tian (MIT) yitian@mit.edu, Alexander Rakhlin (MIT) rakhlin@mit.edu, Ali Jadbabaie (MIT) jadbabai@mit.edu. *Equal contribution.

Classical analysis of convex and non-convex optimization methods often requires the Lipschitz continuity of the gradient, which limits the analysis to functions bounded by quadratics. Recent work relaxed this requirement to a non-uniform smoothness condition with the Hessian norm bounded by an affine function of the gradient norm, and proved convergence in the non-convex setting via gradient clipping, assuming bounded noise. In this paper, we further generalize this non-uniform smoothness condition and develop a simple, yet powerful analysis technique that bounds the gradients along the trajectory, thereby leading to stronger results for both convex and non-convex optimization problems. In particular, we obtain the classical convergence rates for (stochastic) gradient descent and Nesterov's accelerated gradient method in the convex and/or non-convex setting under this general smoothness condition. The new analysis approach does not require gradient clipping and allows heavy-tailed noise with bounded variance in the stochastic setting.

1 Introduction

In this paper, we study the following unconstrained optimization problem

min_{x ∈ X} f(x),   (1)

where X ⊆ R^d is the domain of f. Classical textbook analyses [Nemirovskij and Yudin, 1983, Nesterov, 2003] of (1) often require the Lipschitz smoothness condition, which assumes ‖∇²f(x)‖ ≤ L almost everywhere for some L ≥ 0 called the smoothness constant. This condition, however, is rather restrictive and only satisfied by functions that are both upper and lower bounded by quadratic functions. Recently, Zhang et al. [2019] proposed the more general (L0, L1)-smoothness condition, which assumes ‖∇²f(x)‖ ≤ L0 + L1‖∇f(x)‖ for some constants L0, L1 ≥ 0, motivated by their extensive language model experiments. This notion generalizes the standard Lipschitz smoothness condition and also contains, e.g., univariate polynomial and exponential functions. For non-convex and (L0, L1)-smooth functions, they prove convergence of gradient descent (GD) and stochastic gradient descent (SGD) with gradient clipping and also provide a complexity lower bound for constant-stepsize GD/SGD without clipping. Based on these results, they claim that gradient clipping or other forms of adaptivity provably accelerate the convergence for (L0, L1)-smooth functions. Perhaps due to the lower bound, all the follow-up works under this condition that we are aware of limit their analyses to adaptive methods. Most of these focus on non-convex functions. See Section 2 for more discussion of related work.

In this paper, we significantly generalize the (L0, L1)-smoothness condition to the ℓ-smoothness condition, which assumes ‖∇²f(x)‖ ≤ ℓ(‖∇f(x)‖) for some non-decreasing continuous function ℓ. We develop a simple, yet powerful approach which allows us to obtain stronger results for both convex and non-convex optimization problems when ℓ is sub-quadratic (i.e., lim_{u→∞} ℓ(u)/u² = 0) or even more general. The ℓ-smooth function class with a sub-quadratic ℓ also contains, e.g., univariate rational and double exponential functions.
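As a concrete illustration (a worked instance added here for clarity; it is consistent with Table 2 and Appendix A.1, but the particular computation is ours), the exponential function already satisfies the affine (L0, L1) condition, while the double exponential requires a super-linear ℓ:

```latex
% f(x) = e^x:  f''(x) = e^x = f'(x), so the Hessian-gradient bound
% \|\nabla^2 f(x)\| \le L_0 + L_1 \|\nabla f(x)\| holds with (L_0, L_1) = (0, 1).
\[
  f(x) = e^{x}: \qquad |f''(x)| = e^{x} = |f'(x)|
  \quad\Longrightarrow\quad \ell(u) = u .
\]
% f(x) = e^{e^x}:  f''(x) = (e^x + 1) f'(x), so no affine \ell works for this
% function, but \ell(u) = L_0 + L_\rho u^\rho with any \rho > 1 does
% (cf. Proposition A.1 in the appendix).
\[
  f(x) = e^{e^{x}}: \qquad |f''(x)| = (e^{x} + 1)\,|f'(x)| .
\]
```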
In particular, we prove the convergence of constant-stepsize GD/SGD and Nesterov s accelerated gradient method (NAG) in the convex or non-convex settings. For each method and setting, we obtain the classical convergence rate, under a certain requirement of ℓ. In addition, we relax the assumption of bounded noise to the weaker one of bounded variance with the simple SGD method. See Table 1 for a summary of our results and assumptions for each method and setting. At first glance, our results contradict the lower bounds on constant-stepsize GD/SGD in [Zhang et al., 2019, Wang et al., 2022]; this will be reconciled in Section 5.3. Our approach analyzes boundedness of gradients along the optimization trajectory. The idea behind it can be informally illustrated by the following circular reasoning. On the one hand, if gradients along the trajectory are bounded by a constant G, then the Hessian norms are bounded by the constant ℓ(G). Informally speaking, we essentially have the standard Lipschitz smoothness condition2 and can apply classical textbook analyses to prove convergence, which implies that gradients converge to zero. On the other hand, if gradients converge, they must be bounded, since any convergent sequence is bounded. In other words, the bounded gradient condition implies convergence, and convergence also implies the condition back, which forms a circular argument. If we can break this circularity of reasoning in a rigorous way, both the bounded gradient condition and convergence are proved. In this paper, we will show how to break the circularity using induction or contradiction arguments for different methods and settings in Sections 4 and 5. We note that the idea of bounding gradients can be applied to the analysis of other optimization methods, e.g., the concurrent work [Li et al., 2023] by subset of the authors, which uses a similar idea to obtain a rigorous and improved analysis of the Adam method [Kingma and Ba, 2014]. Contributions. In light of the above discussions, we summarize our main contributions as follows. We generalize the standard Lipschitz smoothness and also the (L0, L1)-smoothness condition to the ℓ-smoothness condition, and develop a new approach for analyzing convergence under this condition by bounding the gradients along the optimization trajectory. We prove the convergence of constant-stepsize GD/SGD/NAG in the convex and non-convex settings, and obtain the classical rates for all of them, as summarized in Table 1. Besides the generalized smoothness condition and the new approach, our results are also novel in the following aspects. The convergence results of constant-stepsize methods challenge the folklore belief on the necessity of adaptive stepsize for generalized smooth functions. We obtain new convergence results for GD and NAG in the convex setting under the generalized smoothness condition. We relax the assumption of bounded noise to the weaker one of bounded variance of noise in the stochastic setting with the simple SGD method. 2 Related work Gradient-based optimizaiton. The classical gradient-based optimization problems for the standard Lipschitz smooth functions have been well studied for both convex [Nemirovskij and Yudin, 1983, 2This statement is informal because we can only bound Hessian norms along the trajectory, rather than almost everywhere within a convex set as in the standard Lipschitz smoothness condition. 
For example, even if the Hessian norm is bounded at both xt and xt+1, it does not directly mean the Hessian norm is also bounded over the line segment between them, which is required in classical analysis. A more formal statement will need Lemma 3.3 presented later in the paper.

Table 1: Summary of the results. ϵ denotes the sub-optimality gap of the function value in convex settings, and the gradient norm in non-convex settings. * denotes optimal rates.

| Method | Convexity | ℓ-smoothness | Gradient complexity |
|--------|-----------|--------------|---------------------|
| GD | Strongly convex | No requirement | O(log(1/ϵ)) (Theorem 4.3) |
| GD | Convex | No requirement | O(1/ϵ) (Theorem 4.2) |
| GD | Non-convex | Sub-quadratic ℓ | O(1/ϵ²)* (Theorem 5.2) |
| GD | Non-convex | Quadratic ℓ | Ω(exp. in cond. #) (Theorem 5.4) |
| NAG | Convex | Sub-quadratic ℓ | O(1/√ϵ)* (Theorem 4.4) |
| SGD | Non-convex | Sub-quadratic ℓ | O(1/ϵ⁴)* (Theorem 5.3) |

Nesterov, 2003, d'Aspremont et al., 2021] and non-convex functions. In the convex setting, the goal is to reach an ϵ-sub-optimal point x satisfying f(x) − inf_x f(x) ≤ ϵ. It is well known that GD achieves the O(1/ϵ) gradient complexity and NAG achieves the accelerated O(1/√ϵ) complexity, which is optimal among all gradient-based methods. For strongly convex functions, GD and NAG achieve the O(κ log(1/ϵ)) and O(√κ log(1/ϵ)) complexities respectively, where κ is the condition number and the latter is again optimal. In the non-convex setting, the goal is to find an ϵ-stationary point x satisfying ‖∇f(x)‖ ≤ ϵ, since finding a global minimum is NP-hard in general. It is well known that GD achieves the optimal O(1/ϵ²) complexity, which matches the lower bound in [Carmon et al., 2017]. In the stochastic setting with unbiased stochastic gradients of bounded variance, SGD achieves the optimal O(1/ϵ⁴) complexity [Ghadimi and Lan, 2013], matching the lower bound in [Arjevani et al., 2019]. In this paper, we obtain the classical rates in terms of ϵ for all the above-mentioned methods and settings, under a far more general smoothness condition.

Generalized smoothness. The (L0, L1)-smoothness condition proposed by Zhang et al. [2019] was studied by many follow-up works. Under the same condition, [Zhang et al., 2020] considers momentum in the updates and improves the constant dependence of the convergence rate for SGD with clipping derived in [Zhang et al., 2019]. [Qian et al., 2021] studies gradient clipping in incremental gradient methods, [Zhao et al., 2021] studies stochastic normalized gradient descent, and [Crawshaw et al., 2022] studies a generalized SignSGD method, all under the (L0, L1)-smoothness condition. [Reisizadeh et al., 2023] studies variance reduction for (L0, L1)-smooth functions. [Chen et al., 2023] proposes a new notion of α-symmetric generalized smoothness, which is roughly as general as (L0, L1)-smoothness. [Wang et al., 2022] analyzes the convergence of Adam and provides a lower bound which shows that non-adaptive SGD may diverge. In the stochastic setting, the above-mentioned works either consider the strong assumption of bounded gradient noise or require a very large batch size that depends on ϵ, which essentially reduces the analysis to the deterministic setting. [Faw et al., 2023] proposes an AdaGrad-type algorithm in order to relax the bounded noise assumption. Perhaps due to the lower bounds in [Zhang et al., 2019, Wang et al., 2022], all the above works study methods with an adaptive stepsize. In this and our concurrent work [Li et al., 2023], we further generalize the smoothness condition and analyze various methods under this condition through bounding the gradients along the trajectory.
3 Function class

In this section, we discuss the function class of interest where the objective function f lies. We start with the following two standard assumptions in the literature of unconstrained optimization, which will be assumed throughout Sections 4 and 5 unless explicitly stated.

Assumption 1. The objective function f is differentiable and closed within its open domain X.

Assumption 2. The objective function f is bounded from below, i.e., f* := inf_{x∈X} f(x) > −∞.

A function f is said to be closed if its sub-level set {x ∈ dom(f) | f(x) ≤ a} is closed for each a ∈ R. A continuous function f with an open domain is closed if and only if f(x) tends to positive infinity when x approaches the boundary of its domain [Boyd and Vandenberghe, 2004]. Assumption 1 is necessary for our analysis to ensure that the iterates of a method with a reasonably small stepsize stay within the domain X. Note that for X = R^d, as considered in most unconstrained optimization papers, the assumption is trivially satisfied since all continuous functions over R^d are closed. We consider a more general domain, which may not be the whole space, because that is the case for some interesting examples in our function class of interest (see Section 3.1.3). However, it brings some additional technical difficulties, especially in the stochastic setting, as we need to make sure the iterates do not leave the domain.

3.1 Generalized smoothness

In this section, we formally define the generalized smoothness condition, and present its properties and examples.

3.1.1 Definitions

Definitions 1 and 2 below are two equivalent ways of stating the definition, where we use B(x, R) to denote the Euclidean ball with radius R centered at x.

Definition 1 (ℓ-smoothness). A real-valued differentiable function f : X → R is ℓ-smooth for some non-decreasing continuous function ℓ : [0, +∞) → (0, +∞) if ‖∇²f(x)‖ ≤ ℓ(‖∇f(x)‖) almost everywhere (with respect to the Lebesgue measure) in X.

Remark 3.1. Definition 1 reduces to the classical L-smoothness when ℓ ≡ L is a constant function. It reduces to the (L0, L1)-smoothness proposed in [Zhang et al., 2019] when ℓ(u) = L0 + L1·u is an affine function.

Definition 2 ((r, ℓ)-smoothness). A real-valued differentiable function f : X → R is (r, ℓ)-smooth for continuous functions r, ℓ : [0, +∞) → (0, +∞), where ℓ is non-decreasing and r is non-increasing, if it satisfies 1) for any x ∈ X, B(x, r(‖∇f(x)‖)) ⊆ X, and 2) for any x1, x2 ∈ B(x, r(‖∇f(x)‖)), ‖∇f(x1) − ∇f(x2)‖ ≤ ℓ(‖∇f(x)‖) ‖x1 − x2‖.

The requirements that ℓ is non-decreasing and r is non-increasing do not cause much loss of generality. If these conditions are not satisfied, one can replace r and ℓ with the non-increasing function r̄(u) := inf_{0≤v≤u} r(v) ≤ r(u) and the non-decreasing function ℓ̄(u) := sup_{0≤v≤u} ℓ(v) ≥ ℓ(u) in Definitions 1 and 2. Then the only requirement is r > 0 and ℓ < ∞.

Next, we prove that the above two definitions are equivalent in the following proposition, whose proof is involved and deferred to Appendix A.2.

Proposition 3.2. An (r, ℓ)-smooth function is ℓ-smooth; and an ℓ-smooth function satisfying Assumption 1 is (r, m)-smooth, where m(u) := ℓ(u + a) and r(u) := a/m(u) for any a > 0.

The condition in Definition 1 is simple, and one can easily check whether it is satisfied for a given example function. On the other hand, Definition 2 is a local Lipschitz condition on the gradient that is harder to verify. However, it is useful for deriving several useful properties in the next section.
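As a sanity check on Definition 1, the following short Python snippet (our own illustration; the functions, candidate ℓ's, and grids are arbitrary choices, not taken from the paper) numerically compares |f''(x)| against ℓ(|f'(x)|) for two simple univariate examples that reappear in Table 2.

```python
import numpy as np

# Numerically check the l-smoothness condition |f''(x)| <= l(|f'(x)|)
# (Definition 1) on a grid, for two univariate examples.
# Purely illustrative; a grid check is of course not a proof.

examples = [
    # (name, f'', f', candidate l, grid)
    ("exp(x)",                                   # f(x) = e^x
     lambda x: np.exp(x), lambda x: np.exp(x),
     lambda u: u,                                # l(u) = u, i.e. (L0, L1) = (0, 1)
     np.linspace(-5.0, 5.0, 1001)),
    ("1/x on (0, inf)",                          # f(x) = 1/x
     lambda x: 2.0 / x**3, lambda x: -1.0 / x**2,
     lambda u: 2.0 * u**1.5,                     # l(u) = 2 u^{1.5}, cf. rho = 1.5
     np.linspace(0.05, 5.0, 1001)),
]

for name, f2, f1, ell, xs in examples:
    lhs = np.abs(f2(xs))               # Hessian norm (here simply |f''|)
    rhs = ell(np.abs(f1(xs)))          # l applied to the gradient norm
    ok = np.all(lhs <= rhs + 1e-12)
    print(f"{name:18s}  condition holds on grid: {ok}")
```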
3.1.2 Properties

First, we provide the following lemma, which is very useful in our analyses of all the methods considered in this paper. Its proof is deferred to Appendix A.3.

Lemma 3.3. If f is (r, ℓ)-smooth, then for any x ∈ X satisfying ‖∇f(x)‖ ≤ G, we have 1) B(x, r(G)) ⊆ X, and 2) for any x1, x2 ∈ B(x, r(G)),

‖∇f(x1) − ∇f(x2)‖ ≤ L ‖x1 − x2‖,   f(x1) ≤ f(x2) + ⟨∇f(x2), x1 − x2⟩ + (L/2) ‖x1 − x2‖²,   (2)

where L := ℓ(G) is the effective smoothness constant.

Remark 3.4. Since we have shown the equivalence between ℓ-smoothness and (r, ℓ)-smoothness, Lemma 3.3 also applies to ℓ-smooth functions, for which we have L = ℓ(2G) and r(G) = G/L if choosing a = G in Proposition 3.2.

Lemma 3.3 states that, if the gradient at x is bounded by some constant G, then within a neighborhood of x with a constant radius, we can obtain (2), the same inequalities that were derived in the textbook analysis [Nesterov, 2003] under the standard Lipschitz smoothness condition. With (2), the analysis for generalized smoothness is not much harder than that for standard smoothness. Since we mostly choose x = x2 = xt and x1 = xt+1 in the analysis, in order to apply Lemma 3.3, we need two conditions: ‖∇f(xt)‖ ≤ G and ‖xt+1 − xt‖ ≤ r(G) for some constant G. The latter is usually directly implied by the former for most deterministic methods with a small enough stepsize, and the former can be obtained with our new approach that bounds the gradients along the trajectory.

Table 2: Examples of univariate (ρ, L0, Lρ)-smooth functions for different ρ's. The parameters a, b, p are real numbers (not necessarily integers) satisfying a, b > 1 and p < 1 or p ≥ 2. We use 1+ to denote any real number slightly larger than 1.

| ρ | 0 | 1 | 1 | 1+ | 1.5 | 2 | (p−2)/(p−1) |
|---|---|---|---|----|-----|---|-------------|
| Example functions | Quadratic | Polynomial | a^x | a^(b^x) | Rational | Logarithmic | x^p |

With Lemma 3.3, we can derive the following useful lemma, which is the reverse direction of a generalized Polyak-Lojasiewicz (PL) inequality; its proof is deferred to Appendix A.3.

Lemma 3.5. If f is ℓ-smooth, then ‖∇f(x)‖² ≤ 2 ℓ(2‖∇f(x)‖) (f(x) − f*) for any x ∈ X.

Lemma 3.5 provides an inequality involving the gradient norm and the sub-optimality gap. For example, when ℓ(u) = u^ρ for some 0 ≤ ρ < 2, this lemma suggests ‖∇f(x)‖ ≤ O((f(x) − f*)^{1/(2−ρ)}), which means the gradient norm is bounded whenever the function value is bounded. The following corollary provides a more formal statement for a general sub-quadratic ℓ (i.e., lim_{u→∞} ℓ(u)/u² = 0), and we defer its proof to Appendix A.3.

Corollary 3.6. Suppose f is ℓ-smooth where ℓ is sub-quadratic. If f(x) − f* ≤ F for some x ∈ X and F ≥ 0, then, denoting G := sup{u ≥ 0 | u² ≤ 2ℓ(2u)F}, we have G² = 2ℓ(2G)F and ‖∇f(x)‖ ≤ G < ∞.

Therefore, in order to bound the gradients along the trajectory, as discussed below Lemma 3.3, it suffices to bound the function values, which is usually easier.

3.1.3 Examples

The most important subset of ℓ-smooth (or (r, ℓ)-smooth) functions are those with a polynomial ℓ, which can be characterized by the (ρ, L0, Lρ)-smooth function class defined below.

Definition 3 ((ρ, L0, Lρ)-smoothness). A real-valued differentiable function f is (ρ, L0, Lρ)-smooth for constants ρ, L0, Lρ ≥ 0 if it is ℓ-smooth with ℓ(u) = L0 + Lρ u^ρ.

Definition 3 reduces to the standard Lipschitz smoothness condition when ρ = 0 or Lρ = 0, and to the (L0, L1)-smoothness proposed in [Zhang et al., 2019] when ρ = 1. We list several univariate examples of (ρ, L0, Lρ)-smooth functions for different ρ's in Table 2, with their rigorous justifications in Appendix A.1.
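Before discussing the growth behavior of these examples, the following sketch (ours; the affine ℓ and the numbers are only an example) shows how the gradient bound G of Corollary 3.6 can be computed: in closed form for the (L0, L1)-smooth case, and by a simple bisection for a general ℓ.

```python
import math

def grad_bound_affine(L0, L1, F):
    """Closed-form G for l(u) = L0 + L1*u (Corollary 3.6 with affine l).

    G solves G^2 = 2*(L0 + 2*L1*G)*F, i.e. G^2 - 4*L1*F*G - 2*L0*F = 0,
    whose largest root is returned.
    """
    return 2.0 * L1 * F + math.sqrt(4.0 * (L1 * F) ** 2 + 2.0 * L0 * F)

def grad_bound_numeric(ell, F, u_max=1e12, iters=200):
    """Find u with u^2 = 2*ell(2u)*F by bisection on h(u) = u^2 - 2*ell(2u)*F.

    Assumes ell is sub-quadratic (so h(u) > 0 for all large u) and that h
    crosses zero only once, which holds for the affine example below.
    Purely illustrative.
    """
    h = lambda u: u * u - 2.0 * ell(2.0 * u) * F
    lo, hi = 0.0, u_max
    assert h(hi) > 0, "increase u_max"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) <= 0:
            lo = mid          # mid still satisfies u^2 <= 2*ell(2u)*F
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    L0, L1, F = 1.0, 1.0, 10.0
    print(grad_bound_affine(L0, L1, F))                  # closed form
    print(grad_bound_numeric(lambda u: L0 + L1 * u, F))  # should match
```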
Note that when x goes to infinity, polynomial and exponential functions, corresponding to ρ = 1, grow much faster than quadratic functions, corresponding to ρ = 0. Rational and logarithmic functions, for which ρ > 1, grow even faster, as they can blow up to infinity near finite points. Note that the domains of such functions are not R^d, which is why we consider the more general Assumption 1 instead of simply assuming X = R^d.

Aside from logarithmic functions, the (2, L0, L2)-smooth function class also includes other univariate self-concordant functions. This is an important function class in the analysis of interior point methods and in the coordinate-free analysis of Newton's method [Nesterov, 2003]. More specifically, a convex function h : R → R is self-concordant if |h'''(x)| ≤ 2 h''(x)^{3/2} for all x ∈ R. Formally, we have the following proposition, whose proof is deferred to Appendix A.1.

Proposition 3.7. If h : R → R is a self-concordant function satisfying h''(x) > 0 over the interval (a, b), then h restricted to (a, b) is (2, L0, 2)-smooth for some L0 > 0.

4 Convex setting

In this section, we present the convergence results of gradient descent (GD) and Nesterov's accelerated gradient method (NAG) in the convex setting. Formally, we define convexity as follows.

Definition 4. A real-valued differentiable function f : X → R is µ-strongly-convex for µ ≥ 0 if X is a convex set and f(y) − f(x) ≥ ⟨∇f(x), y − x⟩ + (µ/2) ‖y − x‖² for any x, y ∈ X. A function is convex if it is µ-strongly-convex with µ = 0.

We assume the existence of a global optimal point x* throughout this section, as in the following assumption. However, we want to note that, for gradient descent, this assumption is made for simplicity rather than out of necessity.

Assumption 3. There exists a point x* ∈ X such that f(x*) = f* = inf_{x∈X} f(x).

4.1 Gradient descent

The gradient descent method with a constant stepsize η is defined via the following update rule

xt+1 = xt − η ∇f(xt).   (3)

As discussed below Lemma 3.3, the key step in the convergence analysis is to show ‖∇f(xt)‖ ≤ G for all t ≥ 0 and some constant G. We prove this by induction, relying on the following lemma, whose proof is deferred to Appendix B.

Lemma 4.1. For any x ∈ X satisfying ‖∇f(x)‖ ≤ G, define x⁺ := x − η∇f(x). If f is convex and (r, ℓ)-smooth, and η ≤ min{2/ℓ(G), r(G)/(2G)}, then x⁺ ∈ X and ‖∇f(x⁺)‖ ≤ ‖∇f(x)‖ ≤ G.

Lemma 4.1 shows that for gradient descent (3) with a small enough stepsize, if the gradient norm at xt is bounded by G, then ‖∇f(xt+1)‖ ≤ ‖∇f(xt)‖ ≤ G, i.e., the gradient norm is also bounded by G at step t+1. In other words, the gradient norm is a non-increasing potential function for gradient descent in the convex setting. With a standard induction argument, we can show that ‖∇f(xt)‖ ≤ ‖∇f(x0)‖ for all t ≥ 0. As discussed below Lemma 3.3, we can then essentially apply the classical analysis to obtain the convergence guarantee in the convex setting, as in the following theorem, whose proof is deferred to Appendix B.

Theorem 4.2. Suppose f is convex and (r, ℓ)-smooth. Denote G := ‖∇f(x0)‖ and L := ℓ(G). Then the iterates generated by (3) with η ≤ min{1/L, r(G)/(2G)} satisfy ‖∇f(xt)‖ ≤ G for all t ≥ 0 and

f(x_T) − f* ≤ ‖x0 − x*‖² / (2ηT).

Since η is a constant independent of ϵ or T, Theorem 4.2 achieves the classical O(1/T) rate, or the O(1/ϵ) gradient complexity to reach an ϵ-sub-optimal point, under the generalized smoothness condition. Since strongly convex functions are a subset of convex functions, Lemma 4.1 still holds for them, and we immediately obtain the following result in the strongly convex setting, whose proof is deferred to Appendix B.
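Before stating the strongly convex result, the following minimal Python sketch (ours; the toy objective f(x) = e^x + e^{−x} and all numbers are illustrative choices, not from the paper) runs plain constant-stepsize GD with the stepsize suggested by Theorem 4.2 and Remark 3.4, namely η = 1/(2ℓ(2G)) with G = ‖∇f(x0)‖.

```python
import math

# Illustrative constant-stepsize GD under generalized smoothness (Theorem 4.2).
# Toy convex example (our choice): f(x) = e^x + e^{-x}.
# Here f''(x) = e^x + e^{-x} <= |f'(x)| + 2, so f is l-smooth with l(u) = u + 2.
f    = lambda x: math.exp(x) + math.exp(-x)
grad = lambda x: math.exp(x) - math.exp(-x)
ell  = lambda u: u + 2.0

x0 = 3.0
G   = abs(grad(x0))          # G := ||grad f(x0)||
L   = ell(2.0 * G)           # effective smoothness constant (Remark 3.4: L = l(2G))
eta = 1.0 / (2.0 * L)        # satisfies eta <= min{1/L, r(G)/(2G)} since r(G) = G/L

x = x0
for t in range(2000):
    x -= eta * grad(x)       # plain GD, no clipping or adaptive stepsize

# Gradient norms stay bounded by G along the trajectory and f(x_T) approaches f* = 2.
print(f"f(x_T) = {f(x):.6f}, |grad| = {abs(grad(x)):.2e}, G = {G:.2f}, eta = {eta:.4f}")
```

Theorem 4.3.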
Suppose f is µ-strongly-convex and (r, ℓ)-smooth. Denote G := f(x0) and L := ℓ(G), then the iterates generated by (3) with η min n 1 L, r(G) 2G o satisfy f(xt) G for all t 0 and f(x T ) f µ(1 ηµ)T 2(1 (1 ηµ)T ) x0 x 2 . Theorem 4.3 gives a linear convergence rate and the O((ηµ) 1 log(1/ϵ)) gradient complexity to achieve an ϵ-sub-optimal point. Note that for ℓ-smooth functions, we have r(G) L (see Remark 3.4), which means we can choose η = 1 2L. Then we obtain the O(κ log(1/ϵ)) rate, where κ := L/µ is the local condition number around the initial point x0. For standard Lipschitz smooth functions, it reduces to the classical rate of gradient descent. 4.2 Nesterov s accelerated gradient method In the case of convex and standard Lipschitz smooth functions, it is well known that Nesterov s accelerated gradient method (NAG) achieves the optimal O(1/T 2) rate. In this section, we show that Algorithm 1: Nesterov s Accelerated Gradient Method (NAG) input A convex and ℓ-smooth function f, stepsize η, initial point x0 1: Initialize z0 = x0, B0 = 0, and A0 = 1/η. 2: for t = 0, ... do 3: Bt+1 = Bt + 1 2 1 + 4Bt + 1 4: At+1 = Bt+1 + 1/η 5: yt = xt + (1 At/At+1)(zt xt) 6: xt+1 = yt η f(yt) 7: zt+1 = zt η(At+1 At) f(yt) 8: end for under the ℓ-smoothness condition with a sub-quadratic ℓ, the optimal O(1/T 2) rate can be achieved by a slightly modified version of NAG shown in Algorithm 1, the only difference between which and the classical NAG is that the latter directly sets At+1 = Bt+1 in Line 4. Formally, we have the following theorem, whose proof is deferred to Appendix C. Theorem 4.4. Suppose f is convex and ℓ-smooth where ℓis sub-quadratic. Then there always exists a constant G satisfying G max 8 q ℓ(2G)((f(x0) f ) + x0 x 2), f(x0) . Denote L := ℓ(2G) and choose η min 1 16L2 , 1 2L . The iterates generated by Algorithm 1 satisfy f(x T ) f 4(f(x0) f ) + 4 x0 x 2 It is easy to note that Theorem 4.4 achieves the accelerated O(1/T 2) convergence rate, or equivalently the O(1/ ϵ) gradient complexity to find an ϵ-sub-optimal point, which is optimal among gradient-based methods [Nesterov, 2003]. In order to prove Theorem 4.4, we also use induction to show the gradients along the trajectory of Algorithm 1 are bounded by G. However, unlike gradient descent, the gradient norm is no longer a potential function or monotonically non-increasing, which makes the induction analysis more challenging. Suppose that we have shown f(ys) G for s < t. To complete the induction, it suffices to prove f(yt) G. Since xt = yt 1 η f(yt 1) is a gradient descent step by Line 6 of Algorithm 1, Lemma 4.1 directly shows f(xt) G. In order to also bound f(yt) , we try to control yt xt , which is the most challenging part of our proof. Since yt xt can be expressed as a linear combination of past gradients { f(ys)}s 0. Then with probability at least 1 δ, the iterates generated by (4) satisfy f(xt) G for all t < T and t F} T, τ2 := min t ϵt > G 5ηL τ := min{τ1, τ2}, where we use a b to denote min{a, b} for any a, b R. Then at least before time τ, we know that the function value and gradient noise are bounded, where the former also implies bounded gradients according to Corollary 3.6. Therefore, it suffices to show the probability of τ < T is small, which means with a high probability, τ = T and thus gradients are always bounded before T. Since both the gradient and noise are bounded for t < τ, it is straightforward to bound the update xt+1 xt , which allows us to use Lemma 3.3 and other useful properties. 
However, it is still non-trivial to upper bound E[f(xτ) f ] as τ is a random variable instead of a fixed time step. Fortunately, τ is a stopping time with nice properties. That is because both f(xt+1) and ϵt = gt f(xt) only depend on {gs}s t, i.e., the stochastic gradients up to t. Therefore, for any fixed t, the events {τ > t} only depend on {gs}s t, which show τ is a stopping time. Then with a careful analysis, we are still able to obtain an upper bound on E[f(xτ) f ] = O(1). On the other hand, τ < T means either τ = τ1 < T or τ = τ2 < T. If τ = τ1 < T, by its definition, we know f(xτ+1) f > F. Roughly speaking, it also suggests f(xτ) f > F/2. If we choose F such that it is much larger than the upper bound on E[f(xτ) f ] we just obtained, by Markov s inequality, we can show the probability of τ = τ1 < T is small. In addition, by union bound and Chebyshev s inequality, the probability of τ2 < T can also be bounded by a small constant. Therefore, we have shown τ < T. Then the rest of the analysis is not too hard following the classical analysis. 5.3 Reconciliation with existing lower bounds In this section, we reconcile our convergence results for constant-stepsize GD/SGD in the non-convex setting with existing lower bounds in [Zhang et al., 2019] and [Wang et al., 2022], based on which the authors claim that adaptive methods such as GD/SGD with clipping and Adam are provably faster than non-adaptive GD/SGD. This may seem to contradict our convergence results. In fact, we show that any gain in adaptive methods is at most by constant factors, as GD and SGD already achieve the optimal rates in the non-convex setting. [Zhang et al., 2019] provides both upper and lower complexity bounds for constant-stepsize GD for (L0, L1)-smooth functions, and shows that its complexity is O(Mϵ 2), where M := sup{ f(x) | f(x) f(x0)} is the supremum gradient norm below the level set of the initial function value. If M is very large, then the O(Mϵ 2) complexity can be viewed as a negative result, and as evidence that constant-stepsize GD can be slower than GD with gradient clipping, since in the latter case, they obtain the O(ϵ 2) complexity without M. However, based on our Corollary 3.6, their M can be actually bounded by our G, which is a constant. Therefore, the gain in adaptive methods is at most by constant factors. [Wang et al., 2022] further provides a lower bound which shows non-adaptive GD may diverge for some examples. However, their counter-example does not allow the stepsize to depend on the initial sub-optimality gap. In contrast, our stepsize η depends on the effective smoothness constant L, which depends on the initial sub-optimality gap through G. Therefore, there is no contradiction here either. We should point out that in the practice of training neural networks, the stepsize is usually tuned after fixing the loss function and initialization, so it does depend on the problem instance and initialization. 5.4 Lower bound For (ρ, L0, Lρ)-smooth functions with ρ < 2, it is easy to verify that the constant G in both Theorem 5.2 and Theorem 5.3 is a polynomial function of problem-dependent parameters like L0, Lρ, f(x0) f , σ, etc. In other words, GD and SGD are provably efficient methods in the non-convex setting for ρ < 2. In this section, we show that the requirement of ρ < 2 is necessary in the non-convex setting with the lower bound for GD in the following Theorem 5.4, whose proof is deferred in Appendix G. 
Since SGD reduces to GD when there is no gradient noise, it is also a lower bound for SGD. Theorem 5.4. Given L0, L2, G0, 0 > 0 satisfying L2 0 10, for any η 0, there exists a (2, L0, L2)-smooth function f that satisfies Assumptions 1 and 2, and initial point x0 that satisfies f(x0) G0 and f(x0) f 0, such that gradient descent with stepsize η (3) either cannot reach a 1-stationary point or takes at least exp(L2 0/8)/6 steps to reach a 1-stationary point. 6 Conclusion In this paper, we generalize the standard Lipschitz smoothness as well as the (L0, L1)-smoothness [Zhang et al., 2020] conditions to the ℓ-smoothness condition, and develop a new approach for analyzing the convergence under this condition. The approach uses different techniques for several methods and settings to bound the gradient along the optimization trajectory, which allows us to obtain stronger results for both convex and non-convex problems. We obtain the classical rates for GD/SGD/NAG methods in the convex and/or non-convex setting. Our results challenge the folklore belief on the necessity of adaptive methods for generalized smooth functions. There are several interesting future directions following this work. First, the ℓ-smoothness can perhaps be further generalized by allowing ℓto also depend on potential functions in each setting, besides the gradient norm. In addition, it would also be interesting to see if the techniques of bounding gradients along the trajectory that we have developed in this and the concurrent work [Li et al., 2023] can be further generalized to other methods and problems and to see whether more efficient algorithms can be obtained. Finally, although we justified the necessity of the requirement of ℓ-smoothness with a sub-quadratic ℓin the non-convex setting, it is not clear whether it is also necessary for NAG in the convex setting, another interesting open problem. Acknowledgments This work was supported, in part, by the MIT-IBM Watson AI Lab and ONR Grants N00014-20-1-2394 and N00014-23-1-2299. We also acknowledge support from DOE under grant DE-SC0022199, and NSF through awards DMS-2031883, DMS-1953181, and DMS-2022448 (TRIPODS program). Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, and Blake E. Woodworth. Lower bounds for non-convex stochastic optimization. Mathematical Programming, 199:165 214, 2019. Stephen P Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. Yair Carmon, John C. Duchi, Oliver Hinder, and Aaron Sidford. Lower bounds for finding stationary points i. Mathematical Programming, pages 1 50, 2017. Ziyi Chen, Yi Zhou, Yingbin Liang, and Zhaosong Lu. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. ar Xiv preprint ar Xiv:2303.02854, 2023. Michael Crawshaw, Mingrui Liu, Francesco Orabona, Wei Zhang, and Zhenxun Zhuang. Robustness to unbounded smoothness of generalized signsgd. Advances in Neural Information Processing Systems, 35:9955 9968, 2022. Alexandre d Aspremont, Damien Scieur, and Adrien Taylor. Acceleration methods. Foundations and Trends in Optimization, 5(1-2):1 245, 2021. ISSN 2167-3888. doi: 10.1561/2400000036. URL http://dx.doi.org/10.1561/2400000036. Matthew Faw, Litu Rout, Constantine Caramanis, and Sanjay Shakkottai. Beyond uniform smoothness: A stopped analysis of adaptive sgd. Ar Xiv, abs/2302.06570, 2023. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. 
SIAM Journal on Optimization, 23(4):2341 2368, 2013. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Co RR, abs/1412.6980, 2014. Haochuan Li, Ali Jadbabaie, and Alexander Rakhlin. Convergence of adam under relaxed assumptions. ar Xiv preprint ar Xiv:2304.13972, 2023. Arkadij Semenoviˇc Nemirovskij and David Borisovich Yudin. Problem complexity and method efficiency in optimization. Wiley-Interscience, 1983. Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2003. Jiang Qian, Yuren Wu, Bojin Zhuang, Shaojun Wang, and Jing Xiao. Understanding gradient clipping in incremental gradient methods. In International Conference on Artificial Intelligence and Statistics, 2021. Amirhossein Reisizadeh, Haochuan Li, Subhro Das, and Ali Jadbabaie. Variance-reduced clipping for non-convex optimization. Ar Xiv, abs/2303.00883, 2023. Bohan Wang, Yushun Zhang, Huishuai Zhang, Qi Meng, Zhi-Ming Ma, Tie-Yan Liu, and Wei Chen. Provable adaptivity in adam. ar Xiv preprint ar Xiv:2208.09900, 2022. Bohang Zhang, Jikai Jin, Cong Fang, and Liwei Wang. Improved analysis of clipping algorithms for non-convex optimization. Ar Xiv, abs/2010.02519, 2020. Jingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. ar Xiv preprint ar Xiv:1905.11881, 2019. Shen-Yi Zhao, Yin-Peng Xie, and Wu-Jun Li. On the convergence and improvement of stochastic normalized gradient descent. Science China Information Sciences, 64, 2021. A Proofs related to generalized smoothness In this section, we provide the proofs of propositions and lemmas related to the generalized smoothness condition in Definition 1 or 2. First, in Appendix A.1, we justify the examples we discussed in Section 3. Next, we provide the detailed proof of Proposition 3.2 in Appendix A.2. Finally, we provide the proofs of the useful properties of generalized smoothness in Appendix A.3, including Lemma 3.3, Lemma 3.5, and Corollary 3.6 stated in Section 3.1.2. A.1 Justification of examples in Section 3 In this section, we justify the univariate examples of (ρ, L0, Lρ)-smooth functions listed in Table 2 and also provide the proof of Propositions 3.7. First, it is well-known that all quadratic functions have bounded Hessian and are Lipschitz smooth, corresponding to ρ = 0. Next, [Zhang et al., 2019, Lemma 2] shows that any univariate polynomial is (L0, L1)-smooth, corresponding to ρ = 1. Then, regarding the exponential function f(x) = ax where a > 1, we have f (x) = log(a)ax and f (x) = log(a)2ax = log(a)f (x), which implies f is (1, 0, log(a))-smooth. Similarly, by standard calculations, it is straight forward to verify that logarithmic functions and xp, p = 1 are also (ρ, L0, Lρ)-smooth with ρ = 2 and ρ = p 2 p 1 respectively. So far we have justified all the examples in Table 2 except double exponential functions a(bx) and rational functions, which will be justified rigorously by the two propositions below. First, for double exponential functions in the form of f(x) = a(bx) where a, b > 1, we have the following proposition, which shows f is (ρ, L0, Lρ)-smooth for any ρ > 1. Proposition A.1. For any ρ > 1, the double exponential function f(x) = a(bx), where a, b > 1, is (ρ, L0, Lρ)-smooth for some L0, Lρ 0. However, it is not necessarily (L0, L1)-smooth for any L0, L1 0. Proof of Proposition A.1. 
By standard calculations, we can obtain f (x) = log(a) log(b) bxa(bx), f (x) = log(b)(log(a)bx + 1) f (x). (6) Note that if ρ > 1, lim x + |f (x)|ρ |f (x)| = lim x + |f (x)|ρ 1 log(b)(log(a)bx + 1) = lim y + (log(a) log(b)y)ρ 1 a(ρ 1)y log(b)(log(a)y + 1) = , where the first equality is a direct calculation based on (6); the second equality uses change of variable y = bx; and the last equality is because exponential functions grow faster than affine functions. Therefore, for any Lρ > 0, there exists x0 R such that |f (x)| Lρ |f (x)|ρ if x > x0. Next, note that limx f (x) = 0. Then for any λ1 > 0, there exists x1 R such that |f (x)| λ1 if x < x1. Also, since f is continuous, by Weierstrass s Theorem, we have |f (x)| λ2 if x1 x x0 for some λ2 > 0. Then denoting L0 = max{λ1, λ2}, we know f is (ρ, L0, Lρ)-smooth. Next, to show f is not necessarily (L0, L1)-smooth, consider the specific double exponential function f(x) = e(ex). Then we have f (x) = exe(ex), f (x) = (ex + 1) f (x). For any x max {log(L0 + 1), log(L1 + 1)}, we can show that |f (x)| > (L1 + 1)f (x) > L0 + L1 |f (x)| , which shows f is not (L0, L1) smooth for any L0, L1 0. In the next proposition, we show that any univariate rational function f(x) = P(x)/Q(x), where P and Q are two polynomials, is (ρ, L0, Lρ)-smooth with ρ = 1.5. Proposition A.2. The rational function f(x) = P(x)/Q(x), where P and Q are two polynomials, is (1.5, L0, L1.5)-smooth for some L0, L1.5 0. However, it is not necessarily (ρ, L0, Lρ)-smooth for any ρ < 1.5 and L0, Lρ 0. Proof of Proposition A.2. Let f(x) = P(x)/Q(x) where P and Q are two polynomials. Then the partial fractional decomposition of f(x) is given by f(x) = w(x) + Air (x ai)r + Birx + Cir (x2 + bix + ci)r , where w(x) is a polynomial, Air, Bir, Cir, ai, bi, ci are all real constants satisfying b2 i 4ci < 0 for each 1 i n which implies x2 + bix + ci > 0 for all x R. Assume ji 1 and Aiji = 0 without loss of generality. Then we know f has only finite singular points {ai}1 i m and has continuous first and second order derivatives at all other points. To simplify notation, denote pir(x) := Air (x ai)r , qir(x) := Birx + Cir (x2 + bix + ci)r . Then we have f(x) = w(x) + Pm i=1 Pji r=1 pir(x) + Pn i=1 Pki r=1 qir(x). We know that r+2 r+1 1.5 for any r 1. Then we can show that lim x ai |f (x)|1.5 |f (x)| = lim x ai p iji(x) 1.5 p iji(x) 1 ji + 1, (7) where the first equality is because one can easily verify that the first and second order derivatives of piji dominate those of all other terms when x goes to ai, and the second equality is by standard calculations noting that ji+2 ji+1 1.5. Note that (7) implies that, for any Lρ > ji + 1, there exists δi > 0 such that |f (x)| Lρ |f (x)|1.5 , if |x ai| < δi. (8) Similarly, one can show limx |f (x)| 1.5 |f (x)| = , which implies there exists x0 > 0 such that |f (x)| Lρ |f (x)|1.5 , if |x| > x0. (9) Define B := {x R | |x| x0 and |x ai| δi, i} . We know B is a compact set and therefore the continuous function f is bounded within B, i.e., there exists some constant L0 > 0 such that |f (x)| L0, if x B. (10) Combining (8), (9), and (10), we have shown |f (x)| L0 + Lρ |f (x)|1.5 , x dom(f), which completes the proof of the first part. For the second part, consider the ration function f(x) = 1/x. Then we know that f (x) = 1/x2 and f (x) = 2/x3. Note that for any ρ < 1.5 and 0 < x min{(L0 + 1) 1/3, (Lρ + 1) 1/(3 2ρ)}, we have |f (x)| = 1 x3 + 1 x3 2ρ |f (x)|ρ > L0 + Lρ |f (x)|ρ , which shows f is not (ρ, L0, Lρ) smooth for any ρ < 1.5 and L0, Lρ 0. 
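As a quick sanity check of the derivative identity (6) used in the proof of Proposition A.1, the following symbolic computation (ours, purely illustrative) verifies f''(x) = log(b)(log(a) b^x + 1) f'(x) for the double exponential f(x) = a^(b^x).

```python
import sympy as sp

# Symbolic check of identity (6) in the proof of Proposition A.1:
#   f''(x) = log(b) * (log(a) * b**x + 1) * f'(x)   for   f(x) = a**(b**x).
# a, b > 1 are kept symbolic here.
x = sp.symbols('x', real=True)
a, b = sp.symbols('a b', positive=True)

f = a ** (b ** x)
lhs = sp.diff(f, x, 2)
rhs = sp.log(b) * (sp.log(a) * b ** x + 1) * sp.diff(f, x)

print(sp.simplify(lhs - rhs))   # prints 0, confirming (6)
```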
Finally, we complete this section with the proof of Proposition 3.7, which shows self-concordant functions are (2, L0, L2)-smooth for some L0, Lρ 0. Proof of Proposition 3.7. Let h : R R be a self-concordant function. We have h (x) 2h (x)3/2. Then, for x (a, b), we can obtain 1 2h (x) 1/2h (x) h (x). Integrating both sides from x0 to y for x0, y (a, b), we have h (y)1/2 h (x0)1/2 h (y) h (x0). Therefore, h (y) (h (x0)1/2 h (x0) + h (y))2 2(h (x0)1/2 h (x0))2 + 2h (y)2. Since h (y) > 0, we have |h (y)| = h (y). Therefore, the above inequality shows that h is (2, L0, L2)-smooth with L0 = 2(h (x0)1/2 h (x0))2 and L2 = 2. A.2 Proof of Proposition 3.2 In order to prove Proposition 3.2, we need the following several lemmas. First, the lemma below partially generalizes Grönwall s inequality. Lemma A.3. Let α : [a, b] [0, ) and β : [0, ) (0, ) be two continuous functions. Suppose α (t) β(α(t)) almost everywhere over (a, b). Denote function ϕ(u) := R 1 β(u) du. We have for all t [a, b], ϕ(α(t)) ϕ(α(a)) a + t. Proof of Lemma A.3. First, by definition, we know that ϕ is increasing since ϕ = 1 β > 0. Let function γ : [a, b] R be the solution of the following differential equation γ (t) = β(γ(t)) t (a, b), γ(a) = α(a). (11) Then we have dϕ(γ(t)) = dγ(t) β(γ(t)) = dt. Integrating both sides, noting that γ(a) = α(a) by (11), we obtain ϕ(γ(t)) ϕ(α(a)) = t a. Then it suffices to show ϕ(α(t)) ϕ(γ(t)), t [a, b]. Note that the following inequality holds almost everywhere. (ϕ(α(t)) ϕ(γ(t))) = ϕ (α(t))α (t) ϕ (γ(t))γ (t) = α (t) β(α(t)) γ (t) β(γ(t)) 0, where the inequality is because α (t) β(α(t)) by the assumption of this lemma and γ (t) = β(γ(t)) by (11). Since ϕ(α(a)) ϕ(γ(a)) = 0, we know for all t [a, b], ϕ(α(t)) ϕ(γ(t)), which completes the proof. With Lemma A.3, one can bound the gradient norm within a small enough neighborhood of a given point as in the following lemma. Lemma A.4. If the objective function f is ℓ-smooth, for any two points x, y Rd such that the closed line segment between x and y is contained in X, if y x a ℓ( f(x) +a) for any a > 0, we have f(y) f(x) + a. Proof of Lemma A.4. Denote z(t) := (1 t)x + ty for 0 t 1. Then we know z(t) X for all 0 t 1 by the assumption made in this lemma. Then we can also define α(t) := f(z(t)) for 0 t 1. Note that for any 0 t s 1, by triangle inequality, α(s) α(t) f(z(s)) f(z(t)) . (12) We know that α(t) = f(z(t)) is differentiable almost everywhere since f is second order differentiable almost everywhere (Here we assume α(t) = 0 for 0 < t < 1 without loss of generality. Otherwise, one can define tm = sup{0 < t < 1 | α(t) = 0} and consider the interval [tm, 1] instead). Then the following equality holds almost everywhere α (t) = lim s t α(s) α(t) s t lim s t f(z(s)) f(z(t)) s t = lim s t f(z(s)) f(z(t)) = 2f(z(t))(y x) 2f(z(t)) y x ℓ(α(t)) y x , where the first inequality is due to (12) and the last inequality is by Definition 1. Let β(u) := ℓ(u) y x and ϕ(u) := R u 0 1 β(v)dv. By Lemma A.3, we know that ϕ ( f(y) ) = ϕ(u(1)) ϕ(u(0)) + 1 = ϕ ( f(x) ) + 1. Denote ψ(u) := R u 0 1 ℓ(v)dv = ϕ(u) y x . We have ψ ( f(y) ) ψ ( f(x) ) + y x ψ ( f(x) ) + a ℓ( f(x) + a) 1 ℓ(v) dv + Z f(x) +a =ψ( f(x) + a). Since ψ is increasing, we have f(y) f(x) + a. With Lemma A.4, we are ready to prove Proposition 3.2. Proof of Proposition 3.2. We prove the two directions in this proposition separately. 1. An (r, ℓ)-smooth function is ℓ-smooth. 
For each fixed x X where 2f(x) exists and any unit-norm vector w, by Definition 2, we know that for any t r( f(x) ), f(x + tw) f(x) t ℓ( f(x) ). Then we know that 2f(x)w = lim t 0 1 t ( f(x + tw) f(x)) = lim t 0 1 t ( f(x + tw) f(x)) ℓ( f(x) ), which implies 2f(x) ℓ( f(x) ) for any point x if 2f(x) exists. Then it suffices to show that 2f(x) exists almost everywhere. Note that for each x X, Definition 2 states that the gradient function is ℓ( f(x) ) Lipschitz within the ball B(x, r( f(x) )). Then by Rademacher s Theorem, f is twice differentiable almost everywhere within this ball. Then we can show it is also twice differentiable almost everywhere within the entire domain X as long as we can cover X with countably many such balls. Define Sn := {x X | n f(x) n + 1} for integer n 0. We have X = n 0Sn. One can easily find an internal covering of Sn with balls of size r(n + 1)3, i.e., there exist {xn,i}i 0, where xn,i Sn, such that Sn i 0B(xn,i, r(n + 1)) i 0B(xn,i, r( f(xn,i) )). Therefore we have X n,i 0B(xn,i, r( f(xn,i) )) which completes the proof. 2. An ℓ-smooth function satisfying Assumption 1 is (r, m)-smooth where m(u) := ℓ(u + a) and r(u) := a/m(u) for any a > 0. For any y Rd satisfying y x r( f(x) ) = a ℓ( f(x) +a), denote z(t) := (1 t)x + ty for 0 t 1. We first show y X by contradiction. Suppose y / X, let us define tb := inf{0 t 1 | z(t) / X} and zb := z(tb). Then we know zb is a boundary point of X. Since f is a closed function with an open domain, we have lim t tb f(z(t)) = . (13) On the other hand, by the definition of tb, we know z(t) X for every 0 t < tb. Then by Lemma A.4, for all 0 t < tb, we have f(z(t)) f(x) + a. Therefore for all 0 t < tb, f(z(t)) f(x) + Z t f(z(s)), y x ds f(x) + ( f(x) + a) y x < , 3We can find an internal covering in the following way. We first cover Sn with countably many hyper-cubes of length r(n + 1)/ d, which is obviously doable. Then for each hyper-cube that intersects with Sn, we pick one point from the intersection. Then the ball centered at the picked point with radius r(n + 1) covers this hyper-cube. Therefore, the union of all such balls can cover Sn. which contradicts (13). Therefore we have shown y X. Since y is chosen arbitrarily with the ball B(x, r( f(x) )), we have B(x, r( f(x) )) X. Then for any x1, x2 B(x, r( f(x) )), we denote w(t) := tx1 + (1 t)x2. Then we know w(t) B(x, r( f(x) )) for all 0 t 1 and can obtain f(x1) f(x2) = 0 2f(w(t)) (x1 x2) dt 0 ℓ( f(x) + a) dt =m( f(x) ) x1 x2 , where the last inequality is due to Lemma A.4. A.3 Proofs of lemmas implied by generalized smoothness In this part, we provide the proofs of the useful properties stated in Section 3.1.2, including Lemma 3.3, Lemma 3.5, and Corollary 3.6. Proof of Lemma 3.3. First, note that since ℓis non-decreasing and r is non-increasing, we have ℓ( f(x) ) ℓ(G) = L and r(G) r( f(x) ). Then by Definition 2, we directly have that B(x, r(G)) B(x, r( f(x) )) X, and that for any x1, x2 B(x, r(G)), we have f(x1) f(x2) ℓ( f(x) ) x1 x2 L x1 x2 . Next, for the second inequality in (2), define z(t) := (1 t)x2 + tx1 for 0 t 1. We know z(t) B(x, r(G)). Note that we have shown f(z(t)) f(x2) L z(t) x2 = t L x1 x2 . (14) Then we have f(x1) f(x2) = Z 1 f(z(t), x1 x2 dt f(x2), x1 x2 + f(z(t)) f(x2), x1 x2 dt f(x2), x1 x2 + L x1 x2 2 Z 1 = f(x2), x1 x2 + L 2 x1 x2 2 , where the inequality is due to (14). Proof of Lemma 3.5. If f is ℓ-smooth, by Proposition 3.2, f is also (r, m)-smooth where m(u) = ℓ(2u) and r(u) = u/ℓ(2u). 
Then by Lemma 3.3 where we choose G = f(x) , we have that B x, f(x) ℓ(2 f(x) ) X, and that for any x1, x2 B x, f(x) ℓ(2 f(x) ) , we have f(x1) f(x2) + f(x2), x1 x2 + ℓ(2 f(x) ) Choosing x2 = x and x1 = x f(x) ℓ(2 f(x) ), it is easy to verify that x1, x2 B x, f(x) ℓ(2 f(x) ) . Therefore, we have f f x f(x) ℓ(2 f(x) ) f(x) f(x) 2 2ℓ(2 f(x) ), which completes the proof. Proof of Corollary 3.6. We first show G < . Note that since ℓis sub-quadratic, we know limu 2ℓ(2u)/u2 = 0. Therefore, for any F > 0, there exists some M > 0 such that 2ℓ(2u)/u2 < 1/F for every u > M. In other words, for any u satisfying u2 2ℓ(2u) F, we must have u M. Therefore, by definition of G, we have G M < if F > 0. If F = 0, we trivially get G = 0 < . Also, since the set {u 0 | u2 2ℓ(2u) F} is closed and bounded, we know its supremum G is in this set and it is also straightforward to show G2 = 2ℓ(2G) F. Next, by Lemma 3.5, we know f(x) 2 2ℓ(2 f(x) ) (f(x) f ) 2ℓ(2 f(x) ) F. Then based on the definition of G, we have f(x) G. B Analysis of GD for convex functions In this section, we provide the detailed convergence analysis of gradient descent in the convex setting, including the proofs of Lemma 4.1 and Theorem 4.2, for which the following lemma will be helpful. Lemma B.1 (Co-coercivity). If f is convex and (r, ℓ)-smooth, for any x X and y B(x, r( f(x) )/2), we have y X and f(x) f(y), x y 1 L f(x) f(y) 2 , where L = ℓ( f(x) ). Proof of Lemma B.1. Define the Bregman divergences ϕx(w) := f(w) f(x), w and ϕy(w) := f(w) f(y), w , which are both convex functions. Since ϕx(w) = f(w) f(x), we have ϕx(x) = 0 which implies minw ϕx(w) = ϕx(x) as ϕx is convex. Similarly we have minw ϕy(w) = ϕy(y). Denote rx := r( f(x) ). Since f is (r, ℓ)-smooth, we know its gradient f is L-Lipschitz locally in B(x, rx). Since ϕx(w) f(w) = f(x) is a constant, we know ϕx is also L-Lipschitz locally in B(x, rx). Then similar to the proof of Lemma 3.3, one can easily show that for any x1, x2 B(x, rx), we have ϕx(x1) ϕx(x2) + ϕx(x2), x1 x2 + L 2 x1 x2 2 . (15) Note that for any y B(x, r( f(x) )/2) as in the lemma statement, y 1 L ϕx(y) x y x + 1 L f(y) f(x) 2 y x rx, where the first inequality uses triangle inequality and ϕx(y) = f(y) f(x); and the second inequality uses Definition 2. It implies that y 1 L ϕx(y) B(x, rx). Then we can obtain ϕx(x) = min w ϕx(w) ϕx L ϕx(y) ϕx(y) 1 2L ϕx(y) 2 , where the last inequality uses (15) where we choose x1 = y 1 L ϕx(y) and x2 = y. By the definition of ϕx, the above inequality is equivalent to 1 2L f(y) f(x) 2 f(y) f(x) f(x), x y . Similar argument can be made for ϕy( ) to obtain 1 2L f(y) f(x) 2 f(x) f(y) f(y), y x . Summing up the two inequalities, we can obtain the desired result. With Lemma B.1, we prove Lemma 4.1 as follows. Proof of Lemma 4.1. Let L = ℓ(G). We first verify that x+ B(x, r(G)/2). Note that x+ x = η f(x) ηG r(G)/2, where we choose η r(G)/(2G). Thus by Lemma B.1, we have f(x+) 2 = f(x) 2 + 2 f(x+) f(x), f(x) + f(x+) f(x) 2 η f(x+) f(x), x+ x + f(x+) f(x) 2 f(x) 2 + 1 2 f(x+) f(x) 2 where the first inequality uses Lemma B.1 and the last inequality chooses η 2/L. With Lemma 4.1, we are ready to prove both Theorem 4.2 and Theorem 4.3. Proof of Theorem 4.2. Denote G := f(x0) . Then we trivially have f(x0) G. Lemma 4.1 states that if f(xt) G for any t 0, then we also have f(xt+1) f(xt) G. By induction, we can show that f(xt) G for all t 0. Then the rest of the proof basically follows the standard textbook analysis. We still provide the detailed proof below for completeness. 
Note that xt+1 xt = η f(xt) ηG r(G), where we choose η r(G)/(2G). Thus we can apply Lemma 3.3 to obtain 0 f(xt+1) f(xt) f(xt), xt+1 xt L 2 xt+1 xt 2 f(xt+1) f(xt) f(xt), xt+1 xt 1 2η xt+1 xt 2 , (16) where the last inequality chooses η 1/L. Meanwhile, by convexity between xt and x , we have 0 f(xt) f + f(xt), x xt . (17) Note that (t + 1) (16)+(17) gives 0 f(xt) f + f(xt), x xt + (1 + t) f(xt+1) f(xt) f(xt), xt+1 xt 1 2η xt+1 xt 2 . Then reorganizing the terms of the above inequality, noting that xt+1 x 2 xt x 2 = xt+1 xt 2 + 2 xt+1 xt, xt x = xt+1 xt 2 + 2η f(xt), x xt , we can obtain (t + 1)(f(xt+1) f ) + 1 2η xt+1 x 2 t(f(xt) f ) + 1 2η xt x 2 . The above inequality implies t(f(xt) f ) + 1 2η xt x 2 is a non-increasing potential function, which directly implies the desired result. Proof of Theorem 4.3. Since strongly convex functions are also convex, by the same argument as in the proof of Theorem 4.2, we have f(xt) G := f(x0) for all t 0. Moreover, (16) still holds. For µ-strongly-convex function, we can obtain a tighter version of (17) as follows. 0 f(xt) f + f(xt), x xt + µ 2 x xt 2 . (18) Let A0 = 0 and At+1 = (1 + At)/(1 ηµ) for all t 0. Combining (16) and (18), we have 0 (At+1 At)(f(xt) f + f(xt), x xt ) f(xt+1) f(xt) f(xt), xt+1 xt 1 2η xt+1 xt 2 . Then reorganizing the terms of the above inequality, noting that xt+1 x 2 xt x 2 = xt+1 xt 2 + 2 xt+1 xt, xt x = xt+1 xt 2 + 2η f(xt), x xt , we can obtain At+1(f(xt+1) f ) + 1 + ηµAt+1 2η xt+1 x 2 At(f(xt) f ) + 1 + ηµAt 2η xt x 2 . The above inequality means At(f(xt) f ) + 1+ηµAt 2η xt x 2 is a non-increasing potential function. Thus by telescoping we have f(x T ) f µ(1 ηµ)T 2(1 (1 ηµ)T ) x0 x 2 . C Analysis of NAG for convex functions In this section, we provide the detailed analysis of Nesterov s accelerated gradient method in the convex setting. As we discussed in Section 4.2, the stepsize size choice in Theorem 4.4 is smaller than the classical one. Therefore, we provide a more fine-grained version of the theorem, which allows the stepsize to depend on the degree of ℓ. Theorem C.1. Suppose f is convex and ℓ-smooth. For α (0, 2], if ℓ(u) = o(uα), i.e., limu ℓ(u)/uα = 0, then there must exist a constant G such that for L := ℓ(2G), we have G max 8 max{L1/α 1/2, 1} q L((f(x0) f ) + x0 x 2), f(x0) . (19) Choose η min 1 16L3 2/α , 1 2L . Then the iterates of Algorithm 1 satisfy f(x T ) f 4(f(x0 f ) + 4 x0 x 2 Note that when α = 2, i.e., ℓis sub-quadratic, Theorem C.1 reduces to Theorem 4.4 which chooses η min{ 1 16L2 , 1 2L}. When α = 1, i.e., ℓis sub-linear, the above theorem chooses η 1 16L as in the classical textbook analysis up to a numerical constant factor. Throughout this section, we will assume f is convex and ℓ-smooth, and consider the parameter choices in Theorem C.1, unless explicitly stated. Note that since f is ℓ-smooth, it is also (r, m)-smooth with m(u) = ℓ(u + G) and r(u) = G ℓ(u+G) by Proposition 3.2. Note that m(G) = ℓ(2G) = L and r(G) = G/L. Then the stepsize satisfies η 1/(2L) min{ 2 m(G), r(G) Before proving Theorem C.1, we first present several additional useful lemmas. To start with, we provide two lemmas regarding the weights {At}t 0 and {Bt}t 0 used in Algorithm 1. The lemma below states that Bt = Θ(t2). Lemma C.2. The weights {Bt}t 0 in Algorithm 1 satisfy 1 4t2 Bt t2 for all t 0. Proof of Lemma C.2. We prove this lemma by induction. First note that the inequality obviously holds for B0 = 0. Suppose its holds up to t. 
Then we have Bt+1 = Bt + 1 Similarly, we have Bt+1 = Bt + 1 4Bt + 1) t2 + 1 4t2 + 1) (t + 1)2. Lemma C.2 implies the following useful lemma. Lemma C.3. The weights {At}t 0 in Algorithm 1 satisfy that (1 At At+1 ) 1 As+1(As+1 As 1) 4. Proof of Lemma C.3. First, note that it is easy to verify that As+1 As 1 = Bs+1 Bs 1 0, which implies each term in the LHS of the above inequality is non-negative. Then we have (1 At At+1 ) 1 As+1(As+1 As 1) 1 At+1 At (At+1 At) s=0 (As+1 As 1) (At As+1) = 1 At+1 At (Bt+1 Bt) s=0 (Bs+1 Bs 1) (As = Bs + 1/η) = 1 At+1 At 1 4Bs + 1) (by definition of Bs) 8 1 (t + 1)2t (t + 1)t2 2 (by At Bt and Lemma C.2) The following lemma summarizes the results in the classical potential function analysis of NAG in [d Aspremont et al., 2021]. In order to not deal with the generalized smoothness condition for now, we directly assume the inequality (20) holds in the lemma, which will be proved later under the generalized smoothness condition. Lemma C.4. For any t 0, if the following inequality holds, f(yt) + f(yt), xt+1 yt + 1 2η xt+1 yt 2 f(xt+1), (20) then we can obtain At+1(f(xt+1) f ) + 1 2η zt+1 x 2 At(f(xt) f ) + 1 2η zt x 2 . (21) Proof of Lemma C.4. These derivations below can be found in [d Aspremont et al., 2021]. We present them here for completeness. First, since f is convex, the convexity between x and yt gives f f(yt) + f(yt), x yt . Similarly the convexity between xt and yt gives f(xt) f(yt) + f(yt), xt yt . Combining the above two inequalities as well as (20) assumed in this lemma, we have 0 (At+1 At)(f(yt) f + f(yt), x yt ) + At(f(yt) f(xt) + f(yt), xt yt ) f(xt+1) f(yt) f(yt), xt+1 yt 1 2η xt+1 yt 2 . (22) Furthermore, note that 1 2η zt+1 x 2 zt x 2 zt+1 zt 2 + 2 zt+1 zt, zt x η2(At+1 At)2 f(yt) 2 2η(At+1 At) f(yt), zt x 2(At+1 At)2 f(yt) 2 (At+1 At) f(yt), zt x . (23) Meanwhile, we have At+1xt+1 = At+1yt ηAt+1 f(yt) = At+1xt + (At+1 At)(zt xt) ηAt+1 f(yt). Thus we have (At+1 At)zt = At+1xt+1 Atxt + ηAt+1 f(yt). Plugging back in (23), we obtain zt+1 x 2 zt x 2 2(At+1 At)2 f(yt) 2 + (At+1 At) f(yt), x + At+1xt+1 + Atxt ηAt+1 f(yt), f(yt) . (At+1 At) f(yt), x + Atxt At+1xt+1, f(yt) zt+1 x 2 zt x 2 + η(At+1 1 2(At+1 At)2) f(yt) 2 . So we can reorganize (22) to obtain 0 At+1(f(xt+1) f ) At(f(xt) f ) + (At+1 At) f(yt), x + Atxt At+1xt+1, f(yt) 2η At+1 xt+1 yt 2 = At+1(f(xt+1) f ) At(f(xt) f ) zt+1 x 2 zt x 2 + η 2(At+1 (At+1 At)2) f(yt) 2 . Then we complete the proof noting that it is easy to verify At+1 (At+1 At)2 = Bt+1 + 1 η (Bt+1 Bt)2 = 1 In the next lemma, we show that if f(yt) G, then the condition (20) assumed in Lemma C.4 is satisfied at time t. Lemma C.5. For any t 0, if f(yt) G, then we have f(xt+1) G, and furthermore, f(yt) + f(yt), xt+1 yt + 1 2η xt+1 yt 2 f(xt+1). Proof of Lemma C.5. As disccued below Theorem C.1, the stepsize satisfies η 1/(2L) min{ 2 m(G), r(G) 2G }. Therefore we can apply Lemma 4.1 to show f(xt+1) f(yt) G. For the second part, note that xt+1 yt = η f(yt) G 2L r(G), we can apply Lemma 3.3 to show f(xt+1) f(yt) + f(yt), xt+1 yt + L 2 xt+1 yt 2 f(yt) + f(yt), xt+1 yt + 1 2η xt+1 yt 2 . With Lemma C.4 and Lemma C.5, we can show that f(yt) G for all t 0, as in the lemma below. Lemma C.6. For all t 0, f(yt) G. Proof of Lemma C.6. We will prove this lemma by induction. First, by Lemma 3.5 and the choice of G, it is easy to verify that f(x0) G. Then for any fixed t 0, suppose that f(xs) G for all s < t. 
Then by Lemma C.4 and Lemma C.5, we know that f(xs) G for all 0 s t, and that for all s < t, As+1(f(xs+1) f ) + 1 2η zs+1 x 2 As(f(xs) f ) + 1 2η zs x 2 . (24) By telescoping (24), we have for all 0 s < t, f(xs+1) f 1 ηAs+1 ((f(x0) f ) + z0 x 2). (25) For 0 s t, since f(xs) G, then Lemma 3.5 implies f(xs) 2 2L(f(xs) f ). (26) Note that by Algorithm 1, we have zt xt = At 1 At (zt 1 xt 1) η(At At 1) f(yt 1) + η f(yt 1). Thus we can obtain s=1 ηAs+1(As+1 As 1) f(ys). yt xt = (1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1) f(ys). Thus we have yt xt (1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1) f(ys) =: I. Since f(ys) G and xs+1 ys = η f(ys) r(G) for s < t, by Lemma 3.3, we have I (1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1)( f(xs+1) + ηL f(ys) ) ηLI + (1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1) f(xs+1) . I 1 1 ηL(1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1) f(xs+1) 1 1 ηL(1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1) p 2L(f(xs+1) f ) (by (26)) 1 1 ηL(1 At At+1 ) 1 s=1 ηAs+1(As+1 As 1) η ((f(x0) f ) + z0 x 2) 1 ηL(1 At At+1 ) 1 As+1(As+1 As 1) q (f(x0) f ) + z0 x 2 L((f(x0) f ) + z0 x 2) (by Lemma C.3) 1 2L3/2 1/α L1/2 1/αG = G 2L r(G). (by the choices of η and G) Since f(xt) G and we just showed xt yt r(G), by Lemma 3.3, we have f(yt) f(xt) + L yt xt 2L ηAt ((f(x0) f ) + z0 x 2) + L G 2L (by (26) and (25)) G. (by At 1/η and choice of G) Then we complete the induction as well as the proof. With the three lemmas above, it is straight forward to prove Theorem C.1. Proof of Theorem C.1. Combining Lemmas C.4, C.5, and C.6, we know the following inequality holds for all t 0. At+1(f(xt+1) f ) + 1 2η zt+1 x 2 At(f(xt) f ) + 1 2η zt x 2 , Then by telescoping, we directly complete the proof. D Analysis of NAG for strongly convex functions In this section, we provide the convergence analysis of the modified version of Nesterov s accelerated gradient method for µ-strongly-convex functions defined in Algorithm 2. The convergence results is formally presented in the following theorem. Theorem D.1. Suppose f is µ-strongly-convex and ℓ-smooth. For α (0, 2], if ℓ(u) = o(uα), i.e., limu ℓ(u)/uα = 0, then there must exist a constant G such that for L := ℓ(2G), we have G 8 max{L1/α 1/2, 1} q L((f(x0) f ) + µ z0 x 2)/ min{µ, 1}. (27) If we choose 144L3 2/α log4 e + 144L3 2/α Algorithm 2: NAG for µ-strongly-convex functions input A µ-strongly-convex and ℓ-smooth function f, stepsize η, initial point x0 1: Initialize z0 = x0, B0 = 0, and A0 = 1/(ηµ). 2: for t = 0, ... do 3: Bt+1 = 2Bt+1+ 4Bt+4ηµB2 t +1 2(1 ηµ) 4: At+1 = Bt+1 + 1 ηµ 5: τt = (At+1 At)(1+ηµAt) At+1+2ηµAt At+1 ηµA2 t and δt = At+1 At 1+ηµAt+1 6: yt = xt + τt(zt xt) 7: xt+1 = yt η f(yt) 8: zt+1 = (1 ηµδt)zt + ηµδtyt ηδt f(yt) 9: end for The iterates generated by Algorithm 2 satisfy f(x T ) f (1 ηµ)T 1(f(x0 f ) + µ z0 x 2) ηµ + (1 ηµ)T 1 . The above theorem gives a gradient complexity of O 1 ηµ log(1/ϵ) . Note that Theorem 4.2 shows the complexity of GD is O 1 ηµ log(1/ϵ) . It seems NAG gives a better rate at first glance. However, note that the choices of G, L, η in these two theorems are different, it is less clear whether NAG accelerates the optimization in this setting. Below, we informally show that, if ℓ(u) = o( u), the rate we obtain for NAG is faster than that for GD. For simplicity, we informally assume ℓ(u) xρ with ρ (0, 1). Let G0 = f(x0) . Then for GD, by Theorem 4.2, we have ηgdµ µ/ℓ(G0) µ/Gρ 0. For NAG, since ℓis sub-linear we can choose α = 1 in the theorem statement. Since f is µ-strongly-convex, by standard results, we can show that f(x0) f 1 µG2 0 and z0 x 1 µG0. 
Thus the requirement of G in (27) can be simplified as G ℓ(G) G0/µ, which is satisfied if choosing G (G0/µ)1/(1 ρ). Then we also have ηnag 1 ℓ(G) (µ/G0)ρ/(1 ρ). Thus ηnagµ (µ/Gρ 0)1/(2 2ρ). This means whenever 1/(2 2ρ) < 1, i.e., 0 ρ < 1/2, we have ηnagµ ηgdµ, which implies the rate we obtain for NAG is faster than that for GD. In what follows, we will provide the proof of Theorem D.1. We will always use the parameter choices in the theorem throughout this section. D.1 Useful lemmas In this part, we provide several useful lemmas for proving Theorem D.1. To start with, the following two lemmas provide two useful inequalities. Lemma D.2. For any 0 u 1, we have log(1 + u) 1 Lemma D.3. For all 0 < p 1 and t 0, we have t 2 p log(e + 1 p)(p(1 + p)t + 1). Proof of Lemma D.3. Let f(t) = 2 p log(e + 1 p)(p(1 + p)t + 1) t. It is obvious that f(t) 0 for t 2 p log(e + 1 p). For t > 2 p log(e + 1 p), we have f (t) = 2 p log(e + 1 p) log(1 + p)(1 + p)t 1 p(1 + p)t 1 (by Lemma D.2) = p exp(t log(1 + p)) 1 p exp(t p/2) 1 (by Lemma D.2) p(e + 1/p) 1 0. (since t > 2 p log(e + 1 Thus f is non-decreasing and f(t) f 2 p log(e + 1 In the next four lemmas, we provide several useful inequalities regarding the weights {At}t 0 and {Bt}t 0 used in Algorithm 2. Lemma D.4. For all s t, we have Bt+1 Bt Bt+1 Bs+1 Bs 1 + ηµBs+1 1, which implies τt δs 1. Proof of Lemma D.4. By Algorithm 2, it is easy to verify (Bs+1 Bs)2 = Bs+1(1 + ηµBs+1). This implies Bs = Bs+1 p Bs+1(1 + ηµBs+1). Bt Bt+1 = 1 ηµ + 1 Bt+1 1 ηµ + 1 Bs+1 = Bs Bs+1 , where in the inequality, we use the fact that Bs is non-decreasing with s. Therefore Bt+1 Bt Bt+1 Bs+1 Bs 1 + ηµBs+1 Bs+1 Bs Bs+1 Bs+1 Bs 1 + ηµBs+1 = 1. Thus we have τt δs = (At+1 At)(1 + ηµAt) At+1 + 2ηµAt At+1 ηµA2 t As+1 As At+1 As+1 As 1 + ηµAs+1 (by At+1 At) At+1 Bs+1 Bs 1 + ηµAs+1 (by As+1 As = Bs+1 Bs) Bt+1 Bs+1 Bs 1 + ηµBs+1 1. (by As+1 Bs+1) Lemma D.5. If 0 < ηµ < 1, then for any t 1, we have Bt 1 ηµ Bt+1 3Bt 1 ηµ. Bt 1 (1 ηµ)t 1 (1 + ηµ)t 1. Proof of Lemma D.5. For t 1, we have Bt 1 thus Bt+1 = 2Bt + 1 + p 4Bt + 4ηµB2 t + 1 2(1 ηµ) 2Bt + 1 1 ηµ 3Bt 1 µη . On the other hand, we have Bt+1 = 2Bt + 1 + p 4Bt + 4ηµB2 t + 1 2(1 ηµ) t 1 B1 1 1 ηµ t 1 (1 + ηµ)t 1. Lemma D.6. For 0 < ηµ < 1 and t 1, we have Bs (1 ηµ)Bt+1 3Bt. Proof of Lemma D.6. Bt+1 = 2Bt + 1 + p 4Bt + 4ηµB2 t + 1 2(1 ηµ) Bt + Bt 1 ηµ Combined with Lemma D.5, we have the desired result. Lemma D.7. For t 1, we have As+1 At 3 + 4 log(e + 1 Proof of Lemma D.7. By Lemma D.5, we have At = Bt + 1 ηµ (1 + ηµ)t 1 + 1 Thus, we have Bs+1 + 1/(ηµ) Bs+1 At + t ηµAt 3 + 1 ηµAt 2 ηµ log(e + 1 ηµ)(ηµ(1 + ηµ)t + 1) (by Lemma D.6 and Lemma D.3) 3 + 4 log(e + 1 ηµ). (by Inequality (29)) D.2 Proof of Theorem D.1 With all the useful lemmas in the previous section, we proceed to prove Theorem D.1, for which we need several additional lemmas. First, similar to Lemma C.4, the following lemma summarizes the results in the classical potential function analysis of NAG for strongly convex functions in [d Aspremont et al., 2021]. Lemma D.8. For any t 0, if the following inequality holds f(yt) + f(yt), xt+1 yt + 1 2η xt+1 yt 2 f(xt+1), then we can obtain At+1(f(xt+1) f ) + 1 + ηµAt+1 2η zt+1 x 2 At(f(xt) f ) + 1 + ηµAt 2η zt x 2 . Proof of Lemma D.8. These derivations can be found in d Aspremont et al. [2021]. We present it here for completeness. The strong convexity between x and yt gives f f(yt) + f(yt), x yt + µ The convexity between xt and yt gives f(xt) f(yt) + f(yt), xt yt . 
Combining the above two inequalities and the one assumed in this lemma, we have 0 (At+1 At)(f f(yt) f(yt), x yt µ + At(f(yt) f(xt) f(yt), xt yt ) + At+1(f(xt+1) f(yt) f(yt), xt+1 yt 1 2η xt+1 yt 2). Reorganizing we can obtain At+1(f(xt+1) f ) + 1 + ηµAt+1 2η zt+1 x 2 At(f(xt) f ) + 1 + ηµAt + (At At+1)2 At+1 ηµA2 t+1 1 + ηµAt+1 η 2 f(yt) 2 A2 t (At+1 At)(1 + ηµAt)(1 + ηµAt+1) (At+1 + 2ηµAt At+1 ηµA2 t)2 µ 2 xt zt 2 . Then we complete the proof noting that (At At+1)2 At+1 ηµA2 t+1 = (Bt Bt+1)2 Bt+1 + 1 ηµ ηµ(Bt+1 + 1/(ηµ))2 = ηµB2 t+1 + 1 ηµ ηµB2 t+1 2Bt+1 1 ηµ = 2Bt+1 0. Next, note that Lemma C.5 still holds in the strongly convex setting. We repeat it below for completeness. Lemma D.9. For any t 0, if f(yt) G, then we have f(xt+1) G, and furthermore, f(yt) + f(yt), xt+1 yt + 1 2η xt+1 yt 2 f(xt+1). With Lemma D.8 and Lemma D.9, we will show that f(yt) G for all t 0 by induction in the following lemma. Lemma D.10. For all t 0, we have f(yt) G. Proof of Lemma D.10. We will prove this lemma by induction. First, by Lemma 3.5 and the choice of G, it is easy to verify that f(x0) G. Then for any fixed t 0, suppose that f(xs) G for all s < t. Then by Lemma D.8 and Lemma D.9, we know that f(xs) G for all 0 s t, and that for all s < t, As+1(f(xs+1) f ) + 1 + ηµAs+1 2η zs+1 x 2 As(f(xs) f ) + 1 + ηµAs 2η zs x 2 . By telescoping (30), we have for all 0 s < t, f(xs+1) f 1 As+1ηµ(f(x0) f + µ z0 x 2). (31) For 0 s t, since f(xs) G, then Lemma 3.5 implies f(xs) 2 2L(f(xs) f ). (32) Note that by Algorithm 2, we have zt xt = (1 ηµδt 1)(1 τt 1)(zt 1 xt 1) + η(1 δt 1) f(yt 1). s=0 (1 δs) f(ys) i=s+1 (1 ηµδi)(1 τi). yt xt = ητt s=0 (1 δs) f(ys) i=s+1 (1 ηµδi)(1 τi). 1 ηµδi = 1 ηµ(Ai+1 Ai) 1 + ηµAi+1 = 1 + ηµAi 1 + ηµAi+1 1 τi = 1 (Ai+1 Ai)(1 + ηµAi) Ai+1 + 2ηµAi Ai+1 ηµA2 i = Ai(1 + ηµAi+1) Ai+1 + 2ηµAi Ai+1 ηµA2 i Ai(1 + ηµAi+1) Ai+1(1 + ηµAi). Thus we have s=0 (δs 1)As+1 At f(ys) =: I, where the second inequality follows from Lemma D.4. We further control term I by At ( f(xs+1) + ηL f(ys) ) At f(xs+1) . Thus we have yt xt η 1 ηL 2L(f(xs+1) f ) (by (32)) 2L 1 As+1ηµ(f(x0) f + µ z0 x 2) (by (31)) 2ηL(f(x0) f + µ z0 x 2) 2ηL(f(x0) f + µ z0 x 2) 3 + 4 log(e + 1 ηµ) . (by Lemma D.7) 3 + 4 log(e + 1 ηµ) G L1/2 1/α 4 (by (27)) 3 + 4 log(e + 1 ηµ) log2 e + 144L3 2/α 24L (by (28)) Since f(xt) G and we just showed xt yt r(G), by Lemma 3.3, we have f(yt) f(xt) + L yt xt 2L ηµAt ((f(x0) f ) + µ z0 x 2) + L G 2L (by (31)) G. (by At 1/(ηµ) and (27)) Then we complete the induction as well as the proof. Proof of Theorem D.1. Combining Lemmas D.8, D.9, and D.10, we know the following inequality holds for all t 0. At+1(f(xt+1) f )+ 1 + ηµAt+1 2η zt+1 x 2 At(f(xt) f )+ 1 + ηµAt 2η zt x 2 . Then by telescoping, we get At(f(xt) f )+ 1 + ηµAt 2η zt x 2 A0(f(x0) f )+ 1 + ηµA0 2η z0 x 2 . Finally, applying Lemma D.5, we have At = Bt + 1/(ηµ) 1/(1 ηµ)t 1 + 1/(ηµ). Thus completes the proof. E Analysis of GD for non-convex functions In this section, we provide the proofs related to analysis of gradient descent for non-convex function, including those of Lemma 5.1 and Theorem 5.2. Proof of Lemma 5.1. First, based on Corollary 3.6, we know f(x) G < . Also note that x+ x = η f(x) ηG G/L. Then by Lemma 3.3 and Remark 3.4, we have x+ X and f(x+) f(x) + f(xt), x+ x + L =f(x) η(1 ηL/2) f(x) 2 Proof of Theorem 5.2. By Lemma 5.1, using induction, we directly obtain f(xt) f(x0) for all t 0. Then by Corollary 3.6, we have f(xt) G for all t 0. 
Following the proof of Lemma 5.1, we can similarly show f(xt+1) f(xt) η(1 ηL/2) η 2 f(xt) 2 η 2 f(xt) 2 . Taking a summation over t < T and rearanging terms, we have t G 5ηL where the first inequality uses union bound; the second inequality applies Chebyshev s inequality and E[ ϵt 2] = E[Et 1[ ϵt 2]] σ2 for each fixed t by Assumption 4; the last inequality uses Lemma F.1. Next, we will bound P(τ1 < T, τ2 = T). Note that under the event {τ1 < T, τ2 = T}, we know that 1) τ = τ1 < T which implies f(xτ+1) f > F; and 2) τ < T = τ2 which implies ϵτ G 5ηL by the definition in (5). Also note that we always have f(xτ) f F which implies f(xτ) G by Corollary 3.6. Then we can show xτ+1 xτ = η gτ η( f(xτ) + ϵτ ) ηG + G where we choose η 1 2L. Then based on Lemma 3.3 and Remark 3.4, we have f(xτ+1) f(xτ) η 2 f(xτ) 2 η f(xτ), ϵτ + η2L ϵτ 2 η f(xτ) ϵτ + η2L ϵτ 2 where the first inequality is obtained following the same derivation as in (33); the last equality is due to Corollary 3.6. Therefore we can show that under the event {τ1 < T, τ2 = T}, f(xτ) f = f(xτ) f(xτ+1) + f(xτ+1) f > F/2. P(τ1 < T, τ2 = T) P (f(xτ) f > F/2) E[f(xτ) f ] F/2 2(f(x0) f + σ) where the second inequality uses Markov s inequality; the third inequality uses Lemma F.2; and in the last inequality we choose F = 8(f(x0) f + σ)/δ. Therefore we can show P(τ < T) P(τ2 < T) + P(τ1 < T, τ2 = T) δ/2. Then we also know P(τ = T) 1 δ/2 1/2. Therefore, by Lemma F.2, 2(f(x0) f + σ) t<τ f(xt) 2 # t ϵ2} denote the event of not converging to an ϵ-stationary point. By Markov s inequality, we have P(E) δ/2. Therefore we have P({τ < T} E) δ, which completes the proof. G Lower bound In this section, we provide the proof of Theorem 5.4. Proof of Theorem 5.4. Let c, η0 > 0 satisfy η0 c2/2. Consider log(|x| c), |x| y 2 log(y c) log(2y |x| c), c/2 |x| < y kx2 + b, |x| < c/2, where c > 0 is a constant and y = (c + p c2 + 2η0)/2 > 0 is the fixed point of the iteration xt+1 = xt η0 xt c and k, b are chosen in such a way that f(x) and f (x) are continuous. Specifically, choose k = c 1f (c/2) and b = f(c/2) cf (c/2)/4. Since f( x) = f(x), f(x) is symmetric about the line x = 0. In a small neighborhood, f(x) is symmetric about (y, f(y)), so f (x) is continuous at y. Let us first consider the smoothness of f. By symmetry, it suffices to consider x > 0. Then, (x c) 1, x y (2y x c) 1, c/2 x < y 2kx, 0 < x < c/2. Its Hessian is given by (x c) 2, x > y (2y x c) 2, c/2 < x < y 2k, 0 < x < c/2. Hence, f(x) is (2, 2k, 1)-smooth. Note that f(x) has a stationary point 0. For stepsize ηf satisfying η0 ηf c2/4, there exists z = (c+ p c2 + 2ηf) y such that z = z ηf(y c) 1 and by symmetry, once xτ = z, xt = z for all t τ, making the GD iterations stuck. Now we choose a proper x0 such that f (x0) and f(x0) f(0) are bounded. We consider arriving at y from above. That is, x0 x1 . . . xτ = z > c > 0. Since in each update where xt+1 = xt ηf(xt c) 1 > c, xt xt+1 = xt (xt ηf(xt c) 1) = ηf(xt c) 1 ηf. Hence, we can choose τ in such a way that 3c/2 x0 < 3c/2 + ηf. Then, log(c/2) f(x0) log(c/2 + ηf), 2/(c + 2 ηf) f (x0) 2/c. By definition, y c = η0(c + p c2 + 2η0) 1. 
Hence, f(c/2) = 2 log(y c) log(2y c/2 c) = 2 log(η0) 2 log(c + p c2 + 2η0) log( p c2 + 2η0 c/2), f (c/2) = 1 p c2 + 2η0 c/2 f(x0) f(0) = f(x0) f(c/2) + cf (c/2)/4 log(c/2 + ηf) + 2 log(η 1 0 ) + 2 log(c + p c2 + 2η0 c/2) + c c2 + 2η0 c/2 log(c) + 2 log(η 1 0 ) + 2 log(2 2c2) + log( = 4 log(c) + 2 log(η 1 0 ) + 7 2 log(2) + 1 For stepsize ηf < η0, reaching below 4c/3 takes at least (x0 4c/3)/ ηf c/(6 ηf) > cη 1/2 0 /6 steps to reach 4c/3, where f (4c/3) = log(c/3). Now we set c and η0 and scale function f(x) to satisfy the parameter specifications L0, L2, G0, 0. Define g(x) = L 1 2 f(x). Then, g(x) is (2, 2k L 1 2 , L2)-smooth. Since the gradient of g(x) is L 1 2 times f(x), the above analysis for f(x) applies to g(x) by replacing η0 with η1 = L2η0 and ηf with η = L2ηf. To ensure that 2k L 1 2 = 2(c L2) 1f (c/2) = 2 c L2 c2 + 2η1 c/2 4 c2L2 L0, it suffices to take c 2/ L0L2. To ensure that g (x0) 2 L2c G0, it suffices to take c 2/(L2G0). To ensure that g(x0) g(0) (4 log(c) + 2 log(η 1 1 ) + 3.5 log 2 + 0.5)L 1 2 0, it suffices to take log(η 1 1 ) = L2 0 3.5 log 2 0.5 2 2 log(c). Since we require η1 c2/2, parameters L2 and 0 need to satisfy log 2 2 log(c) L2 0 3.5 log 2 0.5 2 2 log(c), that is, L2 0 5.5 log 2 + 0.5, which holds because L2 0 10. Take c = max{2/ L0L2, 2/(L2G0), p 8/L0}. Then, as long as η 2/L0, the requirement that η c2/4 is satisfied. Therefore, on g(x) with initial point x0, gradient descent with a constant stepsize either gets stuck, or takes at least cη 1/2 1 /6 = c 6 exp L2 0 3.5 log 2 0.5 6 exp(L2 0 3.5 log 2 0.5 steps to reach a 1-stationary point. On the other hand, if η > 2/L0, consider the function f(x) = L0 2 x2. For any xt = 0, we always have |xt+1| / |xt| = |1 ηL0| > 1, which means the iterates diverge to infinity.