Convergence of Adam Under Relaxed Assumptions

Haochuan Li (MIT, haochuan@mit.edu), Alexander Rakhlin (MIT, rakhlin@mit.edu), Ali Jadbabaie (MIT, jadbabai@mit.edu)

Abstract. In this paper, we provide a rigorous proof of convergence of the Adaptive Moment Estimation (Adam) algorithm for a wide class of optimization objectives. Despite the popularity and efficiency of the Adam algorithm in training deep neural networks, its theoretical properties are not yet fully understood, and existing convergence proofs require unrealistically strong assumptions, such as globally bounded gradients, to show the convergence to stationary points. In this paper, we show that Adam provably converges to $\epsilon$-stationary points with $O(\epsilon^{-4})$ gradient complexity under far more realistic conditions. The key to our analysis is a new proof of boundedness of gradients along the optimization trajectory of Adam, under a generalized smoothness assumption according to which the local smoothness (i.e., Hessian norm when it exists) is bounded by a sub-quadratic function of the gradient norm. Moreover, we propose a variance-reduced version of Adam with an accelerated gradient complexity of $O(\epsilon^{-3})$.

1 Introduction

In this paper, we study the non-convex unconstrained stochastic optimization problem
$$\min_x \ \{ f(x) = \mathbb{E}_\xi [f(x, \xi)] \}. \qquad (1)$$
The Adaptive Moment Estimation (Adam) algorithm [23] has become one of the most popular optimizers for solving (1) when $f$ is the loss for training deep neural networks. Owing to its efficiency and robustness to hyper-parameters, it is widely applied, or sometimes even the default choice, in many machine learning application domains such as natural language processing [44; 4; 13], generative adversarial networks [35; 21; 55], computer vision [14], and reinforcement learning [28; 33; 40]. It is also well known that Adam significantly outperforms stochastic gradient descent (SGD) for certain models like transformers [50; 24; 1].

Despite its success in practice, theoretical analyses of Adam are still limited. The original proof of convergence in [23] was later shown by [37] to contain gaps. The authors in [37] also showed that for a range of momentum parameters chosen independently of the problem instance, Adam does not necessarily converge even for convex objectives. However, in deep learning practice, the hyper-parameters are in fact problem-dependent, as they are usually tuned after the problem and the weight initialization are given. Recently, there have been many works proving the convergence of Adam for non-convex functions with various assumptions and problem-dependent hyper-parameter choices. However, these results leave significant room for improvement. For example, [12; 19] prove the convergence to stationary points assuming the gradients are bounded by a constant, either explicitly or implicitly. On the other hand, [51; 45] consider weak assumptions, but their convergence results are still limited. See Section 2 for more detailed discussions of related works.

To address the above-mentioned gap between theory and practice, we provide a new convergence analysis of Adam without assuming bounded gradients, or equivalently, Lipschitzness of the objective function. In addition, we also relax the standard global smoothness assumption, i.e., the Lipschitzness of the gradient function, as it is far from being satisfied in deep neural network training. Instead, we
consider a more general, relaxed, and non-uniform smoothness condition according to which the local smoothness (i.e., Hessian norm when it exists) around $x$ is bounded by a sub-quadratic function of the gradient norm $\|\nabla f(x)\|$ (see Definition 3.2 and Assumption 2 for the details). This generalizes the $(L_0, L_1)$ smoothness condition proposed by [49] based on language model experiments. Even though our assumptions are much weaker and more realistic, we can still obtain the same $O(\epsilon^{-4})$ gradient complexity for convergence to an $\epsilon$-stationary point.

The key to our analysis is a new technique to obtain a high probability, constant upper bound on the gradients along the optimization trajectory of Adam, without assuming Lipschitzness of the objective function. In other words, it essentially turns the bounded gradient assumption into a result that can be directly proven. Bounded gradients imply a bounded stepsize at each step, with which the analysis of Adam essentially reduces to the simpler analysis of AdaBound [31]. Furthermore, once the gradient boundedness is achieved, the analysis under the generalized non-uniform smoothness assumption is not much harder than that under the standard smoothness condition. We will introduce the technique in more detail in Section 5. We note that the idea of bounding the gradient norm along the trajectory of the optimization algorithm can be used in other problems as well. For more details, we refer the reader to our concurrent work [26], in which we present a set of new techniques and methods for bounding the gradient norm for other optimization algorithms under a generalized smoothness condition.

Another contribution of this paper is to show that the gradient complexity of Adam can be further improved with variance reduction methods. To this end, we propose a variance-reduced version of Adam by modifying its momentum update rule, inspired by the idea of the STORM algorithm [9]. Under an additional generalized smoothness assumption on the component function $f(\cdot, \xi)$ for each $\xi$, we show that this provably accelerates the convergence with a gradient complexity of $O(\epsilon^{-3})$. This rate improves upon the existing result of [47], where the authors obtain an asymptotic convergence of their approach to variance reduction for Adam in the non-convex setting, under the bounded gradient assumption.

1.1 Contributions

In light of the above background, we summarize our main contributions as follows.

- We develop a new analysis to show that Adam converges to stationary points under relaxed assumptions. In particular, we do not assume bounded gradients or Lipschitzness of the objective function. Furthermore, we also consider a generalized non-uniform smoothness condition where the local smoothness or Hessian norm is bounded by a sub-quadratic function of the gradient norm. Under these more realistic assumptions, we obtain a dimension-free gradient complexity of $O(\epsilon^{-4})$ if the gradient noise is centered and bounded.
- We generalize our analysis to the setting where the gradient noise is centered and has sub-Gaussian norm, and show the convergence of Adam with a gradient complexity of $O(\epsilon^{-4} \log^{3.25}(1/\epsilon))$.
- We propose a variance-reduced version of Adam (VRAdam) with provable convergence guarantees. In particular, we obtain the accelerated $O(\epsilon^{-3})$ gradient complexity.

2 Related work

In this section, we discuss the relevant literature related to convergence of Adam and the generalized smoothness condition, and defer additional related work on variants of Adam and variance reduction methods to Appendix A.
Convergence of Adam. Adam was first proposed by Kingma and Ba [23] with a theoretical convergence guarantee for convex functions. However, Reddi et al. [37] found a gap in the proof of this convergence analysis, and also constructed counter-examples for a range of hyper-parameters on which Adam does not converge. That being said, the counter-examples depend on the hyper-parameters of Adam, i.e., they are constructed after picking the hyper-parameters. Therefore, they do not rule out the possibility of obtaining convergence guarantees for problem-dependent hyper-parameters, as also pointed out by [42; 51]. Many recent works have developed convergence analyses of Adam with various assumptions and hyper-parameter choices. Zhou et al. [54] show that Adam with certain hyper-parameters can work on the counter-examples of [37]. De et al. [10] prove convergence for general non-convex functions assuming gradients are bounded and the signs of stochastic gradients are the same along the trajectory. The analysis in [12] also relies on the bounded gradient assumption. Guo et al. [19] assume the adaptive stepsize is upper and lower bounded by two constants, which is not necessarily satisfied unless assuming bounded gradients or considering the AdaBound variant [31]. [51; 45] consider very weak assumptions. However, they show either 1) convergence only to some neighborhood of stationary points with a constant radius, unless assuming the strong growth condition; or 2) convergence to stationary points but with a slower rate.

Generalized smoothness condition. Generalizing the standard smoothness condition in a variety of settings has been a focus of many recent papers. Recently, [49] proposed a generalized smoothness condition called $(L_0, L_1)$ smoothness, which assumes the local smoothness or Hessian norm is bounded by an affine function of the gradient norm. The assumption was well-validated by extensive experiments conducted on language models. Various analyses of different algorithms under this condition were later developed [48; 34; 52; 17; 38; 8]. One recent closely-related work is [45], which studies convergence of Adam under the $(L_0, L_1)$ smoothness condition. However, their results are still limited, as we have mentioned above. In this paper, we consider an even more general smoothness condition where the local smoothness is bounded by a sub-quadratic function of the gradient norm, and prove the convergence of Adam under this condition. In our concurrent work [26], we further analyze various other algorithms in both convex and non-convex settings under similar generalized smoothness conditions, following the same key idea of bounding gradients along the trajectory.

3 Preliminaries

Notation. Let $\|\cdot\|$ denote the Euclidean norm of a vector or the spectral norm of a matrix. For any given vector $x$, we use $(x)_i$ to denote its $i$-th coordinate and $x^2$, $\sqrt{x}$, $|x|$ to denote its coordinate-wise square, square root, and absolute value respectively. For any two vectors $x$ and $y$, we use $x \odot y$ and $x/y$ to denote their coordinate-wise product and quotient respectively. We also write $x \preceq y$ or $x \succeq y$ to denote the coordinate-wise inequality between $x$ and $y$, which means $(x)_i \le (y)_i$ or $(x)_i \ge (y)_i$ for each coordinate index $i$. For two symmetric real matrices $A$ and $B$, we say $A \preceq B$ or $A \succeq B$ if $B - A$ or $A - B$ is positive semi-definite (PSD). Given two real numbers $a, b \in \mathbb{R}$, we denote $a \wedge b := \min\{a, b\}$ for simplicity. Finally, we use $O(\cdot)$, $\Theta(\cdot)$, and $\Omega(\cdot)$ for the standard big-O, big-Theta, and big-Omega notation.
3.1 Description of the Adam algorithm

Algorithm 1 ADAM
1: Input: $\beta, \beta_{\rm sq}, \eta, \lambda, T, x_{\rm init}$
2: Initialize $m_0 = v_0 = 0$ and $x_1 = x_{\rm init}$
3: for $t = 1, \ldots, T$ do
4:   Draw a new sample $\xi_t$ and perform the following updates:
5:   $m_t = (1-\beta)\, m_{t-1} + \beta\, \nabla f(x_t, \xi_t)$
6:   $v_t = (1-\beta_{\rm sq})\, v_{t-1} + \beta_{\rm sq}\, (\nabla f(x_t, \xi_t))^2$
7:   $\hat m_t = m_t / \bigl(1 - (1-\beta)^t\bigr)$
8:   $\hat v_t = v_t / \bigl(1 - (1-\beta_{\rm sq})^t\bigr)$
9:   $x_{t+1} = x_t - \frac{\eta}{\sqrt{\hat v_t} + \lambda} \odot \hat m_t$
10: end for

The formal definition of Adam proposed in [23] is shown in Algorithm 1, where Lines 5-9 describe the update rule of the iterates $\{x_t\}_{1 \le t \le T}$. Lines 5-6 are the updates for the first and second order momentum, $m_t$ and $v_t$, respectively. In Lines 7-8, they are re-scaled to $\hat m_t$ and $\hat v_t$ in order to correct the initialization bias due to setting $m_0 = v_0 = 0$. Then the iterate is updated by $x_{t+1} = x_t - h_t \odot \hat m_t$, where $h_t = \eta/(\sqrt{\hat v_t} + \lambda)$ is the adaptive stepsize vector for some parameters $\eta$ and $\lambda$.
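For concreteness, the following is a minimal NumPy sketch of Algorithm 1. The stochastic gradient oracle `grad_oracle` and the particular hyper-parameter values are placeholders for illustration; note that $\beta$ here weights the new gradient, so it corresponds to $1-\beta_1$ in the common PyTorch parameterization.

```python
import numpy as np

def adam(grad_oracle, x_init, beta=0.1, beta_sq=0.001, eta=1e-3, lam=1e-8, T=1000):
    """Minimal sketch of Algorithm 1 (Adam as stated above).

    grad_oracle(x) is assumed to return a stochastic gradient of f at x.
    """
    x = np.asarray(x_init, dtype=float)
    m = np.zeros_like(x)  # first-order momentum, m_0 = 0
    v = np.zeros_like(x)  # second-order momentum, v_0 = 0
    for t in range(1, T + 1):
        g = grad_oracle(x)                            # stochastic gradient at x_t
        m = (1 - beta) * m + beta * g                 # Line 5
        v = (1 - beta_sq) * v + beta_sq * g**2        # Line 6
        m_hat = m / (1 - (1 - beta) ** t)             # Line 7: bias correction
        v_hat = v / (1 - (1 - beta_sq) ** t)          # Line 8: bias correction
        x = x - eta * m_hat / (np.sqrt(v_hat) + lam)  # Line 9: coordinate-wise step
    return x
```

Since each coordinate of $\hat v_t$ is a convex combination of past squared (stochastic) gradient coordinates, bounded gradients immediately give a bounded $\hat v_t$ and hence an adaptive stepsize $h_t$ bounded away from zero, which is exactly the property exploited in the analysis of Section 5.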
3.2 Assumptions

In what follows, we state our main assumptions for the analysis of Adam.

3.2.1 Function class

We start with a standard assumption in optimization on the objective function $f$, whose domain lies in a Euclidean space with dimension $d$.

Assumption 1. The objective function $f$ is differentiable and closed within its open domain $\mathrm{dom}(f) \subseteq \mathbb{R}^d$, and is bounded from below, i.e., $f^* := \inf_x f(x) > -\infty$.

Remark 3.1. A function $f$ is said to be closed if its sub-level set $\{x \in \mathrm{dom}(f) \mid f(x) \le a\}$ is closed for each $a \in \mathbb{R}$. In addition, a continuous function $f$ over an open domain is closed if and only if $f(x)$ tends to infinity whenever $x$ approaches the boundary of $\mathrm{dom}(f)$, which is an important condition to ensure the iterates of Adam with a small enough stepsize $\eta$ stay within the domain with high probability. Note that this condition is mild since any continuous function defined over the entire space $\mathbb{R}^d$ is closed.

Besides Assumption 1, the only additional assumption we make regarding $f$ is that its local smoothness is bounded by a sub-quadratic function of the gradient norm. More formally, we consider the following $(\rho, L_0, L_\rho)$ smoothness condition with $0 \le \rho < 2$.

Definition 3.2. A differentiable real-valued function $f$ is $(\rho, L_0, L_\rho)$ smooth for some constants $\rho, L_0, L_\rho \ge 0$ if the following inequality holds almost everywhere in $\mathrm{dom}(f)$:
$$\|\nabla^2 f(x)\| \le L_0 + L_\rho \|\nabla f(x)\|^\rho.$$

Remark 3.3. When $\rho = 1$, Definition 3.2 reduces to the $(L_0, L_1)$ smoothness condition in [49]. When $\rho = 0$ or $L_\rho = 0$, it reduces to the standard smoothness condition.

Assumption 2. The objective function $f$ is $(\rho, L_0, L_\rho)$ smooth with $0 \le \rho < 2$.

The standard smooth function class is very restrictive as it only contains functions that are upper and lower bounded by quadratic functions. The $(L_0, L_1)$ smooth function class is more general since it also contains, e.g., univariate polynomials and exponential functions. Assumption 2 is even more general and contains univariate rational functions, double exponential functions, etc. See Appendix D.1 for the formal propositions and proofs. We also refer the reader to our concurrent work [26] for more detailed discussions of examples of $(\rho, L_0, L_\rho)$ smooth functions for different $\rho$'s.

It turns out that a bounded Hessian norm at a point $x$ implies local Lipschitzness of the gradient in a neighborhood around $x$. In particular, we have the following lemma.

Lemma 3.4. Under Assumptions 1 and 2, for any $a > 0$ and two points $x \in \mathrm{dom}(f)$, $y \in \mathbb{R}^d$ such that $\|y - x\| \le \frac{a}{L_0 + L_\rho(\|\nabla f(x)\| + a)^\rho}$, we have $y \in \mathrm{dom}(f)$ and
$$\|\nabla f(y) - \nabla f(x)\| \le \bigl(L_0 + L_\rho(\|\nabla f(x)\| + a)^\rho\bigr)\, \|y - x\|.$$

Remark 3.5. Lemma 3.4 can actually be used as the definition of $(\rho, L_0, L_\rho)$ smooth functions in place of Assumption 2.

Besides the local gradient Lipschitz condition, it also suggests that, as long as the update at each step is small enough, the iterates will not go outside of the domain. For the special case of $\rho = 1$, choosing $a = \max\{\|\nabla f(x)\|, L_0/L_1\}$, one can verify that the required locality size in Lemma 3.4 satisfies $\frac{a}{L_0 + L_1(\|\nabla f(x)\| + a)} \ge \frac{1}{3L_1}$. In this case, Lemma 3.4 states that $\|x - y\| \le 1/(3L_1)$ implies $\|\nabla f(y) - \nabla f(x)\| \le 2(L_0 + L_1\|\nabla f(x)\|)\|y - x\|$. Therefore, it reduces to the local gradient Lipschitz condition for $(L_0, L_1)$ smooth functions in [49; 48] up to numerical constant factors. For $\rho \ne 1$, the proof is more involved because Grönwall's inequality used in [49; 48] no longer applies. Therefore we defer the detailed proof of Lemma 3.4 to Appendix D.2.

3.2.2 Stochastic gradient

We consider one of the following two assumptions on the stochastic gradient $\nabla f(x_t, \xi_t)$ in our analysis of Adam.

Assumption 3. The gradient noise is centered and almost surely bounded. In particular, for some $\sigma \ge 0$ and all $t \ge 1$,
$$\mathbb{E}_{t-1}[\nabla f(x_t, \xi_t)] = \nabla f(x_t), \qquad \|\nabla f(x_t, \xi_t) - \nabla f(x_t)\| \le \sigma \ \text{ a.s.},$$
where $\mathbb{E}_{t-1}[\cdot] := \mathbb{E}[\,\cdot \mid \xi_1, \ldots, \xi_{t-1}]$ is the conditional expectation given $\xi_1, \ldots, \xi_{t-1}$.

Assumption 4. The gradient noise is centered with sub-Gaussian norm. In particular, for some $R \ge 0$ and all $t \ge 1$,
$$\mathbb{E}_{t-1}[\nabla f(x_t, \xi_t)] = \nabla f(x_t), \qquad \mathbb{P}_{t-1}\bigl(\|\nabla f(x_t, \xi_t) - \nabla f(x_t)\| \ge s\bigr) \le 2e^{-s^2/R^2},$$
where $\mathbb{E}_{t-1}[\cdot] := \mathbb{E}[\,\cdot \mid \xi_1, \ldots, \xi_{t-1}]$ and $\mathbb{P}_{t-1}[\cdot] := \mathbb{P}[\,\cdot \mid \xi_1, \ldots, \xi_{t-1}]$ are the conditional expectation and probability given $\xi_1, \ldots, \xi_{t-1}$.

Assumption 4 is strictly weaker than Assumption 3, since an almost surely bounded random variable clearly has sub-Gaussian norm, but it results in a slightly worse convergence rate up to poly-log factors (see Theorems 4.1 and 4.2). Both of them are stronger than the most standard bounded variance assumption $\mathbb{E}[\|\nabla f(x_t, \xi_t) - \nabla f(x_t)\|^2] \le \sigma^2$ for some $\sigma \ge 0$, although Assumption 3 is actually a common assumption in existing analyses under the $(L_0, L_1)$ smoothness condition (see e.g. [49; 48]). The extension to the bounded variance assumption is challenging and a very interesting direction for future work, as it is also the assumption considered in the lower bound [3]. We suspect that such an extension would be straightforward if we consider a mini-batch version of Algorithm 1 with a batch size of $S = \Omega(\epsilon^{-2})$, since this results in a very small variance of $O(\epsilon^2)$ and thus essentially reduces the analysis to the deterministic setting. However, for practical Adam with an $O(1)$ batch size, the extension is challenging and we leave it as future work.

4 Convergence of Adam

In this section, we provide our convergence results for Adam under Assumptions 1, 2, and 3 or 4. To keep the statements of the theorems concise, we first define several problem-dependent constants. First, we let $\Delta_1 := f(x_1) - f^* < \infty$ be the initial sub-optimality gap. Next, given a large enough constant $G > 0$, we define
$$r := \min\left\{ \frac{1}{5L_\rho G^{\rho-1}},\ \frac{1}{5(L_0^{\rho-1} L_\rho)^{1/\rho}} \right\}, \qquad L := 3L_0 + 4L_\rho G^\rho, \qquad (2)$$
where $L$ can be viewed as the effective smoothness constant along the trajectory if one can show $\|\nabla f(x_t)\| \le G$ and $\|x_{t+1} - x_t\| \le r$ at each step (see Section 5 for more detailed discussions). We will also use $c_1, c_2$ to denote some small enough numerical constants and $C_1, C_2$ to denote some large enough ones. The formal convergence result under Assumptions 1, 2, and 3 is presented in the following theorem, whose proof is deferred to Appendix E.

Theorem 4.1. Suppose Assumptions 1, 2, and 3 hold. Denote $\iota := \log(1/\delta)$ for any $0 < \delta < 1$. Let $G$ be a constant satisfying $G \ge \max\bigl\{ 2\lambda,\ 2\sigma,\ \sqrt{C_1 \Delta_1 L_0},\ (C_1 \Delta_1 L_\rho)^{\frac{1}{2-\rho}} \bigr\}$. Choose $0 \le \beta_{\rm sq} \le 1$, $\beta \le \min\{1,\ c_1 \lambda \epsilon^2\}$, and $\eta \le c_2 \min\bigl\{ \frac{r\lambda}{G},\ \frac{\sigma\lambda\beta}{LG\iota},\ \lambda^{3/2}\beta \bigr\}$. Let $T = \max\bigl\{ \frac{1}{\beta^2},\ \frac{C_2 \Delta_1 G}{\eta \epsilon^2} \bigr\}$.
Then with probability at least $1 - \delta$, we have $\|\nabla f(x_t)\| \le G$ for every $1 \le t \le T$, and $\frac{1}{T}\sum_{t=1}^T \|\nabla f(x_t)\|^2 \le \epsilon^2$.

Note that $G$, the upper bound on the gradients along the trajectory, is a constant that depends on $\lambda$, $\sigma$, $L_0$, $L_\rho$, and the initial sub-optimality gap $\Delta_1$, but not on $\epsilon$. There is no requirement on the second order momentum parameter $\beta_{\rm sq}$, although many existing works like [12; 51; 45] need certain restrictions on it. We choose very small $\beta$ and $\eta$, both of which are $O(\epsilon^2)$. Therefore, from the choice of $T$, it is clear that we obtain a gradient complexity of $O(\epsilon^{-4})$, where we only consider the leading term. It is not clear whether the dependence on $\epsilon$ is optimal or not, as the $\Omega(\epsilon^{-4})$ lower bound in [3] assumes the bounded variance assumption, which is weaker than our Assumption 3. However, it matches the state-of-the-art complexity among existing analyses of Adam.

One limitation is that the dependence of our complexity on $\lambda$ is $O(\lambda^{-2})$, which might be large since $\lambda$ is usually small in practice, e.g., the default choice is $\lambda = 10^{-8}$ in the PyTorch implementation. There are some existing analyses of Adam [12; 51; 45] whose rates do not depend explicitly on $\lambda$ or only depend on $\log(1/\lambda)$. However, all of them depend on $\mathrm{poly}(d)$, whereas our rate is dimension-free. The dimension $d$ is also very large, especially when training transformers, for which Adam is widely used. We believe that independence of $d$ is preferable to independence of $\lambda$, because $d$ is fixed given the architecture of the neural network, whereas $\lambda$ is a hyper-parameter which we have the freedom to tune. In fact, based on our preliminary experimental results on CIFAR-10 shown in Figure 1, the performance of Adam is not very sensitive to the choice of $\lambda$. Although the default choice of $\lambda$ is $10^{-8}$, increasing it up to $0.01$ only makes minor differences.

As discussed in Section 3.2.2, we can generalize the bounded gradient noise condition in Assumption 3 to the weaker sub-Gaussian noise condition in Assumption 4. The following theorem formally shows the convergence result under Assumptions 1, 2, and 4, whose proof is deferred to Appendix E.6.

Figure 1: Test errors of different models trained on CIFAR-10 using the Adam optimizer with $\beta = 0.9$, $\beta_{\rm sq} = 0.999$, $\eta = 0.001$ and different $\lambda$'s. From left to right: (a) a shallow CNN with 6 layers; (b) ResNet-Small with 20 layers; and (c) ResNet110 with 110 layers.

Theorem 4.2. Suppose Assumptions 1, 2, and 4 hold. Denote $\iota := \log(2/\delta)$ and $\sigma := R\sqrt{2\log(4T/\delta)}$ for any $0 < \delta < 1$. Let $G$ be a constant satisfying $G \ge \max\bigl\{ 2\lambda,\ 2\sigma,\ \sqrt{C_1 \Delta_1 L_0},\ (C_1 \Delta_1 L_\rho)^{\frac{1}{2-\rho}} \bigr\}$. Choose $0 \le \beta_{\rm sq} \le 1$, $\beta \le \min\{1,\ c_1 \lambda \epsilon^2\}$, and $\eta \le c_2 \min\bigl\{ \frac{r\lambda}{G},\ \frac{\sigma\lambda\beta}{LG\iota},\ \lambda^{3/2}\beta \bigr\}$. Let $T = \max\bigl\{ \frac{1}{\beta^2},\ \frac{C_2 \Delta_1 G}{\eta \epsilon^2} \bigr\}$. Then with probability at least $1 - \delta$, we have $\|\nabla f(x_t)\| \le G$ for every $1 \le t \le T$, and $\frac{1}{T}\sum_{t=1}^T \|\nabla f(x_t)\|^2 \le \epsilon^2$.

Note that the main difference of Theorem 4.2 from Theorem 4.1 is that $\sigma$ is now $O(\sqrt{\log T})$ instead of a constant. With some standard calculations, one can show that the gradient complexity in Theorem 4.2 is bounded by $O(\epsilon^{-4}\log^{p}(1/\epsilon))$, where $p = \max\bigl\{3,\ \frac{9+2\rho}{4}\bigr\}$.

5.1 Bounding the gradients along the optimization trajectory

We want to bound the gradients along the optimization trajectory mainly for two reasons. First, as discussed in Section 2, many existing analyses of Adam rely on the assumption of bounded gradients, because an unbounded gradient norm leads to an unbounded second order momentum $\hat v_t$, which implies a very small stepsize and slow convergence.
On the other hand, once the gradients are bounded, it is straightforward to control $\hat v_t$ as well as the stepsize, and therefore the analysis essentially reduces to the easier one for AdaBound. Second, informally speaking, under Assumption 2, bounded gradients also imply bounded Hessians, which essentially reduces the $(\rho, L_0, L_\rho)$ smoothness to the standard smoothness. See Section 5.2 for more formal discussions. (The second statement is informal because here we can only show bounded gradients and Hessians at the iterate points, which only implies local smoothness in the neighborhood of each iterate point; see Section 5.2. The standard smoothness condition, in contrast, is a stronger global condition which assumes a bounded Hessian at every point within a convex set.)

In this paper, instead of imposing the strong assumption of globally bounded gradients, we develop a new analysis to show that with high probability, the gradients are always bounded along the trajectory of Adam until convergence. The essential idea can be informally illustrated by the following "circular" reasoning that we will make precise later. On the one hand, if $\|\nabla f(x_t)\| \le G$ for every $t \ge 1$, it is not hard to show the gradient converges to zero based on our discussions above. On the other hand, we know that a converging sequence must be upper bounded. Therefore there exists some $G$ such that $\|\nabla f(x_t)\| \le G$ for every $t \ge 1$. In other words, the bounded gradient condition implies the convergence result, and the convergence result also implies the boundedness condition, forming a circular argument. This circular argument is of course flawed. However, we can break the circularity of reasoning and rigorously prove both the bounded gradient condition and the convergence result using a contradiction argument.

Before introducing the contradiction argument, we first need to provide the following useful lemma, which is the reverse direction of a generalized Polyak-Lojasiewicz (PL) inequality, and whose proof is deferred to Appendix D.3.

Lemma 5.1. Under Assumptions 1 and 2, we have $\|\nabla f(x)\|^2 \le 3\bigl(3L_0 + 4L_\rho\|\nabla f(x)\|^\rho\bigr)\bigl(f(x) - f^*\bigr)$.

Define the function $\zeta(u) := \frac{u^2}{3(3L_0 + 4L_\rho u^\rho)}$ over $u \ge 0$. It is easy to verify that if $\rho < 2$, $\zeta$ is increasing and its range is $[0, \infty)$. Therefore, $\zeta$ is invertible and $\zeta^{-1}$ is also increasing. Then, for any constant $G > 0$, denoting $F = \zeta(G)$, Lemma 5.1 suggests that if $f(x) - f^* \le F$, we have $\|\nabla f(x)\| \le \zeta^{-1}(f(x) - f^*) \le \zeta^{-1}(F) = G$. In other words, if $\rho < 2$, the gradient is bounded within any sub-level set, even though the sub-level set could be unbounded. Then, let $\tau$ be the first time the sub-optimality gap is strictly greater than $F$, truncated at $T + 1$, or formally,
$$\tau := \min\{t \mid f(x_t) - f^* > F\} \wedge (T + 1). \qquad (3)$$
Then at least when $t < \tau$, we have $f(x_t) - f^* \le F$ and thus $\|\nabla f(x_t)\| \le G$. Based on our discussions above, it is not hard to analyze the updates before time $\tau$, and one can construct a Lyapunov function to obtain an upper bound on $f(x_\tau) - f^*$. On the other hand, if $\tau \le T$, we immediately obtain a lower bound on $f(x_\tau)$, that is $f(x_\tau) - f^* > F$, by the definition of $\tau$ in (3). If the lower bound is greater than the upper bound, it leads to a contradiction, which shows $\tau = T + 1$, i.e., the sub-optimality gap and the gradient norm are always bounded by $F$ and $G$ respectively before the algorithm terminates. We will illustrate the technique in more detail in the simple deterministic setting in Section 5.3, but first, in Section 5.2, we introduce several prerequisite lemmas on the $(\rho, L_0, L_\rho)$ smoothness.
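To make the constants concrete, the sketch below numerically inverts $\zeta$ (which is increasing for $\rho < 2$) by bisection to obtain the smallest gradient bound $G$ with $\zeta(G) \ge \Delta_1$, so that by Lemma 5.1 every point with $f - f^* \le \Delta_1$ has gradient norm at most $G$, and then evaluates $r$ and $L$ from (2). The numerical values plugged in at the end are hypothetical and purely illustrative.

```python
import numpy as np

def zeta(u, L0, Lrho, rho):
    # Reverse-PL quantity from Lemma 5.1: zeta(u) = u^2 / (3 (3 L0 + 4 Lrho u^rho)).
    return u**2 / (3.0 * (3.0 * L0 + 4.0 * Lrho * u**rho))

def gradient_bound(delta1, L0, Lrho, rho, tol=1e-10):
    """Smallest G (up to tol) with zeta(G) >= delta1, i.e. G = zeta^{-1}(delta1).

    For rho < 2, zeta is increasing with range [0, inf), so bisection applies.
    Assumes L0 > 0 and 0 < rho < 2.
    """
    hi = 1.0
    while zeta(hi, L0, Lrho, rho) < delta1:  # find an upper bracket
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if zeta(mid, L0, Lrho, rho) < delta1 else (lo, mid)
    return hi

def effective_constants(G, L0, Lrho, rho):
    # Locality radius r and effective smoothness L along the trajectory, as in (2).
    r = min(1.0 / (5.0 * Lrho * G**(rho - 1.0)),
            1.0 / (5.0 * (L0**(rho - 1.0) * Lrho)**(1.0 / rho)))
    L = 3.0 * L0 + 4.0 * Lrho * G**rho
    return r, L

# Hypothetical problem constants, for illustration only.
rho, L0, Lrho, delta1 = 1.0, 1.0, 0.5, 10.0
G = gradient_bound(delta1, L0, Lrho, rho)
r, L = effective_constants(G, L0, Lrho, rho)
print(f"G = {G:.3f}, F = {zeta(G, L0, Lrho, rho):.3f}, r = {r:.3f}, L = {L:.3f}")
```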
5.2 Local smoothness

In Section 5.1, we informally mentioned that $(\rho, L_0, L_\rho)$ smoothness essentially reduces to the standard smoothness if the gradient is bounded. In this section, we make the statement more precise. First, note that Lemma 3.4 implies the following useful corollary.

Corollary 5.2. Under Assumptions 1 and 2, for any $G > 0$ and two points $x \in \mathrm{dom}(f)$, $y \in \mathbb{R}^d$ such that $\|\nabla f(x)\| \le G$ and $\|y - x\| \le r := \min\bigl\{ \frac{1}{5L_\rho G^{\rho-1}},\ \frac{1}{5(L_0^{\rho-1}L_\rho)^{1/\rho}} \bigr\}$, denoting $L := 3L_0 + 4L_\rho G^\rho$, we have $y \in \mathrm{dom}(f)$ and
$$\|\nabla f(y) - \nabla f(x)\| \le L\|y - x\|, \qquad f(y) \le f(x) + \langle\nabla f(x), y - x\rangle + \frac{L}{2}\|y - x\|^2.$$

The proof of Corollary 5.2 is deferred to Appendix D.4. Although the inequalities in Corollary 5.2 look very similar to the standard global smoothness condition with constant $L$, it is still a local condition as it requires $\|x - y\| \le r$. Fortunately, at least before $\tau$, such a requirement is easy to satisfy for small enough $\eta$, according to the following lemma, whose proof is deferred to Appendix E.5.

Lemma 5.3. Under Assumption 3, if $t < \tau$ and choosing $G \ge \sigma$, we have $\|x_{t+1} - x_t\| \le \eta D$, where $D := 2G/\lambda$.

Then as long as $\eta \le r/D$, we have $\|x_{t+1} - x_t\| \le r$, which satisfies the requirement in Corollary 5.2. Then we can apply the inequalities in it in the same way as the standard smoothness condition. In other words, most classical inequalities derived for standard smooth functions also apply to $(\rho, L_0, L_\rho)$ smooth functions.

5.3 Warm-up: analysis in the deterministic setting

In this section, we consider the simpler deterministic setting where the stochastic gradient $\nabla f(x_t, \xi_t)$ in Algorithm 1 or (18) is replaced with the exact gradient $\nabla f(x_t)$. As discussed in Section 5.1, the key in our contradiction argument is to obtain both upper and lower bounds on $f(x_\tau) - f^*$. In the following derivations, we focus on illustrating the main idea of our analysis technique and ignore minor proof details. In addition, all of them are under Assumptions 1, 2, and 3.

In order to obtain the upper bound, we need the following two lemmas. First, denoting $\epsilon_t := \hat m_t - \nabla f(x_t)$, we can obtain the following informal descent lemma for deterministic Adam.

Lemma 5.4 (Descent lemma, informal). For any $t < \tau$, choosing $G \ge \lambda$ and a small enough $\eta$,
$$f(x_{t+1}) \lesssim f(x_t) - \frac{\eta}{4G}\|\nabla f(x_t)\|^2 + \frac{\eta}{2\lambda}\|\epsilon_t\|^2, \qquad (4)$$
where $\lesssim$ omits less important terms.

Compared with the standard descent lemma for gradient descent, there is an additional term of $\|\epsilon_t\|^2$ in Lemma 5.4. In the next lemma, we bound this term recursively.

Lemma 5.5 (Informal). Choosing $\beta = \Theta(\eta G^{\rho+1/2})$, if $t < \tau$, we have
$$\|\epsilon_{t+1}\|^2 \le (1 - \beta/4)\|\epsilon_t\|^2 + \frac{\lambda\beta}{16G}\|\nabla f(x_t)\|^2. \qquad (5)$$

The proof sketches of the above two lemmas are deferred to Appendix B. Now we combine them to get the upper bound on $f(x_\tau) - f^*$. Define the function $\Phi_t := f(x_t) - f^* + \frac{2\eta}{\lambda\beta}\|\epsilon_t\|^2$. Note that for any $t < \tau$, $(4) + \frac{2\eta}{\lambda\beta}\cdot(5)$ gives
$$\Phi_{t+1} \lesssim \Phi_t - \frac{\eta}{8G}\|\nabla f(x_t)\|^2. \qquad (6)$$
The above inequality shows $\Phi_t$ is non-increasing and thus a Lyapunov function. Therefore, we have $f(x_\tau) - f^* \le \Phi_\tau \le \Phi_1 = \Delta_1$, where in the last equality we use $\Phi_1 = f(x_1) - f^* = \Delta_1$ since $\epsilon_1 = \hat m_1 - \nabla f(x_1) = 0$ in the deterministic setting. As discussed in Section 5.1, if $\tau \le T$, we have $F < f(x_\tau) - f^* \le \Delta_1$. Note that we are able to choose a large enough constant $G$ so that $F = \frac{G^2}{3(3L_0 + 4L_\rho G^\rho)}$ is greater than $\Delta_1$, which leads to a contradiction and shows $\tau = T + 1$. Therefore, (6) holds for all $1 \le t \le T$. Taking a summation over $1 \le t \le T$ and re-arranging terms, we get
$$\frac{1}{T}\sum_{t=1}^T \|\nabla f(x_t)\|^2 \lesssim \frac{8G(\Phi_1 - \Phi_{T+1})}{\eta T} \le \epsilon^2$$
if choosing $T \ge \frac{8G\Delta_1}{\eta\epsilon^2}$, i.e., it shows convergence with a gradient complexity of $O(\epsilon^{-2})$, since both $G$ and $\eta$ are constants independent of $\epsilon$ in the deterministic setting.
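To see the warm-up argument at work numerically, the toy run below applies deterministic Adam (Algorithm 1 with exact gradients) to $f(x) = x^4$, whose Hessian $12x^2 = 12(\|\nabla f(x)\|/4)^{2/3}$ grows with the gradient norm, so $f$ is $(\rho, L_0, L_\rho)$ smooth with $\rho = 2/3$ but not smooth in the standard global sense. The hyper-parameters are hypothetical; the script simply checks that the gradient norm along the trajectory never exceeds its initial value and is driven toward zero. It is an illustration of the boundedness-plus-convergence behavior established in this section, not part of the proof.

```python
import numpy as np

def run_deterministic_adam(x0=2.0, beta=0.1, beta_sq=0.01, eta=0.01, lam=1e-8, T=2000):
    grad = lambda x: 4.0 * x**3          # exact gradient of f(x) = x^4
    x, m, v = x0, 0.0, 0.0
    grad_norms = []
    for t in range(1, T + 1):
        g = grad(x)
        m = (1 - beta) * m + beta * g
        v = (1 - beta_sq) * v + beta_sq * g**2
        m_hat = m / (1 - (1 - beta) ** t)
        v_hat = v / (1 - (1 - beta_sq) ** t)
        x = x - eta * m_hat / (np.sqrt(v_hat) + lam)
        grad_norms.append(abs(grad(x)))
    return np.array(grad_norms)

norms = run_deterministic_adam()
print("max |f'(x_t)| along trajectory:", norms.max())
print("final |f'(x_T)|:", norms[-1])
assert norms.max() <= abs(4.0 * 2.0**3) + 1e-6  # never exceeds the initial gradient norm
```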
5.4 Extension to the stochastic setting

In this part, we briefly introduce how to extend the analysis to the more challenging stochastic setting. It becomes harder to obtain an upper bound on $f(x_\tau) - f^*$ because $\Phi_t$ is no longer non-increasing due to the existence of noise. In addition, $\tau$ defined in (3) is now a random variable. Note that all the derivations, such as Lemmas 5.4 and 5.5, are conditioned on the random event $\{t < \tau\}$. Therefore, one cannot simply take a total expectation of them to show $\mathbb{E}[\Phi_t]$ is non-increasing. Fortunately, $\tau$ is in fact a stopping time with nice properties. If the noise is almost surely bounded as in Assumption 3, by a more careful analysis, we can obtain a high probability upper bound on $f(x_\tau) - f^*$ using concentration inequalities. Then we can still obtain a contradiction and convergence under this high probability event. If the noise has sub-Gaussian norm as in Assumption 4, one can change the definition of $\tau$ to
$$\tau := \min\{t \mid f(x_t) - f^* > F\} \wedge \min\{t \mid \|\nabla f(x_t) - \nabla f(x_t, \xi_t)\| > \sigma\} \wedge (T + 1)$$
for appropriately chosen $F$ and $\sigma$. Then at least when $t < \tau$, the noise is bounded by $\sigma$. Hence we can get the same upper bound on $f(x_\tau) - f^*$ as if Assumption 3 still held. However, when $\tau \le T$, the lower bound $f(x_\tau) - f^* > F$ does not necessarily hold, which requires some more careful analysis. The details of the proofs are involved and we defer them to Appendix E.

6 Variance-reduced Adam

In this section, we propose a variance-reduced version of Adam (VRAdam). This new algorithm is depicted in Algorithm 2. Its main difference from the original Adam is that in the momentum update rule (Line 6), an additional term $(1-\beta)\bigl(\nabla f(x_t, \xi_t) - \nabla f(x_{t-1}, \xi_t)\bigr)$ is added, inspired by the idea of the STORM algorithm [9]. This term corrects the bias of $m_t$ so that it is an unbiased estimate of $\nabla f(x_t)$ in the sense of total expectation, i.e., $\mathbb{E}[m_t] = \mathbb{E}[\nabla f(x_t)]$. We will also show that it reduces the variance and accelerates the convergence. Aside from the adaptive stepsize, one major difference between Algorithm 2 and STORM is that our hyper-parameters $\eta$ and $\beta$ are fixed constants whereas theirs decrease as a function of $t$. Choosing constant hyper-parameters requires a more accurate estimate at the initialization. That is why we use a mega-batch $\mathcal{S}_1$ to evaluate the gradient at the initial point to initialize $m_1$ and $v_1$ (Lines 2-3). In practice, one can also do a full-batch gradient evaluation at initialization. Note that there is no initialization bias for the momentum, so we do not re-scale $m_t$ and only re-scale $v_t$. We also want to point out that although the initial mega-batch gradient evaluation makes the algorithm a bit harder to implement, constant hyper-parameters are usually easier to tune and more common in training deep neural networks. It should not be hard to extend our analysis to time-decreasing $\eta$ and $\beta$, and we leave this as an interesting direction for future work.

Algorithm 2 VARIANCE-REDUCED ADAM (VRADAM)
1: Input: $\beta, \beta_{\rm sq}, \eta, \lambda, T, S_1, x_{\rm init}$
2: Draw a batch of samples $\mathcal{S}_1$ with size $S_1$ and use them to evaluate the gradient $\nabla f(x_{\rm init}, \mathcal{S}_1)$.
3: Initialize $m_1 = \nabla f(x_{\rm init}, \mathcal{S}_1)$, $v_1 = \beta_{\rm sq} m_1^2$, and $x_2 = x_{\rm init} - \frac{\eta\, m_1}{|m_1| + \lambda}$.
4: for $t = 2, \ldots, T$ do
5:   Draw a new sample $\xi_t$ and perform the following updates:
6:   $m_t = (1-\beta)\,m_{t-1} + \beta\,\nabla f(x_t, \xi_t) + (1-\beta)\bigl(\nabla f(x_t, \xi_t) - \nabla f(x_{t-1}, \xi_t)\bigr)$
7:   $v_t = (1-\beta_{\rm sq})\,v_{t-1} + \beta_{\rm sq}\,(\nabla f(x_t, \xi_t))^2$
8:   $\hat v_t = v_t / \bigl(1 - (1-\beta_{\rm sq})^t\bigr)$
9:   $x_{t+1} = x_t - \frac{\eta}{\sqrt{\hat v_t} + \lambda} \odot m_t$
10: end for
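As with Adam, a minimal NumPy sketch of Algorithm 2 may be helpful; `grad_oracle(x, xi)` (returning $\nabla f(x, \xi)$), `sample()` (drawing a fresh $\xi$), and the default hyper-parameter values are assumed interfaces for illustration only.

```python
import numpy as np

def vradam(grad_oracle, sample, x_init, beta=1e-4, beta_sq=0.001,
           eta=1e-2, lam=1e-8, T=1000, S1=100):
    """Minimal sketch of Algorithm 2 (VRAdam)."""
    x_prev = np.asarray(x_init, dtype=float)
    # Lines 2-3: mega-batch gradient at the initial point; m is not bias-corrected.
    init_batch = [sample() for _ in range(S1)]
    m = np.mean([grad_oracle(x_prev, xi) for xi in init_batch], axis=0)
    v = beta_sq * m**2
    x = x_prev - eta * m / (np.abs(m) + lam)
    for t in range(2, T + 1):
        xi = sample()
        g_new, g_old = grad_oracle(x, xi), grad_oracle(x_prev, xi)
        # Line 6: momentum with the STORM-style correction term (1 - beta) * (g_new - g_old)
        m = (1 - beta) * m + beta * g_new + (1 - beta) * (g_new - g_old)
        v = (1 - beta_sq) * v + beta_sq * g_new**2            # Line 7
        v_hat = v / (1 - (1 - beta_sq) ** t)                  # Line 8: bias correction for v only
        x_prev, x = x, x - eta * m / (np.sqrt(v_hat) + lam)   # Line 9
    return x
```

Note that, as remarked above, only $v_t$ is bias-corrected: the mega-batch initialization of $m_1$ removes the initialization bias of the momentum, so $m_t$ is used directly in the update.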
In addition to Assumption 1, we need to impose the following assumptions, which can be viewed as stronger versions of Assumptions 2 and 3, respectively.

Assumption 5. The objective function $f$ and the component function $f(\cdot, \xi)$ for each fixed $\xi$ are $(\rho, L_0, L_\rho)$ smooth with $0 \le \rho < 2$.

Assumption 6. The random variables $\{\xi_t\}_{1\le t\le T}$ are sampled i.i.d. from some distribution $\mathcal{P}$ such that for any $x \in \mathrm{dom}(f)$,
$$\mathbb{E}_{\xi\sim\mathcal{P}}[\nabla f(x, \xi)] = \nabla f(x), \qquad \|\nabla f(x, \xi) - \nabla f(x)\| \le \sigma \ \text{ a.s.}$$

Remark 6.1. Assumption 6 is stronger than Assumption 3. Assumption 3 applies only to the iterates generated by the algorithm, while Assumption 6 is a pointwise assumption over all $x \in \mathrm{dom}(f)$ and further assumes an i.i.d. nature of the random variables $\{\xi_t\}_{1\le t\le T}$. Also note that, similar to Adam, it is straightforward to generalize the assumption to noise with sub-Gaussian norm as in Assumption 4.

6.1 Analysis

In this part, we briefly discuss challenges in the analysis of VRAdam. The detailed analysis is deferred to Appendix F. Note that Corollary 5.2 requires a bounded update $\|x_{t+1} - x_t\| \le r$ at each step. For Adam, this is easy to satisfy for a small enough $\eta$ according to Lemma 5.3. However, for VRAdam, obtaining a good enough almost sure bound on the update is challenging even though the gradient noise is bounded. To bypass this difficulty, we directly impose a bound on $\|\nabla f(x_t) - m_t\|$ by changing the definition of the stopping time $\tau$, similar to how we deal with the sub-Gaussian noise condition for Adam. In particular, we define
$$\tau := \min\{t \mid \|\nabla f(x_t)\| > G\} \wedge \min\{t \mid \|\nabla f(x_t) - m_t\| > G\} \wedge (T + 1).$$
Then by definition, both $\|\nabla f(x_t)\|$ and $\|\nabla f(x_t) - m_t\|$ are bounded by $G$ before time $\tau$, which directly implies a bounded update $\|x_{t+1} - x_t\|$. Of course, the new definition brings new challenges to lower bounding $f(x_\tau) - f^*$, which requires more careful analyses specific to the VRAdam algorithm. Please see Appendix F for the details.

6.2 Convergence guarantees for VRAdam

In this section, we provide our main results for the convergence of VRAdam under Assumptions 1, 5, and 6. We consider the same definitions of the problem-dependent constants $\Delta_1, r, L$ as those in Section 4 to keep the statements of the theorems concise. Let $c$ be a small enough numerical constant and $C$ be a large enough numerical constant. The formal convergence result is shown in the following theorem.

Theorem 6.2. Suppose Assumptions 1, 5, and 6 hold. For any $0 < \delta < 1$, let $G > 0$ be a constant satisfying $G \ge \max\bigl\{ 2\lambda,\ 2\sigma,\ \sqrt{C\Delta_1 L_0/\delta},\ (C\Delta_1 L_\rho/\delta)^{\frac{1}{2-\rho}} \bigr\}$. Choose $0 \le \beta_{\rm sq} \le 1$ and $\beta = a^2\eta^2$, where $a = 40L\sqrt{G}\,\lambda^{-3/2}$. Choose $\eta \le c \min\bigl\{ \frac{r\lambda}{G},\ \frac{\lambda}{L},\ \frac{\lambda^2\delta}{\Delta_1 L^2},\ \lambda^2 \bigr\}\,\epsilon$, $T = \frac{64 G \Delta_1}{\eta\delta\epsilon^2}$, and $S_1 \ge \frac{1}{2\beta^2 T}$. Then with probability at least $1 - \delta$, we have $\|\nabla f(x_t)\| \le G$ for every $1 \le t \le T$, and $\frac{1}{T}\sum_{t=1}^T \|\nabla f(x_t)\|^2 \le \epsilon^2$.

Note that the choice of $G$, the upper bound on the gradients along the trajectory of VRAdam, is very similar to that in Theorem 4.1 for Adam. The only difference is that now it also depends on the failure probability $\delta$. Similar to Theorem 4.1, there is no requirement on $\beta_{\rm sq}$, and we choose a very small $\beta = O(\epsilon^2)$. However, the variance reduction technique allows us to take a larger stepsize $\eta = O(\epsilon)$ (compared with $O(\epsilon^2)$ for Adam) and obtain an accelerated gradient complexity of $O(\epsilon^{-3})$, where we only consider the leading term. We are not sure whether it is optimal, as the $\Omega(\epsilon^{-3})$ lower bound in [3] assumes the weaker bounded variance condition. However, our result significantly improves upon [47], which considers a variance-reduced version of Adam by combining Adam and SVRG [22] and only obtains asymptotic convergence in the non-convex setting. Similar to Adam, our gradient complexity for VRAdam is dimension-free, but its dependence on $\lambda$ is $O(\lambda^{-2})$.
Another limitation is that, the dependence on the failure probability δ is polynomial, worse than the poly-log dependence in Theorem 4.1 for Adam. 7 Conclusion and future works In this paper, we proved the convergence of Adam and its variance-reduced version under less restrictive assumptions compared to those in the existing literature. We considered a generalized nonuniform smoothness condition, according to which the Hessian norm is bounded by a sub-quadratic function of the gradient norm almost everywhere. Instead of assuming the Lipschitzness of the objective function as in existing analyses of Adam, we use a new contradiction argument to prove that gradients are bounded by a constant along the optimization trajectory. There are several interesting future directions that one could pursue following this work. Relaxation of the bounded noise assumption. Our analysis relies on the assumption of bounded noise or noise with sub-Gaussian norm. However, the existing lower bounds in [3] consider the weaker bounded variance assumption. Hence, it is not clear whether the O(ϵ 4) complexity we obtain for Adam is tight in this setting. It will be interesting to see whether one can relax the assumption to the bounded variance setting. One may gain some insights from recent papers such as [16; 46] that analyze Ada Grad under weak noise conditions. An alternative way to show the tightness of the O(ϵ 4) complexity is to prove a lower bound under the bounded noise assumption. Potential applications of our technique. Another interesting future direction is to see if the techniques developed in this work for bounding gradients (including those in the the concurrent work [26]) can be generalized to improve the convergence results for other optimization problems and algorithms. We believe it is possible so long as the function class is well behaved and the algorithm is efficient enough so that f(xτ) f can be well bounded for some appropriately defined stopping time τ. Understanding why Adam is better than SGD. We want to note that our results can not explain why Adam is better than SGD for training transformers, because [26] shows that non-adaptive SGD converges with the same O(ϵ 4) gradient complexity under even weaker conditions. It would be interesting and impactful if one can find a reasonable setting (function class, gradient oracle, etc) under which Adam or other adaptive methods provably outperform SGD. Acknowledgments This work was supported, in part, by the MIT-IBM Watson AI Lab and ONR Grants N00014-20-12394 and N00014-23-1-2299. We also acknowledge support from DOE under grant DE-SC0022199, and NSF through awards DMS-2031883 and DMS-1953181. [1] Kwangjun Ahn, Xiang Cheng, Minhak Song, Chulhee Yun, Ali Jadbabaie, and Suvrit Sra. Linear attention is (maybe) all you need (to understand transformer optimization). ar Xiv preprint ar Xiv:2310.01082, 2023. [2] Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In International conference on machine learning, pages 699 707. PMLR, 2016. [3] Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization. Mathematical Programming, 199 (1-2):165 214, 2023. [4] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. Ar Xiv, abs/2005.14165, 2020. [5] Congliang Chen, Li Shen, Fangyu Zou, and Wei Liu. Towards practical adam: Non-convexity, convergence theory, and mini-batch acceleration. The Journal of Machine Learning Research, 23(1):10411 10457, 2022. [6] Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the convergence of a class of adam-type algorithms for non-convex optimization. ar Xiv preprint ar Xiv:1808.02941, 2018. [7] Ziyi Chen, Yi Zhou, Yingbin Liang, and Zhaosong Lu. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. ar Xiv preprint ar Xiv:2303.02854, 2023. [8] Michael Crawshaw, Mingrui Liu, Francesco Orabona, Wei Zhang, and Zhenxun Zhuang. Robustness to unbounded smoothness of generalized signsgd. Advances in Neural Information Processing Systems, 35:9955 9968, 2022. [9] Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex sgd. Ar Xiv, abs/1905.10018, 2019. [10] Soham De, Anirbit Mukherjee, and Enayat Ullah. Convergence guarantees for rmsprop and adam in non-convex optimization and an empirical comparison to nesterov acceleration. ar Xiv: Learning, 2018. [11] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. Advances in neural information processing systems, 27, 2014. [12] Alexandre D efossez, Léon Bottou, Francis R. Bach, and Nicolas Usunier. A simple convergence proof of adam and adagrad. ar Xiv: Machine Learning, 2020. [13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. Ar Xiv, abs/1810.04805, 2019. [14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. Ar Xiv, abs/2010.11929, 2020. [15] Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal nonconvex optimization via stochastic path-integrated differential estimator. Advances in neural information processing systems, 31, 2018. [16] Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, and Rachel Ward. The power of adaptivity in sgd: Self-tuning step sizes with unbounded gradients and affine variance. In Conference on Learning Theory, pages 313 355. PMLR, 2022. [17] Matthew Faw, Litu Rout, Constantine Caramanis, and Sanjay Shakkottai. Beyond uniform smoothness: A stopped analysis of adaptive sgd. ar Xiv preprint ar Xiv:2302.06570, 2023. [18] Sébastien Gadat and Ioana Gavra. Asymptotic study of stochastic adaptive algorithms in non-convex landscape. The Journal of Machine Learning Research, 23(1):10357 10410, 2022. [19] Zhishuai Guo, Yi Xu, Wotao Yin, Rong Jin, and Tianbao Yang. A novel convergence analysis for algorithms of the adam family. Ar Xiv, abs/2112.03459, 2021. [20] Hideaki Iiduka. Theoretical analysis of adam using hyperparameters close to one without lipschitz smoothness. Numerical Algorithms, pages 1 39, 2023. [21] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 
Image-to-image translation with conditional adversarial networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5967 5976, 2016. [22] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013. [23] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Co RR, abs/1412.6980, 2014. [24] Frederik Kunstner, Jacques Chen, Jonathan Wilder Lavington, and Mark Schmidt. Noise is not the main factor behind the gap between sgd and adam on transformers, but sign descent might be. ar Xiv preprint ar Xiv:2304.13960, 2023. [25] Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Non-convex finite-sum optimization via scsg methods. Advances in Neural Information Processing Systems, 30, 2017. [26] Haochuan Li, Jian Qian, Yi Tian, Alexander Rakhlin, and Ali Jadbabaie. Convex and non-convex optimization under generalized smoothness. ar Xiv preprint ar Xiv:2306.01264, 2023. [27] Zhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richtárik. Page: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In International conference on machine learning, pages 6286 6295. PMLR, 2021. [28] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. Co RR, abs/1509.02971, 2015. [29] Deyi Liu, Lam M Nguyen, and Quoc Tran-Dinh. An optimal hybrid variance-reduced algorithm for stochastic composite nonconvex optimization. ar Xiv preprint ar Xiv:2008.09055, 2020. [30] Zijian Liu, Perry Dong, Srikanth Jagabathula, and Zhengyuan Zhou. Near-optimal highprobability convergence for non-convex stochastic optimization with variance reduction. ar Xiv preprint ar Xiv:2302.06032, 2023. [31] Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. Adaptive gradient methods with dynamic bound of learning rate. Ar Xiv, abs/1902.09843, 2019. [32] Julien Mairal. Optimization with first-order surrogate functions. In International Conference on Machine Learning, pages 783 791. PMLR, 2013. [33] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. Ar Xiv, abs/1602.01783, 2016. [34] Jiang Qian, Yuren Wu, Bojin Zhuang, Shaojun Wang, and Jing Xiao. Understanding gradient clipping in incremental gradient methods. In International Conference on Artificial Intelligence and Statistics, 2021. [35] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Co RR, abs/1511.06434, 2015. [36] Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 314 323, New York, New York, USA, 20 22 Jun 2016. PMLR. URL https://proceedings.mlr.press/v48/reddi16.html. [37] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. Ar Xiv, abs/1904.09237, 2018. [38] Amirhossein Reisizadeh, Haochuan Li, Subhro Das, and Ali Jadbabaie. Variance-reduced clipping for non-convex optimization. ar Xiv preprint ar Xiv:2303.00883, 2023. [39] Nicolas Roux, Mark Schmidt, and Francis Bach. 
A stochastic gradient method with an exponential convergence _rate for finite training sets. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012. URL https://proceedings.neurips.cc/paper_files/paper/ 2012/file/905056c1ac1dad141560467e0a99e1cf-Paper.pdf. [40] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. Ar Xiv, abs/1707.06347, 2017. [41] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14(1), 2013. [42] Naichen Shi, Dawei Li, Mingyi Hong, and Ruoyu Sun. Rmsprop converges with proper hyper-parameter. In International Conference on Learning Representations, 2021. [43] Quoc Tran-Dinh, Nhan H Pham, Dzung T Phan, and Lam M Nguyen. Hybrid stochastic gradient descent algorithms for stochastic nonconvex optimization. ar Xiv preprint ar Xiv:1905.05920, 2019. [44] Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Ar Xiv, abs/1706.03762, 2017. [45] Bohan Wang, Yushun Zhang, Huishuai Zhang, Qi Meng, Zhirui Ma, Tie-Yan Liu, and Wei Chen. Provable adaptivity in adam. Ar Xiv, abs/2208.09900, 2022. [46] Bohan Wang, Huishuai Zhang, Zhiming Ma, and Wei Chen. Convergence of adagrad for non-convex objectives: Simple proofs and relaxed assumptions. In The Thirty Sixth Annual Conference on Learning Theory, pages 161 190. PMLR, 2023. [47] Ruiqi Wang and Diego Klabjan. Divergence results and convergence of a variance reduced version of adam. Ar Xiv, abs/2210.05607, 2022. [48] Bohang Zhang, Jikai Jin, Cong Fang, and Liwei Wang. Improved analysis of clipping algorithms for non-convex optimization. Advances in Neural Information Processing Systems, 33:15511 15521, 2020. [49] J. Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. ar Xiv: Optimization and Control, 2019. [50] Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank Reddi, Sanjiv Kumar, and Suvrit Sra. Why are adaptive methods good for attention models? Advances in Neural Information Processing Systems, 33:15383 15393, 2020. [51] Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, and Zhimin Luo. Adam can converge without any modification on update rules. Ar Xiv, abs/2208.09632, 2022. [52] Shen-Yi Zhao, Yin-Peng Xie, and Wu-Jun Li. On the convergence and improvement of stochastic normalized gradient descent. Science China Information Sciences, 64, 2021. [53] Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. ar Xiv preprint ar Xiv:1808.05671, 2018. [54] Zhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Weinan Zhang, and Yong Yu. Adashift: Decorrelation and convergence of adaptive learning rate methods. Ar Xiv, abs/1810.00143, 2018. [55] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2242 2251, 2017. [56] Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for convergences of adam and rmsprop. 
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11119 11127, 2018. A Additional related work In this section, we discuss additional related work on variants of Adam and variance reduction methods. Variants of Adam. After Reddi et al. [37] pointed out the non-convergence issue with Adam, various variants of Adam that can be proved to converge were proposed [56; 18; 6; 5; 31; 54]. For example, AMSGrad [37] and Ada Fom [6] modify the second order momentum so that it is non-decreasing. Ada Bound [31] explicitly imposes upper and lower bounds on the second order momentum so that the stepsize is also bounded. Ada Shift [54] uses a new estimate of the second order momentum to correct the bias. There are also some works [53; 18; 20] that provide convergence guarantees of these variants. One closely related work to ours is [47], which considers a variancereduced version of Adam by combining Adam and SVRG [22]. However, they assume bounded gradients and can only get an asymptotic convergence in the non-convex setting. Variance reduction methods. The technique of variance reduction was introduced to accelerate convex optimization in the finite-sum setting [39; 22; 41; 32; 11]. Later, many works studied variance-reduced methods in the non-convex setting and obtained improved convergence rates for standard smooth functions. For example, SVRG and SCSG improve the O(ϵ 4) gradient complexity of stochastic gradient descent (SGD) to O(ϵ 10/3) [2; 36; 25]. Many new variance reduction methods [15; 43; 29; 27; 9; 30] were later proposed to further improve the complexity to O(ϵ 3), which is optimal and matches the lower bound in [3]. Recently, [38; 7] obtained the O(ϵ 3) complexity for the more general (L0, L1) smooth functions. Our variance-reduced Adam is motivated by the STORM algorithm proposed by [9], where an additional term is added in the momentum update to correct the bias and reduce the variance. B Proof sketches of informal lemmas in Section 5.3 In this section, we provide the proof sketches of the informal lemmas in Section 5.3. We focus on illustrating the ideas rather than rigorous proof details. Please see Appendix E for more rigorous and detailed proofs of Adam in the stochastic setting. Proof Sketch of Lemma 5.4. By the definition of τ, for all t < τ, we have f(xt) f F which implies f(xt) G. Then from the update rule (18) in Proposition E.1 provided later in Appendix E, it is easy to verify ˆvt G2 since ˆvt is a convex combination of {( f(xs))2}s t. Let ht := η/( ˆvt + λ) be the stepsize vector and denote Ht := diag(ht). We know η 2GI η G + λI Ht η As discussed in Section 5.2, when η is small enough, we can apply Corollary 5.2 to obtain f(xt+1) f(xt) f(xt), xt+1 xt = f(xt) 2 Ht f(xt) Htϵt 2 f(xt) 2 Ht + 1 4G f(xt) 2 + η where in the first (approximate) inequality we ignore the second order term 1 2L xt+1 xt 2 η2 in Corollary 5.2 for small enough η; the equality applies the update rule xt+1 xt = Ht ˆmt = Ht( f(xt) + ϵt); in the second inequality we use 2a Ab a 2 A + b 2 A for any PSD matrix A and vectors a and b; and the last inequality is due to (7). Proof Sketch of Lemma 5.5. By the update rule (18) in Proposition E.1, we have ϵt+1 = (1 αt+1) (ϵt + f(xt) f(xt+1)) . (8) For small enough η, we can apply Corollary 5.2 to get f(xt+1) f(xt) 2 L2 xt+1 xt 2 O(η2G2ρ) ˆmt 2 O(η2G2ρ)( f(xt) 2+ ϵt 2), (9) where the second inequality is due to L = O(Gρ) and xt+1 xt = O(η) ˆmt ; and the last inequality uses ˆmt = f(xt) + ϵt and Young s inequality a + b 2 2 a 2 + 2 b 2. 
Therefore, ϵt+1 2 (1 αt+1)(1 + αt+1/2) ϵt 2 + (1 + 2/αt+1) f(xt+1) f(xt) 2 (1 αt+1/2) ϵt 2 + O(η2G2ρ/αt+1) f(xt) 2 + ϵt 2 (1 β/4) ϵt 2 + λβ 16G f(xt) 2 , where the first inequality uses (8) and Young s inequality a + b 2 (1 + u) a 2 + (1 + 1/u) b 2 for any u > 0; the second inequality uses (1 αt+1)(1 + αt+1/2) 1 αt+1/2 and (9); and in the last inequality we use β αt+1 and choose β = Θ(ηGρ+1/2) which implies O(η2G2ρ/αt+1) λβ 16G β/4. C Probabilistic lemmas In this section, we state several well-known and useful probabilistic lemmas without proof. Lemma C.1 (Azuma-Hoeffding inequality). Let {Zt}t 1 be a martingale with respect to a filtration {Ft}t 0. Assume that |Zt Zt 1| ct almost surely for all t 0. Then for any fixed T, with probability at least 1 δ, t=1 c2 t log(1/δ). Lemma C.2 (Optional Stopping Theorem). Let {Zt}t 1 be a martingale with respect to a filtration {Ft}t 0. Let τ be a bounded stopping time with respect to the same filtration. Then we have E[Zτ] = E[Z0]. D Proofs related to (ρ, L0, Lρ) smoothness In this section, we provide proofs related to (ρ, L0, Lρ) smoothness. In what follows, we first provide a formal proposition in Appendix D.1 showing that univariate rational functions and double exponential functions are (ρ, L0, Lρ) smooth with ρ < 2, as we claimed in Section 3.2.1, and then provide the proofs of Lemma 3.4, Lemma 5.1, and Corollary 5.2 in Appendix D.2, D.3 and D.4 respectively. D.1 Examples Proposition D.1. Any univariate rational function P(x)/Q(x), where P, Q are two polynomials, and any double exponential function a(bx), where a, b > 1, are (ρ, L0, Lρ) smooth with 1 < ρ < 2. However, they are not necessarily (L0, L1) smooth. Proof of Proposition D.1. We prove the proposition in the following four parts: 1. Univariate rational functions are (ρ, L0, Lρ) smooth with 1 < ρ < 2. Let f(x) = P(x)/Q(x) where P and Q are two polynomials. Then the partial fractional decomposition of f(x) is given by f(x) = w(x) + Air (x ai)r + Birx + Cir (x2 + bix + ci)r , where w(x) is a polynomial, Air, Bir, Cir, ai, bi, ci are all real constants satisfying b2 i 4ci < 0 for each 1 i n which implies x2 + bix + ci > 0 for all x R. Assume Aiji = 0 without loss of generality. Then we know f has only finite singular points {ai}1 i m and has continuous first and second order derivatives at all other points. To simplify notation, denote pir(x) := Air (x ai)r , qir(x) := Birx + Cir (x2 + bix + ci)r . Then we have f(x) = w(x) + Pm i=1 Pji r=1 pir(x) + Pn i=1 Pki r=1 qir(x). For any 3/2 < ρ < 2, we know that ρ > r+2 r+1 for any r 1. Then we can show that lim x ai |f (x)|ρ |f (x)| = lim x ai p iji(x) ρ p iji(x) = , (10) where the first equality is because one can easily verify that the first and second order derivatives of piji dominate those of all other terms when x goes to ai, and the second equality is because p iji(x) ρ = O (x ai) ρ(ji+1) , p iji(x) = O (x ai) (ji+2) , and ρ(ji + 1) > ji + 2 (here we assume ji 1 since otherwise there is no need to prove (10) for i). Note that (10) implies that, for any Lρ > 0, there exists δi > 0 such that |f (x)| Lρ |f (x)|ρ , if |x ai| < δi. (11) Similarly, one can show limx |f (x)| ρ |f (x)| = , which implies there exists M > 0 such that |f (x)| Lρ |f (x)|ρ , if |x| > M. (12) Define B := {x R | |x| M and |x ai| δi, i} . We know B is a compact set and therefore the continuous function f is bounded within B, i.e., there exists some constant L0 > 0 such that |f (x)| L0, if x B. 
(13) Combining (11), (12), and (13), we have shown |f (x)| L0 + Lρ |f (x)|ρ , x dom(f), which completes the proof of the first part. 2. Rational functions are not necessarily (L0, L1) smooth. Consider the ration function f(x) = 1/x. Then we know that f (x) = 1/x2 and f (x) = 2/x3. Note that for any 0 < x min{(L0 + 1) 1/3, (L1 + 1) 1}, we have |f (x)| = 1 x |f (x)| > L0 + L1 |f (x)| , which shows f is not (L0, L1) smooth for any L0, L1 0. 3. Double exponential functions are (ρ, L0, Lρ) smooth with 1 < ρ < 2. Let f(x) = a(bx), where a, b > 1, be a double exponential function. Then we know that f (x) = log(a) log(b) bxa(bx), f (x) = log(b)(log(a)bx + 1) f (x). For any ρ > 1, we have lim x + |f (x)|ρ |f (x)| = lim x + |f (x)|ρ 1 log(b)(log(a)bx + 1) = lim y + (log(a) log(b)y)ρ 1 a(ρ 1)y log(b)(log(a)y + 1) = , where the first equality is a direct calculation; the second equality uses change of variable y = bx; and the last equality is because exponential function grows faster than linear function. Then we complete the proof following a similar argument to that in Part 1. 4. Double exponential functions are not necessarily (L0, L1) smooth. Consider the double exponential function f(x) = e(ex). Then we have f (x) = exe(ex), f (x) = (ex + 1) f (x). For any x max {log(L0 + 1), log(L1 + 1)}, we can show that |f (x)| > (L1 + 1)f (x) > L0 + L1 |f (x)| , which shows f is not (L0, L1) smooth for any L0, L1 0. D.2 Proof of Lemma 3.4 Before proving Lemma 3.4, we need the following lemma that generalizes (a special case of) Grönwall s inequality. Lemma D.2. Let u : [a, b] [0, ) and ℓ: [0, ) (0, ) be two continuous functions. Suppose u (t) ℓ(u(t)) for all t (a, b). Denote function ϕ(w) := R 1 ℓ(w) dw. We have for all t [a, b], ϕ(u(t)) ϕ(u(a)) a + t. Proof of Lemma D.2. First, by definition, we know that ϕ is increasing since ϕ = 1 ℓ> 0. Let function v be the solution of the following differential equation v (t) = ℓ(v(t)) t (a, b), v(a) = u(a). (14) It is straightforward to verify that the solution to (14) satisfies ϕ(v(t)) t = ϕ(u(a)) a. Then it suffices to show ϕ(u(t)) ϕ(v(t)) t [a, b]. Note that (ϕ(u(t)) ϕ(v(t))) = ϕ (u(t))u (t) ϕ (v(t))v (t) = u (t) ℓ(u(t)) v (t) where the inequality is because u (t) ℓ(u(t)) by the assumption of this lemma and v (t) = ℓ(v(t)) by (14). Since ϕ(u(a)) ϕ(v(a)) = 0, we know for all t [a, b], ϕ(u(t)) ϕ(v(t)). With Lemma D.2, one can bound the gradient norm within a small enough neighborhood of a given point as in the following lemma. Lemma D.3. Suppose f is (ρ, L0, Lρ) smooth for some ρ, ρ, L0, Lρ 0. For any a > 0 and points x, y dom(f) satisfying y x a L0+Lρ( f(x) +a)ρ , we have f(y) f(x) + a. Proof of Lemma D.3. Denote functions z(t) := (1 t)x+ty and u(t) := f(z(t)) for 0 t 1. Note that for any 0 t s 1, by triangle inequality, u(s) u(t) f(z(s)) f(z(t)) . We know that u(t) = f(z(t)) is differentiable since f is second order differentiable2. Then we have u (t) = lim s t u(s) u(t) s t lim s t f(z(s)) f(z(t)) s t = lim s t f(z(s)) f(z(t)) 2f(z(t)) y x (L0 + Lρu(t)ρ) y x . Let ϕ(w) := R w 0 1 (L0+Lρvρ) y x dv. By Lemma D.2, we know that ϕ ( f(y) ) = ϕ(u(1)) ϕ(u(0)) + 1 = ϕ ( f(x) ) + 1. Denote ψ(w) := R w 0 1 (L0+Lρvρ)dv = ϕ(w) y x . We have ψ ( f(y) ) ψ ( f(x) ) + y x ψ ( f(x) ) + a L0 + Lρ( f(x) + a)ρ 1 (L0 + Lρvρ)dv + Z f(x) +a 1 (L0 + Lρvρ)dv =ψ( f(x) + a) Since ψ is increasing, we have f(y) f(x) + a. 2Here we assume u(t) > 0 for 0 < t < 1. Otherwise, we can define tm = sup{0 < t < 1 | u(t) = 0} and consider the interval [tm, 1] instead. 
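As a quick numerical companion to the examples in Proposition D.1, the check below verifies on a grid that $f(x) = 1/x$ satisfies $|f''| = 2|f'|^{3/2}$ exactly (so it is $(\rho, L_0, L_\rho) = (3/2, 0, 2)$ smooth), and that the double exponential $f(x) = e^{e^x}$ satisfies $|f''| \le L_0 + L_\rho|f'|^\rho$ with $\rho = 1.5$ and the illustrative (not optimized) constants $L_0 = 1$, $L_\rho = 2$ on the chosen grid.

```python
import numpy as np

# f(x) = 1/x on x > 0: |f''| = 2/x^3 = 2 * |f'|^{3/2}, i.e. (rho, L0, Lrho) = (3/2, 0, 2) smooth.
x = np.linspace(0.01, 10.0, 2000)
fp, fpp = -1.0 / x**2, 2.0 / x**3
assert np.allclose(np.abs(fpp), 2.0 * np.abs(fp) ** 1.5)

# f(x) = exp(exp(x)): f' = e^x exp(e^x), f'' = (e^x + 1) f'.
# Check |f''| <= L0 + Lrho |f'|^rho on a grid with rho = 1.5 and illustrative constants.
x = np.linspace(-5.0, 3.0, 2000)
fp = np.exp(x) * np.exp(np.exp(x))
fpp = (np.exp(x) + 1.0) * fp
rho, L0, Lrho = 1.5, 1.0, 2.0
assert np.all(np.abs(fpp) <= L0 + Lrho * np.abs(fp) ** rho)
print("both sub-quadratic smoothness checks passed")
```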
With Lemma D.3, we are ready to prove Lemma 3.4.

Proof of Lemma 3.4. Denote z(t) := (1 − t)x + ty for some y ∈ R^d satisfying ‖y − x‖ ≤ a/(L0 + Lρ(‖∇f(x)‖ + a)^ρ). We first show y ∈ dom(f) by contradiction. Suppose y ∉ dom(f), and define t_b := inf{0 ≤ t ≤ 1 | z(t) ∉ X} and z_b := z(t_b). Then z_b is a boundary point of X. Since f is a closed function with an open domain, we have

lim_{t→t_b} f(z(t)) = ∞.  (15)

On the other hand, by the definition of t_b, we have z(t) ∈ X for every 0 ≤ t < t_b. Then, by Lemma D.3, for all 0 ≤ t < t_b, ‖∇f(z(t))‖ ≤ ‖∇f(x)‖ + a. Therefore, for all 0 ≤ t < t_b,

f(z(t)) = f(x) + ∫_0^t ⟨∇f(z(s)), y − x⟩ ds ≤ f(x) + (‖∇f(x)‖ + a)‖y − x‖ < ∞,

which contradicts (15). Therefore we have shown y ∈ dom(f). We then have

‖∇f(y) − ∇f(x)‖ = ‖∫_0^1 ∇²f(z(t))(y − x) dt‖ ≤ ∫_0^1 (L0 + Lρ‖∇f(z(t))‖^ρ) dt · ‖y − x‖ ≤ (L0 + Lρ(‖∇f(x)‖ + a)^ρ)‖y − x‖,

where the last inequality is due to Lemma D.3.

D.3 Proof of Lemma 5.1

Proof of Lemma 5.1. Denote G := ‖∇f(x)‖ and L := 3L0 + 4LρG^ρ, and let y := x − (1/(2L))∇f(x). Then

‖y − x‖ = G/(2L) = G/(6L0 + 8LρG^ρ) ≤ min{1/(5LρG^{ρ−1}), 1/(5(L0^{ρ−1}Lρ)^{1/ρ})},

where the inequality can be easily verified by considering the two cases G ≥ (L0/Lρ)^{1/ρ} and G ≤ (L0/Lρ)^{1/ρ}. Then, based on Corollary 5.2, we have y ∈ dom(f) and

f* ≤ f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖² = f(x) − G²/(2L) + G²/(8L) = f(x) − 3G²/(8L),

which completes the proof.

D.4 Proof of Corollary 5.2

Proof of Corollary 5.2. First, Lemma 3.4 states that, for any a > 0,

‖y − x‖ ≤ a/(L0 + Lρ(‖∇f(x)‖ + a)^ρ)  ⟹  ‖∇f(y) − ∇f(x)‖ ≤ (L0 + Lρ(‖∇f(x)‖ + a)^ρ)‖y − x‖.

If ‖∇f(x)‖ ≤ G, we choose a = max{G, (L0/Lρ)^{1/ρ}}. Then it is straightforward to verify that

a/(L0 + Lρ(‖∇f(x)‖ + a)^ρ) ≥ min{1/(5LρG^{ρ−1}), 1/(5(L0^{ρ−1}Lρ)^{1/ρ})} =: r  and  L0 + Lρ(‖∇f(x)‖ + a)^ρ ≤ 3L0 + 4LρG^ρ =: L.

Therefore we have shown that, for any x, y satisfying ‖y − x‖ ≤ r,

‖∇f(y) − ∇f(x)‖ ≤ L‖y − x‖.  (16)

Next, let z(t) := (1 − t)x + ty for 0 ≤ t ≤ 1. We know that

f(y) − f(x) = ∫_0^1 ⟨∇f(z(t)), y − x⟩ dt
= ⟨∇f(x), y − x⟩ + ∫_0^1 ⟨∇f(z(t)) − ∇f(x), y − x⟩ dt
≤ ⟨∇f(x), y − x⟩ + ∫_0^1 L‖z(t) − x‖·‖y − x‖ dt
= ⟨∇f(x), y − x⟩ + L‖y − x‖² ∫_0^1 t dt
= ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖²,

where the inequality is due to (16).

E Convergence analysis of Adam

In this section, we provide a detailed convergence analysis of Adam. We focus on proving Theorem 4.1 under the bounded noise assumption (Assumption 3) in most parts of this section, except Appendix E.6, where we show how to generalize the results to noise with sub-Gaussian norm (Assumption 4) and provide the proof of Theorem 4.2. For completeness, we repeat some important technical definitions here. First, we define

ϵ_t := m̂_t − ∇f(x_t)  (17)

as the deviation of the re-scaled momentum from the actual gradient. Given a large enough constant G defined in Theorem 4.1, and denoting F := G²/(3(3L0 + 4LρG^ρ)), we formally define the stopping time τ as

τ := min{t | f(x_t) − f* > F} ∧ (T + 1),

i.e., τ is the first time the sub-optimality gap is strictly greater than F, truncated at T + 1 to make sure it is bounded so that Lemma C.2 can be applied. Based on Lemma 5.1 and the discussion below it, we know that if t < τ, we have both f(x_t) − f* ≤ F and ‖∇f(x_t)‖ ≤ G. It is clear that τ is a stopping time with respect to {ξ_t}_{t≥1} because the event {τ ≤ t} is a function of {ξ_s}_{s≤t}.

Now suppose, for the sake of contradiction, that τ ≤ T; then f(x_τ) − f* > F by the definition of τ, i.e., (8G/η)(f(x_τ) − f*) > 8GF/η. Based on Lemma E.5, we have I_1 ≤ I_2, which leads to a contradiction. Therefore, we must have τ = T + 1 conditioned on E. Then, Lemma E.8 also implies that under E,

Σ_{t=1}^{T} ‖∇f(x_t)‖² ≤ I_1 ≤ ϵ²T,

where the last inequality is due to Lemma E.5.
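Algorithm 1 is not reproduced in this excerpt, so the following sketch only assumes a standard Adam recursion with bias-corrected first and second moments and a stepsize of the form h_t = η/(√v̂_t + λ), which is consistent with the bounds η/(G + σ + λ) ≤ h_t ≤ η/λ used in the analysis; the toy quadratic objective, the hyper-parameter values, and the placeholder threshold F_threshold are illustrative assumptions, not quantities from the paper. It simply monitors the two quantities that drive the argument above: the deviation ϵ_t = m̂_t − ∇f(x_t) from (17) and the sub-optimality gap f(x_t) − f* that defines the stopping time τ.

import numpy as np

# Minimal Adam sketch (assumed recursion, not the paper's Algorithm 1) on f(x) = 0.5||x||^2,
# tracking eps_t = m_hat_t - grad f(x_t) and the gap f(x_t) - f* defining the stopping time tau.
rng = np.random.default_rng(1)
d, T = 10, 2000
eta, beta1, beta2, lam, sigma = 1e-2, 0.9, 0.999, 1e-8, 0.1
f = lambda x: 0.5 * np.dot(x, x)             # f* = 0 for this toy objective
grad = lambda x: x
F_threshold = 25.0                            # illustrative stand-in for F = G^2/(3(3L0 + 4Lrho G^rho))

x = rng.normal(size=d)
m = np.zeros(d)
v = np.zeros(d)
tau = T + 1
for t in range(1, T + 1):
    g = grad(x) + sigma * rng.normal(size=d)           # stochastic gradient
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)                        # bias-corrected (re-scaled) momentum
    v_hat = v / (1 - beta2 ** t)
    eps = m_hat - grad(x)                               # deviation eps_t as in (17)
    if f(x) - 0.0 > F_threshold and tau > T:            # first time the sub-optimality gap exceeds F
        tau = t
    x = x - eta * m_hat / (np.sqrt(v_hat) + lam)        # assumed step with h_t = eta/(sqrt(v_hat)+lam)
print(f"tau = {tau}, final gap = {f(x):.3e}, final ||eps|| = {np.linalg.norm(eps):.3e}")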
E.4 Proof of Lemma E.7

In order to prove Lemma E.7, we need the following several lemmas.

Lemma E.9. Denote γ_{t−1} := (1 − α_t)(ϵ_{t−1} + ∇f(x_{t−1}) − ∇f(x_t)). If G ≥ 2σ and η ≤ min{r/D, λ^{3/2}β/(L√(32G))}, then for every 2 ≤ t ≤ τ,

‖ϵ_t‖² ≤ (1 − β/2)‖ϵ_{t−1}‖² + (λβ/(16G))‖∇f(x_{t−1})‖² + α_t²σ² + 2α_t⟨γ_{t−1}, ∇f(x_t, ξ_t) − ∇f(x_t)⟩.

Proof of Lemma E.9. According to the update rule (18), we have

ϵ_t = (1 − α_t)(ϵ_{t−1} + ∇f(x_{t−1}) − ∇f(x_t)) + α_t(∇f(x_t, ξ_t) − ∇f(x_t)) = γ_{t−1} + α_t(∇f(x_t, ξ_t) − ∇f(x_t)).  (23)

Since we choose η ≤ r/D, by Lemma 5.3, we have ‖x_t − x_{t−1}‖ ≤ r if t ≤ τ. Therefore, by Corollary 5.2, for any 2 ≤ t ≤ τ,

‖∇f(x_{t−1}) − ∇f(x_t)‖ ≤ L‖x_t − x_{t−1}‖ ≤ (ηL/λ)(‖∇f(x_{t−1})‖ + ‖ϵ_{t−1}‖),  (24)

and hence

‖γ_{t−1}‖² = ‖(1 − α_t)ϵ_{t−1} + (1 − α_t)(∇f(x_{t−1}) − ∇f(x_t))‖²
≤ (1 − α_t)²(1 + α_t)‖ϵ_{t−1}‖² + (1 − α_t)²(1 + 1/α_t)‖∇f(x_{t−1}) − ∇f(x_t)‖²
≤ (1 − α_t)‖ϵ_{t−1}‖² + (1/α_t)‖∇f(x_{t−1}) − ∇f(x_t)‖²
≤ (1 − α_t)‖ϵ_{t−1}‖² + (2η²L²/(α_t λ²))(‖∇f(x_{t−1})‖² + ‖ϵ_{t−1}‖²)
≤ (1 − β/2)‖ϵ_{t−1}‖² + (λβ/(16G))‖∇f(x_{t−1})‖²,

where the first inequality uses Young's inequality ‖a + b‖² ≤ (1 + u)‖a‖² + (1 + 1/u)‖b‖² for any u > 0; the second inequality is due to (1 − α_t)²(1 + α_t) = (1 − α_t)(1 − α_t²) ≤ 1 − α_t and (1 − α_t)²(1 + 1/α_t) = (1 − α_t)²(1 + α_t)/α_t ≤ (1 − α_t)/α_t ≤ 1/α_t; the third inequality uses (24) and Young's inequality; and in the last inequality we use α_t ≥ β and choose η ≤ λ^{3/2}β/(L√(32G)), which implies 2η²L²/(λ²β) ≤ λβ/(16G) ≤ β/2. Then, by (23), we have

‖ϵ_t‖² = ‖γ_{t−1}‖² + 2α_t⟨γ_{t−1}, ∇f(x_t, ξ_t) − ∇f(x_t)⟩ + α_t²‖∇f(x_t, ξ_t) − ∇f(x_t)‖²
≤ (1 − β/2)‖ϵ_{t−1}‖² + (λβ/(16G))‖∇f(x_{t−1})‖² + α_t²σ² + 2α_t⟨γ_{t−1}, ∇f(x_t, ξ_t) − ∇f(x_t)⟩.

Lemma E.10. Denote γ_{t−1} := (1 − α_t)(ϵ_{t−1} + ∇f(x_{t−1}) − ∇f(x_t)). If G ≥ 2σ and η ≤ min{r/D, σβ/(DL)}, then with probability at least 1 − δ,

Σ_{t=2}^{τ} α_t⟨γ_{t−1}, ∇f(x_t, ξ_t) − ∇f(x_t)⟩ ≤ 5σ²√((1 + β²T) log(1/δ)).

Proof of Lemma E.10. First note that ⟨γ_{t−1}, ∇f(x_t, ξ_t) − ∇f(x_t)⟩ = ⟨γ_{t−1}1_{τ≥t}, ∇f(x_t, ξ_t) − ∇f(x_t)⟩ for t ≤ τ. Since τ is a stopping time, we know that 1_{τ≥t} is a function of {ξ_s}_{s≤t−1}.

E.6 Proof of Theorem 4.2

For the setting where the noise has sub-Gaussian norm (Assumption 4), define σ := R√(2 log(4T/δ)) and the stopping times

τ_1 := min{t | f(x_t) − f* > F} ∧ (T + 1),  τ_2 := min{t | ‖∇f(x_t) − ∇f(x_t, ξ_t)‖ > σ} ∧ (T + 1),  τ := min{τ_1, τ_2}.

Then it is straightforward to verify that τ_1, τ_2, τ are all stopping times. Since we want to show that P(τ ≤ T) is small, noting that {τ ≤ T} = {τ = τ_1 ≤ T} ∪ {τ = τ_2 ≤ T}, it suffices to bound both P(τ = τ_1 ≤ T) and P(τ = τ_2 ≤ T). First, we know that

P(τ = τ_2 ≤ T) ≤ P(τ_2 ≤ T)
≤ P(∃ 1 ≤ t ≤ T : ‖∇f(x_t) − ∇f(x_t, ξ_t)‖ > σ)
≤ Σ_{1≤t≤T} P(‖∇f(x_t) − ∇f(x_t, ξ_t)‖ > σ)
= Σ_{1≤t≤T} E[P_{t−1}(‖∇f(x_t) − ∇f(x_t, ξ_t)‖ > σ)]
≤ Σ_{1≤t≤T} E[2e^{−σ²/(2R²)}]
≤ δ/2,

where the fourth inequality uses Assumption 4, and the last inequality uses σ = R√(2 log(4T/δ)).

Next, if τ = τ_1 ≤ T, then by definition f(x_τ) − f* > F, or equivalently, (8G/η)(f(x_τ) − f*) > 8GF/η. On the other hand, since for any t < τ, under the new definition of τ, we still have f(x_t) − f* ≤ F, ‖∇f(x_t)‖ ≤ G, and ‖∇f(x_t) − ∇f(x_t, ξ_t)‖ ≤ σ, we know that Lemma E.8 still holds because all of its requirements are still satisfied, i.e., there exists some event E with P(E) ≤ δ/2 such that under its complement E^c,

Σ_{t=1}^{τ−1} ‖∇f(x_t)‖² + (8G/η)(f(x_τ) − f*) ≤ 8G/β + ηβT + 20σ²η√((1/β² + T)ι).

By Lemma E.5, we know I_1 ≤ I_2, which suggests that E^c ∩ {τ = τ_1 ≤ T} = ∅, i.e., {τ = τ_1 ≤ T} ⊆ E. Then we can show that

P(E ∪ {τ ≤ T}) ≤ P(E) + P(τ = τ_2 ≤ T) ≤ δ.

Therefore,

P(E^c ∩ {τ = T + 1}) ≥ 1 − P(E ∪ {τ ≤ T}) ≥ 1 − δ,

and under the event E^c ∩ {τ = T + 1}, we have τ = T + 1 and

(1/T) Σ_{t=1}^{T} ‖∇f(x_t)‖² ≤ I_1/T ≤ ϵ²,

where the last inequality is due to Lemma E.5.

F Convergence analysis of VRAdam

In this section, we provide a detailed convergence analysis of VRAdam and prove Theorem 6.2. To do so, we first provide some technical definitions⁴. Denote

ϵ_t := m_t − ∇f(x_t)

as the deviation of the momentum from the actual gradient. From the update rule in Algorithm 2, we can write

ϵ_t = (1 − β)ϵ_{t−1} + W_t,  (25)

where we define W_t := ∇f(x_t, ξ_t) − ∇f(x_t) − (1 − β)(∇f(x_{t−1}, ξ_t) − ∇f(x_{t−1})) (a numerical check of this decomposition is sketched after the stopping-time definitions below).

⁴Note that the same symbol may have different meanings for Adam and VRAdam.

Let G be the constant defined in Theorem 6.2 and denote F := G²/(3(3L0 + 4LρG^ρ)). We define the following stopping times, as discussed in Section 6.1:

τ_1 := min{t | f(x_t) − f* > F} ∧ (T + 1),  τ_2 := min{t | ‖ϵ_t‖ > G} ∧ (T + 1),  τ := min{τ_1, τ_2}.  (26)

It is straightforward to verify that τ_1, τ_2, τ are all stopping times. Then, if t < τ, we have f(x_t) − f* ≤ F, ‖∇f(x_t)‖ ≤ G, and ‖ϵ_t‖ ≤ G.
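Algorithm 2 is likewise not reproduced here, so the following sketch assumes the STORM-style momentum update m_t = ∇f(x_t, ξ_t) + (1 − β)(m_{t−1} − ∇f(x_{t−1}, ξ_t)), which is consistent with the definition of W_t above, and numerically verifies the decomposition ϵ_t = (1 − β)ϵ_{t−1} + W_t of (25) on a toy stochastic quadratic; the objective f(x, ξ) = ½‖x − ξ‖² and the plain (non-adaptive) step are illustrative assumptions, not the paper's setting.

import numpy as np

# STORM-style momentum sketch and a check of the decomposition (25).
rng = np.random.default_rng(2)
d, T, beta, eta, sigma = 5, 200, 0.1, 1e-2, 0.5
grad_f = lambda x: x                                    # E_xi[grad f(x, xi)] = x for this toy model
grad_f_xi = lambda x, xi: x - xi                        # component gradient

x_prev = rng.normal(size=d)
xi = sigma * rng.normal(size=d)
m = grad_f_xi(x_prev, xi)                               # m_1 is a plain stochastic gradient
x = x_prev - eta * m                                    # placeholder step; the adaptive part is omitted
for t in range(2, T + 1):
    xi = sigma * rng.normal(size=d)                     # fresh sample xi_t, used at BOTH x_t and x_{t-1}
    eps_prev = m - grad_f(x_prev)
    m = grad_f_xi(x, xi) + (1 - beta) * (m - grad_f_xi(x_prev, xi))   # STORM-style update
    eps = m - grad_f(x)
    W = grad_f_xi(x, xi) - grad_f(x) - (1 - beta) * (grad_f_xi(x_prev, xi) - grad_f(x_prev))
    assert np.allclose(eps, (1 - beta) * eps_prev + W)   # decomposition (25)
    x_prev, x = x, x - eta * m                           # placeholder step
print("decomposition (25) verified on all iterations; final ||eps|| =", np.linalg.norm(eps))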
Then we can also bound the update as ‖x_{t+1} − x_t‖ ≤ ηD with D := 2G/λ whenever t < τ (see Lemma F.3 for the details). Finally, we use the same definitions of r and L as for Adam. Specifically,

r := min{1/(5LρG^{ρ−1}), 1/(5(L0^{ρ−1}Lρ)^{1/ρ})},  L := 3L0 + 4LρG^ρ.  (27)

F.1 Useful lemmas

We first list several useful lemmas in this section without proofs; the proofs are deferred to Appendix F.3. To start with, we provide a lemma on the local smoothness of each component function f(·, ξ) when the gradient of the objective function f is bounded.

Lemma F.1. For any constant G ≥ σ and any two points x ∈ dom(f), y ∈ R^d such that ‖∇f(x)‖ ≤ G and ‖y − x‖ ≤ r/2, we have y ∈ dom(f) and

‖∇f(y) − ∇f(x)‖ ≤ L‖y − x‖,  ‖∇f(y, ξ) − ∇f(x, ξ)‖ ≤ 4L‖y − x‖ for every ξ,  f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖²,

where r and L are defined in (27).

With the new definition of the stopping time τ in (26), all the quantities in Algorithm 2 are well bounded before τ. In particular, the following lemma holds.

Lemma F.2. If t < τ, we have ‖∇f(x_t)‖ ≤ G, ‖∇f(x_t, ξ_t)‖ ≤ G + σ, ‖m_t‖ ≤ 2G, v̂_t ≤ (G + σ)², and η/(G + σ + λ) ≤ h_t ≤ η/λ.

Next, we provide the following lemma, which bounds the update at each step before τ.

Lemma F.3. If t < τ, then ‖x_{t+1} − x_t‖ ≤ ηD, where D = 2G/λ.

The following lemma bounds W_t when t ≤ τ.

Lemma F.4. If t ≤ τ, G ≥ 2σ, and η ≤ r/(2D), then

‖W_t‖ ≤ βσ + (5ηL/λ)(‖∇f(x_{t−1})‖ + ‖ϵ_{t−1}‖).

Finally, we present some inequalities regarding the parameter choices, which will simplify the calculations later.

Lemma F.5. Under the parameter choices in Theorem 6.2, the inequalities among η, β, λ, σ, G, S_1, ∆_1, F, and T that are required by the lemmas in this section hold; in particular, G ≥ max{2σ, 2λ} and S_1 ≥ 1/(2β²T).

F.2 Proof of Theorem 6.2

Before proving the theorem, we need several important lemmas. First, note that the descent lemma still holds for VRAdam.

Lemma F.6. If t < τ, choosing G ≥ σ + λ and η ≤ min{r/(2D), λ/(6L)}, we have

f(x_{t+1}) ≤ f(x_t) − (η/(4G))‖∇f(x_t)‖² + (η/λ)‖ϵ_t‖².

Proof of Lemma F.6. The proof is essentially the same as that of Lemma E.6.

Lemma F.7. Choose G ≥ max{2σ, 2λ}, S_1 ≥ 1/(2β²T), and η ≤ min{r/(2D), λ^{3/2}β/(40L√G)}. Then

E[Σ_{t=2}^{τ} ((β/2)‖ϵ_{t−1}‖² − (λβ/(16G))‖∇f(x_{t−1})‖²)] ≤ 4σ²β²T − E[‖ϵ_τ‖²].

Proof of Lemma F.7. By Lemma F.4, we have

‖W_t‖² ≤ 2σ²β² + (100η²L²/λ²)(‖∇f(x_{t−1})‖² + ‖ϵ_{t−1}‖²) ≤ 2σ²β² + (λβ²/(16G))(‖∇f(x_{t−1})‖² + ‖ϵ_{t−1}‖²),

where in the second inequality we choose η ≤ λ^{3/2}β/(40L√G). Therefore, noting that λβ/(16G) ≤ β/2, by (25) we have

‖ϵ_t‖² = (1 − β)²‖ϵ_{t−1}‖² + ‖W_t‖² + 2(1 − β)⟨ϵ_{t−1}, W_t⟩
≤ (1 − β/2)‖ϵ_{t−1}‖² + (λβ/(16G))‖∇f(x_{t−1})‖² + 2σ²β² + 2(1 − β)⟨ϵ_{t−1}, W_t⟩.

Taking a summation over 2 ≤ t ≤ τ and re-arranging the terms, we get

Σ_{t=2}^{τ} ((β/2)‖ϵ_{t−1}‖² − (λβ/(16G))‖∇f(x_{t−1})‖²) ≤ ‖ϵ_1‖² − ‖ϵ_τ‖² + 2σ²β²(τ − 1) + 2(1 − β) Σ_{t=2}^{τ} ⟨ϵ_{t−1}, W_t⟩.

Taking expectations on both sides, and noting that E[Σ_{t=2}^{τ}⟨ϵ_{t−1}, W_t⟩] = 0 by the Optional Stopping Theorem (Lemma C.2), we have

E[Σ_{t=2}^{τ} ((β/2)‖ϵ_{t−1}‖² − (λβ/(16G))‖∇f(x_{t−1})‖²)] ≤ 2σ²β²T + E[‖ϵ_1‖²] − E[‖ϵ_τ‖²] ≤ 4σ²β²T − E[‖ϵ_τ‖²],

where in the second inequality we choose S_1 ≥ 1/(2β²T), which implies E[‖ϵ_1‖²] ≤ σ²/S_1 ≤ 2σ²β²T.

Lemma F.8. Under the parameter choices in Theorem 6.2, we have

E[Σ_{t=1}^{τ−1} ‖∇f(x_t)‖²] ≤ 16G∆_1/η,  E[f(x_τ) − f*] ≤ 2∆_1,  E[‖ϵ_τ‖²] ≤ λβ∆_1/η.

Proof of Lemma F.8. First note that, according to Lemma F.5, it is straightforward to verify that the parameter choices in Theorem 6.2 satisfy the requirements of Lemma F.6 and Lemma F.7. Then, by Lemma F.6, if t < τ,

f(x_{t+1}) ≤ f(x_t) − (η/(4G))‖∇f(x_t)‖² + (η/λ)‖ϵ_t‖².

Taking a summation over 1 ≤ t < τ, re-arranging terms, multiplying both sides by 8G/η, and taking an expectation, we get

E[Σ_{t=1}^{τ−1} (2‖∇f(x_t)‖² − (8G/λ)‖ϵ_t‖²)] ≤ (8G/η) E[f(x_1) − f(x_τ)] ≤ (8G/η)(∆_1 − E[f(x_τ) − f*]).  (28)

By Lemma F.7, we have

E[Σ_{t=1}^{τ−1} ((8G/λ)‖ϵ_t‖² − ‖∇f(x_t)‖²)] ≤ (16G/(λβ))(4σ²β²T − E[‖ϵ_τ‖²]) ≤ 8G∆_1/η − (16G/(λβ)) E[‖ϵ_τ‖²],  (29)

where the last inequality is due to Lemma F.5. Then (28) + (29) gives

E[Σ_{t=1}^{τ−1} ‖∇f(x_t)‖²] + (8G/η) E[f(x_τ) − f*] + (16G/(λβ)) E[‖ϵ_τ‖²] ≤ 16G∆_1/η,

which completes the proof.
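The proof of Lemma F.7 above applies the Optional Stopping Theorem (Lemma C.2) to the martingale formed by the partial sums of ⟨ϵ_{t−1}, W_t⟩, stopped at the bounded stopping time τ. The following self-contained simulation, a sketch with synthetic data, illustrates the statement of Lemma C.2 for a ±1 random walk and a hitting time truncated at T, mirroring the ∧(T + 1) truncation used for τ.

import numpy as np

# Monte Carlo illustration of Lemma C.2 (Optional Stopping Theorem): for a martingale Z_t
# (a +/-1 random walk started at Z_0 = 0) and a BOUNDED stopping time tau (first hitting
# time of +3, truncated at T), we expect E[Z_tau] = E[Z_0] = 0 up to Monte Carlo error.
rng = np.random.default_rng(3)
T, n_runs = 100, 100_000
steps = 2 * rng.integers(0, 2, size=(n_runs, T)) - 1
paths = np.cumsum(steps, axis=1)                                  # Z_1, ..., Z_T for each run
hit = paths >= 3
tau = np.where(hit.any(axis=1), hit.argmax(axis=1), T - 1)        # index of the stopping time
Z_tau = paths[np.arange(n_runs), tau]
print("E[Z_tau] ~", Z_tau.mean(), "(Lemma C.2 predicts E[Z_tau] = E[Z_0] = 0)")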
With all the above lemmas, we are ready to prove the theorem.

Proof of Theorem 6.2. First note that, according to Lemma F.5, it is straightforward to verify that the parameter choices in Theorem 6.2 satisfy the requirements of all the lemmas for VRAdam. Next, if τ = τ_1 ≤ T, we know f(x_τ) − f* > F by the definition of τ. Therefore,

P(τ = τ_1 ≤ T) ≤ P(f(x_τ) − f* > F) ≤ E[f(x_τ) − f*]/F ≤ 2∆_1/F ≤ δ/4,

where the second inequality uses Markov's inequality, the third inequality is by Lemma F.8, and the last inequality is due to Lemma F.5. Similarly, if τ_2 = τ ≤ T, we know ‖ϵ_τ‖ > G. We have

P(τ_2 = τ ≤ T) ≤ P(‖ϵ_τ‖ > G) = P(‖ϵ_τ‖² > G²) ≤ E[‖ϵ_τ‖²]/G² ≤ λβ∆_1/(ηG²) ≤ δ/4,

where the second inequality uses Markov's inequality, the third inequality is by Lemma F.8, and the last inequality is due to Lemma F.5. Therefore,

P(τ ≤ T) ≤ P(τ_1 = τ ≤ T) + P(τ_2 = τ ≤ T) ≤ δ/2.

Also, note that by Lemma F.8,

16G∆_1/η ≥ E[Σ_{t=1}^{τ−1} ‖∇f(x_t)‖²] ≥ P(τ = T + 1) · E[Σ_{t=1}^{T} ‖∇f(x_t)‖² | τ = T + 1] ≥ (1/2) E[Σ_{t=1}^{T} ‖∇f(x_t)‖² | τ = T + 1],

where the last inequality is due to P(τ = T + 1) = 1 − P(τ ≤ T) ≥ 1 − δ/2 ≥ 1/2. Then we can get

E[(1/T) Σ_{t=1}^{T} ‖∇f(x_t)‖² | τ = T + 1] ≤ 32G∆_1/(ηT).

Let ℱ := {(1/T) Σ_{t=1}^{T} ‖∇f(x_t)‖² > ϵ²} be the event of not converging to stationary points. By Markov's inequality, we have

P(ℱ | τ = T + 1) ≤ 32G∆_1/(ηTϵ²) ≤ δ/2.

Therefore,

P(ℱ ∪ {τ ≤ T}) ≤ P(τ ≤ T) + P(ℱ | τ = T + 1) ≤ δ,

i.e., with probability at least 1 − δ, we have both τ = T + 1 and (1/T) Σ_{t=1}^{T} ‖∇f(x_t)‖² ≤ ϵ².

F.3 Proofs of lemmas in Appendix F.1

Proof of Lemma F.1. This lemma is a direct corollary of Corollary 5.2. Note that, by Assumption 6, we have ‖∇f(x, ξ)‖ ≤ G + σ ≤ 2G. Hence, when computing the locality size and smoothness constant for the component function f(·, ξ), we need to replace the constant G in Corollary 5.2 with 2G, which is why we obtain a smaller locality size of r/2 and a larger smoothness constant of 4L.

Proof of Lemma F.2. The bound on ‖m_t‖ follows from the definition of τ in (26). All other quantities for VRAdam are defined in the same way as those for Adam (Algorithm 1), so they have the same upper bounds as in Lemma E.2.

Proof of Lemma F.3. We have

‖x_{t+1} − x_t‖ ≤ η‖m_t‖/λ ≤ 2ηG/λ = ηD,

where the first inequality uses the update rule in Algorithm 2 and h_t ≤ η/λ from Lemma F.2, and the second inequality is again due to Lemma F.2.

Proof of Lemma F.4. By the definition of W_t, it is easy to verify that

W_t = β(∇f(x_t, ξ_t) − ∇f(x_t)) + (1 − β)δ_t,  where δ_t = ∇f(x_t, ξ_t) − ∇f(x_{t−1}, ξ_t) − ∇f(x_t) + ∇f(x_{t−1}).

Then we can bound

‖δ_t‖ ≤ ‖∇f(x_t, ξ_t) − ∇f(x_{t−1}, ξ_t)‖ + ‖∇f(x_t) − ∇f(x_{t−1})‖ ≤ 5L‖x_t − x_{t−1}‖ ≤ (5ηL/λ)(‖∇f(x_{t−1})‖ + ‖ϵ_{t−1}‖),

where the second inequality uses Lemma F.1, and the last inequality is due to ‖x_t − x_{t−1}‖ ≤ η‖m_{t−1}‖/λ ≤ (η/λ)(‖∇f(x_{t−1})‖ + ‖ϵ_{t−1}‖). Then we have

‖W_t‖ ≤ βσ + (5ηL/λ)(‖∇f(x_{t−1})‖ + ‖ϵ_{t−1}‖).

Proof of Lemma F.5. These inequalities can be obtained by direct calculations.
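Finally, the role of W_t in the proof of Lemma F.7 rests on the fact that W_t has zero mean conditioned on the past (the fresh sample ξ_t is used at both x_t and x_{t−1}), so that ⟨ϵ_{t−1}, W_t⟩ is a martingale difference and Lemma C.2 applies. The sketch below checks this conditional mean numerically for the same illustrative stochastic quadratic as above, an assumed toy model rather than the paper's setting.

import numpy as np

# Monte Carlo check that E[W_t | x_t, x_{t-1}] = 0 for
# W_t = grad f(x_t, xi) - grad f(x_t) - (1 - beta)(grad f(x_{t-1}, xi) - grad f(x_{t-1})),
# using the toy model f(x, xi) = 0.5||x - xi||^2 with xi ~ N(0, sigma^2 I), so that
# grad f(x) = x and grad f(x, xi) = x - xi (illustrative assumptions).
rng = np.random.default_rng(4)
d, beta, sigma, n_samples = 5, 0.1, 0.5, 200_000
grad_f = lambda x: x
grad_f_xi = lambda x, xi: x - xi

x_prev = rng.normal(size=d)
x = x_prev + 0.01 * rng.normal(size=d)        # two fixed (conditioned-on) consecutive iterates
xi = sigma * rng.normal(size=(n_samples, d))  # fresh samples xi_t
W = (grad_f_xi(x, xi) - grad_f(x)) - (1 - beta) * (grad_f_xi(x_prev, xi) - grad_f(x_prev))
print("||E[W_t | x_t, x_{t-1}]|| ~", np.linalg.norm(W.mean(axis=0)), "(should be close to 0)")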