# On the Convergence of Adam and Beyond

Published as a conference paper at ICLR 2018

Sashank J. Reddi, Satyen Kale & Sanjiv Kumar
Google New York, New York, NY 10011, USA
{sashank,satyenkale,sanjivk}@google.com

ABSTRACT

Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSPROP, ADAM, ADADELTA, and NADAM, are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of the ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with long-term memory of past gradients, and we propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance.

1 INTRODUCTION

Stochastic gradient descent (SGD) is the dominant method to train deep networks today. This method iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the loss evaluated on a minibatch. In particular, variants of SGD that scale coordinates of the gradient by square roots of some form of averaging of the squared coordinates in the past gradients have been particularly successful, because they automatically adjust the learning rate on a per-feature basis. The first popular algorithm in this line of research is ADAGRAD (Duchi et al., 2011; McMahan & Streeter, 2010), which can achieve significantly better performance compared to vanilla SGD when the gradients are sparse, or in general small.

Although ADAGRAD works well for sparse settings, its performance has been observed to deteriorate in settings where the loss functions are nonconvex and the gradients are dense, due to a rapid decay of the learning rate in these settings, since it uses all the past gradients in the update. This problem is especially exacerbated in high-dimensional problems arising in deep learning. To tackle this issue, several variants of ADAGRAD, such as RMSPROP (Tieleman & Hinton, 2012), ADAM (Kingma & Ba, 2015), ADADELTA (Zeiler, 2012), and NADAM (Dozat, 2016), have been proposed which mitigate the rapid decay of the learning rate using exponential moving averages of squared past gradients, essentially limiting the reliance of the update to only the past few gradients. While these algorithms have been successfully employed in several practical applications, they have also been observed to not converge in some other settings. It has been typically observed that in these settings some minibatches provide large gradients, but only quite rarely, and while these large gradients are quite informative, their influence dies out rather quickly due to the exponential averaging, thus leading to poor convergence. In this paper, we analyze this situation in detail.
We rigorously prove that the intuition conveyed in the above paragraph is indeed correct: limiting the reliance of the update on essentially only the past few gradients can indeed cause significant convergence issues. In particular, we make the following key contributions:

- We elucidate how the exponential moving average in the RMSPROP and ADAM algorithms can cause non-convergence by providing an example of a simple convex optimization problem where RMSPROP and ADAM provably do not converge to an optimal solution. Our analysis easily extends to other algorithms using exponential moving averages such as ADADELTA and NADAM as well, but we omit this for the sake of clarity. In fact, the analysis is flexible enough to extend to other algorithms that employ averaging of squared gradients over essentially a fixed-size window in the immediate past (for exponential moving averages, the influence of gradients beyond a fixed window size becomes negligibly small). We omit the general analysis in this paper for the sake of clarity.
- The above result indicates that in order to have guaranteed convergence, the optimization algorithm must have long-term memory of past gradients. Specifically, we point out a problem with the proof of convergence of the ADAM algorithm given by Kingma & Ba (2015). To resolve this issue, we propose new variants of ADAM which rely on long-term memory of past gradients, but can be implemented in the same time and space requirements as the original ADAM algorithm. We provide a convergence analysis for the new variants in the convex setting, based on the analysis of Kingma & Ba (2015), and show a data-dependent regret bound similar to the one in ADAGRAD.
- We provide a preliminary empirical study of one of the variants we propose and show that it either performs similarly, or better, on some commonly used problems in machine learning.

2 PRELIMINARIES

Notation. We use $\mathcal{S}^d_+$ to denote the set of all positive definite $d \times d$ matrices. With slight abuse of notation, for a vector $a \in \mathbb{R}^d$ and a positive definite matrix $M \in \mathbb{R}^{d \times d}$, we use $a/M$ to denote $M^{-1}a$, $\|M_i\|_2$ to denote the $\ell_2$-norm of the $i$th row of $M$, and $\sqrt{M}$ to represent $M^{1/2}$. Furthermore, for any vectors $a, b \in \mathbb{R}^d$, we use $\sqrt{a}$ for element-wise square root, $a^2$ for element-wise square, $a/b$ to denote element-wise division and $\max(a, b)$ to denote element-wise maximum. For any vector $\theta_i \in \mathbb{R}^d$, $\theta_{i,j}$ denotes its $j$th coordinate where $j \in [d]$. The projection operation $\Pi_{\mathcal{F}, A}(y)$ for $A \in \mathcal{S}^d_+$ is defined as $\arg\min_{x \in \mathcal{F}} \|A^{1/2}(x - y)\|$ for $y \in \mathbb{R}^d$. Finally, we say $\mathcal{F}$ has bounded diameter $D_\infty$ if $\|x - y\|_\infty \le D_\infty$ for all $x, y \in \mathcal{F}$.

Optimization setup. A flexible framework to analyze iterative optimization methods is the online optimization problem in the full information feedback setting. In this online setup, at each time step $t$, the optimization algorithm picks a point (i.e. the parameters of the model to be learned) $x_t \in \mathcal{F}$, where $\mathcal{F} \subseteq \mathbb{R}^d$ is the feasible set of points. A loss function $f_t$ (to be interpreted as the loss of the model with the chosen parameters in the next minibatch) is then revealed, and the algorithm incurs loss $f_t(x_t)$. The algorithm's regret at the end of $T$ rounds of this process is given by $R_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{F}} \sum_{t=1}^{T} f_t(x)$. Throughout this paper, we assume that the feasible set $\mathcal{F}$ has bounded diameter and $\|\nabla f_t(x)\|_\infty$ is bounded for all $t \in [T]$ and $x \in \mathcal{F}$. Our aim is to devise an algorithm that ensures $R_T = o(T)$, which implies that, on average, the model's performance converges to the optimal one.
The simplest algorithm for this setting is the standard online gradient descent algorithm (Zinkevich, 2003), which moves the point $x_t$ in the opposite direction of the gradient $g_t = \nabla f_t(x_t)$ while maintaining feasibility by projecting onto the set $\mathcal{F}$ via the update rule $x_{t+1} = \Pi_{\mathcal{F}}(x_t - \alpha_t g_t)$, where $\Pi_{\mathcal{F}}(y)$ denotes the projection of $y \in \mathbb{R}^d$ onto the set $\mathcal{F}$, i.e., $\Pi_{\mathcal{F}}(y) = \arg\min_{x \in \mathcal{F}} \|x - y\|$, and $\alpha_t$ is typically set to $\alpha/\sqrt{t}$ for some constant $\alpha$. The aforementioned online learning problem is closely related to the stochastic optimization problem $\min_{x \in \mathcal{F}} \mathbb{E}_z[f(x, z)]$, popularly referred to as empirical risk minimization (ERM), where $z$ is a training example drawn from the training sample over which a model with parameters $x$ is to be learned, and $f(x, z)$ is the loss of the model with parameters $x$ on the sample $z$. In particular, an online optimization algorithm with vanishing average regret yields a stochastic optimization algorithm for the ERM problem (Cesa-Bianchi et al., 2004). Thus, we use online gradient descent and stochastic gradient descent (SGD) synonymously.

Generic adaptive methods setup. We now provide a framework of adaptive methods that gives us insights into the differences between different adaptive methods and is useful for understanding the flaws in a few popular adaptive methods. Algorithm 1 provides a generic adaptive framework that encapsulates many popular adaptive methods. Note the algorithm is still abstract because the averaging functions $\phi_t$ and $\psi_t$ have not been specified.

Algorithm 1 Generic Adaptive Method Setup
Input: $x_1 \in \mathcal{F}$, step size $\{\alpha_t > 0\}_{t=1}^{T}$, sequence of functions $\{\phi_t, \psi_t\}_{t=1}^{T}$
for $t = 1$ to $T$ do
  $g_t = \nabla f_t(x_t)$
  $m_t = \phi_t(g_1, \dots, g_t)$ and $V_t = \psi_t(g_1, \dots, g_t)$
  $\hat{x}_{t+1} = x_t - \alpha_t m_t / \sqrt{V_t}$
  $x_{t+1} = \Pi_{\mathcal{F}, \sqrt{V_t}}(\hat{x}_{t+1})$
end for

Here $\phi_t : \mathcal{F}^t \to \mathbb{R}^d$ and $\psi_t : \mathcal{F}^t \to \mathcal{S}^d_+$. For ease of exposition, we refer to $\alpha_t$ as the step size and $\alpha_t V_t^{-1/2}$ as the learning rate of the algorithm and, furthermore, restrict ourselves to diagonal variants of adaptive methods encapsulated by Algorithm 1 where $V_t = \mathrm{diag}(v_t)$. We first observe that the standard stochastic gradient algorithm falls in this framework by using:

$$\phi_t(g_1, \dots, g_t) = g_t \quad \text{and} \quad \psi_t(g_1, \dots, g_t) = \mathbb{I}, \qquad \text{(SGD)}$$

and $\alpha_t = \alpha/\sqrt{t}$ for all $t \in [T]$. While the decreasing step size is required for convergence, such an aggressive decay of the learning rate typically translates into poor empirical performance. The key idea of adaptive methods is to choose averaging functions appropriately so as to entail good convergence. For instance, the first adaptive method ADAGRAD (Duchi et al., 2011), which propelled the research on adaptive methods, uses the following averaging functions:

$$\phi_t(g_1, \dots, g_t) = g_t \quad \text{and} \quad \psi_t(g_1, \dots, g_t) = \frac{\mathrm{diag}\big(\sum_{i=1}^{t} g_i^2\big)}{t}, \qquad \text{(ADAGRAD)}$$

and step size $\alpha_t = \alpha/\sqrt{t}$ for all $t \in [T]$. In contrast to a learning rate of $\alpha/\sqrt{t}$ in SGD, such a setting effectively implies a modest learning-rate decay of $\alpha/\sqrt{\sum_i g_{i,j}^2}$ for $j \in [d]$. When the gradients are sparse, this can potentially lead to huge gains in terms of convergence (see Duchi et al. (2011)). These gains have also been observed in practice even in some non-sparse settings.

Adaptive methods based on Exponential Moving Averages. Exponential moving average variants of ADAGRAD are popular in the deep learning community. RMSPROP, ADAM, NADAM, and ADADELTA are some prominent algorithms that fall in this category. The key difference is to use an exponential moving average as the function $\psi_t$ instead of the simple average function used in ADAGRAD.
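To make the framework concrete, here is a minimal NumPy sketch of Algorithm 1 instantiated with the SGD and ADAGRAD averaging functions above (the ADAM choice of $\phi_t, \psi_t$ is given next). The function names, the box feasible set $[-1, 1]^d$, and the small constant added to the denominator to avoid division by zero are our own illustrative choices, not part of the paper's idealized updates.

```python
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^d; for diagonal V_t this
    # coincides with the weighted projection Pi_{F, sqrt(V_t)} of Algorithm 1.
    return np.clip(x, lo, hi)

def generic_adaptive_method(grad, x1, alpha, phi, psi, T):
    """Algorithm 1 sketch: x_{t+1} = Pi_F(x_t - alpha_t * m_t / sqrt(v_t))."""
    x = np.array(x1, dtype=float)
    grads = []
    for t in range(1, T + 1):
        grads.append(grad(x, t))
        m_t = phi(grads)                # first-moment estimate m_t
        v_t = psi(grads)                # diagonal of V_t
        alpha_t = alpha / np.sqrt(t)    # step size alpha_t = alpha / sqrt(t)
        x = project(x - alpha_t * m_t / (np.sqrt(v_t) + 1e-12))
    return x

# SGD: phi_t(g_1..g_t) = g_t, psi_t(g_1..g_t) = identity.
sgd_phi = lambda grads: grads[-1]
sgd_psi = lambda grads: np.ones_like(grads[-1])

# ADAGRAD: phi_t(g_1..g_t) = g_t, psi_t(g_1..g_t) = diag(sum_i g_i^2) / t, so the
# effective per-coordinate learning rate is alpha / sqrt(sum_i g_{i,j}^2).
adagrad_phi = lambda grads: grads[-1]
adagrad_psi = lambda grads: np.sum(np.square(grads), axis=0) / len(grads)

# Usage: a one-dimensional quadratic f_t(x) = 0.5 * (x - 1)^2 with gradient x - 1.
grad = lambda x, t: x - 1.0
print(generic_adaptive_method(grad, [0.0], 0.1, adagrad_phi, adagrad_psi, T=5000))
```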
ADAM¹, a particularly popular variant, uses the following averaging functions:

$$\phi_t(g_1, \dots, g_t) = (1 - \beta_1) \sum_{i=1}^{t} \beta_1^{t-i} g_i \quad \text{and} \quad \psi_t(g_1, \dots, g_t) = (1 - \beta_2)\, \mathrm{diag}\Big(\sum_{i=1}^{t} \beta_2^{t-i} g_i^2\Big), \qquad \text{(ADAM)}$$

for some $\beta_1, \beta_2 \in [0, 1)$. This update can alternatively be stated by the following simple recursion:

$$m_{t,i} = \beta_1 m_{t-1,i} + (1 - \beta_1) g_{t,i} \quad \text{and} \quad v_{t,i} = \beta_2 v_{t-1,i} + (1 - \beta_2) g_{t,i}^2, \qquad (1)$$

with $m_{0,i} = 0$ and $v_{0,i} = 0$ for all $i \in [d]$ and $t \in [T]$. A value of $\beta_1 = 0.9$ and $\beta_2 = 0.999$ is typically recommended in practice. We note the additional projection operation in Algorithm 1 in comparison to ADAM. When $\mathcal{F} = \mathbb{R}^d$, the projection operation is an identity operation and this corresponds to the algorithm in (Kingma & Ba, 2015). For theoretical analysis, one requires $\alpha_t = 1/\sqrt{t}$ for $t \in [T]$, although a more aggressive choice of constant step size seems to work well in practice. RMSPROP, which appeared in an earlier unpublished work (Tieleman & Hinton, 2012), is essentially a variant of ADAM with $\beta_1 = 0$. In practice, especially in deep learning applications, the momentum term arising due to non-zero $\beta_1$ appears to significantly boost the performance. We will mainly focus on the ADAM algorithm due to this generality, but our arguments also apply to RMSPROP and other algorithms such as ADADELTA and NADAM.

¹Here, for simplicity, we remove the debiasing step used in the version of ADAM in the original paper by Kingma & Ba (2015). However, our arguments also apply to the debiased version.

3 THE NON-CONVERGENCE OF ADAM

With the problem setup in the previous section, we discuss a fundamental flaw in the current exponential moving average methods like ADAM. We show that ADAM can fail to converge to an optimal solution even in simple one-dimensional convex settings. These examples of non-convergence contradict the claim of convergence in (Kingma & Ba, 2015), and the main issue lies in the following quantity of interest:

$$\Gamma_{t+1} = \left(\frac{\sqrt{V_{t+1}}}{\alpha_{t+1}} - \frac{\sqrt{V_t}}{\alpha_t}\right). \qquad (2)$$

This quantity essentially measures the change in the inverse of the learning rate of the adaptive method with respect to time. One key observation is that for SGD and ADAGRAD, $\Gamma_t \succeq 0$ for all $t \in [T]$. This simply follows from the update rules of SGD and ADAGRAD in the previous section. In particular, the update rules for these algorithms lead to non-increasing learning rates. However, this is not necessarily the case for exponential moving average variants like ADAM and RMSPROP, i.e., $\Gamma_t$ can potentially be indefinite for $t \in [T]$. We show that this violation of positive definiteness can lead to undesirable convergence behavior for ADAM and RMSPROP. Consider the following simple sequence of linear functions for $\mathcal{F} = [-1, 1]$:

$$f_t(x) = \begin{cases} Cx, & \text{for } t \bmod 3 = 1 \\ -x, & \text{otherwise,} \end{cases}$$

where $C > 2$. For this function sequence, it is easy to see that the point $x = -1$ provides the minimum regret. Suppose $\beta_1 = 0$ and $\beta_2 = 1/(1 + C^2)$. We show that ADAM converges to the highly suboptimal solution $x = +1$ for this setting. Intuitively, the reasoning is as follows. The algorithm obtains the large gradient $C$ once every 3 steps, while for the other 2 steps it observes the gradient $-1$, which moves the algorithm in the wrong direction. The large gradient $C$ is unable to counteract this effect since it is scaled down by a factor of almost $C$ for the given value of $\beta_2$, and hence the algorithm converges to $+1$ rather than $-1$. We formalize this intuition in the result below.

Theorem 1. There is an online convex optimization problem where ADAM has non-zero average regret, i.e., $R_T/T \nrightarrow 0$ as $T \to \infty$.

We relegate all proofs to the appendix.
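As a quick numerical check of this construction (not from the paper; the constants $C = 3$ and $\alpha = 0.5$ are our own illustrative choices satisfying $C > 2$ and $\alpha < \sqrt{1 - \beta_2}$), the following sketch runs the ADAM recursion (1) with $\beta_1 = 0$ and $\beta_2 = 1/(1 + C^2)$ on the sequence above, and also tracks the scalar $\Gamma_t$ from (2): the iterate is driven to $+1$ and $\Gamma_t$ is frequently negative.

```python
import numpy as np

C, beta1, alpha = 3.0, 0.0, 0.5
beta2 = 1.0 / (1.0 + C ** 2)            # the value used in the construction above

x, m, v = 1.0, 0.0, 0.0
prev_inv_lr, min_gamma = None, float("inf")
for t in range(1, 30001):
    g = C if t % 3 == 1 else -1.0       # gradient of f_t on F = [-1, 1]
    m = beta1 * m + (1 - beta1) * g     # recursion (1)
    v = beta2 * v + (1 - beta2) * g ** 2
    alpha_t = alpha / np.sqrt(t)
    x = float(np.clip(x - alpha_t * m / np.sqrt(v), -1.0, 1.0))
    inv_lr = np.sqrt(v) / alpha_t       # sqrt(V_t) / alpha_t
    if prev_inv_lr is not None:
        min_gamma = min(min_gamma, inv_lr - prev_inv_lr)   # Gamma_t from (2)
    prev_inv_lr = inv_lr

print(f"final iterate: {x:+.2f} (the optimum is -1)")
print(f"most negative Gamma_t seen: {min_gamma:.2f}")
```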
A few remarks are in order. One might wonder if adding a small constant in the denominator of the update helps in circumventing this problem, i.e., the update for $\hat{x}_{t+1}$ of ADAM in Algorithm 1 is modified as follows:

$$\hat{x}_{t+1} = x_t - \alpha_t m_t / \sqrt{V_t + \epsilon \mathbb{I}}. \qquad (3)$$

The algorithm in (Kingma & Ba, 2015) uses such an update in practice, although their analysis does not. In practice, selection of the $\epsilon$ parameter appears to be critical for the performance of the algorithm. However, we show that for any constant $\epsilon > 0$, there exists an online optimization setting where, again, ADAM has non-zero average regret asymptotically (see Theorem 6 in Section F of the appendix).

The above examples of non-convergence are catastrophic insofar as ADAM and RMSPROP converge to a point that is worst amongst all points in the set $[-1, 1]$. Note that the above example also holds for constant step size $\alpha_t = \alpha$. Also note that classic SGD and ADAGRAD do not suffer from this problem, and for these algorithms the average regret asymptotically goes to 0. This problem is especially aggravated in high-dimensional settings and when the variance of the gradients with respect to time is large. This example also provides intuition for why a large $\beta_2$ is advisable while using the ADAM algorithm, and indeed in practice using a large $\beta_2$ helps. However, the following result shows that for any constant $\beta_1$ and $\beta_2$ with $\beta_1 < \sqrt{\beta_2}$, we can design an example where ADAM has non-zero average regret asymptotically.

Theorem 2. For any constant $\beta_1, \beta_2 \in [0, 1)$ such that $\beta_1 < \sqrt{\beta_2}$, there is an online convex optimization problem where ADAM has non-zero average regret, i.e., $R_T/T \nrightarrow 0$ as $T \to \infty$.

The above results show that with constant $\beta_1$ and $\beta_2$, momentum or regularization via $\epsilon$ will not help in convergence of the algorithm to the optimal solution. Note that the condition $\beta_1 < \sqrt{\beta_2}$ is benign and is typically satisfied in the parameter settings used in practice. Furthermore, such a condition is assumed in the convergence proof of Kingma & Ba (2015). We can strengthen this result by providing a similar example of non-convergence even in the easier stochastic optimization setting:

Theorem 3. For any constant $\beta_1, \beta_2 \in [0, 1)$ such that $\beta_1 < \sqrt{\beta_2}$, there is a stochastic convex optimization problem for which ADAM does not converge to the optimal solution.

These results have important consequences insofar as one has to use problem-dependent $\epsilon$, $\beta_1$ and $\beta_2$ in order to avoid bad convergence behavior. In high-dimensional problems, this typically amounts to using, unlike the update in Equation (3), a different $\epsilon$, $\beta_1$ and $\beta_2$ for each dimension. However, this defeats the purpose of adaptive methods since it requires tuning a large set of parameters. We would also like to emphasize that while the example of non-convergence is carefully constructed to demonstrate the problems in ADAM, it is not unrealistic to imagine scenarios where such an issue can at the very least slow down convergence.

We end this section with the following important remark. While the results stated above use constant $\beta_1$ and $\beta_2$, the analysis of ADAM in (Kingma & Ba, 2015) actually relies on decreasing $\beta_1$ over time.
It is quite easy to extend our examples to the case where $\beta_1$ is decreased over time, since the critical parameter is $\beta_2$ rather than $\beta_1$, and as long as $\beta_2$ is bounded away from 1, our analysis goes through. Thus, for the sake of clarity, in this paper we only prove non-convergence of ADAM in the setting where $\beta_1$ is held constant.

4 A NEW EXPONENTIAL MOVING AVERAGE VARIANT: AMSGRAD

In this section, we develop a new principled exponential moving average variant and provide its convergence analysis. Our aim is to devise a new strategy with guaranteed convergence while preserving the practical benefits of ADAM and RMSPROP. To understand the design of our algorithms, let us revisit the quantity $\Gamma_t$ in (2). For ADAM and RMSPROP, this quantity can potentially be negative. The proof in the original paper of ADAM erroneously assumes that $\Gamma_t$ is positive semi-definite and is, hence, incorrect (refer to Appendix D for more details). For the first part, we modify these algorithms to satisfy this additional constraint. Later on, we also explore an alternative approach where $\Gamma_t$ can be made positive semi-definite by using values of $\beta_1$ and $\beta_2$ that change with $t$.

AMSGRAD uses a smaller learning rate in comparison to ADAM and yet incorporates the intuition of slowly decaying the effect of past gradients on the learning rate as long as $\Gamma_t$ is positive semi-definite. Algorithm 2 presents the pseudocode for the algorithm.

Algorithm 2 AMSGRAD
Input: $x_1 \in \mathcal{F}$, step size $\{\alpha_t\}_{t=1}^{T}$, $\{\beta_{1t}\}_{t=1}^{T}$, $\beta_2$
Set $m_0 = 0$, $v_0 = 0$ and $\hat{v}_0 = 0$
for $t = 1$ to $T$ do
  $g_t = \nabla f_t(x_t)$
  $m_t = \beta_{1t} m_{t-1} + (1 - \beta_{1t}) g_t$
  $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$
  $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$ and $\hat{V}_t = \mathrm{diag}(\hat{v}_t)$
  $x_{t+1} = \Pi_{\mathcal{F}, \sqrt{\hat{V}_t}}(x_t - \alpha_t m_t / \sqrt{\hat{v}_t})$
end for

The key difference of AMSGRAD with ADAM is that it maintains the maximum of all $v_t$ until the present time step and uses this maximum value for normalizing the running average of the gradient instead of $v_t$ in ADAM. By doing this, AMSGRAD results in a non-increasing step size and avoids the pitfalls of ADAM and RMSPROP, i.e., $\Gamma_t \succeq 0$ for all $t \in [T]$ even with constant $\beta_2$. Also, in Algorithm 2, one typically uses a constant $\beta_{1t}$ in practice (although the proof requires a decreasing schedule for proving convergence of the algorithm).

To gain more intuition for the updates of AMSGRAD, it is instructive to compare its update with ADAM and ADAGRAD. Suppose at a particular time step $t$ and coordinate $i \in [d]$ we have $v_{t-1,i} > g_{t,i}^2 > 0$; then ADAM aggressively increases the learning rate; however, as we have seen in the previous section, this can be detrimental to the overall performance of the algorithm. On the other hand, ADAGRAD slightly decreases the learning rate, which often leads to poor performance in practice since such an accumulation of gradients over a large time period can significantly decrease the learning rate. In contrast, AMSGRAD neither increases nor decreases the learning rate and furthermore, decreases $v_t$, which can potentially lead to a non-decreasing learning rate even if the gradient is large in the future iterations.
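The following is a minimal NumPy sketch of Algorithm 2 (AMSGRAD), assuming a box feasible set (so the weighted projection reduces to coordinate-wise clipping), a constant $\beta_{1t} = \beta_1$, and a small constant in the denominator to avoid division by zero; these simplifications are ours and not part of the algorithm as stated. On the three-periodic example of Section 3 (with the illustrative choices $C = 3$ and $\alpha = 0.5$), the iterate approaches the optimum $x = -1$, in contrast to ADAM.

```python
import numpy as np

def amsgrad(grad, x1, alpha, beta1=0.9, beta2=0.99, T=1000, lo=-1.0, hi=1.0):
    """Sketch of Algorithm 2 (AMSGRAD) with a box feasible set [lo, hi]^d.

    grad(x, t) should return the gradient of f_t at x.
    """
    x = np.array(x1, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    v_hat = np.zeros_like(x)
    for t in range(1, T + 1):
        g = grad(x, t)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g ** 2   # second-moment estimate
        v_hat = np.maximum(v_hat, v)           # key difference from ADAM: keep the max
        alpha_t = alpha / np.sqrt(t)
        # With diagonal V_hat_t and a box constraint, the weighted projection
        # Pi_{F, sqrt(V_hat_t)} reduces to coordinate-wise clipping.
        x = np.clip(x - alpha_t * m / (np.sqrt(v_hat) + 1e-12), lo, hi)
    return x

# Usage: the three-periodic losses from Section 3 with C = 3.
grad = lambda x, t: np.array([3.0 if t % 3 == 1 else -1.0])
print(amsgrad(grad, [1.0], alpha=0.5, T=50000))   # approaches the optimum -1
```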
For the rest of the paper, we use $g_{1:t} = [g_1 \, \dots \, g_t]$ to denote the matrix obtained by concatenating the gradient sequence. We prove the following key result for AMSGRAD.

Theorem 4. Let $\{x_t\}$ and $\{v_t\}$ be the sequences obtained from Algorithm 2, $\alpha_t = \alpha/\sqrt{t}$, $\beta_1 = \beta_{11}$, $\beta_{1t} \le \beta_1$ for all $t \in [T]$ and $\gamma = \beta_1/\sqrt{\beta_2} < 1$. Assume that $\mathcal{F}$ has bounded diameter $D_\infty$ and $\|\nabla f_t(x)\|_\infty \le G_\infty$ for all $t \in [T]$ and $x \in \mathcal{F}$. For $x_t$ generated using AMSGRAD (Algorithm 2), we have the following bound on the regret:

$$R_T \le \frac{D_\infty^2 \sqrt{T}}{\alpha(1-\beta_1)} \sum_{i=1}^{d} \hat{v}_{T,i}^{1/2} + \frac{D_\infty^2}{2(1-\beta_1)} \sum_{t=1}^{T} \sum_{i=1}^{d} \frac{\beta_{1t} \hat{v}_{t,i}^{1/2}}{\alpha_t} + \frac{\alpha \sqrt{1 + \log T}}{(1-\beta_1)^2 (1-\gamma) \sqrt{1-\beta_2}} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$

The following result follows as an immediate corollary of the above result.

Corollary 1. Suppose $\beta_{1t} = \beta_1 \lambda^{t-1}$ in Theorem 4, then we have

$$R_T \le \frac{D_\infty^2 \sqrt{T}}{\alpha(1-\beta_1)} \sum_{i=1}^{d} \hat{v}_{T,i}^{1/2} + \frac{\beta_1 D_\infty^2 G_\infty}{2(1-\beta_1)(1-\lambda)^2} + \frac{\alpha \sqrt{1 + \log T}}{(1-\beta_1)^2 (1-\gamma) \sqrt{1-\beta_2}} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$

The above bound can be considerably better than the $O(\sqrt{dT})$ regret of SGD when $\sum_{i=1}^{d} \hat{v}_{T,i}^{1/2} \ll \sqrt{d}$ and $\sum_{i=1}^{d} \|g_{1:T,i}\|_2 \ll \sqrt{dT}$ (Duchi et al., 2011). Furthermore, in Theorem 4, one can use a much more modest momentum decay of $\beta_{1t} = \beta_1/t$ and still ensure a regret of $O(\sqrt{T})$. We would also like to point out that one could consider taking a simple average of all the previous values of $v_t$ instead of their maximum. The resulting algorithm is very similar to ADAGRAD except for normalization with smoothed gradients rather than actual gradients, and can be shown to have similar convergence as ADAGRAD.

5 EXPERIMENTS

In this section, we present empirical results on both synthetic and real-world datasets. For our experiments, we study the problem of multiclass classification using logistic regression and neural networks, representing convex and nonconvex settings, respectively.

Synthetic Experiments: To demonstrate the convergence issue of ADAM, we first consider the following simple convex setting inspired by our examples of non-convergence:

$$f_t(x) = \begin{cases} 1010x, & \text{for } t \bmod 101 = 1 \\ -10x, & \text{otherwise,} \end{cases}$$

with the constraint set $\mathcal{F} = [-1, 1]$. We first observe that, similar to the examples of non-convergence we have considered, the optimal solution is $x = -1$; thus, for convergence, we expect the algorithms to converge to $x = -1$. For this sequence of functions, we investigate the regret and the value of the iterate $x_t$ for ADAM and AMSGRAD. To enable fair comparison, we set $\beta_1 = 0.9$ and $\beta_2 = 0.99$ for the ADAM and AMSGRAD algorithms, which are the parameter settings typically used for ADAM in practice.

Figure 1: Performance comparison of ADAM and AMSGRAD on a synthetic example, a simple one-dimensional convex problem inspired by our examples of non-convergence. The first two plots (left and center) are for the online setting and the last one (right) is for the stochastic setting.

Figure 1 shows the average regret ($R_t/t$) and the value of the iterate ($x_t$) for this problem. We first note that the average regret of ADAM does not converge to 0 with increasing $t$. Furthermore, its iterates $x_t$ converge to $x = +1$, which unfortunately has the largest regret amongst all points in the domain. On the other hand, the average regret of AMSGRAD converges to 0 and its iterate converges to the optimal solution. Figure 1 also shows the stochastic optimization setting:

$$f_t(x) = \begin{cases} 1010x, & \text{with probability } 0.01 \\ -10x, & \text{otherwise.} \end{cases}$$

Similar to the aforementioned online setting, the optimal solution for this problem is $x = -1$. Again, we see that the iterate $x_t$ of ADAM converges to the highly suboptimal solution $x = +1$.

Figure 2: Performance comparison of ADAM and AMSGRAD for logistic regression, feedforward neural network and CIFARNET. The top row shows the performance of ADAM and AMSGRAD on logistic regression (left and center) and a 1-hidden-layer feedforward neural network (right) on MNIST. In the bottom row, the two plots compare the training and test loss of ADAM and AMSGRAD with respect to the iterations for CIFARNET.

Logistic Regression: To investigate the performance of the algorithms on convex problems, we compare AMSGRAD with ADAM on a logistic regression problem. We use the MNIST dataset for this experiment; the classification is of a 784-dimensional image vector into one of 10 class labels. The step size parameter $\alpha_t$ is set to $\alpha/\sqrt{t}$ for both ADAM and AMSGRAD in our experiments, consistent with the theory.
We use a minibatch version of these algorithms with the minibatch size set to 128. We set $\beta_1 = 0.9$ and $\beta_2$ is chosen from the set $\{0.99, 0.999\}$, but they are fixed throughout the experiment. The parameters $\alpha$ and $\beta_2$ are chosen by grid search. We report the train and test loss with respect to iterations in Figure 2. We can see that AMSGRAD performs better than ADAM with respect to both train and test loss. We also observed that AMSGRAD is relatively more robust to parameter changes in comparison to ADAM.

Neural Networks: For our first experiment, we trained a simple neural network with one fully connected hidden layer for the multiclass classification problem on MNIST. Similar to the previous experiment, we use $\beta_1 = 0.9$ and $\beta_2$ is chosen from $\{0.99, 0.999\}$. We use a fully connected hidden layer of 100 rectified linear units (ReLUs) for this experiment. Furthermore, we use a constant $\alpha_t = \alpha$ throughout all our experiments on neural networks. Such a parameter setting choice for ADAM is consistent with the ones typically used in the deep learning community for training neural networks. A grid search is used to determine the parameters that provide the best performance for the algorithm.

Finally, we consider the multiclass classification problem on the standard CIFAR-10 dataset, which consists of 60,000 labeled examples of 32 × 32 images. We use CIFARNET, a convolutional neural network (CNN) with several layers of convolution, pooling and non-linear units, for training a multiclass classifier for this problem. In particular, this architecture has 2 convolutional layers with 64 channels and kernel size of 6 × 6, followed by 2 fully connected layers of size 384 and 192. The network uses 2 × 2 max pooling and layer response normalization between the convolutional layers (Krizhevsky et al., 2012). A dropout layer with keep probability of 0.5 is applied in between the fully connected layers (Srivastava et al., 2014). The minibatch size is also set to 128, similar to the previous experiments. The results for this problem are reported in Figure 2. The parameters for ADAM and AMSGRAD are selected in a way similar to the previous experiments. We can see that AMSGRAD performs considerably better than ADAM on train loss and accuracy. Furthermore, this performance gain also translates into good performance on test loss.

5.1 EXTENSION: ADAMNC ALGORITHM

An alternative approach is to use an increasing schedule of $\beta_2$ in ADAM. This approach, unlike Algorithm 2, does not require changing the structure of ADAM but rather uses a non-constant $\beta_1$ and $\beta_2$. The pseudocode for the algorithm, ADAMNC, is provided in the appendix (Algorithm 3). We show that by appropriate selection of $\beta_{1t}$ and $\beta_{2t}$, we can achieve good convergence rates.

Theorem 5. Let $\{x_t\}$ and $\{v_t\}$ be the sequences obtained from Algorithm 3, $\alpha_t = \alpha/\sqrt{t}$, $\beta_1 = \beta_{11}$ and $\beta_{1t} \le \beta_1$ for all $t \in [T]$. Assume that $\mathcal{F}$ has bounded diameter $D_\infty$ and $\|\nabla f_t(x)\|_\infty \le G_\infty$ for all $t \in [T]$ and $x \in \mathcal{F}$. Furthermore, let $\{\beta_{2t}\}$ be such that the following conditions are satisfied:

1. $\frac{1}{\alpha_t}\sqrt{\sum_{j=1}^{t} \prod_{k=1}^{t-j} \beta_{2(t-k+1)} (1-\beta_{2j})\, g_{j,i}^2} \ge \frac{1}{\zeta} \sqrt{\sum_{j=1}^{t} g_{j,i}^2}$ for some $\zeta > 0$ and all $t \in [T]$, $i \in [d]$.
2. $\frac{v_{t,i}^{1/2}}{\alpha_t} \ge \frac{v_{t-1,i}^{1/2}}{\alpha_{t-1}}$ for all $t \in \{2, \dots, T\}$ and $i \in [d]$.

Then for $x_t$ generated using ADAMNC (Algorithm 3), we have the following bound on the regret:

$$R_T \le \frac{D_\infty^2}{2\alpha(1-\beta_1)} \sum_{i=1}^{d} \sqrt{T}\, v_{T,i}^{1/2} + \frac{D_\infty^2}{2(1-\beta_1)} \sum_{t=1}^{T} \sum_{i=1}^{d} \frac{\beta_{1t} v_{t,i}^{1/2}}{\alpha_t} + \frac{2\zeta}{(1-\beta_1)^3} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$

The above result assumes selection of $\{(\alpha_t, \beta_{2t})\}$ such that $\Gamma_t \succeq 0$ for all $t \in \{2, \dots, T\}$. However, one can generalize the result to deal with the case where this constraint is violated as long as the violation is not too large or frequent. The following is an immediate consequence of the above result.

Corollary 2. Suppose $\beta_{1t} = \beta_1 \lambda^{t-1}$ and $\beta_{2t} = 1 - 1/t$ in Theorem 5, then we have

$$R_T \le \frac{D_\infty^2}{2\alpha(1-\beta_1)} \sum_{i=1}^{d} \|g_{1:T,i}\|_2 + \frac{\beta_1 D_\infty^2 G_\infty}{2(1-\beta_1)(1-\lambda)^2} + \frac{2\zeta}{(1-\beta_1)^3} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$

The above corollary follows from the trivial fact that $v_{t,i} = \sum_{j=1}^{t} g_{j,i}^2 / t$ for all $i \in [d]$ when $\beta_{2t} = 1 - 1/t$. This corollary is interesting insofar as such a parameter setting effectively yields a momentum-based variant of ADAGRAD. Similar to ADAGRAD, the regret is data-dependent and can be considerably better than the $O(\sqrt{dT})$ regret of SGD when $\sum_{i=1}^{d} \|g_{1:T,i}\|_2 \ll \sqrt{dT}$ (Duchi et al., 2011). It is easy to generalize this result to similar settings of $\beta_{2t}$. Similar to Corollary 1, one can use a more modest decay of $\beta_{1t} = \beta_1/t$ and still ensure a data-dependent regret of $O(\sqrt{T})$.
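A minimal sketch of the ADAMNC update (Algorithm 3) under the Corollary 2 schedule $\beta_{2t} = 1 - 1/t$ is given below. It is written in the same spirit as the earlier sketches (box feasible set, a constant $\beta_{1t}$ instead of the decaying schedule assumed in Corollary 2, and a small constant to avoid division by zero), so it should be read as an illustration rather than the analyzed algorithm. With this schedule, $v_t$ is exactly the running mean of the squared gradients, so the method behaves like a momentum variant of ADAGRAD.

```python
import numpy as np

def adamnc(grad, x1, alpha, beta1=0.9, T=1000, lo=-1.0, hi=1.0):
    """Sketch of Algorithm 3 (ADAMNC) with the schedule beta_2t = 1 - 1/t."""
    x = np.array(x1, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, T + 1):
        g = grad(x, t)
        beta2_t = 1.0 - 1.0 / t
        m = beta1 * m + (1 - beta1) * g
        v = beta2_t * v + (1 - beta2_t) * g ** 2   # equals the mean of g_1^2..g_t^2
        alpha_t = alpha / np.sqrt(t)
        # Effective per-coordinate learning rate: alpha / sqrt(sum_j g_{j,i}^2),
        # i.e. ADAGRAD-style decay with a momentum term m_t.
        x = np.clip(x - alpha_t * m / (np.sqrt(v) + 1e-12), lo, hi)
    return x
```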
6 DISCUSSION

In this paper, we study exponential moving average variants of ADAGRAD and identify an important flaw in these algorithms which can lead to undesirable convergence behavior. We demonstrate these problems through carefully constructed examples where RMSPROP and ADAM converge to highly suboptimal solutions. In general, any algorithm that relies on an essentially fixed-size window of past gradients to scale the gradient updates will suffer from this problem.

We proposed fixes to this problem by slightly modifying the algorithms, essentially endowing the algorithms with a long-term memory of past gradients. These fixes retain the good practical performance of the original algorithms, and in some cases actually show improvements. The primary goal of this paper is to highlight the problems with popular exponential moving average variants of ADAGRAD from a theoretical perspective. RMSPROP and ADAM have been immensely successful in the development of several state-of-the-art solutions for a wide range of problems. Thus, it is important to understand their behavior in a rigorous manner and be aware of potential pitfalls while using them in practice. We believe this paper is a first step in this direction and suggests good design principles for faster and better stochastic optimization.

REFERENCES

Peter Auer and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. In Proceedings of the 13th Annual Conference on Learning Theory, pp. 107-117, 2000.

Nicolò Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50:2050-2057, 2004.

Timothy Dozat. Incorporating Nesterov Momentum into Adam. In Proceedings of 4th International Conference on Learning Representations, Workshop Track, 2016.

John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference on Learning Representations, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.

H. Brendan McMahan and Matthew J. Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference on Learning Theory, pp. 244-256, 2010.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929 1958, 2014. T. Tieleman and G. Hinton. Rms Prop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012. Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. Co RR, abs/1212.5701, 2012. Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pp. 928 936, 2003. Published as a conference paper at ICLR 2018 A PROOF OF THEOREM 1 Proof. We consider the setting where ft are linear functions and F = [ 1, 1]. In particular, we define the following function sequence: ft(x) = Cx, for t mod 3 = 1 x, otherwise, where C 2. For this function sequence, it is easy to see that the point x = 1 provides the minimum regret. Without loss of generality, assume that the initial point is x1 = 1. This can be assumed without any loss of generality because for any choice of initial point, we can always translate the coordinate system such that the initial point is x1 = 1 in the new coordinate system and then choose the sequence of functions as above in the new coordinate system. Also, since the problem is one-dimensional, we drop indices representing coordinates from all quantities in Algorithm 1. Consider the execution of ADAM algorithm for this sequence of functions with β1 = 0, β2 = 1 1 + C2 and αt = α t where α < 1 β2. Note that since gradients of these functions are bounded, F has bounded L diameter and β2 1/ β2 < 1. Hence, the conditions on the parameters required for ADAM are satisfied (refer to (Kingma & Ba, 2015) for more details). Our main claim is that for iterates {xt} t=1 arising from the updates of ADAM, we have xt > 0 for all t N and furthermore, x3t+1 = 1 for all t N {0}. For proving this, we resort to the principle of mathematical induction. Since x1 = 1, both the aforementioned conditions hold for the base case. Suppose for some t N {0}, we have xi > 0 for all i [3t + 1] and x3t+1 = 1. Our aim is to prove that x3t+2 and x3t+3 are positive and x3t+4 = 1. We first observe that the gradients have the following form: fi(x) = C, for i mod 3 = 1 1, otherwise From (3t + 1)th update of ADAM in Equation (1), we obtain ˆx3t+2 = x3t+1 αC p (3t + 1)(β2v3t + (1 β2)C2) = 1 αC p (3t + 1)(β2v3t + (1 β2)C2) . The equality follows from the induction hypothesis. We observe the following: αC p (3t + 1)(β2v3t + (1 β2)C2) αC p (3t + 1)(1 β2)C2 (3t + 1)(1 β2)) < 1. (4) The second inequality follows from the step size choice that α < 1 β2. Therefore, we have 0 < ˆx3t+2 < 1 and hence x3t+2 = ˆx3t+2 > 0. Furthermore, after the (3t + 2)th and (3t + 3)th updates of ADAM in Equation (1), we have the following: ˆx3t+3 = x3t+2 + α p (3t + 2)(β2v3t+1 + (1 β2)) , ˆx3t+4 = x3t+3 + α p (3t + 3)(β2v3t+2 + (1 β2)) . Since x3t+2 > 0, it is easy to see that x3t+3 > 0. To complete the proof, we need to show that x3t+4 = 1. In order to prove this claim, we show that ˆx3t+4 1, which readily translates to x3t+4 = 1 because x3t+4 = ΠF(ˆx3t+4) and F = [ 1, 1] here ΠF is the simple Euclidean projection (note that in one-dimension, ΠF, Vt = ΠF). We observe the following: ˆx3t+4 = min(ˆx3t+3, 1) + α p (3t + 3)(β2v3t+2 + (1 β2)) . The above equality is due to the fact that ˆx3t+3 > 0 and property of projection operation onto the set F = [ 1, 1]. 
We consider the following two cases: Published as a conference paper at ICLR 2018 1. Suppose ˆx3t+3 1, then it is easy to see from the above equality that ˆx3t+4 > 1. 2. Suppose ˆx3t+3 < 1, then we have the following: ˆx3t+4 = ˆx3t+3 + α p (3t + 3)(β2v3t+2 + (1 β2)) = x3t+2 + α p (3t + 2)(β2v3t+1 + (1 β2)) + α p (3t + 3)(β2v3t+2 + (1 β2)) (3t + 1)(β2v3t + (1 β2)C2) + α p (3t + 2)(β2v3t+1 + (1 β2)) (3t + 3)(β2v3t+2 + (1 β2)) . The third equality is due to the fact that x3t+2 = ˆx3t+2. Thus, to prove ˆx3t+4 > 1, it is enough to the prove: (3t + 1)(β2v3t + (1 β2)C2) | {z } T1 (3t + 2)(β2v3t+1 + (1 β2)) (3t + 3)(β2v3t+2 + (1 β2)) | {z } T2 We have the following bound on term T1 from Equation (4): (3t + 1)(1 β2)) . (5) Furthermore, we lower bound T2 in the following manner: (3t + 2)(β2v3t+1 + (1 β2)) + α p (3t + 3)(β2v3t+2 + (1 β2)) β2C2 + (1 β2) 1 3t + 2 + 1 3t + 3 β2C2 + (1 β2) 2(3t + 1) + 1 p (3t + 1)(β2C2 + (1 β2)) = α p (3t + 1)(1 β2) T1. (6) The first inequality is due to the fact that vt C2 for all t N. The last inequality follows from inequality in Equation (5). The last equality is due to following fact: r β2C2 + (1 β2) for the choice of β2 = 1/(1 + C2). Therefore, we have T2 T1 and hence, ˆx3t+4 1. Therefore, from both the cases, we see that x3t+4 = 1. Therefore, by the principle of mathematical induction it holds for all t N {0}. Thus, we have f3t+1(x3t+1)+f3t+2(x3t+2)+f3t+2(x3t+2) f3t+1( 1) f3t+2( 1) f3t+3( 1) 2C 4 = 2C 4. Therefore, for every 3 steps, ADAM suffers a regret of at least 2C 4. More specifically, RT (2C 4)T/3. Since C 2, this regret can be very large and furthermore, RT /T 0 as T , which completes the proof. Published as a conference paper at ICLR 2018 B PROOF OF THEOREM 2 Proof. The proof generalizes the optimization setting used in Theorem 1. Throughout the proof, we assume β1 < β2, which is also a condition (Kingma & Ba, 2015) assume in their paper. In this proof, we consider the setting where ft are linear functions and F = [ 1, 1]. In particular, we define the following function sequence: ft(x) = Cx, for t mod C = 1 x, otherwise, where C N, C mod 2 = 0 satisfies the following: (1 β1)βC 1 1 C 1 βC 1 1 , β(C 2)/2 2 C2 1, 3(1 β1) 2 1 β2 1 + γ(1 γC 1) + βC/2 1 1 1 β1 < C where γ = β1/ β2 < 1. It is not hard to see that these conditions hold for large constant C that depends on β1 and β2. Since the problem is one-dimensional, we drop indices representing coordinates from all quantities in Algorithm 1. For this function sequence, it is easy to see that the point x = 1 provides the minimum regret since C 2. Furthermore, the gradients have the following form: fi(x) = C, for t mod C = 1 1, otherwise Our first observation is that mk C 0 for all k N {0}. For k = 0, this holds trivially due to our initialization. For the general case, observe the following: mk C+C = (1 β1) (1 β1)β1 (1 β1)βC 2 1 + (1 β1)βC 1 1 C + βC 1 mk C (8) = (1 βC 1 1 ) + (1 β1)βC 1 1 C + βC 1 mk C. (9) If mk C 0, it can be easily shown that mk C+C 0 for our selection of C in Equation (7) by using the principle of mathematical induction. With this observation we continue to the main part of the proof. Let T be such that t + C τ 2t for all t T where τ 3/2. All our analysis focuses on iterations t T . Note that any regret before T is just a constant because T is independent of T and thus, the average regret is negligible as T . Consider an iterate at time step t of the form k C after T . Our claim is that xt+C min{xt + ct, 1} (10) for some ct > 0. 
To see this, consider the updates of ADAM for the particular sequence of functions we considered are: t (1 β1)C + β1mt p (1 β2)C2 + β2vt xt+i 1 α t + i 1 (1 β1) + β1mt+i 1 p (1 β2) + β2vt+i 1 for i {2, , C}. For i {2, , C}, we use the following notation: t (1 β1)C + β1mt p (1 β2)C2 + β2vt , δt+i = α t + i (1 β1) + β1mt+i p (1 β2) + β2vt+i for i {1, , C 1}. Note that if δt+j 0 for some j {1, , C 1} then δt+l 0 for all l {j, , C 1}. This follows from the fact that the gradient is negative for all time steps i {2, , C}. Using Lemma 6 for {xt+1, , xt+C} and {δt, , δt+C 1}, we have the following: Published as a conference paper at ICLR 2018 Let i = C/2. In order to prove our claim in Equation (10), we need to prove the following: i=t δi > 0. To this end, we observe the following: i=1 α t + i (1 β1) + β1mt+i p (1 β2) + β2vt+i i=2 α t + i 1 (1 β1) + (1 β1) h Pi 2 j=1 βj 1( 1) i + (1 β1)βi 1 1 C + βi 1mt p (1 β2) + β2vt+i 1 (1 β1) + (1 β1) h Pi 2 j=1 βj 1 i (1 β1)βi 1 1 C p (1 β2) + β2vt+i 1 (1 β1) + (1 β1) h Pi 2 j=1 βj 1 i (1 β2) + β2vt+i 1 t (1 β1)βi 1 1 C p (1 β2) + β2vt+i 1 (1 β1) + (1 β1) h Pi 2 j=1 βj 1 i (1 β2) + β2vt+i 1 t (1 β1)βi 1 1 C q (1 β2) + βi 1 2 (1 β2)C2 (1 β2) + 2β2 α t γ(1 β1)(1 γC 1) C i βi 1 1 1 β1 t γ(1 β1)(1 γC 1) The first equality follows from the definition of mt+i+1. The first inequality follows from the fact that mt 0 when t mod C = 0 (see Equation (9) and arguments based on it). The second inequality follows from the definition of τ that t + C τ 2t for all t T . The third inequality is due to the fact that vt+i 1 (1 β2)βi 2 2 C2. The last inequality follows from our choice of C. The fourth inequality is due to the following upper bound that applies for all i i C: vt+i 1 = (1 β2) j=1 βt+i 1 j 2 g2 j h=1 βt+i 1 h C 2 C2 + j=1 βt+i 1 j 2 βi 1 2 C2 k 1 X h=0 βh C 2 + 1 1 β2 " βi 1 2 C2 1 βC 2 + 1 1 β2 The first inequality follows from online problem setting for the counter-example i.e., gradient is C once every C iterations and 1 for the rest. The last inequality follows from the fact that βi 1 2 C2 Published as a conference paper at ICLR 2018 1 and βC 2 β2. Furthermore, from the above inequality, we have i=t δi δt + α τ C i βi 1 1 1 β1 t γ(1 β1)(1 γC 1) t (1 β1)C + β1mt p (1 β2)C2 + β2vt + α τ C i βi 1 1 1 β1 t γ(1 β1)(1 γC 1) t (1 β1)C p (1 β2)C2 + α τ C i βi 1 1 1 β1 t γ(1 β1)(1 γC 1) 3 βC/2 1 1 1 β1 3(1 β1) 1 + γ(1 γC 1) Note that from our choice of C, it is easy to see that λ 0. Also, observe that λ is independent of t. Thus, xt+C min{1, xt + λ/ t}. From this fact, we also see the following: 1. If xt = 1, then xt+C = 1 for all t T such that t mod C = 0. 2. There exists constant T 1 T such that x T 1 = 1 where T 1 mod C = 0. The first point simply follows from the relation xt+C min{1, xt + λ/ t}. The second point is due to divergent nature of the sum P t=t 1/ t. Therefore, we have i=1 f(k C+i)(xk C+i) i=1 f(k C+i)( 1) 2C 2(C 1) = 2. where k C T 1. Thus, when t T 1, for every C steps, ADAM suffers a regret of at least 2. More specifically, RT 2(T T 1)/C. Thus, RT /T 0 as T , which completes the proof. C PROOF OF THEOREM 3 Proof. Let δ be an arbitrary small positive constant, and C be a large enough constant chosen as a function of β1, β2, δ that will be determined in the proof. Consider the following one dimensional stochastic optimization setting over the domain [ 1, 1]. At each time step t, the function ft(x) is chosen i.i.d. 
as follows: ( Cx with probability p := 1+δ C+1 x with probability 1 p The expected function is F(x) = δx; thus the optimum point over [ 1, 1] is x = 1. At each time step t the gradient gt equals C with probability p and 1 with probability 1 p. Thus, the step taken by ADAM is t = αt(β1mt 1 + (1 β1)gt) p β2vt 1 + (1 β2)g2 t . We now show that for a large enough constant C, E[ t] 0, which implies that the ADAM s steps keep drifting away from the optimal solution x = 1. Lemma 1. For a large enough constant C (as a function of β1, β2, δ), we have E[ t] 0. Published as a conference paper at ICLR 2018 Proof. Let Et[ ] denote expectation conditioned on all randomness up to and including time t 1. Taking conditional expectation of the step, we have 1 αt Et[ t] = p (β1mt 1 + (1 β1)C) p β2vt 1 + (1 β2)C2 + (1 p) (β1mt 1 (1 β1)) p β2vt 1 + (1 β2) = p (β1mt 1 + (1 β1)C) p β2vt 1 + (1 β2)C2 | {z } T1 +(1 p) β1mt 1 p β2vt 1 + (1 β2) | {z } T2 +(1 p) 1 β1 p β2vt 1 + (1 β2) | {z } T3 (11) We will bound the expectation of the terms T1, T2 and T3 above separately. First, for T1, we have T1 (β1C + (1 β1)C) p (1 β2)C2 1 1 β2 . (12) Next, we bound E[T2]. Define k = log(C+1) log(1/β1) . This choice of k ensures that βk 1C 1 βk 1. Now, note that mt 1 = (1 β1) i=1 βt 1 i 1 gi. Let E denote the event that for every i = t 1, t 2, . . . , max{t k, 1}, gi = 1. Note that Pr[E] 1 kp. Assuming E happens, we can bound mt 1 as follows: mt 1 (1 β1) i=max{t k,1} βt 1 i 1 1+(1 β1) max{t k,1} 1 X i=1 βt 1 i 1 C (1 βk 1)+βk 1C 0, and so T2 0. With probability at most kp, the event E doesn t happen. In this case, we bound T2 as follows. We first bound mt 1 in terms of vt 1 using the Cauchy-Schwarz inequality as follows: mt 1 = (1 β1) i=1 βt 1 i 1 gi (1 β1) r Pt 1 i=1 βt 1 i 2 g2 i Pt 1 i=1( β2 1 β2 )t 1 i β2 (1 β2)(β2 β2 1) | {z } A Thus, vt 1 m2 t 1/A2. Thus, we have T2 = β1mt 1 p β2vt 1 + (1 β2) β1|mt 1| q β2(m2 t 1/A2) = β1(1 β1) p (1 β2)(β2 β2 1) . Hence, we have E[T2] 0 (1 kp) + β1(1 β1) p (1 β2)(β2 β2 1) kp = β1(1 β1)kp p (1 β2)(β2 β2 1) (13) Finally, we lower bound E[T3] using Jensen s inequality applied to the convex function 1 x: E[T3] (1 β1) p β2E[vt 1] + (1 β2) (1 β1) p β2(1 + δ)C2 + (1 β2) . The last inequality follows by using the facts vt 1 = (1 β2) Pt 1 i=1 βt 1 i 2 g2 i , and the random variables g2 1, g2 2, . . . , g2 t 1 are i.i.d., and so E[vt 1] = (1 βt 1 2 )E[g2 1] = (1 βt 1 2 )(C2p+(1 p)) = (1 βt 1 2 )(1+δ)C δ (1+δ)C. (14) Published as a conference paper at ICLR 2018 Combining the bounds in (12), (13), and (14) in the expression for ADAM s step, (11), and plugging in the values of the parameters k and p we get the following lower bound on E[ t]: 1 1 β2 + β1(1 β1) log(C+1) log(1/β1) p (1 β2)(β2 β2 1) β2(1 + δ)C + (1 β2) . It is evident that for C large enough (as a function of δ, β1, β2), the above expression can be made non-negative. For the sake of simplicity, let us assume, as is routinely done in practice, that we are using a version of ADAM that doesn t perform any projection steps2. Then the lemma implies that E[xt+1] E[xt]. Via a simple induction, we conclude that E[xt] x1 for all t. Thus, if we assume that the starting point x1 0, then E[xt] 0. Since F is a monotonically increasing function, we have E[F(xt)] F(0) = 0, whereas F( 1) = δ. Thus the expected suboptimality gap is always δ > 0, which implies that ADAM doesn t converge to the optimal solution. 
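The construction above can also be checked numerically. The sketch below is our own illustrative check, not part of the paper: it runs ADAM with the common choices $\beta_1 = 0.9$, $\beta_2 = 0.99$ and $\alpha_t = \alpha/\sqrt{t}$ on the stochastic setting used in Section 5 (gradient $1010$ with probability $0.01$ and $-10$ otherwise, projected onto $[-1, 1]$), for which the expected loss is minimized at $x = -1$; the iterates nevertheless end up near the wrong end of the domain.

```python
import math, random

random.seed(0)

# Illustrative constants (our choices): the Section 5 stochastic setting with
# beta1 = 0.9, beta2 = 0.99, alpha_t = alpha / sqrt(t), and F = [-1, 1].
beta1, beta2, alpha = 0.9, 0.99, 0.5
T, n_runs, finals = 20000, 10, []

for _ in range(n_runs):
    x, m, v = 0.0, 0.0, 0.0
    for t in range(1, T + 1):
        g = 1010.0 if random.random() < 0.01 else -10.0   # E[g] = 0.2 > 0
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        x = x - (alpha / math.sqrt(t)) * m / math.sqrt(v)
        x = max(-1.0, min(1.0, x))                        # projection onto [-1, 1]
    finals.append(x)

# The expected loss is F(x) = 0.2 * x, minimized at x = -1, yet ADAM's iterates
# typically end up near the worst point x = +1.
print("final iterates:", [round(xf, 2) for xf in finals])
```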
D PROOF OF THEOREM 4 The proof of Theorem 4 presented below is along the lines of the Theorem 4.1 in (Kingma & Ba, 2015) which provides a claim of convergence for ADAM. As our examples showing nonconvergence of ADAM indicate, the proof in (Kingma & Ba, 2015) has problems. The main issue in their proof is the incorrect assumption that Γt defined in their equation (3) is positive semidefinite, and we also identified problems in lemmas 10.3 and 10.4 in their paper. The following proof fixes these issues and provides a proof of convergence for AMSGRAD. Proof. We begin with the following observation: ˆVt(xt αt ˆV 1/2 t mt) = min x F ˆV 1/4 t (x (xt αt ˆV 1/2 t mt)) . Furthermore, ΠF, ˆVt(x ) = x for all x F. In this proof, we will use x i to denote the ith coordinate of x . Using Lemma 4 with u1 = xt+1 and u2 = x , we have the following: ˆV 1/4 t (xt+1 x ) 2 ˆV 1/4 t (xt αt ˆV 1/2 t mt x ) 2 = ˆV 1/4 t (xt x ) 2 + α2 t ˆV 1/4 t mt 2 2αt mt, xt x = ˆV 1/4 t (xt x ) 2 + α2 t ˆV 1/4 t mt 2 2αt β1tmt 1 + (1 β1t)gt, xt x Rearranging the above inequality, we have gt, xt x 1 2αt(1 β1t) h ˆV 1/4 t (xt x ) 2 ˆV 1/4 t (xt+1 x ) 2i + αt 2(1 β1t) ˆV 1/4 t mt 2 + β1t 1 β1t mt 1, xt x 1 2αt(1 β1t) h ˆV 1/4 t (xt x ) 2 ˆV 1/4 t (xt+1 x ) 2i + αt 2(1 β1t) ˆV 1/4 t mt 2 + β1t 2(1 β1t)αt ˆV 1/4 t mt 1 2 + β1t 2αt(1 β1t) ˆV 1/4 t (xt x ) 2. The second inequality follows from simple application of Cauchy-Schwarz and Young s inequality. We now use the standard approach of bounding the regret at each step using convexity of the function 2Projections can be easily handled with a little bit of work but the analysis becomes more messy. Published as a conference paper at ICLR 2018 ft in the following manner: t=1 ft(xt) ft(x ) t=1 gt, xt x " 1 2αt(1 β1t) h ˆV 1/4 t (xt x ) 2 ˆV 1/4 t (xt+1 x ) 2i + αt 2(1 β1t) ˆV 1/4 t mt 2 + β1t 2(1 β1t)αt ˆV 1/4 t mt 1 2 + β1t 2αt(1 β1t) ˆV 1/4 t (xt x ) 2 # The first inequality is due to convexity of function ft. The second inequality follows from the bound in Equation (15). For further bounding this inequality, we need the following intermediate result. Lemma 2. For the parameter settings and conditions assumed in Theorem 4, we have t=1 αt ˆV 1/4 t mt 2 α 1 + log T (1 β1)(1 γ) p i=1 g1:T,i 2 Proof. We start with the following: t=1 αt ˆV 1/4 t mt 2 = t=1 αt ˆV 1/4 t mt 2 + αT t=1 αt ˆV 1/4 t mt 2 + αT m2 T,i v T,i t=1 αt ˆV 1/4 t mt 2 + α (PT j=1(1 β1j)ΠT j k=1 β1(T k+1)gj,i)2 q T((1 β2) PT j=1 βT j 2 g2 j,i) The first inequality follows from the definition of ˆv T,i, which is maximum of all v T,i until the current time step. The second inequality follows from the update rule of Algorithm 2. We further bound the above inequality in the following manner: t=1 αt ˆV 1/4 t mt 2 t=1 αt ˆV 1/4 t mt 2 + α (PT j=1 ΠT j k=1 β1(T k+1))(PT j=1 ΠT j k=1 β1(T k+1)g2 j,i) q T((1 β2) PT j=1 βT j 2 g2 j,i) t=1 αt ˆV 1/4 t mt 2 + α (PT j=1 βT j 1 )(PT j=1 βT j 1 g2 j,i) q T((1 β2) PT j=1 βT j 2 g2 j,i) t=1 αt ˆV 1/4 t mt 2 + α 1 β1 PT j=1 βT j 1 g2 j,i q T((1 β2) PT j=1 βT j 2 g2 j,i) t=1 αt ˆV 1/4 t mt 2 + α βT j 1 g2 j,i q βT j 2 g2 j,i t=1 αt ˆV 1/4 t mt 2 + α j=1 γT j|gj,i| (17) The first inequality follows from Cauchy-Schwarz inequality. The second inequality is due to the fact that β1k β1 for all k [T]. The third inequality follows from the inequality PT j=1 βT j 1 1/(1 β1). 
By using similar upper bounds for all time steps, the quantity in Equation (17) can Published as a conference paper at ICLR 2018 further be bounded as follows: t=1 αt ˆV 1/4 t mt 2 j=1 γt j|gj,i| j=1 γt j|gj,i| = α t=1 |gt,i| 1 (1 γ) (1 β1)(1 γ) p i=1 g1:T,i 2 t α 1 + log T (1 β1)(1 γ) p i=1 g1:T,i 2 The third inequality follows from the fact that PT j=t γj t 1/(1 γ). The fourth inequality is due to simple application of Cauchy-Schwarz inequality. The final inequality is due to the following bound on harmonic sum: PT t=1 1/t (1 + log T). This completes the proof of the lemma. We now return to the proof of Theorem 4. Using the above lemma in Equation (16) , we have: t=1 ft(xt) ft(x ) " 1 2αt(1 β1t) h ˆV 1/4 t (xt x ) 2 ˆV 1/4 t (xt+1 x ) 2i + β1t 2αt(1 β1t) ˆV 1/4 t (xt x ) 2 # + α 1 + log T (1 β1)2(1 γ) p i=1 g1:T,i 2 1 2α1(1 β1) ˆV 1/4 1 (x1 x ) 2 + 1 2(1 β1) " ˆV 1/4 t (xt x ) 2 αt ˆV 1/4 t 1 (xt x ) 2 " β1t 2αt(1 β1) ˆV 1/4 t (xt x ) 2 # + α 1 + log T (1 β1)2(1 γ) p i=1 g1:T,i 2 = 1 2α1(1 β1) i=1 ˆv1/2 1,i (x1,i x i )2 + 1 2(1 β1) i=1 (xt,i x i )2 " ˆv1/2 t,i αt ˆv1/2 t 1,i αt 1 + 1 2(1 β1) β1t(xt,i x i )2ˆv1/2 t,i αt + α 1 + log T (1 β1)2(1 γ) p i=1 g1:T,i 2. The first inequality and second inequality use the fact that β1t β1. In order to further simplify the bound in Equation (18), we need to use telescopic sum. We observe that, by definition of ˆvt,i, we have ˆv1/2 t,i αt ˆv1/2 t 1,i αt 1 . Using the L bound on the feasible region and making use of the above property in Equation (18), we have: t=1 ft(xt) ft(x ) 1 2α1(1 β1) i=1 ˆv1/2 1,i D2 + 1 2(1 β1) " ˆv1/2 t,i αt ˆv1/2 t 1,i αt 1 + 1 2(1 β1) D2 β1tˆv1/2 t,i αt + α 1 + log T (1 β1)2(1 γ) p i=1 g1:T,i 2 = D2 2αT (1 β1) i=1 ˆv1/2 T,i + D2 2(1 β1) β1tˆv1/2 t,i αt + α 1 + log T (1 β1)2(1 γ) p i=1 g1:T,i 2. Published as a conference paper at ICLR 2018 Algorithm 3 ADAMNC Input: x1 F, step size {αt > 0}T t=1, {(β1t, β2t)}T t=1 Set m0 = 0 and v0 = 0 for t = 1 to T do gt = ft(xt) mt = β1tmt 1 + (1 β1t)gt vt = β2tvt 1 + (1 β2t)g2 t and Vt = diag(vt) xt+1 = ΠF, Vt(xt αtmt/ vt) end for The equality follows from simple telescopic sum, which yields the desired result. One important point to note here is that the regret of AMSGRAD can be bounded by O(G T). This can be easily seen from the proof of the aforementioned lemma where in the analysis the term PT t=1 |gt,i|/ t can also be bounded by O(G T). Thus, the regret of AMSGRAD is upper bounded by minimum of O(G T) and the bound in the Theorem 4 and therefore, the worst case dependence of regret on T in our case is O( E PROOF OF THEOREM 5 Proof. Using similar argument to proof of Theorem 4 until Equation (15), we have the following gt, xt x 1 2αt(1 β1t) h V 1/4 t (xt x ) 2 V 1/4 t (xt+1 x ) 2i + αt 2(1 β1t) V 1/4 t mt 2 + β1t 2(1 β1t)αt V 1/4 t mt 1 2 + β1t 2αt(1 β1t) V 1/4 t (xt x ) 2. The second inequality follows from simple application of Cauchy-Schwarz and Young s inequality. We now use the standard approach of bounding the regret at each step using convexity of the function ft in the following manner: t=1 ft(xt) ft(x ) t=1 gt, xt x " 1 2αt(1 β1t) h V 1/4 t (xt x ) 2 V 1/4 t (xt+1 x ) 2i + αt 2(1 β1t) V 1/4 t mt 2 + β1t 2(1 β1t)αt V 1/4 t mt 1 2 + β1t 2αt(1 β1t) V 1/4 t (xt x ) 2 # The inequalities follow due to convexity of function ft and Equation (19). For further bounding this inequality, we need the following intermediate result. Lemma 3. For the parameter settings and conditions assumed in Theorem 5, we have t=1 αt V 1/4 t mt 2 2ζ (1 β1)2 i=1 g1:T,i 2. Proof. 
We start with the following: t=1 αt V 1/4 t mt 2 = t=1 αt V 1/4 t mt 2 + αT m2 T,i v T,i t=1 αt ˆV 1/4 t mt 2 + αT (PT j=1(1 β1j)ΠT j k=1 β1(T k+1)gj,i)2 q (PT j=1 ΠT j k=1 β2(T k+1)(1 β2j)g2 j,i) Published as a conference paper at ICLR 2018 The first inequality follows from the update rule of Algorithm 2. We further bound the above inequality in the following manner: t=1 αt V 1/4 t mt 2 t=1 αt V 1/4 t mt 2 + αT (PT j=1 ΠT j k=1 β1(T k+1))(PT j=1 ΠT j k=1 β1(T k+1)g2 j,i) q PT j=1 ΠT j k=1 β2(T k+1)(1 β2j)g2 j,i t=1 αt V 1/4 t mt 2 + αT (PT j=1 βT j 1 )(PT j=1 βT j 1 g2 j,i) q PT j=1 ΠT j k=1 β2(T k+1)(1 β2j)g2 j,i t=1 αt V 1/4 t mt 2 + αT 1 β1 PT j=1 βT j 1 g2 j,i q PT j=1 ΠT j k=1 β2(T k+1)(1 β2j)g2 j,i t=1 αt V 1/4 t mt 2 + ζ 1 β1 PT j=1 βT j 1 g2 j,i q PT j=1 g2 j,i t=1 αt V 1/4 t mt 2 + ζ 1 β1 βT j 1 g2 j,i q Pj k=1 g2 k,i (21) The first inequality follows from Cauchy-Schwarz inequality. The second inequality is due to the fact that β1k β1 for all k [T]. The third inequality follows from the inequality PT j=1 βT j 1 1/(1 β1). Using similar argument for all time steps, the quantity in Equation (21) can be bounded as follows: T X t=1 αt V 1/4 t mt 2 ζ 1 β1 PT j l=0 βl 1g2 j,i q Pj k=1 g2 k,i g2 j,i q Pj k=1 g2 k,i 2ζ (1 β1)2 i=1 g1:T,i 2. The final inequality is due to Lemma 5. This completes the proof of the lemma. Using the above lemma in Equation (20) , we have: t=1 ft(xt) ft(x ) " 1 2α1(1 β1t) h V 1/4 t (xt x ) 2 V 1/4 t (xt+1 x ) 2i + β1t 2αt(1 β1t) V 1/4 t (xt x ) 2 # + 2ζ (1 β1)3 i=1 g1:T,i 2 1 2α1(1 β1) V 1/4 1 (x1 x ) 2 + 1 2(1 β1) " V 1/4 t (xt x ) 2 αt V 1/4 t 1 (xt 1 x ) 2 " β1t 2αt(1 β1) V 1/4 t (xt x ) 2 # + 2ζ (1 β1)3 i=1 g1:T,i 2 = 1 2α1(1 β1) i=1 v1/2 1,i (x1,i x i )2 + 1 2(1 β1) i=1 (xt,i x i )2 " v1/2 t,i αt v1/2 t 1,i αt 1 + 1 2(1 β1) β1t(xt,i x i )2v1/2 t,i αt + 2ζ (1 β1)3 i=1 g1:T,i 2. (22) The first inequality and second inequality use the fact that β1t β1. Furthermore, from the theorem statement, we know that that {(αt.β2t)} are selected such that the following holds: v1/2 t,i αt v1/2 t 1,i αt 1 . Published as a conference paper at ICLR 2018 Using the L bound on the feasible region and making use of the above property in Equation (22), we have: t=1 ft(xt) ft(x ) 1 2α1(1 β1) i=1 v1/2 1,i D2 + 1 2(1 β1) " v1/2 t,i αt v1/2 t 1,i αt 1 + 1 2(1 β1) D2 β1tv1/2 t,i αt + 2ζ (1 β1)3 i=1 g1:T,i 2 = D2 2αT (1 β1) i=1 v1/2 T,i + D2 2(1 β1) β1tv1/2 t,i αt + 2ζ (1 β1)3 i=1 g1:T,i 2. The equality follows from simple telescopic sum, which yields the desired result. F PROOF OF THEOREM 6 Theorem 6. For any ϵ > 0, ADAM with the modified update in Equation (3) and with parameter setting such that all the conditions in (Kingma & Ba, 2015) are satisfied can have non-zero average regret i.e., RT /T 0 as T for convex {fi} i=1 with bounded gradients on a feasible set F having bounded D diameter. Proof. Let us first consider the case where ϵ = 1 (in fact, the same setting works for any ϵ 1). The general ϵ case can be proved by simply rescaling the sequence of functions by a factor of ϵ. We show that the same optimization setting in Theorem 1 where ft are linear functions and F = [ 1, 1], hence, we only discuss the details that differ from the proof of Theorem 1. In particular, we define the following function sequence: ft(x) = Cx, for t mod 3 = 1 x, otherwise, where C 2. Similar to the proof of Theorem 1, we assume that the initial point is x1 = 1 and the parameters are: β1 = 0, β2 = 2 (1 + C2)C2 and αt = α where α < 1 β2. 
The proof essentially follows along the lines of that of Theorem 1 and is through principle of mathematical induction. Our aim is to prove that x3t+2 and x3t+3 are positive and x3t+4 = 1. The base case holds trivially. Suppose for some t N {0}, we have xi > 0 for all i [3t+1] and x3t+1 = 1. For (3t+1)th update, the only change from the update of in Equation (1) is the additional ϵ in the denominator i.e., we have ˆx3t+2 = x3t+1 αC p (3t + 1)(β2v3t + (1 β2)C2 + ϵ) (3t + 1)(β2v3t + (1 β2)C2) 0. The last inequality follows by simply dropping v3t term and using the relation that α < 1 β2. Therefore, we have 0 < ˆx3t+2 < 1 and hence x3t+2 = ˆx3t+2 > 0. Furthermore, after the (3t + 2)th and (3t + 3)th updates of ADAM in Equation (1), we have the following: ˆx3t+3 = x3t+2 + α p (3t + 2)(β2v3t+1 + (1 β2) + ϵ) , ˆx3t+4 = x3t+3 + α p (3t + 3)(β2v3t+2 + (1 β2) + ϵ) . Since x3t+2 > 0, it is easy to see that x3t+3 > 0. To complete the proof, we need to show that x3t+4 = 1. The only change here from the proof of Theorem 1 is that we need to show the Published as a conference paper at ICLR 2018 following: α p (3t + 2)(β2v3t+1 + (1 β2) + ϵ) + α p (3t + 3)(β2v3t+2 + (1 β2) + ϵ) β2C2 + (1 β2) + ϵ 1 3t + 2 + 1 3t + 3 β2C2 + (1 β2) + ϵ 2(3t + 1) + 1 p (3t + 1)(β2C2 + (1 β2) + ϵ) = αC p (3t + 1)((1 β2)C2 + ϵ) (3t + 1)(β2v3t + (1 β2)C2 + ϵ) . (23) The first inequality is due to the fact that vt C2 for all t N. The last equality is due to following fact: r β2C2 + (1 β2) for the choice of β2 = 2/[(1 + C2)C2] and ϵ = 1. Therefore, we see that x3t+4 = 1. Therefore, by the principle of mathematical induction it holds for all t N {0}. Thus, we have f3t+1(x3t+1) + f3t+2(x3t+2) + f3t+2(x3t+2) f3t+1( 1) f3t+2( 1) f3t+3( 1) 2C 4. Therefore, for every 3 steps, ADAM suffers a regret of at least 2C 4. More specifically, RT (2C 4)T/3. Since C 2, this regret can be very large and furthermore, RT /T 0 as T , which completes the proof of the case where ϵ = 1. For the general ϵ case, we consider the following sequence of functions: ft(x) = C ϵx, for t mod 3 = 1 ϵx, otherwise, The functions are essentially rescaled in a manner so that the resultant updates of ADAM correspond to the one in the optimization setting described above. Using essentially the same argument as above, it is easy to show that the regret RT (2C 4) ϵT/3 and thus, the average regret is non-zero asymptotically, which completes the proof. G AUXILIARY LEMMA Lemma 4 ((Mc Mahan & Streeter, 2010)). For any Q Sd + and convex feasible set F Rd, suppose u1 = minx F Q1/2(x z1) and u2 = minx F Q1/2(x z2) then we have Q1/2(u1 u2) Q1/2(z1 z2) . Proof. We provide the proof here for completeness. Since u1 = minx F Q1/2(x z1) and u2 = minx F Q1/2(x z2) and from the property of projection operator we have the following: z1 u1, Q(z2 z1) 0 and z2 u2, Q(z1 z2) 0. Combining the above inequalities, we have u2 u1, Q(z2 z1) z2 z1, Q(z2 z1) . (24) Also, observe the following: u2 u1, Q(z2 z1) 1 2[ u2 u1, Q(u2 u1) + z2 z1, Q(z2 z1) ] The above inequality can be obtained from the fact that (u2 u1) (z2 z1), Q((u2 u1) (z2 z1)) 0 as Q Sd + and rearranging the terms. Combining the above inequality with Equation (24), we have the required result. Published as a conference paper at ICLR 2018 Lemma 5 ((Auer & Gentile, 2000)). For any non-negative real numbers y1, , yt, the following holds: t X yi q Pi j=1 yj 2 Lemma 6. Suppose F = [a, b] for a, b R and yt+1 = ΠF(yt + δt) for all the t [T], y1 F and furthermore, there exists i [T] such that δj 0 for all j i and δj > 0 for all j > i. 
Then we have, y T +1 min{b, y1 + Proof. It is first easy to see that yi+1 y1 + Pi j=1 δj since δj 0 for all j i. Furthermore, also observe that y T +1 min{b, yi+1 + PT j=i+1 δj} since δj 0 for all j > i. Combining the above two inequalities gives us the desired result.