# On the Implicit Bias of Adam

Matias D. Cattaneo *1, Jason M. Klusowski *1, Boris Shigida *1

Abstract: In previous literature, backward error analysis was used to find ordinary differential equations (ODEs) approximating the gradient descent trajectory. It was found that finite step sizes implicitly regularize solutions because terms appearing in the ODEs penalize the two-norm of the loss gradients. We prove that the existence of similar implicit regularization in RMSProp and Adam depends on their hyperparameters and the training stage, but with a different norm involved: the corresponding ODE terms either penalize the (perturbed) one-norm of the loss gradients or, conversely, impede its reduction (the latter case being typical). We also conduct numerical experiments and discuss how the proven facts can influence generalization.

*Equal contribution. 1Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ, USA. Correspondence to: Boris Shigida. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

1. Introduction

Gradient descent (GD) can be seen as a numerical method solving the ordinary differential equation (ODE) $\dot\theta = -\nabla E(\theta)$, where $E(\cdot)$ is the loss function and $\nabla E(\theta)$ is its gradient. Starting at $\theta^{(0)}$, it creates a sequence of guesses $\theta^{(1)}, \theta^{(2)}, \ldots$, which lie close to the solution trajectory $\theta(t)$ governed by the aforementioned ODE. Since the step size $h$ is finite, one could search for a modified differential equation $\dot{\tilde\theta} = -\nabla\tilde E(\tilde\theta)$ such that $\theta^{(n)} - \tilde\theta(nh)$ is exactly zero, or at least closer to zero than $\theta^{(n)} - \theta(nh)$; that is, all the guesses of the descent lie exactly on the new solution curve, or at least closer to it than to the original curve. This approach to analysing properties of a numerical method is sometimes called backward error analysis in the numerical integration literature (see Chapter IX in Ernst Hairer & Wanner (2006) and references therein). Barrett & Dherin (2021) used this idea for full-batch GD and found that the modified loss function $\tilde E(\tilde\theta) = E(\tilde\theta) + (h/4)\|\nabla E(\tilde\theta)\|^2$ makes the trajectory of the solution to $\dot{\tilde\theta} = -\nabla\tilde E(\tilde\theta)$ approximate the sequence $\{\theta^{(n)}\}_{n=0}^{\infty}$ one order of $h$ better than the original ODE, where $\|\cdot\|$ is the Euclidean norm. In related work, Miyagawa (2022) obtained the correction term for full-batch GD up to any chosen order, also studying the global error (uniform in the iteration number) as opposed to the local (one-step) error.

The analysis was later extended to mini-batch GD in Smith et al. (2021). Assume that the training set is split into batches of size $B$ and there are $m$ batches per epoch (so the training set size is $mB$). The cost function is rewritten as $E(\theta) = (1/m)\sum_{k=0}^{m-1}\hat E_k(\theta)$ with mini-batch costs denoted $\hat E_k(\theta) = (1/B)\sum_{j=kB+1}^{kB+B} E_j(\theta)$. It was obtained in that work that after one epoch, the mean iterate of the algorithm, averaged over all possible shuffles of the batch indices, is close to the solution to $\dot\theta = -\nabla\tilde E_{\mathrm{SGD}}(\theta)$, where the modified loss is given by $\tilde E_{\mathrm{SGD}}(\theta) = E(\theta) + \frac{h}{4m}\sum_{k=0}^{m-1}\|\nabla\hat E_k(\theta)\|^2$.
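To make the quoted result of Barrett & Dherin (2021) concrete, here is a minimal numerical sketch (not from the paper; the quadratic loss, the step sizes and the crude Euler integrator are illustrative choices). It compares full-batch GD iterates with the original gradient flow and with the flow on the modified loss $E(\theta) + (h/4)\|\nabla E(\theta)\|^2$; the gap to the modified flow should shrink roughly one order in $h$ faster than the gap to the original flow.

```python
import numpy as np

# Toy quadratic loss E(theta) = 0.5 * theta^T A theta, so grad E = A theta and
# the gradient of (h/4)*||grad E||^2 is (h/2) * A^T A theta.  A is an arbitrary choice.
A = np.diag([1.0, 10.0])

def grad_E(theta):
    return A @ theta

def grad_E_modified(theta, h):
    # gradient of the modified loss E + (h/4)*||grad E||^2
    return A @ theta + 0.5 * h * (A.T @ (A @ theta))

def flow(grad, theta0, t_end, dt=1e-4):
    # crude fixed-step Euler integration of theta' = -grad(theta)
    theta = theta0.copy()
    for _ in range(int(round(t_end / dt))):
        theta = theta - dt * grad(theta)
    return theta

theta0 = np.array([1.0, 1.0])
n_steps = 20
for h in (0.02, 0.01, 0.005):
    theta = theta0.copy()
    for _ in range(n_steps):                  # n_steps of plain full-batch GD
        theta = theta - h * grad_E(theta)
    err_orig = np.linalg.norm(theta - flow(grad_E, theta0, n_steps * h))
    err_mod = np.linalg.norm(theta - flow(lambda x: grad_E_modified(x, h), theta0, n_steps * h))
    print(f"h={h}: error vs gradient flow {err_orig:.2e}, vs modified flow {err_mod:.2e}")
```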
Modified equations have also been derived for GD with heavy-ball momentum, $\theta^{(n+1)} = \theta^{(n)} - h\nabla E(\theta^{(n)}) + \beta(\theta^{(n)} - \theta^{(n-1)})$, where $\beta$ is the momentum parameter. In the full-batch setting, it turns out that for $n$ large enough it is close to the continuous trajectory solving

$$\dot\theta(t) = -\frac{1}{1-\beta}\,\nabla E(\theta) - \frac{h(1+\beta)}{(1-\beta)^3}\,\nabla\underbrace{\frac{\|\nabla E(\theta)\|^2}{4}}_{\text{implicit regularization}}. \qquad (1)$$

Versions of this general result were proven in Farazmand (2020), Kovachki & Stuart (2021) and Ghosh et al. (2023) under different assumptions. The focus of the latter work is the closest to ours since they interpret the correction term as implicit regularization. Their main theorem also provides the analysis for the general mini-batch case. In another recent work, Zhao et al. (2022) introduce a regularization term $\lambda\|\nabla E(\theta)\|$ to the loss function as a way to ensure finding flatter minima, improving generalization. The only difference between their term and the first-order correction coming from backward error analysis (up to a coefficient) is that the norm is not squared and regularization is applied on a per-batch basis.

The application of backward error analysis for approximating the discrete dynamics of adaptive algorithms such as RMSProp (Tieleman et al., 2012) and Adam (Kingma & Ba, 2015) is currently missing in the literature. Barrett & Dherin (2021) note that "it would be interesting to use backward error analysis to calculate the modified loss and implicit regularization for other widely used optimizers such as momentum, Adam and RMSprop". Smith et al. (2021) reiterate that they "anticipate that backward error analysis could also be used to clarify the role of finite learning rates in adaptive optimizers like Adam". Ghosh et al. (2023) agree that "RMSProp ... and Adam ..., albeit being powerful alternatives to SGD with faster convergence rates, are far from well-understood in the aspect of implicit regularization". In a similar context, Appendix G of Miyagawa (2022) mentions that "its [Adam's] counter term and discretization error are open questions". This work fills the gap by conducting backward error analysis for (mini-batch, and full-batch as a special case) Adam and RMSProp. Our main contributions are listed below.

- In Theorem 3.1, we provide a global second-order in $h$ continuous piecewise ODE approximation to Adam in the general mini-batch setting. (A similar result for RMSProp is moved to Appendix C.) For the full-batch special case, it was shown in prior work (Ma et al., 2022) that the continuous-time limit of both these algorithms is a sign-GD flow perturbed by the numerical stability parameter $\varepsilon$, namely $\dot\theta = -\nabla E(\theta)/(|\nabla E(\theta)| + \varepsilon)$ component-wise; we make this more precise by finding a linear-in-$h$ correction term on the right-hand side.
- We analyze the full-batch case in the context of regularization (see the summary in Section 2). In contrast to the case of GD, where the two-norm of the loss gradient is implicitly penalized, Adam typically anti-penalizes the perturbed one-norm of the loss gradient, $\|v\|_{1,\varepsilon} = \sum_{i=1}^p\sqrt{v_i^2+\varepsilon}$ (i.e., penalizes the negative norm), as specified in (5); a small numerical illustration of this perturbed norm is sketched right after this list. Thus, the implicit bias of Adam that we identify serves as anti-regularization (except for the unusual case $\beta \ge \rho$, large $\varepsilon$, or very late in training).
- We provide numerical evidence consistent with our theoretical results by training various vision models on CIFAR-10 using full-batch Adam. In particular, we observe that the stronger the implicit anti-regularization effect predicted by our theory, the worse the generalization. This pattern holds across different architectures: ResNets, simple convolutional neural networks (CNNs) and Vision Transformers.
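As a quick illustration of the perturbed one-norm introduced above (a sketch not taken from the paper; the vector and the $\varepsilon$ values are arbitrary), the snippet below checks its two limiting regimes used later in Section 2: for $\varepsilon$ much smaller than all squared components it is close to the plain one-norm, while for large $\varepsilon$ it behaves like $p\sqrt\varepsilon + \|v\|^2/(2\sqrt\varepsilon)$.

```python
import numpy as np

def perturbed_one_norm(v, eps):
    # ||v||_{1,eps} = sum_i sqrt(v_i^2 + eps)
    return np.sqrt(v ** 2 + eps).sum()

v = np.array([0.3, -0.7, 1.2, -0.05])
p = v.size

for eps in (1e-8, 1e-4, 1e2):
    exact = perturbed_one_norm(v, eps)
    small_eps_approx = np.abs(v).sum()                                   # ~ one-norm
    large_eps_approx = p * np.sqrt(eps) + (v ** 2).sum() / (2 * np.sqrt(eps))
    print(f"eps={eps:g}: exact={exact:.6f}, one-norm={small_eps_approx:.6f}, "
          f"large-eps approx={large_eps_approx:.6f}")
```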
Thus, we propose a novel possible explanation for often-reported poor generalization of adaptive gradient algorithms. The code used for training the models is available at https://github.com/borshigida/ implicit-bias-of-adam. 1.1. Related Work Backward error analysis of first-order methods. We outlined the history of finding ODEs approximating different algorithms above in the introduction. Recently, there have been other applications of backward error analysis related to machine learning. Kunin et al. (2020) show that the approximating continuous-time trajectories satisfy conservation laws that are broken in discrete time. Franc a et al. (2021) use backward error analysis while studying how to discretize continuous-time dynamical systems preserving stability and convergence rates. Rosca et al. (2021) find continuous-time approximations of discrete two-player differential games. Approximating gradient methods by differential equation trajectories. Under the assumption that the hyperparameters β, ρ of the Adam algorithm (see Definition 1.1) tend to 1 at a certain rate as h 0, a first-order continuous ODE approximation to this algorithm was derived in Barakat & Bianchi (2021). On the other hand, if β, ρ are kept fixed, Ma et al. (2022) prove that the trajectories of Adam and RMSProp are close to sign GD dynamics, and investigate different training regimes of these algorithms empirically. SGD is approximated by stochastic differential equations and novel adaptive parameter adjustment policies are devised in Li et al. (2017). Malladi et al. (2022) derive stochastic differential equations that are order-1 weak approximations of RMSProp and Adam. We go in a different direction: instead of clarifying the previously obtained continuous ODE approximations by taking gradient noise into account, we take a deterministic approach but go one order of h further. In particular, we keep β, ρ fixed (thus generalizing the analysis for SGD with momentum), whereas Malladi et al. (2022) take β, ρ 1. Connection with sign GD. The connection of adaptive gradient methods with sign(S)GD is extensively discussed in Bernstein et al. (2018). Balles et al. (2020) study a version of sign GD with an update proportional to E(θ) 1 sign E(θ) as a special case of steepest descent, and discuss when sign-based methods are preferable to GD. Implicit bias of first-order methods. Soudry et al. (2018) prove that GD trained to classify linearly separable data with logistic loss converges to the direction of the max-margin vector (the solution to the hard margin SVM). This result has been extended to different loss functions in Nacson et al. (2019b), to SGD in Nacson et al. (2019c), Ada Grad in Qian & Qian (2019), (S)GD with momentum, deterministic Adam and stochastic RMSProp in Wang et al. (2022), more generic optimization methods in Gunasekar et al. (2018a), to the nonseparable case in Ji & Telgarsky (2018b), Ji & Telgarsky (2019). This line of research has been generalized to studying implicit biases of linear networks (Ji & Telgarsky, 2018a; Gunasekar et al., 2018b), homogeneous neural On the Implicit Bias of Adam networks (Ji & Telgarsky, 2020; Nacson et al., 2019a; Lyu & Li, 2019). Woodworth et al. (2020) study the gradient flow of a diagonal linear network with squared loss and show that large initializations lead to minimum two-norm solutions while small initializations lead to minimum one-norm solutions. Even et al. (2023) extend this work to the case of non-zero step sizes and mini-batch training. Wang et al. 
(2021) prove that Adam and RMSProp maximize the margin of homogeneous neural networks. Our perspective on the implicit bias is different since we are considering a generic loss function without any assumptions on the network architecture. Beneventano (2023) proves that, in expectation over batch sampling, the trajectory of SGD without replacement differs from that of SGD with replacement by an additional step on a regularizer. As opposed to the work on backward error analysis for SGD discussed above, they do not assume the largest eigenvalue of the Hessian to be bounded.

Generalization of adaptive methods. Cohen et al. (2022) investigate the edge-of-stability regime of adaptive gradient algorithms and the effect of sharpness (the largest eigenvalue of the Hessian) on generalization. Granziol (2020) and Chen et al. (2021) observe that adaptive methods find sharper minima than SGD, and Zhou et al. (2020) and Xie et al. (2022) argue theoretically that this is the case. Jiang et al. (2022) introduce a statistic that measures the uniformity of the Hessian diagonal and argue that adaptive gradient algorithms are biased towards making this statistic smaller. Keskar & Socher (2017) propose to improve generalization of adaptive methods by switching to SGD in the middle of training.

1.2. Notation

We denote the loss of the $k$th minibatch as a function of the network parameters $\theta \in \mathbb{R}^p$ by $E_k(\theta)$, and in the full-batch setting we omit the index and write $E(\theta)$. $\nabla E$ means the gradient of $E$, and $\nabla$ with indices denotes partial derivatives, e.g. $\nabla_{ijs} E$ is a shortcut for $\frac{\partial^3 E}{\partial\theta_i\,\partial\theta_j\,\partial\theta_s}$. The norm notation $\|\cdot\|$ without indices is the two-norm of a vector, $\|\cdot\|_1$ is the one-norm, and $\|\cdot\|_{1,\varepsilon}$ is the perturbed one-norm defined as $\|v\|_{1,\varepsilon} = \sum_{i=1}^p\sqrt{v_i^2+\varepsilon}$. (Of course, if $\varepsilon > 0$ the perturbed one-norm is not really a norm, but taking $\varepsilon = 0$ makes it the one-norm.) For a real number $a$, the floor $\lfloor a\rfloor$ is the largest integer not exceeding $a$. To provide the names and notations for hyperparameters, we define the algorithm below.

Definition 1.1. The Adam algorithm (Kingma & Ba, 2015) is an optimization algorithm with numerical stability hyperparameter $\varepsilon > 0$, squared-gradient momentum hyperparameter $\rho \in (0, 1)$, gradient momentum hyperparameter $\beta \in (0, 1)$, initialization $\theta^{(0)} \in \mathbb{R}^p$, $\nu^{(0)} = 0 \in \mathbb{R}^p$, $m^{(0)} = 0 \in \mathbb{R}^p$, and the following update rule: for each $n \ge 0$, $j \in \{1, \ldots, p\}$,

$$\nu_j^{(n+1)} = \rho\,\nu_j^{(n)} + (1-\rho)\big(\nabla_j E_n(\theta^{(n)})\big)^2,$$
$$m_j^{(n+1)} = \beta\,m_j^{(n)} + (1-\beta)\,\nabla_j E_n(\theta^{(n)}),$$
$$\theta_j^{(n+1)} = \theta_j^{(n)} - h\,\frac{m_j^{(n+1)}/(1-\beta^{n+1})}{\sqrt{\nu_j^{(n+1)}/(1-\rho^{n+1}) + \varepsilon}}.$$

Remark 1.2. Note that the numerical stability hyperparameter $\varepsilon > 0$, which is introduced in these algorithms to avoid division by zero, is inside the square root in our definition. This way we avoid division by zero in the derivative too: the first derivative of $x \mapsto (\sqrt{x+\varepsilon})^{-1}$ is bounded for $x \ge 0$. This is useful for our analysis. In Theorems B.4 and D.4, the original versions of RMSProp and Adam are also tackled, though with an additional assumption which requires that no component of the gradient can come very close to zero in the region of interest. This is true only for the initial period of learning (whereas Theorem 3.1 tackles the whole period). Practitioners do not seem to make a distinction between the version with $\varepsilon$ inside vs. outside the square root: tutorials with both versions abound on machine-learning-related websites. Moreover, the popular TensorFlow and Optax variants of RMSProp have $\varepsilon$ inside the square root. Empirically, we also observed that moving $\varepsilon$ inside or outside the square root does not change the behavior of Adam or RMSProp qualitatively.
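A direct NumPy transcription of Definition 1.1 (with $\varepsilon$ inside the square root) may help fix the notation. This is a minimal sketch, not the authors' implementation; the quadratic toy loss at the bottom is only an illustrative usage example.

```python
import numpy as np

def adam_epsilon_inside(grad, theta0, h, beta=0.9, rho=0.999, eps=1e-8, n_steps=1000):
    """Adam as in Definition 1.1: bias-corrected m and nu, eps inside the sqrt.

    grad(n, theta) returns the gradient of the n-th mini-batch loss E_n at theta;
    in the full-batch setting it can simply ignore n.
    """
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)    # first-moment estimate
    nu = np.zeros_like(theta)   # second-moment estimate
    for n in range(n_steps):
        g = grad(n, theta)
        nu = rho * nu + (1.0 - rho) * g ** 2
        m = beta * m + (1.0 - beta) * g
        m_hat = m / (1.0 - beta ** (n + 1))
        nu_hat = nu / (1.0 - rho ** (n + 1))
        theta = theta - h * m_hat / np.sqrt(nu_hat + eps)
    return theta

# Illustrative usage on E(theta) = 0.5 * ||theta||^2 (full batch, so n is ignored).
print(adam_epsilon_inside(lambda n, th: th, theta0=[1.0, -2.0], h=0.01))
```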
2. Implicit Bias of Full-Batch Adam: an Informal Summary

We are ready to describe our theoretical result (Theorem 3.1 below) in the full-batch special case. Assume $E(\theta)$ is the loss, whose partial derivatives up to the fourth order are bounded. Let $\{\theta^{(n)}\}$ be iterations of Adam as defined in Definition 1.1. We find an ODE whose solution trajectory $\tilde\theta(t)$ is $h^2$-close to $\{\theta^{(n)}\}$, meaning that for any time horizon $T > 0$ there is a constant $C$ such that for any step size $h \in (0, T)$ we have $\|\tilde\theta(nh) - \theta^{(n)}\| \le Ch^2$ (for $n$ between 0 and $\lfloor T/h\rfloor$). The ODE is written the following way (up to terms that rapidly go to zero as $n$ grows): for the component number $j \in \{1, \ldots, p\}$,

$$\dot{\tilde\theta}_j(t) = -\frac{\nabla_j E(\tilde\theta(t)) + \mathrm{correction}_j(\tilde\theta(t))}{\sqrt{|\nabla_j E(\tilde\theta(t))|^2 + \varepsilon}} \qquad (2)$$

with initial conditions $\tilde\theta_j(0) = \theta_j^{(0)}$ for all $j$, where the correction term is

$$\mathrm{correction}_j(\theta) = h\left(\frac{\beta}{1-\beta} - \frac{\rho}{1-\rho} + \frac{1+\rho}{2(1-\rho)}\cdot\frac{\varepsilon}{|\nabla_j E(\theta)|^2+\varepsilon}\right)\nabla_j\|\nabla E(\theta)\|_{1,\varepsilon}. \qquad (3)$$

Depending on hyperparameters and the training stage, the correction term can take two extreme forms listed below. The reality is in between, but typically much closer to the first case.

If $\varepsilon$ is small compared to all squared components of $\nabla E(\tilde\theta(t))$, i.e. $\min_j|\nabla_j E(\tilde\theta(t))|^2 \gg \varepsilon$, which is usually the case during most of the training, then we can write

$$\mathrm{correction}_j(\theta) \approx h\left(\frac{\beta}{1-\beta} - \frac{\rho}{1-\rho}\right)\nabla_j\|\nabla E(\theta)\|_{1,\varepsilon}. \qquad (4)$$

For small $\varepsilon$, the perturbed one-norm is indistinguishable from the usual one-norm, and for $\beta > \rho$ it is penalized (in much the same way as the squared two-norm is implicitly penalized in the case of GD), but for the typical case $\rho > \beta$ its decrease is actually hindered by this term (so the bias is anti-regularization). The ODE in (2) approximately becomes

$$\dot{\tilde\theta}_j(t) = -\frac{\nabla_j\tilde E(\tilde\theta(t))}{|\nabla_j E(\tilde\theta(t))|}, \quad\text{with}\quad \tilde E(\theta) = E(\theta) + \underbrace{h\left(\frac{\beta}{1-\beta} - \frac{\rho}{1-\rho}\right)\|\nabla E(\theta)\|_1}_{\text{implicit anti-regularization (if }\rho>\beta)}. \qquad (5)$$

If $\varepsilon$ is large compared to all squared gradient components, i.e. $\max_j|\nabla_j E(\tilde\theta(t))|^2 \ll \varepsilon$ (which may happen during the late learning stage, or if non-standard hyperparameter values are chosen), the fraction in (3) with $\varepsilon$ in the numerator approaches one, the dependence on $\rho$ cancels out, and

$$\|\nabla E(\tilde\theta(t))\|_{1,\varepsilon} = \sum_{i=1}^p\sqrt{|\nabla_i E(\tilde\theta(t))|^2+\varepsilon} \approx \sum_{i=1}^p\sqrt\varepsilon\left(1 + \frac{|\nabla_i E(\tilde\theta(t))|^2}{2\varepsilon}\right) = p\sqrt\varepsilon + \frac{1}{2\sqrt\varepsilon}\,\|\nabla E(\tilde\theta(t))\|^2. \qquad (6)$$

In other words, $\|\cdot\|_{1,\varepsilon}$ becomes $\|\cdot\|^2/(2\sqrt\varepsilon)$ up to an additive constant, giving $\mathrm{correction}_j(\theta) \approx \frac{h}{4\sqrt\varepsilon}\cdot\frac{1+\beta}{1-\beta}\,\nabla_j\|\nabla E(\theta)\|^2$. The form of the ODE in this case is

$$\dot{\tilde\theta}_j(t) = -\nabla_j\tilde E(\tilde\theta(t)), \quad\text{with}\quad \tilde E(\theta) = \frac{1}{\sqrt\varepsilon}\left(E(\theta) + \frac{h}{4\sqrt\varepsilon}\cdot\frac{1+\beta}{1-\beta}\,\|\nabla E(\theta)\|^2\right). \qquad (7)$$

These two extreme cases are summarized in Table 1. In Figure 1, we use the one-dimensional ($p = 1$) case to illustrate what kind of term is being implicitly penalized.

Table 1. Implicit bias of Adam: special cases. "Small" and "large" are in relation to squared gradient components (Adam in the latter case is close to GD with momentum).

|                  | ε small                  | ε large                |
|------------------|--------------------------|------------------------|
| ρ > β            | −‖∇E(θ)‖₁ penalized      | ‖∇E(θ)‖² penalized     |
| β ≥ ρ            | ‖∇E(θ)‖₁ penalized       | ‖∇E(θ)‖² penalized     |

Usually, $\varepsilon$ is chosen to be small, and during most of the training Adam is much better described by the first extreme case. It is clear from (5) that, if $\rho > \beta$, the correction term provides the opposite of regularization, in contrast to (1). The larger $\rho$ compared to $\beta$, the stronger the anti-regularization effect is.
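The scalar factor multiplying $\nabla_j\|\nabla E(\theta)\|_{1,\varepsilon}$ in (3) is what decides between regularization and anti-regularization. The sketch below (ours, not the paper's; the hyperparameter values and gradient magnitudes are arbitrary) simply evaluates this factor, in the form reconstructed above, for a few $(\beta, \rho)$ pairs, so one can see the sign flip at $\beta = \rho$ for small $\varepsilon$ and the always-positive value when $\varepsilon$ dominates.

```python
import numpy as np

def correction_factor(beta, rho, eps, grad_component, h=1.0):
    # factor in (3): h * ( beta/(1-beta) - rho/(1-rho)
    #                      + (1+rho)/(2*(1-rho)) * eps/(grad_component^2 + eps) )
    return h * (beta / (1 - beta) - rho / (1 - rho)
                + (1 + rho) / (2 * (1 - rho)) * eps / (grad_component ** 2 + eps))

for beta, rho in [(0.9, 0.999), (0.9, 0.95), (0.95, 0.9), (0.999, 0.9)]:
    small = correction_factor(beta, rho, eps=1e-8, grad_component=1e-2)
    large = correction_factor(beta, rho, eps=1e2, grad_component=1e-2)
    label = "anti-regularizing" if small < 0 else "regularizing"
    print(f"beta={beta}, rho={rho}: small-eps factor {small:+.2f} ({label}), "
          f"large-eps factor {large:+.2f}")
```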
This finding may partially explain why adaptive gradient methods have been reported to generalize worse than non-adaptive ones (Chen et al., 2018; Wilson et al., 2017), as it offers a previously unknown perspective on why they are biased towards higher-curvature regions and find sharper minima. Indeed, note that standard (non-adaptive) $\ell_\infty$-sharpness at $\theta$ can be defined by $\max_{\|\delta\|_\infty\le r}\{E(\theta+\delta) - E(\theta)\}$ for some radius $r$. This or similar definitions have been considered often in the literature; see, e.g., Andriushchenko et al. (2023) and Foret et al. (2021). Replacing the difference of the losses with its first-order approximation under the maximum (Foret et al., 2021; Ghosh et al., 2023),

$$\max_{\|\delta\|_\infty\le r}\{E(\theta+\delta) - E(\theta)\} \approx \max_{\|\delta\|_\infty\le r}\nabla E(\theta)^\top\delta = r\,\|\nabla E(\theta)\|_1,$$

we see that Adam typically anti-penalizes this approximation of $\ell_\infty$-sharpness. Although the connection between sharpness and generalization is not clear-cut (Andriushchenko et al., 2023), our empirical results (Section 5) are consistent with this theory. This overview also applies to RMSProp by setting $\beta = 0$; see Theorem C.4 for the formal result.

Example 2.1 (Backward Error Analysis for GD with Heavy-ball Momentum). Assume $\varepsilon$ is large compared to all squared gradient components during the whole training process, so that the form of the ODE is approximated by (7). Since Adam with a large $\varepsilon$ and after a certain number of iterations approximates SGD with heavy-ball momentum with step size $h(1-\beta)/\sqrt\varepsilon$, a linear step size change (and corresponding time change) gives exactly the equations in Theorem 4.1 of Ghosh et al. (2023). Taking $\beta = 0$ (no momentum), we get the implicit regularization of GD from Barrett & Dherin (2021).

Figure 1. To illustrate what term is being implicitly penalized in the simple case $p = 1$, we plot the graphs of $x \mapsto F(x) := \frac{h}{2}\int_0^x\left(\frac{1+\beta}{1-\beta} - \frac{1+\rho}{1-\rho} + \frac{1+\rho}{1-\rho}\cdot\frac{\varepsilon}{y^2+\varepsilon}\right)\mathrm{d}\sqrt{\varepsilon+y^2}$ with $\beta = 0.95$ and a range of values of $\rho$ (from 0.1 to 0.995). In this case, the correction term in (3) is itself the gradient of the function $F(E'(\theta))$, where $E'$ is the derivative (= gradient) of the loss: specifically, $\mathrm{correction} = \frac{\mathrm{d}}{\mathrm{d}\theta}F(E'(\theta))$. Hence, Adam's iteration penalizes $F(E'(\theta))$. If $\varepsilon$ is small and $\rho > \beta$, the negative one-norm of the gradient is penalized (leftmost picture, highest values of $\rho$); in other words, the one-norm is anti-penalized.

3. Main Result: ODE Approximating Mini-Batch Adam

We only make one assumption, which is standard in the literature: the loss $E_k$ for each mini-batch is 4 times continuously differentiable, and partial derivatives of $E_k$ up to order 4 are bounded, i.e. there is a positive constant $M$ such that for $\theta$ in the region of interest

$$\sup_i|\nabla_i E_k(\theta)| \le M,\quad \sup_{i,j}|\nabla_{ij} E_k(\theta)| \le M,\quad \sup_{i,j,s}|\nabla_{ijs} E_k(\theta)| \le M,\quad \sup_{i,j,s,r}|\nabla_{ijsr} E_k(\theta)| \le M. \qquad (8)$$

Theorem 3.1. Assume (8) holds. Let $\{\theta^{(n)}\}$ be iterations of Adam as defined in Definition 1.1, and let $\tilde\theta(t)$ be the continuous solution to the piecewise ODE

$$\dot{\tilde\theta}_j(t) = -\frac{M_j^{(n)}(\tilde\theta(t))}{R_j^{(n)}(\tilde\theta(t))} + h\left(\frac{M_j^{(n)}(\tilde\theta(t))\big(2\tilde P_j^{(n)}(\tilde\theta(t)) + P_j^{(n)}(\tilde\theta(t))\big)}{2R_j^{(n)}(\tilde\theta(t))^3} - \frac{2\tilde L_j^{(n)}(\tilde\theta(t)) + L_j^{(n)}(\tilde\theta(t))}{2R_j^{(n)}(\tilde\theta(t))}\right) \qquad (9)$$

for $t \in [nh, (n+1)h]$ with the initial condition $\tilde\theta(0) = \theta^{(0)}$, where

$$R_j^{(n)}(\theta) := \sqrt{\frac{\sum_{k=0}^n\rho^{n-k}(1-\rho)\,(\nabla_j E_k(\theta))^2}{1-\rho^{n+1}} + \varepsilon},\qquad M_j^{(n)}(\theta) := \frac{1}{1-\beta^{n+1}}\sum_{k=0}^n\beta^{n-k}(1-\beta)\,\nabla_j E_k(\theta),$$

$$\tilde L_j^{(n)}(\theta) := \frac{1}{1-\beta^{n+1}}\sum_{k=0}^n\beta^{n-k}(1-\beta)\sum_{i=1}^p\nabla_{ij}E_k(\theta)\sum_{l=k}^{n-1}\frac{M_i^{(l)}(\theta)}{R_i^{(l)}(\theta)},\qquad L_j^{(n)}(\theta) := \frac{1}{1-\beta^{n+1}}\sum_{k=0}^n\beta^{n-k}(1-\beta)\sum_{i=1}^p\nabla_{ij}E_k(\theta)\,\frac{M_i^{(n)}(\theta)}{R_i^{(n)}(\theta)},$$

$$\tilde P_j^{(n)}(\theta) := \frac{1}{1-\rho^{n+1}}\sum_{k=0}^n\rho^{n-k}(1-\rho)\,\nabla_j E_k(\theta)\sum_{i=1}^p\nabla_{ij}E_k(\theta)\sum_{l=k}^{n-1}\frac{M_i^{(l)}(\theta)}{R_i^{(l)}(\theta)},\qquad P_j^{(n)}(\theta) := \frac{1}{1-\rho^{n+1}}\sum_{k=0}^n\rho^{n-k}(1-\rho)\,\nabla_j E_k(\theta)\sum_{i=1}^p\nabla_{ij}E_k(\theta)\,\frac{M_i^{(n)}(\theta)}{R_i^{(n)}(\theta)}.$$

Then, for any fixed positive time horizon $T > 0$ there exists a constant $C$ (depending on $T$, $\rho$, $\beta$, $\varepsilon$) such that for any step size $h \in (0, T)$ we have $\|\tilde\theta(nh) - \theta^{(n)}\| \le Ch^2$ for $n \in \{0, \ldots, \lfloor T/h\rfloor\}$.

Remark 3.2. In the full-batch setting $E_k \equiv E$, the terms above simplify to

$$R_j^{(n)}(\theta) = \big(|\nabla_j E(\theta)|^2 + \varepsilon\big)^{1/2},\qquad M_j^{(n)}(\theta) = \nabla_j E(\theta),$$
$$\tilde L_j^{(n)}(\theta) = \left(\frac{\beta}{1-\beta} - \frac{(n+1)\beta^{n+1}}{1-\beta^{n+1}}\right)L_j^{(n)}(\theta),\qquad L_j^{(n)}(\theta) = \nabla_j\|\nabla E(\theta)\|_{1,\varepsilon},$$
$$\tilde P_j^{(n)}(\theta) = \left(\frac{\rho}{1-\rho} - \frac{(n+1)\rho^{n+1}}{1-\rho^{n+1}}\right)P_j^{(n)}(\theta),\qquad P_j^{(n)}(\theta) = \nabla_j E(\theta)\,\nabla_j\|\nabla E(\theta)\|_{1,\varepsilon}.$$
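One way to sanity-check the full-batch simplification in Remark 3.2 is to verify numerically that $L_j^{(n)}(\theta) = \sum_i \nabla_{ij}E(\theta)\,\nabla_i E(\theta)/\sqrt{|\nabla_i E(\theta)|^2+\varepsilon}$ coincides with the partial derivative of the perturbed one-norm of the gradient. The sketch below (ours, not from the paper; the toy quadratic loss is an arbitrary choice) does this with a central finite-difference check.

```python
import numpy as np

rng = np.random.default_rng(0)
p, eps = 4, 1e-3
A = rng.standard_normal((p, p))
A = (A + A.T) / 2                      # symmetric Hessian of a toy quadratic loss
b = rng.standard_normal(p)

grad_E = lambda th: A @ th + b          # gradient of E(th) = 0.5 th^T A th + b^T th

def perturbed_norm_of_grad(th):
    return np.sqrt(grad_E(th) ** 2 + eps).sum()   # ||grad E(th)||_{1,eps}

theta = rng.standard_normal(p)
g, H = grad_E(theta), A

# L_j as in Remark 3.2 (full batch): sum_i H_ij * g_i / sqrt(g_i^2 + eps)
L = H.T @ (g / np.sqrt(g ** 2 + eps))

# finite-difference gradient of theta -> ||grad E(theta)||_{1,eps}
d = 1e-6
L_fd = np.array([(perturbed_norm_of_grad(theta + d * e) -
                  perturbed_norm_of_grad(theta - d * e)) / (2 * d)
                 for e in np.eye(p)])
print(np.max(np.abs(L - L_fd)))   # should be tiny (finite-difference error only)
```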
If the iteration number n is large, (9) rapidly becomes as described in (2) and (3). Derivation sketch. The proof is in the appendix (this is Theorem E.4; see Appendix A for the overview of the appendix). To help the reader understand how the ODE (9) is obtained, apart from the full proof, we include an informal derivation in Appendix I, and provide an even briefer sketch of this derivation here. Our goal is to find such a trajectory θ(t) that, denoting On the Implicit Bias of Adam tn := nh, we have θj(tn+1) = θj(tn) h T (n) β,j q T (n) ρ,j + O(h3) with (10) T (n) β,j := 1 1 βn+1 k=0 βn k(1 β) j Ek θ(tk) , T (n) ρ,j := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε. Ignoring the terms of order higher than one in h, we can take a first-order approximation for granted: θj(tn+1) = θj(tn) h A(n) j ( θ(tn)) + O(h2) with A(n) j (θ) := M (n) j (θ)/R(n) j (θ). The challenge is to make this more precise by finding an equality of the form θj(tn+1) = θj(tn) h A(n) j θ(tn) + h2B(n) j θ(tn) + O(h3), (11) where B(n) j ( ) is a known function. This is a numerical iteration to which standard backward error analysis (Chapter IX in Ernst Hairer & Wanner (2006)) can be applied. Using the Taylor series, we can write j Ek θ(tn 1) = j Ek θ(tn) i=1 ij Ek θ(tn) θi(tn 1) θi(tn) + O(h2) = j Ek θ(tn) i=1 ij Ek θ(tn) M (n 1) i θ(tn 1) R(n 1) i θ(tn 1) + O(h2) = j Ek θ(tn) i=1 ij Ek θ(tn) M (n 1) i θ(tn) R(n 1) i θ(tn) + O(h2), where in the last equality we just replaced tn 1 with tn in the h-term since it only affects higher-order terms. Doing this again for steps n 2, n 3, . . ., and adding the resulting equations will give for k < n j Ek θ(tk) = j Ek θ(tn) i=1 ij Ek θ(tn) n 1 X M (l) i θ(tn) R(l) i θ(tn) + O(h2), where we could safely ignore that n k is not bounded because of exponential averaging. Taking the square of this formal power series in h, multiplying this square by ρn k(1 ρ) and summing up over k will give k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε = R(n) j θ(tn) 2 + 2h P (n) j θ(tn) + O(h2), which, using the expression for the inverse square root P r=0 arhr 1/2 of a formal power series P r=0 arhr, gives us an expansion T (n) ρ,j = 1 R(n) j θ(tn) h P (n) j θ(tn) R(n) j θ(tn) 3 + O(h2). A similar process provides an expansion for T (n) β,j : T (n) β,j = M (n) j θ(tn) + h L(n) j θ(tn) + O(h2). Inserting these two expansions into (10) leads to an expression for B(n) j ( ): B(n) j (θ) = M (n) j (θ)P (n) j (θ) R(n) j (θ)3 L(n) j (θ) R(n) j (θ) . We are now ready to find an ODE for t [tn, tn+1] of the form θ = f θ(t) whose discretization is (11). This is a task for standard backward error analysis: expand f( ) into f(θ) = f(θ) + hf1(θ) + O(h2). By Taylor expansion, we have θ(tn+1) = θ(tn) + h θ(t+ n ) + h2 2 θ(t+ n ) + O(h3) = θ(tn) + h f θ(tn) + hf1 θ(tn) + O(h2) 2 f θ(tn) f θ(tn) + O(h) + O(h3) = θ(tn) + hf θ(tn) + h2 f1 θ(tn) + f θ(tn) f θ(tn) It is left to equate the terms before the corresponding powers of h here and in (11), giving fj(θ) = A(n) j (θ) and f1,j(θ) = 1 2 Pp i=1 ifj(θ)fi(θ) + B(n) j (θ). Omitting some algebra, the piecewise ODE (9) is derived. 4. Illustration: Simple Bilinear Model We now analyze the effect of the first-order term for Adam in the same model as Barrett & Dherin (2021) and Ghosh et al. (2023) have studied. Namely, assume the parameter θ = (θ1, θ2) is 2-dimensional, and the loss is given by E(θ) := 1/2(3/2 2θ1θ2)2. The loss is minimized on the hyperbola θ1θ2 = 3/4. 
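The following sketch (ours, not the authors' plotting code; the step size, iteration count and hyperparameter grid are illustrative) reruns this bilinear example: it applies Adam to $E(\theta) = \frac12(3/2 - 2\theta_1\theta_2)^2$ from $(2.8, 3.5)$ and reports the final iterate together with $|\theta_1 - \theta_2|$, the distance from the diagonal $\theta_1 = \theta_2$ on which the gradient one-norm is minimized over each level set. With these illustrative settings the drift along the valley can be slow, so longer runs sharpen the picture; the trend to compare against Figure 2 is that larger $\beta$ should end closer to the diagonal and larger $\rho$ farther from it.

```python
import numpy as np

def loss_grad(theta):
    # E(theta) = 0.5 * (1.5 - 2*theta1*theta2)^2, minimized on theta1*theta2 = 3/4
    t1, t2 = theta
    r = 1.5 - 2.0 * t1 * t2
    return np.array([-2.0 * t2 * r, -2.0 * t1 * r])

def run_adam(beta, rho, h=1e-3, eps=1e-6, n_steps=200_000):
    theta = np.array([2.8, 3.5])
    m = np.zeros(2)
    nu = np.zeros(2)
    for n in range(n_steps):
        g = loss_grad(theta)
        nu = rho * nu + (1 - rho) * g ** 2
        m = beta * m + (1 - beta) * g
        theta = theta - h * (m / (1 - beta ** (n + 1))) / np.sqrt(nu / (1 - rho ** (n + 1)) + eps)
    return theta

for beta, rho in [(0.9, 0.9999), (0.9, 0.999), (0.9999, 0.999)]:
    theta = run_adam(beta, rho)
    print(f"beta={beta}, rho={rho}: final theta = {np.round(theta, 3)}, "
          f"|theta1 - theta2| = {abs(theta[0] - theta[1]):.3f}")
```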
We graph the trajectories of Adam in this case: the left part of Figure 2 shows that increasing $\beta$ forces the trajectory into the region with smaller $\|\nabla E(\theta)\|_1$, and increasing $\rho$ does the opposite. The right part shows that increasing the learning rate moves Adam towards the region with smaller $\|\nabla E(\theta)\|_1$ if $\beta > \rho$ (just like in the case of GD, except the norm is different if $\varepsilon$ is small compared to gradient components), and does the opposite if $\rho > \beta$. All these observations are exactly what Theorem 3.1 predicts.

Figure 2. Four panels of Adam trajectories on the bilinear example, for various combinations of $\beta$, $\rho$ and $h$, with $\varepsilon = 10^{-6}$ in all panels. Left: increasing $\beta$ moves the trajectory of Adam towards the regions with smaller one-norm of the gradient (if $\varepsilon$ is sufficiently small); increasing $\rho$ does the opposite. Right: increasing the learning rate moves the Adam trajectory towards the regions with smaller one-norm of the gradient if $\beta$ is significantly larger than $\rho$ and does the opposite if $\rho$ is larger than $\beta$. The cross denotes the limit point of gradient one-norm minimizers on the level sets $4\theta_1\theta_2 - 3 = c$. The minimizers are drawn with a dashed line. All Adam trajectories start at $(2.8, 3.5)$.

5. Numerical Experiments

As a first sanity check, we train a relatively small fully-connected neural network with around $10^5$ parameters on the first 10,000 images of MNIST with full-batch Adam for 100 epochs and plot the value $\|\theta^{(n)} - \tilde\theta(t_n)\|_\infty$, i.e. the maximal weight difference between the Adam iteration and the piecewise ODE solution.¹ We see in Figure 3 that even on this very large time horizon the trajectories are close in infinity-norm. Further, we offer some preliminary empirical evidence that Adam (anti-)penalizes the perturbed one-norm of the gradients, as discussed in Section 2.

¹Since it makes little sense to numerically solve an ODE by further discretization, $\tilde\theta(t_n)$ is estimated using the iteration (11) with $O(h^3)$ ignored. Strictly speaking, this is not the trajectory obtained by the final backward error analysis step but rather the step immediately preceding it (after removing long-term memory but before converting the iteration to an ODE).

Figure 3. $\|\theta^{(n)} - \tilde\theta(t_n)\|_\infty$ for an MLP trained with full-batch Adam on truncated MNIST, where $\tilde\theta(t_n)$ is either the first-order (sign GD perturbed by $\varepsilon$) or the second-order approximation to Adam, for $h \in \{10^{-6},\, 3\cdot10^{-6},\, 10^{-5}\}$; $\beta = 0.9$, $\rho = 0.95$, $\varepsilon = 10^{-6}$. Precise definitions are provided in Appendix H, specifically (63).
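As a smaller-scale analogue of this sanity check (ours, not the paper's Appendix H code; the random least-squares problem and the step sizes are illustrative), the sketch below tracks $\|\theta^{(n)}_{\mathrm{Adam}} - \theta^{(n)}_{\mathrm{approx}}\|_\infty$, where the approximation is the discretized sign-GD-like flow $\theta_j^{(n+1)} = \theta_j^{(n)} - h\,\nabla_j E(\theta^{(n)})/\sqrt{|\nabla_j E(\theta^{(n)})|^2+\varepsilon}$ (the "first order" curve in Figure 3). Halving $h$ should roughly halve the gap at a fixed time horizon, consistent with the sign-GD limit of Ma et al. (2022) that Theorem 3.1 refines.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
y = rng.standard_normal(50)
grad_E = lambda th: X.T @ (X @ th - y) / len(y)   # least-squares full-batch gradient

def gap_at_horizon(h, T=1.0, beta=0.9, rho=0.95, eps=1e-6):
    n_steps = int(T / h)
    th_adam = np.zeros(10)
    th_sign = np.zeros(10)
    m = np.zeros(10)
    nu = np.zeros(10)
    for n in range(n_steps):
        g = grad_E(th_adam)
        nu = rho * nu + (1 - rho) * g ** 2
        m = beta * m + (1 - beta) * g
        th_adam -= h * (m / (1 - beta ** (n + 1))) / np.sqrt(nu / (1 - rho ** (n + 1)) + eps)
        gs = grad_E(th_sign)
        th_sign -= h * gs / np.sqrt(gs ** 2 + eps)   # discretized perturbed sign-GD flow
    return np.max(np.abs(th_adam - th_sign))

for h in (1e-2, 5e-3, 2.5e-3):
    print(f"h={h}: ||Adam - first-order approximation||_inf at T=1 is {gap_at_horizon(h):.2e}")
```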
Ma et al. (2022) divide training regimes of Adam into three categories: the spike regime, when $\rho$ is much larger than $\beta$, in which the training loss curve contains very large spikes and the training is obviously unstable; the (stable) oscillation regime, when $\rho$ is sufficiently close to $\beta$, in which the loss curve contains fast and small oscillations; and the divergence regime, when $\beta$ is much larger than $\rho$, in which Adam diverges. We exclude the last regime. In the spike regime, the loss spikes to large values at irregular intervals. This has also been observed in the context of large transformers, and mitigation strategies have been proposed in Chowdhery et al. (2022) and Molybog et al. (2023). Since it is unlikely that an unstable Adam trajectory can be meaningfully approximated by a smooth ODE solution, we reduce the incidence of large spikes by only considering $\beta$ and $\rho$ that are not too far apart, which is what Ma et al. (2022) recommend to do in practice.

We train ResNet-50, CNNs and Vision Transformers (Dosovitskiy et al., 2020) on the CIFAR-10 dataset with full-batch Adam. In this section, we provide the results for ResNet-50; the pictures for CNNs and Transformers are similar and are given in Appendix H.4. Figure 4 shows that in the stable oscillation regime increasing $\rho$ appears to increase the perturbed one-norm (consistent with our analysis: the smaller $\rho$, the more this norm is penalized) and decrease the test accuracy. Figure 5 shows that increasing $\beta$ appears to decrease the perturbed one-norm (consistent with our analysis: the larger $\beta$, the more this norm is penalized) and increase the test accuracy. The picture confirms the finding in Ghosh et al. (2023) (for the momentum parameter in momentum GD).

Figure 4. ResNet-50 on CIFAR-10 trained with full-batch Adam, $\varepsilon = 10^{-8}$, $\beta = 0.99$; two panels show the perturbed 1-norm and the test accuracy as functions of $\rho$ for several step sizes $h$ between $2.5\cdot10^{-5}$ and $1.5\cdot10^{-4}$. As $\rho$ increases, the norm rises and the test accuracy falls. We train longer than necessary for near-perfect classification on the train dataset (at least 2-3 thousand epochs), and the test accuracies plotted here are maximal. The perturbed norms are also maximal after excluding the initial training period (i.e., the plotted norms are at peaks of the hills described in Section 5). All results are averaged across five runs with different initialization seeds. Additional evidence and more details are provided in Appendix H.

Figure 5. ResNet-50 on CIFAR-10 trained with full-batch Adam, $\rho = 0.999$, $\varepsilon = 10^{-8}$; two panels show the perturbed 1-norm and the test accuracy as functions of $\beta$ for several step sizes $h$ between $2.5\cdot10^{-5}$ and $10^{-4}$. The perturbed one-norm falls as $\beta$ increases, and the test accuracy rises. Both metrics are calculated as in Figure 4. All results are averaged across three runs with different initialization seeds.

Figure 6 shows the graphs of $\|\nabla E\|_{1,\varepsilon}$ as functions of the epoch number. The norm decreases, then rises again, and then decreases further until it flatlines.² Throughout most of the training, the larger $\beta$, the smaller the norm. The hills of the norm curves are higher with smaller $\beta$ and larger $\rho$. This is consistent with our analysis because the larger $\rho$ compared to $\beta$, the more $\|\nabla E\|_{1,\varepsilon}$ is prevented from falling by the correction term.

Figure 6. Plots of $\|\nabla E\|_{1,\varepsilon}$ after each epoch for full-batch Adam, $h = 10^{-4}$, $\varepsilon = 10^{-8}$; ResNet-50 on CIFAR-10. Left: $\rho = 0.999$ with $\beta$ ranging from about 0.957 to 0.997; right: $\beta = 0.97$ with $\rho$ ranging from about 0.948 to 0.992.

6. Limitations and Future Directions

As far as we know, an assumption similar to (8) is explicitly or implicitly present in all previous work on backward error analysis of gradient-based machine learning algorithms. (Recently, Beneventano (2023) weakened this assumption for SGD without replacement, but their focus is somewhat different.)
There is evidence that large-batch algorithms often operate near or at the edge of stability (Cohen et al., 2021; 2022), in which the largest eigenvalue of the hessian can be large, making it unclear whether the higher-order partial derivatives can safely be assumed bounded near op- 2Note that the perturbed one-norm cannot be near-zero at the end of training because it is bounded from below by p ε. On the Implicit Bias of Adam timality. In addition, as Smith et al. (2021) point out, in the mini-batch setting backward error analysis can be more accurate. We leave a qualitative analysis of the behavior of first-order terms in Theorem 3.1 in the mini-batch case as a future direction. Relatedly, Adam does not always generalize worse than SGD: for transformers, Adam often outperforms (Zhang et al., 2020; Kumar et al., 2022). Moreover, for NLP tasks a long time can be spent training close to an interpolating solution. Our analysis suggests that in the latter regime the anti-regularization effect disappears, which does indeed confirm the finding that generalization can be better. However, we believe this explanation is not complete, and more work is needed to connect the implicit bias to the training dynamics of transformers. In addition, the constant C in Theorem 3.1 goes to infinity as ε goes to zero. Theoretically, our proof does not exclude the case where for very small ε the trajectory of the piecewise ODE is only close to the Adam trajectory for small, suboptimal learning rates, at least at later stages of learning. (For the initial learning period, this is not a problem.) It appears to also be true of Proposition 1 in Ma et al. (2022) (zeroth-order approximation by sign-GD). This is especially noticeable in the large-spike regime of training (see Section 5) which, despite being obviously unstable, can still lead to acceptable test errors. It would be worthwhile to investigate this regime in detail. Acknowledgments We extend our special thanks to Boris Hanin, Sam Smith, and the anonymous reviewers for their insightful comments and suggestions that greatly enhanced this work. We are also grateful to Jianqing Fan, Pier Beneventano, and Rae Yu for engaging and productive discussions. Cattaneo gratefully acknowledges financial support from the National Science Foundation through DMS-2210561 and SES-2241575. Klusowski gratefully acknowledges financial support from the National Science Foundation through CAREER DMS2239448. Additionally, we acknowledge the Princeton Research Computing resources, coordinated by the Princeton Institute for Computational Science and Engineering (PICSci E) and the Office of Information Technology s Research Computing. Impact Statement This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. Andriushchenko, M., Croce, F., M uller, M., Hein, M., and Flammarion, N. A modern look at the relationship between sharpness and generalization. In Proceedings of the 40th International Conference on Machine Learning, ICML 23. JMLR.org, 2023. Balles, L., Pedregosa, F., and Roux, N. L. The geometry of sign gradient descent. ar Xiv preprint ar Xiv:2002.08056, 2020. Barakat, A. and Bianchi, P. Convergence and dynamical behavior of the adam algorithm for nonconvex stochastic optimization. SIAM Journal on Optimization, 31(1):244 274, 2021. Barrett, D. and Dherin, B. Implicit gradient regularization. 
In International Conference on Learning Representations, 2021. URL https://openreview.net/forum? id=3q5Iq Urkc F. Beneventano, P. On the trajectories of sgd without replacement. ar Xiv preprint ar Xiv:2312.16143, 2023. Bernstein, J., Wang, Y.-X., Azizzadenesheli, K., and Anandkumar, A. signsgd: Compressed optimisation for nonconvex problems. In International Conference on Machine Learning, pp. 560 569. PMLR, 2018. Beyer, L., Zhai, X., and Kolesnikov, A. Better plain vit baselines for imagenet-1k. ar Xiv preprint ar Xiv:2205.01580, 2022. Chen, J., Zhou, D., Tang, Y., Yang, Z., Cao, Y., and Gu, Q. Closing the generalization gap of adaptive gradient methods in training deep neural networks. ar Xiv preprint ar Xiv:1806.06763, 2018. URL https:// arxiv.org/pdf/1806.06763. Chen, X., Hsieh, C.-J., and Gong, B. When vision transformers outperform resnets without pre-training or strong data augmentations. ar Xiv preprint ar Xiv:2106.01548, 2021. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. Palm: Scaling language modeling with pathways. ar Xiv preprint ar Xiv:2204.02311, 2022. Cohen, J., Kaur, S., Li, Y., Kolter, J. Z., and Talwalkar, A. Gradient descent on neural networks typically occurs at the edge of stability. In International Conference on Learning Representations, 2021. URL https: //openreview.net/forum?id=jh-r Ttvk Ge M. Cohen, J. M., Ghorbani, B., Krishnan, S., Agarwal, N., Medapati, S., Badura, M., Suo, D., Cardoze, D., Nado, Z., Dahl, G. E., et al. Adaptive gradient methods at the edge of stability. ar Xiv preprint ar Xiv:2207.14484, 2022. On the Implicit Bias of Adam Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. ar Xiv preprint ar Xiv:2010.11929, 2020. Ernst Hairer, C. L. and Wanner, G. Geometric numerical integration. Springer-Verlag, Berlin, 2 edition, 2006. ISBN 3-540-30663-3. Even, M., Pesme, S., Gunasekar, S., and Flammarion, N. (s) gd over diagonal linear networks: Implicit regularisation, large stepsizes and edge of stability. ar Xiv preprint ar Xiv:2302.08982, 2023. Farazmand, M. Multiscale analysis of accelerated gradient methods. SIAM Journal on Optimization, 30(3):2337 2354, 2020. Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2021. URL https://openreview. net/forum?id=6Tm1mposlr M. Franc a, G., Jordan, M. I., and Vidal, R. On dissipative symplectic integration with applications to gradient-based optimization. Journal of Statistical Mechanics: Theory and Experiment, 2021(4):043402, 2021. Ghosh, A., Lyu, H., Zhang, X., and Wang, R. Implicit regularization in heavy-ball momentum accelerated stochastic gradient descent. In The Eleventh International Conference on Learning Representations, 2023. URL https: //openreview.net/forum?id=Zzd Bht EH9y B. Granziol, D. Flatness is a false friend. ar Xiv preprint ar Xiv:2006.09091, 2020. Gunasekar, S., Lee, J., Soudry, D., and Srebro, N. Characterizing implicit bias in terms of optimization geometry. In International Conference on Machine Learning, pp. 1832 1841. PMLR, 2018a. Gunasekar, S., Lee, J. D., Soudry, D., and Srebro, N. Implicit bias of gradient descent on linear convolutional networks. 
Advances in neural information processing systems, 31, 2018b. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770 778, 2016. Hoffer, E., Hubara, I., and Soudry, D. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. Advances in neural information processing systems, 30, 2017. Ji, Z. and Telgarsky, M. Gradient descent aligns the layers of deep linear networks. ar Xiv preprint ar Xiv:1810.02032, 2018a. Ji, Z. and Telgarsky, M. Risk and parameter convergence of logistic regression. ar Xiv preprint ar Xiv:1803.07300, 2018b. Ji, Z. and Telgarsky, M. The implicit bias of gradient descent on nonseparable data. In Conference on Learning Theory, pp. 1772 1798. PMLR, 2019. Ji, Z. and Telgarsky, M. Directional convergence and alignment in deep learning. Advances in Neural Information Processing Systems, 33:17176 17186, 2020. Jiang, K., Malik, D., and Li, Y. How does adaptive optimization impact local neural network geometry? ar Xiv preprint ar Xiv:2211.02254, 2022. Keskar, N. S. and Socher, R. Improving generalization performance by switching from adam to sgd. ar Xiv preprint ar Xiv:1712.07628, 2017. Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. Kovachki, N. B. and Stuart, A. M. Continuous time analysis of momentum methods. Journal of Machine Learning Research, 22(17):1 40, 2021. Kumar, A., Shen, R., Bubeck, S., and Gunasekar, S. How to fine-tune vision models with sgd. ar Xiv preprint ar Xiv:2211.09359, 2022. Kunin, D., Sagastuy-Brena, J., Ganguli, S., Yamins, D. L., and Tanaka, H. Neural mechanics: Symmetry and broken conservation laws in deep learning dynamics. ar Xiv preprint ar Xiv:2012.04728, 2020. Lee, C.-Y., Xie, S., Gallagher, P., Zhang, Z., and Tu, Z. Deeply-supervised nets. In Artificial intelligence and statistics, pp. 562 570. Pmlr, 2015. Li, Q., Tai, C., and E, W. Stochastic modified equations and adaptive stochastic gradient algorithms. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 2101 2110. PMLR, 8 2017. URL https://proceedings. mlr.press/v70/li17f.html. Lyu, K. and Li, J. Gradient descent maximizes the margin of homogeneous neural networks. ar Xiv preprint ar Xiv:1906.05890, 2019. On the Implicit Bias of Adam Ma, C., Wu, L., and Weinan, E. A qualitative study of the dynamic behavior for adaptive gradient algorithms. In Mathematical and Scientific Machine Learning, pp. 671 692. PMLR, 2022. Malladi, S., Lyu, K., Panigrahi, A., and Arora, S. On the sdes and scaling rules for adaptive gradient algorithms. Advances in Neural Information Processing Systems, 35: 7697 7711, 2022. Miyagawa, T. Toward equation of motion for deep neural networks: Continuous-time gradient descent and discretization error analysis. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https: //openreview.net/forum?id=qq84D17BPu. Molybog, I., Albert, P., Chen, M., De Vito, Z., Esiobu, D., Goyal, N., Koura, P. S., Narang, S., Poulton, A., Silva, R., et al. A theory on adam instability in large-scale machine learning. ar Xiv preprint ar Xiv:2304.09871, 2023. Nacson, M. S., Gunasekar, S., Lee, J., Srebro, N., and Soudry, D. 
Lexicographic and depth-sensitive margins in homogeneous and non-homogeneous deep models. In International Conference on Machine Learning, pp. 4683 4692. PMLR, 2019a. Nacson, M. S., Lee, J., Gunasekar, S., Savarese, P. H. P., Srebro, N., and Soudry, D. Convergence of gradient descent on separable data. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3420 3428. PMLR, 2019b. Nacson, M. S., Srebro, N., and Soudry, D. Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3051 3059. PMLR, 2019c. Qian, Q. and Qian, X. The implicit bias of adagrad on separable data. Advances in Neural Information Processing Systems, 32, 2019. Rosca, M. C., Wu, Y., Dherin, B., and Barrett, D. Discretization drift in two-player games. In International Conference on Machine Learning, pp. 9064 9074. PMLR, 2021. Smith, S. L., Dherin, B., Barrett, D., and De, S. On the origin of implicit regularization in stochastic gradient descent. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum? id=rq_Qr0c1Hyo. Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822 2878, 2018. Tieleman, T., Hinton, G., et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4 (2):26 31, 2012. Wang, B., Meng, Q., Chen, W., and Liu, T.-Y. The implicit bias for adaptive optimization algorithms on homogeneous neural networks. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 10849 10858. PMLR, 7 2021. URL https://proceedings.mlr.press/ v139/wang21q.html. Wang, B., Meng, Q., Zhang, H., Sun, R., Chen, W., Ma, Z.- M., and Liu, T.-Y. Does momentum change the implicit regularization on separable data? Advances in Neural Information Processing Systems, 35:26764 26776, 2022. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. The marginal value of adaptive gradient methods in machine learning. Advances in neural information processing systems, 30, 2017. Woodworth, B., Gunasekar, S., Lee, J. D., Moroshko, E., Savarese, P., Golan, I., Soudry, D., and Srebro, N. Kernel and rich regimes in overparametrized models. In Conference on Learning Theory, pp. 3635 3673. PMLR, 2020. Xie, Z., Wang, X., Zhang, H., Sato, I., and Sugiyama, M. Adaptive inertia: Disentangling the effects of adaptive learning rate and momentum. In International conference on machine learning, pp. 24430 24459. PMLR, 2022. Zhang, J., Karimireddy, S. P., Veit, A., Kim, S., Reddi, S., Kumar, S., and Sra, S. Why are adaptive methods good for attention models? Advances in Neural Information Processing Systems, 33:15383 15393, 2020. Zhao, Y., Zhang, H., and Hu, X. Penalizing gradient norm for efficiently improving generalization in deep learning. In International Conference on Machine Learning, pp. 26982 26992. PMLR, 2022. Zhou, P., Feng, J., Ma, C., Xiong, C., Hoi, S. C. H., et al. Towards theoretically understanding why sgd generalizes better than adam in deep learning. Advances in Neural Information Processing Systems, 33:21285 21296, 2020. On the Implicit Bias of Adam A. Overview The appendix provide some omitted details and proofs. 
We consider two algorithms: RMSProp and Adam, and two versions of each algorithm (with the numerical stability ε parameter inside and outside of the square root in the denominator). This means there are four main theorems: Theorem B.4, Theorem C.4, Theorem D.4 and Theorem E.4, each residing in the section completely devoted to one algorithm. The simple induction argument taken from Ghosh et al. (2023), essentially the same for each of these theorems, is based on an auxiliary result whose corresponding versions are Theorem B.3, Theorem C.3, Theorem D.3 and Theorem E.3. The proof of this result is also elementary but long, and it is done by a series of lemmas in Appendix F and Appendix G. Out of these four, we only prove Theorem B.3 since the other three results are proven in the same way with obvious changes. Appendix H contains some details about the numerical experiments. A.1. Notation We denote the loss of the kth minibatch as a function of the network parameters θ Rp by Ek(θ), and in the full-batch setting we omit the index and write E(θ). As usual, E means the gradient of E, and nabla with indices means partial derivatives, e. g. ijs E is a shortcut for 3E θi θj θs . The letter T > 0 will always denote a finite time horizon of the ODEs, h will always denote the training step size, and we will replace nh with tn when convenient, where n {0, 1, . . .} is the step number. We will use the same notation for the iteration of the discrete algorithm n θ(k)o k Z 0 , the piecewise ODE solution θ(t) and some auxiliary terms for each of the four algorithms: see Definition B.1, Definition C.1, Definition D.1, Definition E.1. This way, we avoid cluttering the notation significantly. We are careful to reference the relevant definition in all theorem statements. B. RMSProp with ε Outside the Square Root Definition B.1. In this section, for some θ(0) Rp, ν(0) = 0 Rp, ρ (0, 1), let the sequence of p-vectors n θ(k)o k Z 0 be defined for n 0 by ν(n+1) j = ρν(n) j + (1 ρ) j En θ(n) 2 , θ(n+1) j = θ(n) j h q ν(n+1) j + ε j En θ(n) . (12) Let θ(t) be defined as a continuous solution to the piecewise ODE θj(t) = j En θ(t) R(n) j θ(t) + ε j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) Pp i=1 ij En θ(t) i En( θ(t)) R(n) i ( θ(t))+ε 2 R(n) j ( θ(t)) + ε for t [tn, tn+1] with the initial condition θ(0) = θ(0), where R(n)(θ), P(n)(θ) and P(n)(θ) are p-dimensional functions with components R(n) j (θ) := k=0 ρn k(1 ρ)( j Ek(θ))2, P (n) j (θ) := k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) R(l) i (θ) + ε , On the Implicit Bias of Adam P (n) j (θ) := k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) i En(θ) R(n) i (θ) + ε . Assumption B.2. 1. For some positive constants M1, M2, M3, M4 we have sup i sup k sup θ | i Ek(θ)| M1, sup i,j sup k sup θ | ij Ek(θ)| M2, sup i,j,s sup k sup θ | ijs Ek(θ)| M3, sup i,j,s,r sup k sup θ | ijsr Ek(θ)| M4. 2. For some R > 0 we have for all n {0, 1, . . . , T/h } R(n) j θ(tn) R, k=0 ρn k(1 ρ) j Ek θ(tk) 2 R2, where θ(t) is defined in Definition B.1. Theorem B.3 (RMSProp with ε outside: local error bound). Suppose Assumption B.2 holds. Then for all n {0, 1, . . . , T/h }, j {1, . . . , p} θj(tn+1) θj(tn) + h j En θ(tn) Pn k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε for a positive constant C1 depending on ρ. The proof of Theorem B.3 is conceptually simple but very technical, and we delay it until Appendix G. For now assuming it as given and combining it with a simple induction argument gives a global error bound which follows. Theorem B.4 (RMSProp with ε outside: global error bound). 
Suppose Assumption B.2 holds, and k=0 ρn k(1 ρ) j Ek θ(k) 2 R2 for n θ(k)o k Z 0 defined in Definition B.1. Then there exist positive constants d1, d2, d3 such that for all n {0, 1, . . . , T/h } en d1ed2nhh2 and en+1 en d3ed2nhh3, where en := θ(tn) θ(n). The constants can be defined as d2 := 1 + M2 p M 2 1 R(R + ε) + 1 d1 d3 := C1d2. Proof. We will show this by induction over n, the same way an analogous bound is shown in Ghosh et al. (2023). On the Implicit Bias of Adam The base case is n = 0. Indeed, e0 = θ(0) θ(0) = 0. Then the jth component of e1 e0 is [e1 e0]j = [e1]j = θj(t1) θ(0) j + h j E0 θ(0) (1 ρ) j E0 θ(0) 2 + ε = θj(t1) θj(t0) + h j E0 θ(t0) (1 ρ) j E0 θ(t0) 2 + ε By Theorem B.3, the absolute value of the right-hand side does not exceed C1h3, which means e1 e0 C1h3 p. Since C1 p d3, the base case is proven. Now suppose that for all k = 0, 1, . . . , n 1 the claim ek d1ed2khh2 and ek+1 ek d3ed2khh3 is proven. Then en (a) en 1 + en en 1 d1ed2(n 1)hh2 + d3ed2(n 1)hh3 = d1ed2(n 1)hh2 1 + d3 h (b) d1ed2(n 1)hh2(1 + d2h) (c) d1ed2(n 1)hh2 ed2h = d1ed2nhh2, where (a) is by the triangle inequality, (b) is by d3/d1 d2, in (c) we used 1 + x ex for all x 0. Next, combining Theorem B.3 with (12), we have [en+1 en]j C1h3 + h A + ε j En θ(n) where to simplify notation we put k=0 ρn k(1 ρ) j Ek θ(tk) 2 , k=0 ρn k(1 ρ) j Ek θ(k) 2 . Using A R2, B R2, we have 1 2R(R + ε)2 . (15) But since j Ek θ(tk) 2 j Ek θ(k) 2 = j Ek θ(tk) j Ek θ(k) j Ek θ(tk) + j Ek θ(k) 2M1 j Ek θ(tk) j Ek θ(k) 2M1M2 p θ(tk) θ(k) , On the Implicit Bias of Adam |A B| 2M1M2 p k=0 ρn k(1 ρ) θ(tk) θ(k) . (16) Combining (15) and (16), we obtain A + ε j En θ(n) j En θ(tn) 1 j En θ(tn) j En θ(n) M1 2M1M2 p Pn k=0 ρn k(1 ρ) θ(tk) θ(k) 2R(R + ε)2 + M2 p θ(tn) θ(n) = M 2 1 M2 p R(R + ε)2 k=0 ρn k(1 ρ) θ(tk) θ(k) + M2 p (a) M 2 1 M2 p R(R + ε)2 k=0 ρn k(1 ρ)d1ed2khh2 + M2 p R + ε d1ed2nhh2, (17) where in (a) we used the induction hypothesis and that the bound on en is already proven. Now note that since 0 < ρe d2h ρ, we have Pn k=0 ρe d2h k P k=0 ρk = 1 1 ρ, which is rewritten as k=0 ρn k(1 ρ)ed2kh ed2nh. Then we can continue (17): A + ε j En θ(n) M 2 1 R(R + ε) + 1 d1ed2nhh2 (18) Again using 1 ed2nh, we conclude from (14) and (18) that en+1 en C1 + M2 p M 2 1 R(R + ε) + 1 d1 p | {z } d3 finishing the induction step. B.1. RMSProp with ε outside: full-batch In the full-batch setting Ek E, the terms in (13) simplify to R(n) j (θ) = | j E(θ)| p P (n) j (θ) = k=0 ρn k(1 ρ) j E(θ) i=1 ij E(θ) | i E(θ)| p 1 ρl+1 + ε , P (n) j (θ) = 1 ρn+1 j E(θ) i=1 ij E(θ) i E(θ) | i E(θ)| p 1 ρn+1 + ε . If ε is small and the iteration number n is large, (13) simplifies to θj(t) = sign j E( θ(t)) + h ρ 1 ρ Pp i=1 ij E( θ(t)) sign i E( θ(t)) j E( θ(t)) On the Implicit Bias of Adam = j E( θ(t)) 1 j E( θ(t)) + h ρ 1 ρ j E( θ(t)) 1 C. RMSProp with ε Inside the Square Root Definition C.1. In this section, for some θ(0) Rp, ν(0) = 0 Rp, ρ (0, 1), let the sequence of p-vectors n θ(k)o k Z 0 be defined for n 0 by ν(n+1) j = ρν(n) j + (1 ρ) j En θ(n) 2 , θ(n+1) j = θ(n) j h q ν(n+1) j + ε j En θ(n) . 
(19) Let θ(t) be defined as a continuous solution to the piecewise ODE θj(t) = j En θ(t) R(n) j θ(t) j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2R(n) j θ(t) 3 Pp i=1 ij En θ(t) i En( θ(t)) R(n) i ( θ(t)) 2R(n) j ( θ(t)) for t [tn, tn+1] with the initial condition θ(0) = θ(0), where R(n)(θ), P(n)(θ) and P(n)(θ) are p-dimensional functions with components R(n) j (θ) := k=0 ρn k(1 ρ)( j Ek(θ))2 + ε, P (n) j (θ) := k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) R(l) i (θ) , P (n) j (θ) := k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) i En(θ) R(n) i (θ) . Assumption C.2. For some positive constants M1, M2, M3, M4 we have sup i sup k sup θ | i Ek(θ)| M1, sup i,j sup k sup θ | ij Ek(θ)| M2, sup i,j,s sup k sup θ | ijs Ek(θ)| M3, sup i,j,s,r sup k sup θ | ijsr Ek(θ)| M4. Theorem C.3 (RMSProp with ε inside: local error bound). Suppose Assumption C.2 holds. Then for all n {0, 1, . . . , T/h }, j {1, . . . , p} θj(tn+1) θj(tn) + h j En θ(tn) Pn k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε for a positive constant C2 depending on ρ, where θ(t) is defined in Definition C.1. The argument is the same as for Theorem B.3. On the Implicit Bias of Adam Theorem C.4 (RMSProp with ε inside: global error bound). Suppose Assumption C.2 holds. Then there exist positive constants d4, d5, d6 such that for all n {0, 1, . . . , T/h } en d4ed5nhh2 and en+1 en d6ed5nhh3, where en := θ(tn) θ(n); θ(t) and n θ(k)o k Z 0 are defined in Definition C.1. The constants can be defined as d5 := 1 + M2 p ε M 2 1 ε + 1 d4 d6 := C2d5. The argument is the same as for Theorem B.4. C.1. RMSProp with ε Inside: Full-Batch In the full-batch setting Ek E, the terms in (20) simplify to R(n) j (θ) = q | j E(θ)|2(1 ρn+1) + ε, P (n) j (θ) = k=0 ρn k(1 ρ) j E(θ) i=1 ij E(θ) | i E(θ)|2(1 ρl+1) + ε , P (n) j (θ) = (1 ρn+1) j E(θ) i=1 ij E(θ) i E(θ) p | i E(θ)|2(1 ρn+1) + ε . If the iteration number n is large, (20) rapidly becomes θj(t) = 1 q | j E( θ(t))|2 + ε j E( θ(t)) + correctionj θ(t) , correctionj θ(t) := h 1 ρ + 1 + ρ 1 ρ ε | j E( θ(t))|2 + ε j E( θ(t)) 1,ε. D. Adam with ε Outside the Square Root Definition D.1. In this section, for some θ(0) Rp, ν(0) = 0 Rp, β, ρ (0, 1), let the sequence of p-vectors n θ(k)o k Z 0 be defined for n 0 by ν(n+1) j = ρν(n) j + (1 ρ) j En θ(n) 2 , m(n+1) j = βm(n) j + (1 β) j En θ(n) , θ(n+1) j = θ(n) j h m(n+1) j / 1 βn+1 ν(n+1) j /(1 ρn+1) + ε or, rewriting, θ(n+1) j = θ(n) j h 1 1 βn+1 Pn k=0 βn k(1 β) j Ek θ(k) 1 1 ρn+1 Pn k=0 ρn k(1 ρ) j Ek θ(k) 2 + ε On the Implicit Bias of Adam Let θ(t) be defined as a continuous solution to the piecewise ODE θj(t) = M (n) j θ(t) R(n) j θ(t) + ε M (n) j θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) 2L(n) j θ(t) + L(n) j θ(t) 2 R(n) j θ(t) + ε for t [tn, tn+1] with the initial condition θ(0) = θ(0), where R(n)(θ), P(n)(θ), P(n)(θ), M(n)(θ), L(n)(θ), L(n)(θ) are p-dimensional functions with components R(n) j (θ) := k=0 ρn k(1 ρ)( j Ek(θ))2/(1 ρn+1), M (n) j (θ) := 1 1 βn+1 k=0 βn k(1 β) j Ek(θ), L(n) j (θ) := 1 1 βn+1 k=0 βn k(1 β) i=1 ij Ek(θ) M (l) i (θ) R(l) i (θ) + ε , L(n) j (θ) := 1 1 βn+1 k=0 βn k(1 β) i=1 ij Ek(θ) M (n) i (θ) R(n) i (θ) + ε , P (n) j (θ) := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) M (l) i (θ) R(l) i (θ) + ε , P (n) j (θ) := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) M (n) i (θ) R(n) i (θ) + ε . Assumption D.2. 1. For some positive constants M1, M2, M3, M4 we have sup i sup k sup θ | i Ek(θ)| M1, sup i,j sup k sup θ | ij Ek(θ)| M2, sup i,j,s sup k sup θ | ijs Ek(θ)| M3, sup i,j,s,r sup k sup θ | ijsr Ek(θ)| M4. 2. For some R > 0 we have for all n {0, 1, . . . 
, T/h } R(n) j θ(tn) R, 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek θ(tk) 2 R2, where θ(t) is defined in Definition D.1. Theorem D.3 (Adam with ε outside: local error bound). Suppose Assumption D.2 holds. Then for all n {0, 1, . . . , T/h }, j {1, . . . , p} θj(tn+1) θj(tn) + h 1 1 βn+1 Pn k=0 βn k(1 β) j Ek θ(tk) 1 1 ρn+1 Pn k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε On the Implicit Bias of Adam for a positive constant C3 depending on β and ρ. The argument is the same as for Theorem B.3. Theorem D.4 (Adam with ε outside: global error bound). Suppose Assumption D.2 holds, and k=0 ρn k(1 ρ) j Ek θ(k) 2 R2 for n θ(k)o k Z 0 defined in Definition D.1. Then there exist positive constants d7, d8, d9 such that for all n {0, 1, . . . , T/h } en d7ed8nhh2 and en+1 en d9ed8nhh3, where en := θ(tn) θ(n). The constants can be defined as d8 := 1 + M2 p M 2 1 R(R + ε) + 1 d7 d9 := C3d8. Proof. Analogously to Theorem B.4, we will prove this by induction over n. The base case is n = 0. Indeed, e0 = θ(0) θ(0) = 0. Then the jth component of e1 e0 is [e1 e0]j = [e1]j = θj(t1) θ(0) j + h j E0 θ(0) j E0 θ(0) + ε = θj(t1) θj(t0) + h j E0 θ(t0) r j E0 θ(t0) 2 + ε By Theorem D.3, the absolute value of the right-hand side does not exceed C3h3, which means e1 e0 C3h3 p. Since C3 p d9, the base case is proven. Now suppose that for all k = 0, 1, . . . , n 1 the claim ek d7ed8khh2 and ek+1 ek d9ed8khh3 is proven. Then en (a) en 1 + en en 1 d7ed8(n 1)hh2 + d9ed8(n 1)hh3 = d7ed8(n 1)hh2 1 + d9 h (b) d7ed8(n 1)hh2(1 + d8h) (c) d7ed8(n 1)hh2 ed8h = d7ed8nhh2, where (a) is by the triangle inequality, (b) is by d9/d7 d8, in (c) we used 1 + x ex for all x 0. Next, combining Theorem D.3 with (22), we have [en+1 en]j C3h3 + h N where to simplify notation we put N := 1 1 βn+1 k=0 βn k(1 β) j Ek θ(k) , On the Implicit Bias of Adam N := 1 1 βn+1 k=0 βn k(1 β) j Ek θ(tk) , D := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek θ(k) 2 , D := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek θ(tk) 2 . Using D R2, D R2, we have 1 2R(R + ε)2 . (26) But since j Ek θ(k) 2 j Ek θ(tk) 2 = j Ek θ(k) j Ek θ(tk) j Ek θ(k) + j Ek θ(tk) 2M1 j Ek θ(k) j Ek θ(tk) 2M1M2 p θ(k) θ(tk) , |D D | 2M1M2 p k=0 ρn k(1 ρ) θ(k) θ(tk) . (27) |N N | 1 1 βn+1 k=0 βn k(1 β) j Ek θ(k) j Ek θ(tk) k=0 βn k(1 β)M2 p θ(k) θ(tk) . Combining (26), (27) and (28), we get N k=0 βn k(1 β)M1 2M1M2 p 2R(R + ε)2(1 ρn+1) k=0 ρn k(1 ρ) θ(k) θ(tk) + M2 p (R + ε)(1 βn+1) k=0 βn k(1 β) θ(k) θ(tk) = M 2 1 M2 p R(R + ε)2(1 ρn+1) k=0 ρn k(1 ρ) θ(k) θ(tk) + M2 p (R + ε)(1 βn+1) k=0 βn k(1 β) θ(k) θ(tk) (a) M 2 1 M2 p R(R + ε)2(1 ρn+1) k=0 ρn k(1 ρ)d7ed8khh2 + M2 p (R + ε)(1 βn+1) k=0 βn k(1 β)d7ed8khh2, (29) On the Implicit Bias of Adam where in (a) we used the induction hypothesis and that the bound on en is already proven. Now note that since 0 < ρe d8h < ρ, we have Pn k=0 ρe d8h k Pn k=0 ρk = 1 ρn+1 /(1 ρ), which is rewritten as k=0 ρn k(1 ρ)ed8kh ed8nh. By the same logic, k=0 βn k(1 β)ed8kh ed8nh. Then we can continue (29): M 2 1 R(R + ε) + 1 d7ed8nhh2 (30) Again using 1 ed8nh, we conclude from (25) and (30) that en+1 en C3 + M2 p M 2 1 R(R + ε) + 1 d7 p | {z } d9 finishing the induction step. E. Adam with ε Inside the Square Root Definition E.1. In this section, for some θ(0) Rp, ν(0) = 0 Rp, β, ρ (0, 1), let the sequence of p-vectors n θ(k)o k Z 0 be defined for n 0 by ν(n+1) j = ρν(n) j + (1 ρ) j En θ(n) 2 , m(n+1) j = βm(n) j + (1 β) j En θ(n) , θ(n+1) j = θ(n) j h m(n+1) j / 1 βn+1 ν(n+1) j /(1 ρn+1) + ε . 
Let θ(t) be defined as a continuous solution to the piecewise ODE θj(t) = M (n) j θ(t) R(n) j θ(t) M (n) j θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2R(n) j θ(t) 3 2L(n) j θ(t) + L(n) j θ(t) 2R(n) j θ(t) for t [tn, tn+1] with the initial condition θ(0) = θ(0), where R(n)(θ), P(n)(θ), P(n)(θ), M(n)(θ), L(n)(θ), L(n)(θ) On the Implicit Bias of Adam are p-dimensional functions with components R(n) j (θ) := k=0 ρn k(1 ρ)( j Ek(θ))2/(1 ρn+1) + ε, M (n) j (θ) := 1 1 βn+1 k=0 βn k(1 β) j Ek(θ), L(n) j (θ) := 1 1 βn+1 k=0 βn k(1 β) i=1 ij Ek(θ) M (l) i (θ) R(l) i (θ) , L(n) j (θ) := 1 1 βn+1 k=0 βn k(1 β) i=1 ij Ek(θ)M (n) i (θ) R(n) i (θ) , P (n) j (θ) := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) M (l) i (θ) R(l) i (θ) , P (n) j (θ) := 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ)M (n) i (θ) R(n) i (θ) . Assumption E.2. For some positive constants M1, M2, M3, M4 we have sup i sup k sup θ | i Ek(θ)| M1, sup i,j sup k sup θ | ij Ek(θ)| M2, sup i,j,s sup k sup θ | ijs Ek(θ)| M3, sup i,j,s,r sup k sup θ | ijsr Ek(θ)| M4. Theorem E.3 (Adam with ε inside: local error bound). Suppose Assumption E.2 holds. Then for all n {0, 1, . . . , T/h }, j {1, . . . , p} θj(tn+1) θj(tn) + h 1 1 βn+1 Pn k=0 βn k(1 β) j Ek θ(tk) 1 1 ρn+1 Pn k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε for a positive constant C4 depending on β and ρ. The argument is the same as for Theorem B.3. Theorem E.4 (Adam with ε inside: global error bound). Suppose Assumption E.2 holds for n θ(k)o k Z 0 defined in Definition E.1. Then there exist positive constants d10, d11, d12 such that for all n {0, 1, . . . , T/h } en d10ed11nhh2 and en+1 en d12ed11nhh3, where en := θ(tn) θ(n). The constants can be defined as d11 := 1 + M2 p ε M 2 1 ε + 1 d10 d12 := C4d11. The argument is the same as for Theorem D.4. On the Implicit Bias of Adam F. Bounding the Derivatives of the ODE Solution Our first goal is to argue that the first derivative of t 7 θj(t) is uniformly bounded in absolute value. To achieve this, we just need to bound all the terms on the right-hand side of the ODE (13). Lemma F.1. Suppose Assumption B.2 holds. Then for all n {0, 1, . . . , T/h } P (n) j (θ) C5, (34) P (n) j (θ) C6, (35) with constants C5, C6 defined as follows: C5 := p M 2 1 M2 R + ε ρ 1 ρ, C6 := p M 2 1 M2 R + ε . Proof of Lemma F.1. Both bounds are straightforward: P (n) j (θ) = sup θ k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) R(l) i (θ) + ε p M 2 1 M2 R + ε (1 ρ) k=0 ρn k(n k) p M 2 1 M2 R + ε (1 ρ) k=0 ρkk = C5. P (n) j (θ) = sup θ k=0 ρn k(1 ρ) j Ek(θ) i=1 ij Ek(θ) i En(θ) R(n) i (θ) + ε p M 2 1 M2 R + ε (1 ρ) k=0 ρn k p M 2 1 M2 R + ε = C6, concluding the proof of Lemma F.1. Lemma F.2. Suppose Assumption B.2 holds. Then the first derivative of t 7 θj(t) is uniformly over j and t [0, T] bounded in absolute value by some positive constant, say D1. Proof. This follows immediately from h T, (34), (35) and the definition of θ(t) given in (13). Our next goal is to argue that the second derivative of t 7 θj(t) is bounded in absolute value. For this, we need to bound the first derivatives of all the three additive terms on the right-hand side of (13). Lemma F.3. Suppose Assumption B.2 holds. Then for all n, k {0, 1, . . . , T/h }, j {1, . . . 
, p} we have sup t [0,T ] j En θ(t) C7, (36) sup t [tn,tn+1] i=1 ij Ek θ(t) θi(t) + i En θ(t) R(n) i θ(t) + ε sup t [0,T ] i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε (n k)C9 for k < n, (38) On the Implicit Bias of Adam sup t [0,T ] P (n) j θ(t) C10 + C14, (39) sup t [0,T ] P (n) j ( θ(t)) C15, (40) sup t [0,T ] i=1 ij Ek θ(t) i En θ(t) R(n) i θ(t) + ε sup t [0,T ] j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) sup t [0,T ] Pp i=1 ij En θ(t) i En( θ(t)) R(n) i ( θ(t))+ε 2 R(n) j ( θ(t)) + ε with constants C7, C8, C9, C10, C11, C12, C13, C14, C15, C16, C17, C18 defined as follows: C7 := p M2D1, M1(2C5 + C6) 2(R + ε)2R + p M1M2 C9 := p M1M2 C10 := D1p2 M1M 2 2 R + ε ρ 1 ρ, C11 := D1p M1M2 C12 := D1p2 M1M3 C13 := C12 + p M2 R + ε + M1 (R + ε)2 C11 M1M3 + M 2 2 + M 2 1 M 2 2 (R + ε)R C14 := M1C13 C15 := D1p2M1M 2 2 R + ε + D1p2M 2 1 M3 R + ε + D1p2M1M 2 2 R + ε + p M 2 1 M2C11 (R + ε)2 , C16 := 2C11 R(R + ε)3 + C11 (R + ε)4 , C17 := D1p M2 (2C5 + C6) 2(R + ε)2R + M1(2(C10 + C14) + C15) 2(R + ε)2R + M1(2C5 + C6)C16 C18 := 1 2(R + ε) R + ε + p2D1M 2 2 R + ε + p M1M2C11 R + ε C11 (R + ε)2 . Proof of Lemma F.3. We prove the inequalities one by one. The bound (36) is straightforward: j En θ(t) = i=1 ij En θ(t) θi(t) On the Implicit Bias of Adam The inequality (37) follows immediately from the fact that by (13) we have for t [tn, tn+1] θj(t) + j En θ(t) R(n) j θ(t) + ε h M1(2C5 + C6) 2(R + ε)2R + p M1M2 The bound (38) follows from the assumptions immediately. We will prove (39) by bounding the two additive terms on the right-hand side of the equality d dt P (n) j θ(t) k=0 ρn k(1 ρ) u=1 ju Ek θ(t) θu(t) i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε k=0 ρn k(1 ρ) j Ek θ(t) p X ij Ek θ(t) n 1 X R(l) i θ(t) + ε It is easily shown that the first term in (44) is bounded in absolute value by C10: k=0 ρn k(1 ρ) u=1 ju Ek θ(t) θu(t) i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε D1p2 M1M 2 2 R + ε (1 ρ) D1p2 M1M 2 2 R + ε (1 ρ) For the proof of (39), it is left to show that the second term in (44) is bounded in absolute value by C14. To bound Pp i=1 d dt ij Ek θ(t) Pn 1 l=k i El( θ(t)) R(l) i ( θ(t))+ε , we can use ij Ek θ(t) n 1 X R(l) i θ(t) + ε n ij Ek θ(t) o n 1 X R(l) i θ(t) + ε i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε By the Cauchy-Schwarz inequality applied twice, n ij Ek θ(t) o n 1 X R(l) i θ(t) + ε On the Implicit Bias of Adam ijs Ek θ(t) 2 v u u t v u u u u t R(l) i θ(t) + ε v u u u u t R(l) i θ(t) + ε Next, for any n and j d dt R(n) j θ(t) = 1 R(n) j θ(t) k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) θi(t) R(n) j θ(t) D1p M1M2 k=0 ρn k(1 ρ) C11. R(l) i θ(t) + ε Pp s=1 is El θ(t) θs(t) R(l) i θ(t) + ε + i El θ(t) d dt R(l) i θ(t) R(l) i θ(t) + ε 2 R + ε + M1 (R + ε)2 C11. We have obtained ij Ek θ(t) n 1 X R(l) i θ(t) + ε (n k)C13. (46) This gives a bound on the second term in (44): k=0 ρn k(1 ρ) j Ek θ(t) p X ij Ek θ(t) n 1 X R(l) i θ(t) + ε k=0 ρn k(1 ρ)(n k)C13 C14, concluding the proof of (39). We will prove (40) by bounding the four terms in the expression k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) i En θ(t) R(n) i θ(t) + ε = Term1 + Term2 + Term3 + Term4, k=0 ρn k(1 ρ) d n j Ek θ(t) o p X i=1 ij Ek θ(t) i En θ(t) R(n) i θ(t) + ε , On the Implicit Bias of Adam k=0 ρn k(1 ρ) j Ek θ(t) p X n ij Ek θ(t) o i En θ(t) R(n) i θ(t) + ε , k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) d dt n i En θ(t) o R(n) i θ(t) + ε , k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) i En θ(t) d dt R(n) i θ(t) R(n) i θ(t) + ε 2 . 
To bound Term1, use d dt n j Ek θ(t) o D1p M2, giving |Term1| D1p2M1M 2 2 R + ε k=0 ρn k(1 ρ) D1p2M1M 2 2 R + ε . To bound Term2, use d dt n ij Ek θ(t) o D1p M3, giving |Term2| D1p2M 2 1 M3 R + ε k=0 ρn k(1 ρ) D1p2M 2 1 M3 R + ε . To bound Term3, use d dt n i En θ(t) o D1p M2, giving |Term3| D1p2M1M 2 2 R + ε k=0 ρn k(1 ρ) D1p2M1M 2 2 R + ε . To bound Term4, use (45), giving |Term4| p M 2 1 M2C11 (R + ε)2 k=0 ρn k(1 ρ) p M 2 1 M2C11 (R + ε)2 . The proof of (40) is finished. The inequality (41) is already proven in (46). To prove (42), note that the bound (45) gives R(n) j θ(t) dt R(n) j θ(t) R(n) j θ(t) 2 C11 R(n) j θ(t) + ε dt R(n) j θ(t) R(n) j θ(t) + ε 2 C11 (R + ε)2 , (48) 1 R(n) j θ(t) + ε 2 dt R(n) j θ(t) R(n) j θ(t) + ε 3 2C11 (R + ε)3 . (49) On the Implicit Bias of Adam Combining two bounds above, we have d dt R(n) j θ(t) + ε 2 R(n) j ( θ(t)) 1 R(n) j θ(t) + ε 2 R(n) j ( θ(t)) + dt n R(n) j ( θ(t)) 1o R(n) j θ(t) + ε 2 C16. We are ready to conclude j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) R(n) j θ(t) + ε 2 R(n) j ( θ(t)) 1 C17. It is left to prove (43). Since i=1 ij En θ(t) i En θ(t) R(n) i θ(t) + ε and, as we have already seen in the argument for (40), i=1 ij En θ(t) i En θ(t) R(n) i θ(t) + ε R + ε + p2D1M 2 2 R + ε + p M1M2C11 we are ready to bound Pp i=1 ij En θ(t) i En( θ(t)) R(n) i ( θ(t))+ε 2 R(n) j ( θ(t)) + ε The proof of Lemma F.3 is concluded. Lemma F.4. Suppose Assumption B.2 holds. Then the second derivative of t 7 θj(t) is uniformly over j and t [0, T] bounded in absolute value by some positive constant, say D2. Proof. This follows from the definition of θ(t) given in (13), h T and that the first derivatives of all three terms in (13) are bounded by Lemma F.3. On the Implicit Bias of Adam Finally, we need to argue that the third derivative of t 7 θj(t) is bounded in absolute value. To achieve this, we need to bound the second derivatives of the terms on the right-hand side of (13). Lemma F.5. Suppose Assumption B.2 holds. Then for all n, k {0, 1, . . . , T/h }, j {1, . . . , p} sup t [0,T ] j En θ(t) C19, (50) sup t [0,T ] R(n) j θ(t) C20, (51) sup t [0,T ] R(n) j θ(t) + ε 2 C21, (52) sup t [0,T ] R(n) j θ(t) 1 C22, (53) sup t [0,T ] R(n) j θ(t) + ε 2 R(n) j θ(t) 1 C23, (54) sup t [0,T ] i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε (n k)C24 for k < n, (55) with constants C19, C20, C21, C22, C23, C24 defined as follows: C19 := p2M3D2 1 + p M2D2, R2 p M1M2D1 + 1 Rp2M 2 2 D2 1 + 1 Rp2M1M3D2 1 + 1 11 (R + ε)4 + 2C20 (R + ε)3 , 11 R3 + C20 11 R2(R + ε)3 + C22 (R + ε)2 , D1M 2 2 p + D1M1M3p (R + ε)2 + M1M2 11 (R + ε)3 + C20 (R + ε)2 + 2D2 1M2M3p2 + M2 D2 1M3p2 + D2M2p + M1 D2 1M4p2 + D2M3p Proof of Lemma F.5. We prove the inequalities one by one. 
The proof of (50) is straightforward: j En θ(t) = s=1 ijs En θ(t) θs(t) θi(t) + i=1 ij En θ(t) θt(t) To prove (51), note that R(n) j θ(t) = R(n) j θ(t) 1 n X k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) θi(t) + R(n) j θ(t) 1 n X k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) θi(t) On the Implicit Bias of Adam + R(n) j θ(t) 1 n X k=0 ρn k(1 ρ) j Ek θ(t) p X ij Ek θ(t) θi(t) + R(n) j θ(t) 1 n X k=0 ρn k(1 ρ) j Ek θ(t) p X i=1 ij Ek θ(t) θi(t), giving by (47) R(n) j θ(t) C11 R2 p M1M2D1 k=0 ρn k(1 ρ) + 1 Rp2M 2 2 D2 1 k=0 ρn k(1 ρ) Rp2M1M3D2 1 k=0 ρn k(1 ρ) + 1 Rp M1M2D2 X k=0 ρn k(1 ρ) To prove (52), note that R(n) j θ(t) + ε 2 = 6 R(n) j θ(t) 2 R(n) j θ(t) + ε 4 2 R(n) j θ(t) R(n) j θ(t) + ε 3 , giving by (45) and (51) R(n) j θ(t) + ε 2 C21. The bound (53) follows from (45), (51) and R(n) j θ(t) 1 = 2 R(n) j θ(t) 2 R(n) j θ(t) 3 R(n) j θ(t) R(n) j θ(t) 2 . To justify (54), put temporarily a := R(n) j θ(t) + ε 2 , b := R(n) j θ(t) 1 and use |a| 1 (R + ε)2 , |b| 1 | a| 2C11 (R + ε)3 , b C11 | a| C21, b C22 combined with (ab) = ab + 2 a b + a b. To justify (55), put temporarily a := ij Ek θ(t) , b := i El θ(t) , c := R(l) i θ(t) + ε 1 , On the Implicit Bias of Adam |a| M2, | a| p M3D1, | a| p2M4D2 1 + p M3D2, |b| M1, b p M2D1, b p2M3D2 1 + p M2D2, |c| 1 R + ε, | c| C11 (R + ε)2 , | c| 2C2 11 (R + ε)3 + C20 (R + ε)2 , from which (55) follows. The proof of Lemma F.5 is concluded. Lemma F.6. Suppose Assumption B.2 holds. Then the third derivative of t 7 θj(t) is uniformly over j and t [0, T] bounded in absolute value by some positive constant, say D3. Proof. By (38), (46) and (55) i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε i=1 ij Ek θ(t) n 1 X R(l) i θ(t) + ε From the definition of t 7 P (n) j θ(t) , it means that its derivatives up to order two are bounded. Similarly, the same is true for t 7 P (n) j θ(t) . It follows from (52) and its proof that the derivatives up to order two of t 7 R(n) j θ(t) + ε 2 R(n) j θ(t) 1 are also bounded. These considerations give the boundedness of the second derivative of the term t 7 j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) in (13). The boundedness of the second derivatives of the other two terms is shown analogously. By (13) and since h T, this means sup j sup t [0,T ] ... θ j(t) D3 for some positive constant D3. G. Proof of Theorem B.3 Our next objective is proving and identifying the constant in the equality Pn k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε On the Implicit Bias of Adam R(n) j θ(tn) + ε h P (n) j θ(tn) R(n) j θ(tn) + ε 2 R(n) j θ(tn) + O(h2). We will make some preparations and achieve this objective in Lemma G.5. Then we will conclude the proof of Theorem B.3. Lemma G.1. Suppose Assumption B.2 holds. Then for all n {0, 1, . . . , T/h }, k {0, 1, . . . , n 1}, j {1, . . . , p} we have j Ek θ(tk) j Ek θ(tn) C7(n k)h (56) Proof. (56) follows from the mean value theorem applied n k times. Lemma G.2. In the setting of Lemma G.1, for any l {k, k + 1, . . . , n 1} we have j Ek θ(tl) j Ek θ(tl+1) h i=1 ij Ek θ(tn) i El θ(tn) R(l) i θ(tn) + ε (C19/2 + C8 + (n l 1)C13)h2. Proof. By the Taylor expansion of t 7 j Ek θ(t) on the segment [tl, tl+1] at tl+1 on the left j Ek θ(tl) j Ek θ(tl+1) + h i=1 ij Ek θ(tl+1) θi t l+1 C19 Combining this with (37) gives j Ek θ(tl) j Ek θ(tl+1) h i=1 ij Ek θ(tl+1) i El θ(tl+1) R(l) i θ(tl+1) + ε (C19/2 + C8)h2. 
Now applying the mean-value theorem n l 1 times, we have by (46) i=1 ij Ek θ(tl+1) i El θ(tl+1) R(l) i θ(tl+1) + ε i=1 ij Ek θ(tl+2) i El θ(tl+2) R(l) i θ(tl+2) + ε i=1 ij El θ(tn 1) i Ek θ(tn 1) R(l) i θ(tn 1) + ε i=1 ij Ek θ(tn) i El θ(tn) R(l) i θ(tn) + ε and in particular i=1 ij Ek θ(tl+1) i El θ(tl+1) R(l) i θ(tl+1) + ε i=1 ij Ek θ(tn) i El θ(tn) R(l) i θ(tn) + ε (n l 1)C13h. Combining this with (57), we conclude the proof of Lemma G.2. On the Implicit Bias of Adam Lemma G.3. In the setting of Lemma G.1, j Ek θ(tk) j Ek θ(tn) h i=1 ij Ek θ(tn) n 1 X R(l) i θ(tn) + ε (n k)(C19/2 + C8) + (n k)(n k 1) Proof. Fix n Z 0. Note that j Ek θ(tk) j Ek θ(tn) h i=1 ij Ek θ(tn) n 1 X R(l) i θ(tn) + ε j Ek θ(tl) j Ek θ(tl+1) h i=1 ij Ek θ(tn) i El θ(tn) R(l) i θ(tn) + ε j Ek θ(tl) j Ek θ(tl+1) h i=1 ij Ek θ(tn) i El θ(tn) R(l) i θ(tn) + ε l=k (C19/2 + C8 + (n l 1)C13)h2 = (n k)(C19/2 + C8) + (n k)(n k 1) where (a) is by Lemma G.2. Lemma G.4. Suppose Assumption B.2 holds. Then for all n {0, 1, . . . , T/h }, j {1, . . . , p} k=0 ρn k(1 ρ) j Ek θ(tk) 2 R(n) j θ(tn) 2 C25h (58) k=0 ρn k(1 ρ) j Ek θ(tk) 2 R(n) j θ(tn) 2 2h P (n) j θ(tn) C26h2 (59) with C25 and C26 defined as follows: C25(ρ) := 2M1C7 C26(ρ) := M1|C19 + 2C8 C13| ρ 1 ρ M1C13 + |C19 + 2C8 C13|C9 + (C19 + 2C8 C13)2 ! ρ(1 + ρ) (1 ρ)2 + C13C9 + C13 2 |C19 + 2C8 C13| ρ 1 + 4ρ + ρ2 (1 ρ)3 + C2 13 4 ρ 1 + 11ρ + 11ρ2 + ρ3 Proof. Note that j Ek θ(tk) 2 j Ek θ(tn) 2 j Ek θ(tk) j Ek θ(tn) j Ek θ(tk) + j Ek θ(tn) (a) C7(n k)h 2M1, On the Implicit Bias of Adam where (a) is by (56). Using the triangle inequality, we can conclude k=0 ρn k(1 ρ) j Ek θ(tk) 2 R(n) j θ(tn) 2 2M1C7h(1 ρ) k=0 (n k)ρn k = 2M1C7h(1 ρ) k=0 kρk = 2M1C7 (58) is proven. We continue by showing j Ek θ(tk) 2 j Ek θ(tn) 2 2 j Ek θ(tn) h i=1 ij Ek θ(tn) n 1 X R(l) i θ(tn) + ε (n k)(C19/2 + C8) + (n k)(n k 1) (n k)(C19/2 + C8) + (n k)(n k 1) + (n k)(C19/2 + C8) + (n k)(n k 1) To prove this, use a2 b2 2b Kh 2|b| |a b Kh| + 2|K| h |a b Kh| + (a b Kh)2 a := j Ek θ(tk) , b := j Ek θ(tn) , K := i=1 ij Ek θ(tn) n 1 X R(l) i θ(tn) + ε , and bounding |a b Kh| (a) (n k)(C19/2 + C8) + (n k)(n k 1) |b| M1, |K| (n k)C9, where (a) is by Lemma G.3. (60) is proven. We turn to the proof of (59). By (60) and the triangle inequality k=0 ρn k(1 ρ) j Ek θ(tk) 2 R(n) j θ(tn) 2 2h P (n) j θ(tn) k=0 ρn k Poly1(n k)h2 + Poly2(n k)h3 + Poly3(n k)h4 k=0 ρk Poly1(k)h2 + Poly2(k)h3 + Poly3(k)h4 , Poly1(k) := 2M1 k(C19/2 + C8) + k(k 1) = M1C13k2 + M1(C19 + 2C8 C13)k, On the Implicit Bias of Adam Poly2(k) := 2k C9 k(C19/2 + C8) + k(k 1) = C13C9k3 + (C19 + 2C8 C13)C9k2, Poly3(k) := k(C19/2 + C8) + k(k 1) 13 4 k4 + C13 2 (C19 + 2C8 C13)k3 + 1 4(C19 + 2C8 C13)2k2. It is left to combine this with k=0 kρk = ρ (1 ρ)2 , k=0 k2ρk = ρ(1 + ρ) k=0 k3ρk = ρ 1 + 4ρ + ρ2 k=0 k4ρk = ρ 1 + 11ρ + 11ρ2 + ρ3 k=0 ρn k(1 ρ) j Ek θ(tk) 2 R(n) j θ(tn) 2 2h P (n) j θ(tn) (1 ρ)2 + M1|C19 + 2C8 C13| ρ 1 ρ ρ 1 + 4ρ + ρ2 (1 ρ)3 + |C19 + 2C8 C13|C9 ρ(1 + ρ) (1 ρ)2 13 4 ρ 1 + 11ρ + 11ρ2 + ρ3 (1 ρ)4 + C13 2 |C19 + 2C8 C13|ρ 1 + 4ρ + ρ2 4(C19 + 2C8 C13)2 ρ(1 + ρ) (a) M1|C19 + 2C8 C13| ρ 1 ρ M1C13 + |C19 + 2C8 C13|C9 + (C19 + 2C8 C13)2 ! ρ(1 + ρ) (1 ρ)2 + C13C9 + C13 2 |C19 + 2C8 C13| ρ 1 + 4ρ + ρ2 13 4 ρ 1 + 11ρ + 11ρ2 + ρ3 where in (a) we used that h < 1. (59) is proven. Lemma G.5. Suppose Assumption B.2 holds. Then k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε R(n) j θ(tn) + ε 1 On the Implicit Bias of Adam +h P (n) j θ(tn) R(n) j θ(tn) + ε 2 R(n) j θ(tn) C25(ρ)2 + R2C26(ρ) 2R3(R + ε)2 h2. Proof. Note that if a R2, b R2, we have b + ε + a b 2R3(R + ε)2 . 
By the triangle inequality, 2R3(R + ε)2 + |a b c| 2R3(R + ε)2 + |a b c| 2R(R + ε)2 . Apply this with k=0 ρn k(1 ρ) j Ek θ(tk) 2 , b := R(n) j θ(tn) 2 , c := 2h P (n) j θ(tn) and use bounds |a b| 2M1C7 ρ 1 ρh, |a b c| C26(ρ)h2 by Lemma G.4. We are finally ready to prove Theorem B.3. Proof of Theorem B.3. By (42) and (43), the first derivative of the function j En θ(t) 2P (n) j θ(t) + P (n) j θ(t) 2 R(n) j θ(t) + ε 2 R(n) j θ(t) Pp i=1 ij En θ(t) i En( θ(t)) R(n) i ( θ(t))+ε 2 R(n) j ( θ(t)) + ε is bounded in absolute value by a positive constant C27 = C17 + C18. By (13), this means R(n) j θ(t) + ε On the Implicit Bias of Adam Combining this with θj(tn+1) θj(tn) θj t+ n h θj(t+ n ) 2 h2 D3 by Taylor expansion, we get θj(tn+1) θj(tn) θj t+ n h + h2 R(n) j θ(t) + ε Using θj(tn) + j En θ(tn) R(n) j θ(tn) + ε with C28 defined as C28 := M1(2C5 + C6) 2(R + ε)2R + p M1M2 by (13), and calculating the derivative, it is easy to show R(n) j θ(t) + ε for a positive constant C29, where Fr Der := Fr Der Num R(n) j θ(tn) + ε 2 R(n) j θ(tn) Fr Der Num := j En θ(tn) P (n) j θ(tn) R(n) j θ(tn) + ε R(n) j θ(tn) p X i=1 ij En θ(tn) i En θ(tn) R(n) i θ(tn) + ε , C29 := p M2 R + ε + M 2 1 M2p (R + ε)2R From (61) and (62), by the triangle inequality θj(tn+1) θj(tn) θj t+ n h + h2 2 Fr Der D3 6 + C27 + C29 which, using (13), is rewritten as θj(tn+1) θj(tn) + h j En θ(tn) R(n) j θ(tn) + ε h2 j En θ(tn) P (n) j θ(tn) R(n) j θ(tn) + ε 2 R(n) j θ(tn) 6 + C27 + C29 It is left to combine this with Lemma G.5, giving the assertion of the theorem with 6 + C27 + C29 25 + R2C26 2R3(R + ε)2 . On the Implicit Bias of Adam H. Numerical Experiments H.1. Models We use small modifications of Resnet-50 and Resnet-101 implementations in the torchvision library for training on CIFAR-10 and CIFAR-100. The first convolution layer conv1 has 3 3 kernel, stride 1 and same padding. Then comes batch normalization, and relu. Max pooling is removed, and otherwise conv2_x to conv5_x are as described in He et al. (2016) (see Table 1 there) except downsampling is performed by the middle convolution of each bottleneck block, as in version 1.53. After conv5 there is global average pooling and 10 or 100-way fully connected layer (for CIFAR-10 and CIFAR-100 respectively). The MLP that we use for showing the closeness of trajectories in Figure 3 consists of two fully connected layers, each with 32 units and Ge LU activation, followed by a fully-connected layer with 10 units. In Figure 3, the curves called first order plot θ(n) θ (n) and the curves called second order plot θ(n) θ(n) , where θ(n) is the Adam iteration defined in Definition 1.1 and θ(n+1) j = θ(n) j h A(n) j θ(n) + h2B(n) j θ(n) , θ(n+1) j = θ(n) j h A(n) j θ (n) (63) for A(n) j ( ) and B(n) j ( ) as defined in Section 3, with the same initial point θ(0) = θ (0) = θ(0). H.2. Data Augmentation We subtract the per-pixel mean and divide by standard deviation, and we use the data augmentation scheme from Lee et al. (2015), following He et al. (2016), section 4.2. During each pass over the training dataset, each 32 32 initial image is padded evenly with zeros so that it becomes 40 40, then random crop is applied so that the picture becomes 32 32 again, and random (probability 0.5) horizontal (left to right) flip is used. H.3. 
Experiment Details

In the experiments whose results are reported in Figures 4 and 5 of the main paper, we train for a few thousand epochs and stop training when the train accuracy is near-perfect (Figure 11) and the test accuracy no longer improves significantly (Figure 12). Therefore, the maximal test accuracies are the final ones reached, and the maximal perturbed one-norms, after excluding the initial fall at the beginning of training, are at the peaks of the hills on the norm curves (Figure 12). Since the full dataset does not fit into GPU memory, we divide it into 100 ghost batches and accumulate the gradients before doing one optimization step. This means that we use ghost batch normalization (Hoffer et al., 2017) as opposed to full-dataset batch normalization, similarly to Cohen et al. (2021).

H.4. Additional Evidence

We provide evidence that the results in Figures 4 and 5 are robust to the change of architectures. In Figures 7 and 8, we show that the pictures are similar for a simple CNN created by the following code:

import torch.nn as nn

def build_simple_cnn(num_classes):
    # (import and wrapper added so the snippet runs standalone; the layer list is as in the original)
    layers = [
        # First block
        nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding='same'),
        nn.ReLU(),
        nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, padding='same'),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        # Second block
        nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding='same'),
        nn.ReLU(),
        nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding='same'),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        # Third block
        nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding='same'),
        nn.ReLU(),
        nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding='same'),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        # Flatten and dense layers
        nn.Flatten(),
        nn.Linear(in_features=128 * 4 * 4, out_features=512),
        nn.ReLU(),
        nn.Linear(in_features=512, out_features=num_classes),
    ]
    return nn.Sequential(*layers)

In Figures 9 and 10, we show that the same conclusions can be made for a Vision Transformer (Dosovitskiy et al., 2020; Beyer et al., 2022). In these experiments, we use the SimpleViT architecture from the vit-pytorch library with 4×4 patches, 6 transformer blocks with 16 heads, embedding size 512 and MLP dimension 1024 (following Andriushchenko et al. (2023)).

[Figure 7: two panels, perturbed 1-norm and test accuracy as functions of ρ, for h = 7.5e-05 and h = 0.00015.]
Figure 7. A simple CNN trained on CIFAR-10 with full-batch Adam, β = 0.99, ε = 10^{-8}. As ρ increases, the perturbed one-norm rises and the test accuracy falls. Both metrics are calculated as in Figures 4 and 5 of the main paper. All results are averaged across five runs with different initialization seeds.

[Figure 8: two panels, perturbed 1-norm and test accuracy as functions of β, for h = 7.5e-05 and h = 0.00015.]
Figure 8. A simple CNN trained on CIFAR-10 with full-batch Adam, ρ = 0.999, ε = 10^{-8}. The perturbed one-norm falls as β increases, and the test accuracy rises. Both metrics are calculated as in Figures 4 and 5 of the main paper. All results are averaged across three runs with different initialization seeds.

3 https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch

I. Adam with ε Inside the Square Root: Informal Derivation

Our goal is to find a trajectory $\tilde\theta(t)$ such that
$$
\tilde\theta_j(t_{n+1}) = \tilde\theta_j(t_n) - h\,\frac{\sum_{k=0}^{n}\beta^{n-k}(1-\beta)\,\partial_j E_k(\tilde\theta(t_k))\big/(1-\beta^{n+1})}{\sqrt{\sum_{k=0}^{n}\rho^{n-k}(1-\rho)\,\big(\partial_j E_k(\tilde\theta(t_k))\big)^2\big/(1-\rho^{n+1})+\varepsilon}} + O(h^3).
$$

Result I.1. For n ∈ {0, 1, 2, . .
.} we have θj(tn+1) = θj(tn) h M (n) j θ(tn) R(n) j θ(tn) M (n) j θ(tn) P (n) j θ(tn) R(n) j θ(tn) 3 L(n) j θ(tn) R(n) j θ(tn) Derivation. We take θj(tn+1) = θj(tn) h M (n) j θ(tn) R(n) j θ(tn) + O h2 for granted. Using this and the Taylor series, we can write j Ek θ(tn 1) = j Ek θ(tn) + i=1 ij Ek θ(tn) n θi(tn 1) θi(tn) o + O h2 On the Implicit Bias of Adam 0.92 0.94 0.96 0.98 Perturbed 1-norm h = 7.5e-05 h = 0.00015 0.92 0.94 0.96 0.98 Test accuracy h = 7.5e-05 h = 0.00015 Figure 9. A vision transformer trained on CIFAR-10 with full-batch Adam. The setting and conclusions are the same as in Figure 7. = j Ek θ(tn) + h i=1 ij Ek θ(tn) M (n 1) j θ(tn 1) R(n 1) j θ(tn 1) + O h2 = j Ek θ(tn) + h i=1 ij Ek θ(tn) M (n 1) j θ(tn) R(n 1) j θ(tn) + O h2 , where in the last equality we just replaced tn 1 with tn in the h-term since it only affects higher-order terms. Now doing this again for step n 1 instead of step n, we will have j Ek θ(tn 2) = j Ek θ(tn 1) + h i=1 ij Ek θ(tn 1) M (n 2) j θ(tn 1) R(n 2) j θ(tn 1) + O h2 = j Ek θ(tn 1) + h i=1 ij Ek θ(tn 1) M (n 2) j θ(tn) R(n 2) j θ(tn) + O h2 , where in the last equality we again replaced tn 1 with tn since it only affects higher-order terms. Proceeding like this and adding the resulting equations, we have for n {0, 1, . . .}, k {0, . . . , n 1} that = j Ek θ(tn) + h i=1 ij Ek θ(tn) n 1 X M (l) i θ(tn) R(l) i θ(tn) + O h2 , where we ignored the fact that n k is not bounded (we will get away with this because of exponential averaging). Hence, taking the square of this formal power series, ρn k(1 ρ) j Ek θ(tk) 2 = ρn k(1 ρ) j Ek θ(tn) 2 On the Implicit Bias of Adam 0.96 0.97 0.98 0.99 1.00 Perturbed 1-norm h = 7.5e-05 h = 0.00015 0.96 0.97 0.98 0.99 1.00 Test accuracy h = 7.5e-05 h = 0.00015 Figure 10. A vision transformer trained on CIFAR-10 with full-batch Adam. The setting and conclusions are the same as in Figure 8. + h 2ρn k(1 ρ) j Ek θ(tn) p X i=1 ij Ek θ(tn) n 1 X M (l) i θ(tn) R(l) i θ(tn) + O h2 . Summing up over k, we have k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε = R(n) j θ(tn) 2 + 2h P (n) j θ(tn) + O h2 , which, using the expression for the inverse square root (P r=0 arhr) 1/2 of a formal power series P r=0 arhr, gives us v u u t 1 1 ρn+1 k=0 ρn k(1 ρ) j Ek θ(tk) 2 + ε R(n) j θ(tn) h P (n) j θ(tn) R(n) j θ(tn) 3 + O h2 . k=0 (1 β)βn k j Ek θ(tk) = 1 1 βn+1 k=0 (1 β)βn k j Ek θ(tn) k=0 (1 β)βn k p X i=1 ij Ek θ(tn) n 1 X M (l) i θ(tn) R(l) i θ(tn) + O h2 = M (n) j θ(tn) + h L(n) j θ(tn) + O h2 . On the Implicit Bias of Adam 0 1000 2000 3000 4000 Epoch =0.92 =0.9354 =0.9479 =0.9579 =0.966 =0.9726 =0.9778 =0.9821 =0.9856 =0.9883 =0.9906 =0.9924 =0.9939 =0.995 =0.996 0 1000 2000 3000 4000 Epoch Train accuracy =0.92 =0.9354 =0.9479 =0.9579 =0.966 =0.9726 =0.9778 =0.9821 =0.9856 =0.9883 =0.9906 =0.9924 =0.9939 =0.995 =0.996 Figure 11. Train loss and train accuracy curves for full-batch Adam, Res Net-50 on CIFAR-10, β = 0.99, ε = 10 8, h = 10 4. We conclude θj(tn+1) = θj(tn) h M (n) j θ(tn) + h L(n) j θ(tn) + O h2 R(n) j θ(tn) h P (n) j θ(tn) R(n) j θ(tn) 3 + O h2 = θj(tn) h M (n) j θ(tn) R(n) j θ(tn) M (n) j θ(tn) P (n) j θ(tn) R(n) j θ(tn) 3 L(n) j θ(tn) R(n) j θ(tn) Result I.2. For tn t < tn+1, the modified equation is (32). Derivation. Assume that the modified flow for tn t < tn+1 satisfies θ = f θ(t) where f(θ) = f(θ) + hf1(θ) + O h2 . 
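For orientation, the computation below is the standard first-order backward-error-analysis matching step: if one step of the method can be written as $\theta^{(n+1)} = \theta^{(n)} + h\,f(\theta^{(n)}) + h^2 d_1(\theta^{(n)}) + O(h^3)$, then Taylor-expanding the modified flow and comparing the $h^2$-coefficients yields
$$
f_1(\theta) = d_1(\theta) - \tfrac{1}{2}\,\nabla f(\theta)\, f(\theta),
$$
where $\nabla f$ denotes the Jacobian of $f$; this is exactly the structure of $f_{1,j}$ obtained below, with $d_{1,j}$ read off from Result I.1.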
[Figure 12: test accuracy (left) and perturbed 1-norm (right) after each epoch, for ρ ranging from 0.92 to 0.996.]
Figure 12. Test accuracy and $\|\nabla E\|_{1,\varepsilon}$ after each epoch. The setting is the same as in Figure 11.

By Taylor expansion, we have
$$
\tilde\theta(t_{n+1}) = \tilde\theta(t_n) + h\,\dot{\tilde\theta}(t_n^+) + \frac{h^2}{2}\,\ddot{\tilde\theta}(t_n^+) + O(h^3)
= \tilde\theta(t_n) + h\Big[f\big(\tilde\theta(t_n)\big) + h f_1\big(\tilde\theta(t_n)\big) + O(h^2)\Big] + \frac{h^2}{2}\Big[\nabla f\big(\tilde\theta(t_n)\big)\, f\big(\tilde\theta(t_n)\big) + O(h)\Big] + O(h^3)
$$
$$
= \tilde\theta(t_n) + h f\big(\tilde\theta(t_n)\big) + h^2\Big(f_1\big(\tilde\theta(t_n)\big) + \tfrac{1}{2}\nabla f\big(\tilde\theta(t_n)\big)\, f\big(\tilde\theta(t_n)\big)\Big) + O(h^3).
$$
Using Result I.1 and equating the terms before the corresponding powers of h in (64) and (65), we obtain
$$
f_j(\theta) = -\frac{M^{(n)}_j(\theta)}{R^{(n)}_j(\theta)}, \qquad
f_{1,j}(\theta) = -\frac{1}{2}\sum_{i=1}^{p}\partial_i f_j(\theta)\, f_i(\theta) + \frac{M^{(n)}_j(\theta)\, P^{(n)}_j(\theta)}{R^{(n)}_j(\theta)^3} - \frac{L^{(n)}_j(\theta)}{R^{(n)}_j(\theta)}.
$$
It is left to find $\partial_i f_j(\theta)$. Using
$$
\partial_i R^{(n)}_j(\theta) = \frac{\sum_{k=0}^{n}\rho^{n-k}(1-\rho)\,\partial_{ij}E_k(\theta)\,\partial_j E_k(\theta)}{(1-\rho^{n+1})\, R^{(n)}_j(\theta)}, \qquad
\partial_i M^{(n)}_j(\theta) = \frac{\sum_{k=0}^{n}\beta^{n-k}(1-\beta)\,\partial_{ij}E_k(\theta)}{1-\beta^{n+1}},
$$
we get
$$
\partial_i f_j(\theta)
= -\frac{\partial_i M^{(n)}_j(\theta)\, R^{(n)}_j(\theta) - M^{(n)}_j(\theta)\,\partial_i R^{(n)}_j(\theta)}{R^{(n)}_j(\theta)^2}
= -\frac{\sum_{k=0}^{n}\beta^{n-k}(1-\beta)\,\partial_{ij}E_k(\theta)}{(1-\beta^{n+1})\, R^{(n)}_j(\theta)}
+ \frac{M^{(n)}_j(\theta)\sum_{k=0}^{n}\rho^{n-k}(1-\rho)\,\partial_{ij}E_k(\theta)\,\partial_j E_k(\theta)}{(1-\rho^{n+1})\, R^{(n)}_j(\theta)^3}.
$$
Inserting this into (66) concludes the proof.
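As a complement to the experiments in Appendix H, the quantity $\|\nabla E(\theta)\|_{1,\varepsilon} = \sum_i \sqrt{(\partial_i E(\theta))^2 + \varepsilon}$ tracked in Figures 7-12 can be computed with a short PyTorch routine along the following lines; model, loss_fn, data and targets are placeholders for the user's own objects, and the full batch is assumed to fit in memory (in the experiments above the gradient is instead accumulated over 100 ghost batches).

import torch

def perturbed_one_norm(model, loss_fn, data, targets, eps=1e-8):
    # ||grad E(theta)||_{1,eps} = sum_i sqrt((partial_i E(theta))^2 + eps) for the full-batch loss E.
    model.zero_grad()
    loss = loss_fn(model(data), targets)   # full-batch loss E(theta)
    loss.backward()                        # populates p.grad with the components of grad E
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += torch.sqrt(p.grad.detach() ** 2 + eps).sum().item()
    return total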