Published as a conference paper at ICLR 2025

SECOND-ORDER MIN-MAX OPTIMIZATION WITH LAZY HESSIANS

Lesi Chen (IIIS, Tsinghua University & Shanghai Qizhi Institute), chenlc23@mails.tsinghua.edu.cn
Chengchang Liu (The Chinese University of Hong Kong), 7liuchengchang@gmail.com
Jingzhao Zhang (IIIS, Tsinghua University & Shanghai Qizhi Institute & Shanghai AI Lab), jingzhaoz@mail.tsinghua.edu.cn

ABSTRACT

This paper studies second-order methods for convex-concave minimax optimization. Monteiro & Svaiter (2012) proposed a method that solves the problem with an optimal iteration complexity of O(ε^{-2/3}) to find an ε-saddle point. However, it is unclear whether the computational complexity, O((N + d^2) d ε^{-2/3}), can be improved. In the above, we follow Doikov et al. (2023) and assume the complexity of obtaining a first-order oracle is N and the complexity of obtaining a second-order oracle is dN. In this paper, we show that the computational cost can be reduced by reusing the Hessian across iterations. Our methods attain an overall computational complexity of O((N + d^2)(d + d^{2/3} ε^{-2/3})), which improves that of previous methods by a factor of d^{1/3}. Furthermore, we generalize our method to strongly-convex-strongly-concave minimax problems and establish a complexity of O((N + d^2)(d + d^{2/3} κ^{2/3})) when the condition number of the problem is κ, enjoying a similar speedup over the state-of-the-art methods. Numerical experiments on both real and synthetic datasets also verify the efficiency of our method.

1 INTRODUCTION

We consider the following minimax optimization problem:

$$\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}} f(x, y), \tag{1}$$

where we suppose f(x, y) is (strongly-)convex in x and (strongly-)concave in y. This setting covers many useful applications, including functionally constrained optimization (Xu, 2020), game theory (Von Neumann & Morgenstern, 1947), robust optimization (Ben-Tal et al., 2009), fairness-aware machine learning (Zhang et al., 2018), reinforcement learning (Du et al., 2017; Wang, 2017; Paternain et al., 2022; Wai et al., 2018), decentralized optimization (Kovalev et al., 2021; 2020), and AUC maximization (Ying et al., 2016; Hanley & McNeil, 1982; Yuan et al., 2021).

First-order methods are widely studied for this problem. Classical algorithms include Extra Gradient (EG) (Korpelevich, 1976; Nemirovski, 2004), Optimistic Gradient Descent Ascent (OGDA) (Popov, 1980; Mokhtari et al., 2020a;b), Hybrid Proximal Extragradient (HPE) (Monteiro & Svaiter, 2010), and Dual Extrapolation (DE) (Nesterov & Scrimali, 2006; Nesterov, 2007). When the gradient of f(·,·) is L-Lipschitz continuous, these methods achieve the rate of O(ε^{-1}) under the convex-concave (C-C) setting and the rate of O((L/µ) log(ε^{-1})) when f(·,·) is µ-strongly-convex in x and µ-strongly-concave in y (SC-SC) for µ > 0. They are optimal in both the C-C and SC-SC settings due to the lower bounds reported by Nemirovskij & Yudin (1983); Zhang et al. (2022a).

(Equal contributions. Corresponding author.)

Second-order methods usually lead to faster rates than first-order methods when the Hessian of f(·,·) is ρ-Lipschitz continuous. A line of works (Nesterov & Scrimali, 2006; Huang et al., 2022) extended the celebrated Cubic Regularized Newton (CRN) method (Nesterov & Polyak, 2006) to minimax problems with local superlinear convergence rates and global convergence guarantees.
However, the established global convergence rates of O(ε^{-1}) by Nesterov & Scrimali (2006) and O((Lρ/µ^2) log(ε^{-1})) by Huang et al. (2022) under the C-C and SC-SC conditions are no better than those of the optimal first-order methods. Another line of work generalizes the optimal first-order methods to higher-order methods. Monteiro & Svaiter (2012) proposed the Newton Proximal Extragradient (NPE) method with a global convergence rate of O(ε^{-2/3} log log(ε^{-1})) under the C-C conditions. This result nearly matches the lower bounds (Adil et al., 2022; Lin & Jordan, 2024), except for an additional O(log log(ε^{-1})) factor caused by the implicit binary search at each iteration. Bullins & Lai (2022); Adil et al. (2022); Huang & Zhang (2022); Lin et al. (2022) provided a simple proof of NPE motivated by the EG analysis and showed that replacing the quadratic regularized Newton step with the cubic regularized Newton (CRN) step in NPE achieves the optimal second-order oracle complexity of O(ε^{-2/3}). Recently, Alves & Svaiter (2023) proposed a search-free NPE method that achieves the optimal second-order oracle complexity with a pure quadratic regularized Newton step, based on ideas from homotopy. Over the past decade, researchers have also proposed various second-order methods, in addition to the NPE framework, that achieve the same convergence rate, such as the second-order extensions of OGDA (Jiang & Mokhtari, 2022; Jiang et al., 2024) (which we refer to as OGDA-2) and DE (Lin & Jordan, 2024) (they name the method Perseus). The results for C-C problems can also be extended to SC-SC problems, where Jiang & Mokhtari (2022) proved that OGDA-2 converges at the rate of O((ρ/µ)^{2/3} + log log(ε^{-1})), and Huang & Zhang (2022) proposed ARE-restart with the rate of O((ρ/µ)^{2/3} log log(ε^{-1})).

Although the aforementioned second-order methods (Adil et al., 2022; Lin & Jordan, 2024; Lin et al., 2022; Jiang & Mokhtari, 2022; Monteiro & Svaiter, 2012) enjoy an improved convergence rate over first-order methods and have achieved optimal iteration complexities, they require querying one new Hessian at each iteration and solving a matrix inversion problem at each Newton step, which leads to an O(d^3) computational cost per iteration. This becomes the main bottleneck that limits the applicability of second-order methods. Liu & Luo (2022a) proposed quasi-Newton methods for saddle point problems that access one Hessian-vector product instead of the exact Hessian at each iteration; the per-iteration computational cost of these quasi-Newton methods is O(d^2). However, their methods do not have a global convergence guarantee under general (S)C-(S)C conditions. Jiang et al. (2023) proposed an online-learning guided Quasi-Newton Proximal Extragradient (QNPE) algorithm, but their method relies on more complicated subroutines than classical Newton methods. Although the oracle complexity of QNPE is strictly better than that of the optimal first-order method EG, their method is worse in terms of total computational complexity.

In this paper, we propose a computationally efficient second-order method, which we call LEN (Lazy Extra Newton method). In contrast to all existing second-order methods and quasi-Newton methods for minimax optimization problems, which always access new second-order information at the coming iteration, LEN reuses the second-order information from past iterations.
Specifically, LEN solves a cubic regularized sub-problem using the Hessian from the snapshot point, which is updated every m iterations, and then conducts an extra-gradient step with the gradient from the current iteration. We provide a rigorous theoretical analysis of LEN to show that it maintains fast global convergence rates and improves the (near-)optimal second-order methods (Monteiro & Svaiter, 2012) in terms of the overall computational complexity. We summarize our contributions as follows (also see Table 1).

When the objective function f(·,·) is convex in x and concave in y, we propose LEN and prove that it finds an ε-saddle point in O(m^{2/3} ε^{-2/3}) iterations. Under Assumption 3.4, where the complexity of evaluating F(z) is N and the complexity of evaluating F′(z) is dN, the optimal choice is m = Θ(d). In this case, LEN only requires a computational complexity of O((N + d^2)(d + d^{2/3} ε^{-2/3})), which is strictly better than the O((N + d^2) d ε^{-2/3}) of the existing optimal second-order methods by a factor of d^{1/3}.

When the objective function f(·,·) is µ-strongly-convex in x and µ-strongly-concave in y, we apply the restart strategy to LEN and propose LEN-restart. We prove that the algorithm finds an ε-root with O((N + d^2)(d + d^{2/3}(ρ/µ)^{2/3})) computational complexity, where ρ is the Lipschitz constant of the Hessian of f(·,·). Our result is strictly better than the O((N + d^2) d (ρ/µ)^{2/3}) of prior works.

Table 1: We compare the required computational complexity to achieve an ε-saddle point of the proposed LEN with the optimal choice m = Θ(d) and other existing algorithms on both convex-concave (C-C) and strongly-convex-strongly-concave (SC-SC) problems. Here, d = d_x + d_y is the dimension of the problem. We assume the gradient is L-Lipschitz continuous for EG and the Hessian is ρ-Lipschitz continuous for the others. We count each gradient oracle call as N computational complexity, and each Hessian oracle call as dN computational complexity.

| Setup | Method | Computational Cost |
| --- | --- | --- |
| C-C | EG (Korpelevich, 1976) | O((N + d) ε^{-1}) |
| C-C | NPE (Monteiro & Svaiter, 2012) | O((N + d^2) d ε^{-2/3}) |
| C-C | search-free NPE (Alves & Svaiter, 2023) | O((N + d^2) d ε^{-2/3}) |
| C-C | OGDA-2 (Jiang & Mokhtari, 2022) | O((N + d^2) d ε^{-2/3}) |
| C-C | LEN (Theorem 4.3) | O((N + d^2)(d + d^{2/3} ε^{-2/3})) |
| SC-SC | EG (Korpelevich, 1976) | O((N + d)(L/µ)) |
| SC-SC | OGDA-2 (Jiang & Mokhtari, 2022) | O((N + d^2) d (ρ/µ)^{2/3}) |
| SC-SC | ARE-restart (Huang & Zhang, 2022) | O((N + d^2) d (ρ/µ)^{2/3}) |
| SC-SC | Perseus-restart (Lin & Jordan, 2024) | O((N + d^2) d (ρ/µ)^{2/3}) |
| SC-SC | LEN-restart (Corollary 4.1) | O((N + d^2)(d + d^{2/3} (ρ/µ)^{2/3})) |

Notations. Throughout this paper, log is base 2 and log_+(·) := 1 + log(·). We use ‖·‖ to denote the spectral norm of matrices and the Euclidean norm of vectors. We denote π(t) := t − (t mod m), where m ∈ ℕ_+.

2 RELATED WORKS AND TECHNICAL CHALLENGES

Lazy Hessian in minimization problems. The idea of reusing the Hessian was initially presented by Shamanskii (1967) and later incorporated into the Levenberg-Marquardt method, the damped Newton method, and the proximal Newton method (Fan, 2013; Lampariello & Sciandrone, 2001; Wang et al., 2006; Adler et al., 2020). However, the explicit advantage of lazy Hessian updates over ordinary Newton(-type) updates was not discovered until the recent works of Doikov et al. (2023); Chayti et al. (2023). They applied the following lazy Hessian update to cubic regularized Newton (CRN) methods (Nesterov & Polyak, 2006):

$$z_{t+1} = \arg\min_{z \in \mathbb{R}^d} \left\{ \langle F(z_t), z - z_t \rangle + \frac{1}{2} \langle F'(z_{\pi(t)})(z - z_t), z - z_t \rangle + \frac{M}{6} \|z - z_t\|^3 \right\}, \tag{2}$$

where M ≥ 0 and F : ℝ^d → ℝ^d is the gradient field of a convex function.
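To illustrate the update (2), here is a minimal sketch (ours, not the authors' code; the inner solver and all names are illustrative) of a lazy-Hessian cubic Newton loop for minimization: the Hessian is refreshed only at snapshot points, every m iterations, while a fresh gradient is used at every step, and the cubic model is minimized approximately by a simple inner gradient loop.

```python
import numpy as np

def lazy_cubic_newton(grad, hess, z0, T, m, M, inner_steps=100, inner_lr=0.1):
    """Sketch of the lazy-Hessian CRN update (2): the Hessian is evaluated only
    at snapshot points (every m iterations), the gradient at every iterate."""
    z = z0.copy()
    H = None
    for t in range(T):
        if t % m == 0:
            H = hess(z)                      # refresh the lazy Hessian
        g = grad(z)                          # fresh gradient every iteration
        h = np.zeros_like(z)                 # inner loop: minimize the cubic model
        for _ in range(inner_steps):         # model(h) = g^T h + h^T H h / 2 + M ||h||^3 / 6
            model_grad = g + H @ h + 0.5 * M * np.linalg.norm(h) * h
            h -= inner_lr * model_grad
        z = z + h
    return z

# Example on a convex quadratic f(z) = 0.5 z^T A z - b^T z.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda z: A @ z - b
hess = lambda z: A
z_hat = lazy_cubic_newton(grad, hess, np.zeros(2), T=10, m=5, M=1.0)
print(np.linalg.norm(grad(z_hat)))   # should be small
```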
Doikov et al. (2023) and Chayti et al. (2023) establish convergence rates of O(√m ε^{-3/2}) for nonconvex optimization and O(√m ε^{-1/2}) for convex optimization, respectively. Such rates lead to total computational costs of O((N + d^2)(d + √d ε^{-3/2})) and O((N + d^2)(d + √d ε^{-1/2})) by setting m = Θ(d), which strictly improve the results of classical CRN methods by a factor of √d in both setups. We have also observed that the idea of the lazy Hessian is widely used in practical second-order methods. KFAC (Martens & Grosse, 2015; Grosse & Martens, 2016) approximates the Fisher information matrix and uses an exponential moving average (EMA) to update the estimate of the Fisher information matrix, which can be viewed as a soft version of the lazy update. Sophia (Liu et al., 2024) estimates a diagonal Hessian matrix as a pre-conditioner, which is updated in a lazy manner to reduce the complexity. C2EDEN (Liu et al., 2023) amortizes the communication of local Hessians over several consecutive iterations, which also benefits from the idea of lazy updates.

Challenge of using lazy Hessian updates in minimax problems. In comparison to previous work on lazy Hessians, our LEN and LEN-restart methods demonstrate the advantage of using lazy Hessians for a broader class of optimization problems: minimax problems. Our analysis differs from the ones in Doikov et al. (2023); Chayti et al. (2023). Their methods only take a lazy CRN update (2) at each iteration, which makes it easy to bound the error of the lazy Hessian updates using Assumption 3.1 and the triangle inequality in the following way:

$$\|F'(z_t) - F'(z_{\pi(t)})\| \le \rho \|z_{\pi(t)} - z_t\| \le \rho \sum_{i=\pi(t)}^{t-1} \|z_i - z_{i+1}\|.$$

Our method, on the other hand, not only takes a lazy (regularized) Newton update but also requires an extra-gradient step (Line 4 in Algorithm 1). Thus, the progress of the Newton updates {‖z_{i+1/2} − z_i‖}_{i=π(t)}^{t} cannot directly bound the error term ‖z_t − z_{π(t)}‖ introduced by the lazy Hessian update. Moreover, for minimax problems the matrix F′(z_{π(t)}) is no longer symmetric, which leads to a different analysis and a different implementation of the sub-problem solver (Section 4.3). We refer the readers to Section 4 for more detailed discussions.

3 PRELIMINARIES

In this section, we introduce the notation and basic assumptions used in our work. We start with several standard definitions for Problem (1).

Definition 3.1. A function f : ℝ^{d_x} × ℝ^{d_y} → ℝ has ρ-Lipschitz Hessians if

$$\|\nabla^2 f(x, y) - \nabla^2 f(x', y')\| \le \rho \,\|(x, y) - (x', y')\|, \quad \forall (x, y), (x', y') \in \mathbb{R}^{d_x} \times \mathbb{R}^{d_y}.$$

Definition 3.2. A differentiable function f(·,·) is µ-strongly-convex-µ-strongly-concave for some µ > 0 if

$$f(x', y) \ge f(x, y) + (x' - x)^\top \nabla_x f(x, y) + \frac{\mu}{2}\|x' - x\|^2, \quad \forall x', x \in \mathbb{R}^{d_x},\ y \in \mathbb{R}^{d_y};$$

$$f(x, y') \le f(x, y) + (y' - y)^\top \nabla_y f(x, y) - \frac{\mu}{2}\|y' - y\|^2, \quad \forall y', y \in \mathbb{R}^{d_y},\ x \in \mathbb{R}^{d_x}.$$

We say f is convex-concave if the above holds with µ = 0.

We are interested in finding a saddle point of Problem (1), formally defined as follows.

Definition 3.3. We call a point (x*, y*) ∈ ℝ^{d_x} × ℝ^{d_y} a saddle point of a function f(·,·) if

$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*), \quad \forall x \in \mathbb{R}^{d_x},\ y \in \mathbb{R}^{d_y}.$$

Next, we introduce all the assumptions made in this work. In this paper, we focus on Problem (1) satisfying the following assumptions.

Assumption 3.1. We assume the function f(·,·) is twice continuously differentiable, has ρ-Lipschitz continuous Hessians, and has at least one saddle point (x*, y*).

We will study convex-concave problems and strongly-convex-strongly-concave problems.

Assumption 3.2 (C-C setting). We assume the function f(·,·) is convex in x and concave in y.

Assumption 3.3 (SC-SC setting).
We assume the function f(·,·) is µ-strongly-convex-µ-strongly-concave. We denote the condition number as κ := ρ/µ.

We let d := d_x + d_y and denote the aggregated variable z := (x, y) ∈ ℝ^d. We also denote the GDA field of f and its Jacobian as

$$F(z) := \begin{pmatrix} \nabla_x f(x, y) \\ -\nabla_y f(x, y) \end{pmatrix}, \qquad F'(z) := \begin{pmatrix} \nabla_{xx}^2 f(x, y) & \nabla_{xy}^2 f(x, y) \\ -\nabla_{yx}^2 f(x, y) & -\nabla_{yy}^2 f(x, y) \end{pmatrix}.$$

The GDA field of f(·,·) has the following properties.

Lemma 3.1 (Lemma 2.7, Lin et al. (2022)). Under Assumptions 3.1 and 3.2, we have:

1. F is monotone, i.e. ⟨F(z) − F(z′), z − z′⟩ ≥ 0, ∀z, z′ ∈ ℝ^d.
2. F′ is ρ-Lipschitz continuous, i.e. ‖F′(z) − F′(z′)‖ ≤ ρ‖z − z′‖, ∀z, z′ ∈ ℝ^d.
3. F(z*) = 0 if and only if z* = (x*, y*) is a saddle point of the function f(·,·).

Furthermore, if Assumption 3.3 holds, then F(·) is µ-strongly-monotone, i.e. ⟨F(z) − F(z′), z − z′⟩ ≥ µ‖z − z′‖^2, ∀z, z′ ∈ ℝ^d.

For the C-C case, the commonly used optimality criterion is the following restricted gap.

Definition 3.4 (Nesterov (2007)). Let B_β(w) be the ball centered at w with radius β, and let (x*, y*) be a saddle point of the function f. For a given point (x̂, ŷ), let β be sufficiently large such that max{‖x̂ − x*‖, ‖ŷ − y*‖} ≤ β. We define the restricted gap function as

$$\mathrm{Gap}(\hat{x}, \hat{y}; \beta) := \max_{y \in B_\beta(y^*)} f(\hat{x}, y) - \min_{x \in B_\beta(x^*)} f(x, \hat{y}).$$

We call (x̂, ŷ) an ε-saddle point if Gap(x̂, ŷ; β) ≤ ε and β = Ω(max{‖x_0 − x*‖, ‖y_0 − y*‖}).

For the SC-SC case, we use the following stronger notion.

Definition 3.5. Suppose that Assumption 3.3 holds. Let z* = (x*, y*) be the unique saddle point of the function f. We call ẑ = (x̂, ŷ) an ε-root if ‖ẑ − z*‖ ≤ ε.

Most previous works only consider the complexity metric given by the number of oracle calls, where an oracle takes a point z ∈ ℝ^d as input and returns the tuple (F(z), F′(z)) as output. The existing algorithms (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022) have achieved the optimal complexity regarding the number of oracle calls. In this work, we focus on the computational complexity of the oracle. More specifically, we distinguish between the different computational complexities of calculating the Hessian matrix F′(z) and the gradient F(z). Formally, we make the following assumption as in Doikov et al. (2023).

Assumption 3.4. We count the computational complexity of computing F(·) as N and the computational complexity of computing F′(·) as Nd.

Remark 3.1. Assumption 3.4 supposes the cost of computing F′(·) is d times that of computing F(·). It holds in many practical scenarios, as one Hessian oracle can be computed via d Hessian-vector products on the standard basis vectors e_1, ..., e_d, and one Hessian-vector product oracle is typically as expensive as one gradient oracle (Wright, 2006):

1. When the computational graph of f is obtainable, both F(z) and F′(z)v can be computed using automatic differentiation with the same cost for any z, v ∈ ℝ^d.
2. When f is a black-box function, we can estimate the Hessian-vector product F′(z)v via the finite difference u_δ(z; v) = (F(z + δv) − F(z − δv)) / (2δ), and we have lim_{δ→0} u_δ(z; v) = F′(z)v under mild conditions on F(·).
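To make Remark 3.1 concrete, here is a small sketch (ours, not part of the paper; the toy objective and all names are illustrative) that builds the GDA field of a simple convex-concave f and assembles F′(z) from d finite-difference Jacobian-vector products on the standard basis.

```python
import numpy as np

# Toy convex-concave objective f(x, y) = 0.5*||x||^2 + x^T A y - 0.5*||y||^2.
A = np.array([[1.0, 2.0], [0.0, 1.0]])

def F(z, dx=2):
    """GDA field F(z) = (grad_x f, -grad_y f) for the toy f above."""
    x, y = z[:dx], z[dx:]
    gx = x + A @ y          # grad_x f
    gy = A.T @ x - y        # grad_y f
    return np.concatenate([gx, -gy])

def jvp_fd(F, z, v, delta=1e-6):
    """Central finite-difference estimate of F'(z) v (Remark 3.1, item 2)."""
    return (F(z + delta * v) - F(z - delta * v)) / (2.0 * delta)

def jacobian_fd(F, z, delta=1e-6):
    """Assemble F'(z) column by column from d Jacobian-vector products."""
    d = z.size
    return np.stack([jvp_fd(F, z, e, delta) for e in np.eye(d)], axis=1)

z, v = np.random.randn(4), np.random.randn(4)
print(np.allclose(jvp_fd(F, z, v), jacobian_fd(F, z) @ v, atol=1e-4))
```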
4 ALGORITHMS AND CONVERGENCE ANALYSIS

In this section, we present novel second-order methods for solving the minimax optimization problem (1). We present LEN and its convergence analysis for convex-concave minimax problems in Section 4.1. We generalize LEN to strongly-convex-strongly-concave minimax problems by presenting LEN-restart in Section 4.2. We discuss the details of solving the minimax cubic-regularized sub-problem, present a detailed implementation of LEN, and give the total computational complexity of the proposed methods in Section 4.3.

4.1 THE LEN ALGORITHM FOR CONVEX-CONCAVE PROBLEMS

We propose LEN for convex-concave problems in Algorithm 1. Our method builds on the optimal Newton Proximal Extragradient (NPE) method (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022). The only change is that we reuse the Hessian from the previous snapshot iterate in the lazy cubic step (Line 2 of Algorithm 1).

Algorithm 1 LEN(z_0, T, m, M)
1: for t = 0, ..., T − 1 do
2:   Compute the lazy cubic step, i.e. find z_{t+1/2} that satisfies F(z_t) = (F′(z_{π(t)}) + M‖z_t − z_{t+1/2}‖ I_d)(z_t − z_{t+1/2}).
3:   Compute γ_t = M‖z_t − z_{t+1/2}‖.
4:   Compute the extra-gradient step z_{t+1} = z_t − γ_t^{-1} F(z_{t+1/2}).
5: end for
6: return z̄_T = (Σ_{t=0}^{T−1} γ_t^{-1})^{-1} Σ_{t=0}^{T−1} γ_t^{-1} z_{t+1/2}

Each iteration of LEN contains the following two steps:

$$F(z_t) + F'(z_{\pi(t)})(z_{t+1/2} - z_t) + M \|z_{t+1/2} - z_t\| (z_{t+1/2} - z_t) = 0, \quad \text{(Implicit Step)}$$

$$z_{t+1} = z_t - \frac{F(z_{t+1/2})}{M \|z_{t+1/2} - z_t\|}. \quad \text{(Explicit Step)} \tag{4}$$

The first step (implicit step) solves a cubic regularized sub-problem based on the Jacobian F′(z_{π(t)}) computed at the latest snapshot point and the gradient F(z_t) at the current iterate. This step is often viewed as an oracle (Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022), as efficient solvers exist; they are discussed in Section 4.3. The second step (explicit step) conducts an extra-gradient step based on F(z_{t+1/2}). Reusing the Hessian in the implicit step makes each iteration much cheaper, but introduces additional errors compared to previous methods (Monteiro & Svaiter, 2012; Huang & Zhang, 2022; Adil et al., 2022; Lin et al., 2022). The error resulting from the lazy Hessian updates is formally characterized by the following lemma.

Lemma 4.1. Suppose that Assumptions 3.1 and 3.2 hold. For any z ∈ ℝ^d, Algorithm 1 ensures

$$\gamma_t^{-1} \langle F(z_{t+1/2}), z_{t+1/2} - z \rangle \le \frac{1}{2}\|z_t - z\|^2 - \frac{1}{2}\|z_{t+1} - z\|^2 - \frac{1}{2}\|z_{t+1/2} - z_{t+1}\|^2 - \frac{1}{2}\|z_t - z_{t+1/2}\|^2 + \frac{\rho^2}{2M^2}\|z_t - z_{t+1/2}\|^2 + \underbrace{\frac{2\rho^2}{M^2}\|z_{\pi(t)} - z_t\|^2}_{(*)}.$$

Above, (*) is the error from the lazy Hessian updates. Note that (*) vanishes when the current Hessian is used. For lazy Hessian updates, the error accumulates within each epoch. The key step in our analysis shows that we can use the negative terms on the right-hand side of the inequality in Lemma 4.1 to bound the accumulated error by choosing M sufficiently large, with the help of the following technical lemma.

Lemma 4.2. For any sequence of positive numbers {r_t}_{t≥0}, it holds for any m ≥ 2 that

$$\sum_{t=1}^{m-1} \left( \sum_{i=0}^{t-1} r_i \right)^2 \le \frac{m^2}{2} \sum_{t=0}^{m-1} r_t^2.$$

When m = 1, the algorithm reduces to the NPE algorithm (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022) without lazy Hessian updates. When m ≥ 2, we use Lemma 4.2 to upper bound the error that arises from the lazy Hessian updates. Finally, we prove the following guarantee for our proposed algorithm.

Theorem 4.1 (C-C setting). Suppose that Assumptions 3.1 and 3.2 hold. Let z* = (x*, y*) be a saddle point and β = ‖z_0 − z*‖. Set M ≥ 3ρm. The sequence of iterates generated by Algorithm 1 is bounded, i.e. z_t ∈ B_β(z*) and z_{t+1/2} ∈ B_{3β}(z*) for all t = 0, ..., T − 1, and satisfies the following ergodic convergence:

$$\mathrm{Gap}(\bar{x}_T, \bar{y}_T; 3\beta) \le \frac{32 M \|z_0 - z^*\|^3}{T^{3/2}}.$$

Let M = 3ρm. Algorithm 1 finds an ε-saddle point within O(m^{2/3} ε^{-2/3}) iterations.
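A minimal sketch (ours) of the LEN loop in Algorithm 1. The lazy cubic step is delegated to a black-box routine `solve_lazy_cubic` (a possible instantiation is sketched after Lemma 4.3 below); all names are ours.

```python
import numpy as np

def len_solver(F, dF, z0, T, m, M, solve_lazy_cubic):
    """Sketch of LEN (Algorithm 1). F: GDA field, dF: its Jacobian.
    solve_lazy_cubic(J, z, g, M) returns z_half solving the lazy cubic step."""
    z = z0.copy()
    J = None                                 # Jacobian at the current snapshot point
    weights, half_points = [], []
    for t in range(T):
        if t % m == 0:                       # refresh the lazy Hessian every m iterations
            J = dF(z)
        z_half = solve_lazy_cubic(J, z, F(z), M)      # implicit (lazy cubic) step
        gamma = M * np.linalg.norm(z - z_half)
        z = z - F(z_half) / gamma                     # explicit (extra-gradient) step
        weights.append(1.0 / gamma)
        half_points.append(z_half)
    # weighted average of the half points, as in Line 6 of Algorithm 1
    return np.average(np.stack(half_points), axis=0, weights=np.array(weights))
```

With m = 1 the snapshot is refreshed at every iteration and the loop recovers NPE.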
Discussion on the computational complexity of the oracles. Theorem 4.1 indicates that LEN requires O(m^{2/3} ε^{-2/3}) calls to F(·) and O(m^{2/3} ε^{-2/3} / m + 1) calls to F′(·) to find an ε-saddle point. Under Assumption 3.4, the computational cost of calling the oracles F(·) and F′(·) is

$$\text{Oracle Computational Cost} = O\!\left( N m^{2/3} \epsilon^{-2/3} + (Nd)\left( \epsilon^{-2/3} m^{-1/3} + 1 \right) \right). \tag{5}$$

Taking m = Θ(d) minimizes (5) to O(Nd + N d^{2/3} ε^{-2/3}). Compared to state-of-the-art second-order methods (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022), whose computational cost in terms of the oracles is O(N d ε^{-2/3}) since they query F′(·) at each iteration, our methods significantly improve their results by a factor of d^{1/3}. It is worth noticing that the computational cost of an algorithm includes both the computational cost of calling the oracles, which we have discussed above, and the computational cost of performing the updates (i.e. solving auxiliary problems) after accessing the required oracles. We give an efficient implementation of LEN and analyze the total computational cost later in Section 4.3.

4.2 THE LEN-RESTART ALGORITHM FOR STRONGLY-CONVEX-STRONGLY-CONCAVE PROBLEMS

We generalize LEN to solve strongly-convex-strongly-concave minimax optimization by incorporating the restart strategy introduced by Huang & Zhang (2022); Lin & Jordan (2024). We propose LEN-restart in Algorithm 2, which works in epochs. Each epoch of LEN-restart invokes LEN (Algorithm 1), which takes z^{(s)} as input and outputs z^{(s+1)}.

Algorithm 2 LEN-restart(z_0, T, m, M, S)
1: z^{(0)} = z_0
2: for s = 0, ..., S − 1 do
3:   z^{(s+1)} = LEN(z^{(s)}, T, m, M)
4: end for

The following theorem shows that the sequence {z^{(s)}} enjoys superlinear convergence in epochs. Furthermore, the required number of iterations in each epoch to achieve such a superlinear rate is only a constant.

Theorem 4.2 (SC-SC setting). Suppose that Assumptions 3.1 and 3.3 hold. Let z* = (x*, y*) be the unique saddle point. Set M = 3ρm as in Theorem 4.1 and T = ⌈(2M‖z_0 − z*‖ / µ)^{2/3}⌉. Then the sequence of iterates generated by Algorithm 2 converges to z* superlinearly as

$$\|z^{(s)} - z^*\|^2 \le \left(\tfrac{1}{2}\right)^{(3/2)^s} \|z_0 - z^*\|^2.$$

In other words, Algorithm 2 finds a point z^{(s)} such that ‖z^{(s)} − z*‖ ≤ ε within S = log_{3/2} log_2(1/ε) epochs. The total number of inner loop iterations is given by TS = O(m^{2/3} κ^{2/3} log log(1/ε)). Under Assumption 3.4, Algorithm 2 with m = Θ(d) takes a computational complexity of O((Nd + N d^{2/3} κ^{2/3}) log log(1/ε)) to call the oracles F(·) and F′(·).

4.3 IMPLEMENTATION DETAILS AND COMPUTATIONAL COMPLEXITY ANALYSIS

We provide details of implementing the cubic regularized Newton oracle (Implicit Step, (4)). Inspired by Monteiro & Svaiter (2012); Bullins & Lai (2022); Adil et al. (2022); Lin et al. (2022), we transform the sub-problem into a root-finding problem for a univariate function.

Lemma 4.3 (Section 4.3, Lin et al. (2022)). Suppose Assumptions 3.1 and 3.2 hold for the function f : ℝ^{d_x} × ℝ^{d_y} → ℝ and let F be its GDA field. Define γ_t = M‖z_{t+1/2} − z_t‖. The cubic regularized Newton oracle (Implicit Step, (4)) can be rewritten as

$$z_{t+1/2} = z_t - (F'(z_{\pi(t)}) + \gamma_t I_d)^{-1} F(z_t),$$

which can be implemented by finding the root of the following univariate function:

$$\phi(\gamma) := M \left\| (F'(z_{\pi(t)}) + \gamma I_d)^{-1} F(z_t) \right\| - \gamma. \tag{6}$$

Furthermore, the function φ(γ) is strictly decreasing for γ > 0.
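Below is a minimal sketch (ours; the bracketing-plus-bisection search and all names are illustrative, and the exact search used in the paper may differ) of the root-finding step in Lemma 4.3. It also instantiates the `solve_lazy_cubic` placeholder used in the sketch after Theorem 4.1, and uses the Schur factorization of the snapshot Jacobian discussed below so that each evaluation of φ(γ) costs O(d^2).

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def solve_lazy_cubic(J_snapshot, z, g, M, tol=1e-10):
    """Find z_half with g = (J_snapshot + gamma*I)(z - z_half) and
    gamma = M*||z - z_half||, i.e. the root of phi in (6)."""
    # In Algorithm 3 this factorization is computed once per snapshot and reused.
    U, Q = schur(J_snapshot, output='complex')   # J = Q U Q^H, U upper-triangular

    def step(gamma):
        d = z.size
        h = Q @ solve_triangular(U + gamma * np.eye(d), Q.conj().T @ g)
        return np.real(h)                        # h = (J + gamma*I)^{-1} g

    def phi(gamma):
        return M * np.linalg.norm(step(gamma)) - gamma

    lo, hi = 0.0, 1.0
    while phi(hi) > 0:              # grow the bracket until phi changes sign
        hi *= 2.0
    while hi - lo > tol:            # bisection: phi is strictly decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    gamma = 0.5 * (lo + hi)
    return z - step(gamma)
```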
From the above lemma, to implement the cubic regularized Newton oracle it suffices to find the root of a strictly decreasing function φ(γ), which can be done within O(1) iterations of a binary search. The main operation is to solve the following linear system:

$$(F'(z_{\pi(t)}) + \gamma I_d) h = F(z_t). \tag{7}$$

Naively solving this linear system at every iteration still results in an expensive computational complexity of O(d^3) per iteration. We present a computationally efficient way to implement LEN by leveraging the Schur factorization at the snapshot point, F′(z_{π(t)}) = QUQ^{-1}, where Q ∈ ℂ^{d×d} is a unitary matrix and U ∈ ℂ^{d×d} is an upper-triangular matrix. Then solving the linear system (7) is equivalent to computing

$$h = Q (U + \gamma I_d)^{-1} Q^{-1} F(z_t). \tag{8}$$

The final implementable algorithm is presented in Algorithm 3.

Algorithm 3 Implementation of LEN(z_0, T, m, M)
1: for t = 0, ..., T − 1 do
2:   if t mod m = 0 then
3:     Compute the Schur decomposition such that F′(z_t) = QUQ^{-1}.
4:   end if
5:   Let φ(·) be defined as in (6) and compute γ_t as its root by a binary search.
6:   Compute the lazy cubic step z_{t+1/2} = z_t − Q(U + γ_t I_d)^{-1} Q^{-1} F(z_t).
7:   Compute the extra-gradient step z_{t+1} = z_t − γ_t^{-1} F(z_{t+1/2}).
8: end for
9: return z̄_T = (Σ_{t=0}^{T−1} γ_t^{-1})^{-1} Σ_{t=0}^{T−1} γ_t^{-1} z_{t+1/2}

Now, we are ready to analyze the total computational complexity of LEN, which can be divided into the following two parts:

Computational Cost = Oracle Computational Cost + Update Computational Cost,

where the first part has been discussed in Section 4.1. Regarding the update computational cost, the Schur decomposition, with a computational complexity of O(d^3), is required once every m iterations. Once the Schur decomposition at the snapshot point is available, the dominant part of the update computational complexity is solving the upper-triangular linear system (8) by back substitution, which takes O(d^2). Thus, we have

$$\text{Update Computational Cost} = O\!\left( d^2 m^{2/3} \epsilon^{-2/3} + d^3 \left( \epsilon^{-2/3} m^{-1/3} + 1 \right) \right), \tag{9}$$

and the total computational cost of LEN is

$$\text{Computational Cost} \overset{(5),(9)}{=} O\!\left( (d^2 + N)\, m^{2/3} \epsilon^{-2/3} + (d^3 + Nd)\left( \epsilon^{-2/3} m^{-1/3} + 1 \right) \right). \tag{10}$$

By taking m = Θ(d), we obtain the best computational complexity of LEN in (10), which is formally stated in the following theorem.

Theorem 4.3 (C-C setting). Under the same setting as Theorem 4.1, Algorithm 3 with m = Θ(d) finds an ε-saddle point with computational complexity O((N + d^2)(d + d^{2/3} ε^{-2/3})).

We also present the total computational complexity of LEN-restart for the SC-SC setting.

Corollary 4.1 (SC-SC setting). Under the same setting as Theorem 4.2, Algorithm 2 implemented in the same way as Algorithm 3 with m = Θ(d) finds an ε-root with computational complexity O((N + d^2)(d + d^{2/3} κ^{2/3})).

In both cases, our proposed algorithms improve the total computational cost of the optimal second-order methods (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022) by a factor of d^{1/3}.
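For intuition (our short calculation, not from the paper), the choice m = Θ(d) can be read off from (10) by balancing its two ε-dependent terms:

$$(d^2 + N)\, m^{2/3} \epsilon^{-2/3} \;=\; (d^3 + Nd)\, m^{-1/3} \epsilon^{-2/3} \;\Longleftrightarrow\; m \;=\; \frac{d^3 + Nd}{d^2 + N} \;=\; d,$$

and substituting m = d back into (10) yields O((d^2 + N) d^{2/3} ε^{-2/3} + (d^3 + Nd)) = O((N + d^2)(d + d^{2/3} ε^{-2/3})), matching Theorem 4.3.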
Remark 4.1. In the main text, we assume the use of classical algorithms for matrix inversion/decomposition, which have a computational complexity of O(d^3). The fast matrix multiplication proposed by researchers in the field of theoretical computer science only requires a complexity of O(d^ω), where the best known ω is currently around 2.371552 (Williams et al., 2024). This also implies faster standard linear algebra operations, including the Schur decomposition and matrix inversion (Demmel et al., 2007). However, the large hidden constant factors in these fast matrix multiplication algorithms mean that the matrix dimensions necessary for these algorithms to be superior to classical algorithms are much larger than what current computers can effectively handle. Consequently, these algorithms are not always used in practice. We present the computational complexity of using fast matrix operations in Appendix G.

In Appendix H, we extend our algorithms to allow inexact solving of the auxiliary CRN sub-problem and analyze the total complexity. Specifically, we design an efficient sub-procedure (Algorithm 5) that solves the CRN sub-problem to the desired accuracy with only O(log log(1/ε)) linear system solves. It tightens the O(log(1/ε)) iteration complexity in (Bullins & Lai, 2022; Adil et al., 2022). Additionally, (Bullins & Lai, 2022; Adil et al., 2022) assume σ_min(F′(z)) ≥ µ for some positive constant µ, which makes the problem similar to strongly-convex(-strongly-concave) problems, while our analysis does not require such an assumption.

5 NUMERICAL EXPERIMENTS

We evaluate our algorithms on a regularized bilinear min-max problem and fairness-aware machine learning tasks. We include EG (Korpelevich, 1976) and NPE (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022) (which is our algorithm with m = 1) as baselines, since they are the optimal first- and second-order methods for convex-concave minimax problems, respectively. We run the programs on an AMD EPYC 7H12 64-Core Processor. (The source code is available at https://github.com/TrueNobility303/LEN.)

5.1 REGULARIZED BILINEAR MIN-MAX PROBLEM

We first conduct numerical experiments on the cubic regularized bilinear min-max problem considered in the literature (Lin et al., 2022; Jiang et al., 2024):

$$\min_{x \in \mathbb{R}^n} \max_{y \in \mathbb{R}^n} f(x, y) = \frac{\rho}{6}\|x\|^3 + y^\top (Ax - b). \tag{11}$$

The function f(x, y) is convex-concave and has ρ-Lipschitz continuous Hessians. The unique saddle point z* = (x*, y*) of f(x, y) can be explicitly calculated as x* = A^{-1} b and y* = −ρ‖x*‖ (A^⊤)^{-1} x* / 2, so we can compute the distance to z* to measure the performance of the algorithms. Following Lin et al. (2022), we generate each element of b as an independent Rademacher variable in {−1, +1}, set ρ = 1/(20n), and take the matrix

$$A = \begin{pmatrix} 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \\ & & & 1 \end{pmatrix}.$$

We compare our methods with the baselines on different sizes of the problem: n ∈ {10, 100, 200}. For EG, we tune the stepsize in {1, 0.1, 0.01, 0.001}. For LEN, we vary m in {1, 2, 10, 100}. The results of the running time against ‖F(z)‖ are presented in Figure 1.

[Figure 1: Running time vs. gradient norm ‖F(z)‖ for Problem (11) with different sizes n ∈ {10, 100, 200}, comparing EG, NPE, LEN-2, LEN-10, and LEN-100; panels (a) n = 10, (b) n = 100, (c) n = 200.]
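A small sketch (ours; names and the check at the end are illustrative) of how an instance of Problem (11), its GDA field, and the closed-form saddle point above can be generated for this experiment.

```python
import numpy as np

def make_bilinear_problem(n, seed=0):
    """Instance of Problem (11): f(x, y) = (rho/6)*||x||^3 + y^T (A x - b)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 / (20 * n)
    A = np.eye(n) - np.diag(np.ones(n - 1), k=1)     # 1 on the diagonal, -1 above
    b = rng.choice([-1.0, 1.0], size=n)              # Rademacher entries

    def F(z):
        """GDA field (grad_x f, -grad_y f) of the objective."""
        x, y = z[:n], z[n:]
        gx = 0.5 * rho * np.linalg.norm(x) * x + A.T @ y
        gy = A @ x - b
        return np.concatenate([gx, -gy])

    x_star = np.linalg.solve(A, b)                   # closed-form saddle point
    y_star = -0.5 * rho * np.linalg.norm(x_star) * np.linalg.solve(A.T, x_star)
    return F, rho, np.concatenate([x_star, y_star])

F, rho, z_star = make_bilinear_problem(10)
print(np.linalg.norm(F(z_star)))   # ~0 at the saddle point
```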
5.2 FAIRNESS-AWARE MACHINE LEARNING

We then examine our algorithm on the task of fairness-aware machine learning. Let {a_i, b_i, c_i}_{i=1}^{n} be the training set, where a_i ∈ ℝ^{d_x} denotes the features of the i-th sample, b_i ∈ {−1, +1} is the corresponding label, and c_i ∈ {−1, +1} is an additional feature that is required to be protected and debiased. For example, c_i can denote gender. Zhang et al. (2018) proposed to solve the following minimax problem to mitigate the unwanted bias of c_i by adversarial learning:

$$\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}} \frac{1}{n} \sum_{i=1}^{n} \left[ \ell(b_i a_i^\top x) - \beta\, \ell(c_i y\, a_i^\top x) \right] + \lambda \|x\|^2 - \gamma y^2, \tag{12}$$

where ℓ is the logit function ℓ(t) = log(1 + exp(−t)). We set λ = γ = 10^{-4} and β = 0.5. We conduct the experiments on the datasets "heart" (n = 270, d_x = 13) (Chang & Lin, 2011), "adult" (n = 32,561, d_x = 123) (Chang & Lin, 2011), and "law school" (n = 20,798, d_x = 380) (Le Quy et al., 2022; Liu et al., 2022). For all the datasets, we choose gender as the protected feature. For EG, we tune the stepsize in {0.1, 0.01, 0.001}. For the second-order methods (NPE and LEN), as we do not know the value of ρ in advance, we view it as a hyperparameter and tune it in {1, 10, 100}. We set m = 10 for LEN and find that this simple choice performs well on all the datasets we test. We show the results of the running time against the gradient norm ‖F(z)‖ in Figure 2.

[Figure 2: Running time vs. gradient norm ‖F(z)‖ for the fairness-aware machine learning task (Problem (12)) on the datasets "heart", "adult", and "law school".]

6 CONCLUSION AND FUTURE WORKS

In this paper, we propose LEN and LEN-restart for C-C and SC-SC minimax problems, respectively. Using lazy Hessian updates, our methods improve the computational complexity of the current best-known second-order methods by a factor of d^{1/3}. Numerical experiments on both real and synthetic datasets also verify the efficiency of our methods. For future work, it will be interesting to extend our idea to adaptive second-order methods (Wang et al., 2024a; Doikov et al., 2024; Carmon et al., 2022; Antonakopoulos et al., 2022; Liu & Luo, 2022b) and to stochastic problems with sub-sampled Newton methods (Lin et al., 2022; Chayti et al., 2023; Zhou et al., 2019; Tripuraneni et al., 2018; Wang et al., 2019). Besides, our methods focus only on the convex-concave case; it is also possible to reduce the Hessian oracle for nonconvex-(strongly)-concave problems (Luo et al., 2022; Lin et al., 2020; Yang et al., 2023; Zhang et al., 2022b; Wang et al., 2024b) or to study structured nonconvex-nonconcave problems (Zheng et al., 2024; Diakonikolas et al., 2021; Yang et al., 2020; Lee & Kim, 2021; Chen & Luo, 2024).

ACKNOWLEDGMENTS

Lesi Chen and Jingzhao Zhang are supported by the Shanghai Qi Zhi Institute Innovation Program. Chengchang Liu is supported by the National Natural Science Foundation of China (624B2125).

REFERENCES

Deeksha Adil, Brian Bullins, Arun Jambulapati, and Sushant Sachdeva. Optimal methods for higher-order smooth monotone variational inequalities. arXiv preprint arXiv:2205.06167, 2022.

Ilan Adler, Zhiyue T. Hu, and Tianyi Lin. New proximal Newton-type methods for convex optimization. In CDC, 2020.

M. Marques Alves and Benar F. Svaiter. A search-free O(1/k^{3/2}) homotopy inexact proximal-Newton extragradient algorithm for monotone variational inequalities. arXiv preprint arXiv:2308.05887, 2023.

Kimon Antonakopoulos, Ali Kavis, and Volkan Cevher. Extra-Newton: A first approach to noise-adaptive accelerated second-order methods. In NeurIPS, 2022.

Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust optimization, volume 28. Princeton University Press, 2009.

Brian Bullins and Kevin A. Lai. Higher-order methods for convex-concave min-max optimization and monotone variational inequalities. SIAM Journal on Optimization, 32(3):2208–2229, 2022.

Yair Carmon, Danielle Hausler, Arun Jambulapati, Yujia Jin, and Aaron Sidford. Optimal and adaptive Monteiro-Svaiter acceleration. In NeurIPS, 2022.

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines.
ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.

El Mahdi Chayti, Nikita Doikov, and Martin Jaggi. Unified convergence theory of stochastic and variance-reduced cubic Newton methods. arXiv preprint arXiv:2302.11962, 2023.

Lesi Chen and Luo Luo. Near-optimal algorithms for making the gradient small in stochastic minimax optimization. JMLR, 2024.

James Demmel, Ioana Dumitriu, and Olga Holtz. Fast linear algebra is stable. Numerische Mathematik, 108(1):59–91, 2007.

Jelena Diakonikolas, Constantinos Daskalakis, and Michael I. Jordan. Efficient methods for structured nonconvex-nonconcave min-max optimization. In AISTATS, 2021.

Nikita Doikov, El Mahdi Chayti, and Martin Jaggi. Second-order optimization with lazy Hessians. In ICML, 2023.

Nikita Doikov, Konstantin Mishchenko, and Yurii Nesterov. Super-universal regularized Newton method. SIAM Journal on Optimization, 34(1):27–56, 2024.

Simon S Du, Jianshu Chen, Lihong Li, Lin Xiao, and Dengyong Zhou. Stochastic variance reduction methods for policy evaluation. In ICML, 2017.

Jinyan Fan. A Shamanskii-like Levenberg-Marquardt method for nonlinear equations. Computational Optimization and Applications, 56(1):63–80, 2013.

Roger Grosse and James Martens. A Kronecker-factored approximate Fisher matrix for convolution layers. In ICML, 2016.

James A Hanley and Barbara J McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29–36, 1982.

Kevin Huang and Shuzhong Zhang. An approximation-based regularized extra-gradient method for monotone variational inequalities. arXiv preprint arXiv:2210.04440, 2022.

Kevin Huang, Junyu Zhang, and Shuzhong Zhang. Cubic regularized Newton method for the saddle point models: A global and local convergence analysis. Journal of Scientific Computing, 91(2):60, 2022.

Ruichen Jiang and Aryan Mokhtari. Generalized optimistic methods for convex-concave saddle point problems. arXiv preprint arXiv:2202.09674, 2022.

Ruichen Jiang, Qiujiang Jin, and Aryan Mokhtari. Online learning guided curvature approximation: A quasi-Newton method with global non-asymptotic superlinear convergence. In COLT, 2023.

Ruichen Jiang, Ali Kavis, Qiujiang Jin, Sujay Sanghavi, and Aryan Mokhtari. Adaptive and optimal second-order optimistic methods for minimax optimization. In NeurIPS, 2024.

Galina M Korpelevich. The extragradient method for finding saddle points and other problems. Matecon, 12:747–756, 1976.

Dmitry Kovalev, Adil Salim, and Peter Richtárik. Optimal and practical algorithms for smooth and strongly convex decentralized optimization. In NeurIPS, 2020.

Dmitry Kovalev, Elnur Gasanov, Alexander Gasnikov, and Peter Richtárik. Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks. In NeurIPS, 2021.

Francesco Lampariello and Marco Sciandrone. Global convergence technique for the Newton method with periodic Hessian evaluation. Journal of Optimization Theory and Applications, 111:341–358, 2001.

Tai Le Quy, Arjun Roy, Vasileios Iosifidis, Wenbin Zhang, and Eirini Ntoutsi. A survey on datasets for fairness-aware machine learning. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(3):e1452, 2022.

Sucheol Lee and Donghwan Kim. Fast extra gradient methods for smooth structured nonconvex-nonconcave minimax problems. In NeurIPS, 2021.

Tianyi Lin and Michael I Jordan.
Perseus: A simple high-order regularization method for variational inequalities. Mathematical Programming, pp. 1–42, 2024.

Tianyi Lin, Chi Jin, and Michael I Jordan. Near-optimal algorithms for minimax optimization. In COLT, 2020.

Tianyi Lin, Panayotis Mertikopoulos, and Michael I. Jordan. Explicit second-order min-max optimization methods with optimal convergence guarantee. arXiv preprint arXiv:2210.12860, 2022.

Chengchang Liu and Luo Luo. Quasi-Newton methods for saddle point problems. In NeurIPS, 2022a.

Chengchang Liu and Luo Luo. Regularized Newton methods for monotone variational inequalities with Hölder continuous Jacobians. arXiv preprint arXiv:2212.07824, 2022b.

Chengchang Liu, Shuxian Bi, Luo Luo, and John CS Lui. Partial-quasi-Newton methods: efficient algorithms for minimax optimization problems with unbalanced dimensionality. In SIGKDD, 2022.

Chengchang Liu, Lesi Chen, Luo Luo, and John CS Lui. Communication efficient distributed Newton method with fast convergence rates. In SIGKDD, 2023.

Hong Liu, Zhiyuan Li, David Hall, Percy Liang, and Tengyu Ma. Sophia: A scalable stochastic second-order optimizer for language model pre-training. In ICLR, 2024.

Luo Luo, Yujun Li, and Cheng Chen. Finding second-order stationary points in nonconvex-strongly-concave minimax optimization. In NeurIPS, 2022.

James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, 2015.

Aryan Mokhtari, Asuman Ozdaglar, and Sarath Pattathil. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In AISTATS, 2020a.

Aryan Mokhtari, Asuman E. Ozdaglar, and Sarath Pattathil. Convergence rate of O(1/k) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems. SIAM Journal on Optimization, 30(4):3230–3251, 2020b.

Renato DC Monteiro and Benar F Svaiter. Iteration-complexity of a Newton proximal extragradient method for monotone variational inequalities and inclusion problems. SIAM Journal on Optimization, 22(3):914–935, 2012.

Renato DC Monteiro and Benar Fux Svaiter. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM Journal on Optimization, 20(6):2755–2787, 2010.

Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.

Arkadij Semenovič Nemirovskij and David Borisovich Yudin. Problem complexity and method efficiency in optimization. 1983.

Yurii Nesterov. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming, 109(2-3):319–344, 2007.

Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.

Yurii Nesterov and Laura Scrimali. Solving strongly monotone variational and quasi-variational inequalities. 2006.

Santiago Paternain, Miguel Calvo-Fullana, Luiz FO Chamon, and Alejandro Ribeiro. Safe policies for reinforcement learning via primal-dual methods. IEEE Transactions on Automatic Control, 68(3):1321–1336, 2022.

Leonid Denisovich Popov. A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical Notes of the Academy of Sciences of the USSR, 28:845–848, 1980.

R. Tyrrell Rockafellar.
Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.

VE Shamanskii. A modification of Newton's method. Ukrainian Mathematical Journal, 19(1):118–122, 1967.

Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, and Michael I Jordan. Stochastic cubic regularization for fast nonconvex optimization. In NeurIPS, 2018.

John Von Neumann and Oskar Morgenstern. Theory of games and economic behavior, 2nd rev. 1947.

Hoi-To Wai, Zhuoran Yang, Zhaoran Wang, and Mingyi Hong. Multi-agent reinforcement learning via double averaging primal-dual optimization. In NeurIPS, 2018.

Chang-yu Wang, Yuan-yuan Chen, and Shou-qiang Du. Further insight into the Shamanskii modification of Newton method. Applied Mathematics and Computation, 180(1):46–52, 2006.

Junlin Wang, Junnan Yang, and Zi Xu. A fully parameter-free second-order algorithm for convex-concave minimax problems with optimal iteration complexity. arXiv preprint arXiv:2407.03571, 2024a.

Mengdi Wang. Primal-dual π-learning: Sample complexity and sublinear run time for ergodic Markov decision problems. arXiv preprint arXiv:1710.06100, 2017.

Nuozhou Wang, Junyu Zhang, and Shuzhong Zhang. Efficient first order method for saddle point problems with higher order smoothness. SIAM Journal on Optimization, 34(4):3342–3370, 2024b.

Zhe Wang, Yi Zhou, Yingbin Liang, and Guanghui Lan. Stochastic variance-reduced cubic regularization for nonconvex optimization. In AISTATS, 2019.

Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. New bounds for matrix multiplication: from alpha to omega. In SODA, 2024.

Stephen J. Wright. Numerical optimization, 2006.

Yangyang Xu. Primal-dual stochastic gradient method for convex programs with many functional constraints. SIAM Journal on Optimization, 30(2):1664–1692, 2020.

Haikuo Yang, Luo Luo, Chris Junchi Li, Michael Jordan, and Maryam Fazel. Accelerating inexact hypergradient descent for bilevel optimization. In OPT 2023: Optimization for Machine Learning, 2023.

Junchi Yang, Negar Kiyavash, and Niao He. Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems. In NeurIPS, 2020.

Yiming Ying, Longyin Wen, and Siwei Lyu. Stochastic online AUC maximization. In NIPS, 2016.

Zhuoning Yuan, Yan Yan, Milan Sonka, and Tianbao Yang. Large-scale robust deep AUC maximization: A new surrogate loss and empirical studies on medical image classification. In ICCV, 2021.

Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335–340, 2018.

Junyu Zhang, Mingyi Hong, and Shuzhong Zhang. On lower iteration complexity bounds for the convex concave saddle point problems. Mathematical Programming, 194(1-2):901–935, 2022a.

Xuan Zhang, Necdet Serhat Aybat, and Mert Gurbuzbalaban. SAPD+: An accelerated stochastic method for nonconvex-concave minimax problems. In NeurIPS, 2022b.

Taoli Zheng, Linglingzhi Zhu, Anthony Man-Cho So, José Blanchet, and Jiajin Li. Universal gradient descent ascent method for nonconvex-nonconcave minimax optimization. In NeurIPS, 2024.

Dongruo Zhou, Pan Xu, and Quanquan Gu. Stochastic variance-reduced cubic regularization methods. JMLR, 20(134):1–47, 2019.