# Gradients of Functions of Large Matrices

Nicholas Krämer, Pablo Moreno-Muñoz, Hrittik Roy, Søren Hauberg
Technical University of Denmark, Kongens Lyngby, Denmark
{pekra, pabmo, hroy, sohau}@dtu.dk

38th Conference on Neural Information Processing Systems (NeurIPS 2024).

Tuning scientific and probabilistic machine learning models (for example, partial differential equations, Gaussian processes, or Bayesian neural networks) often relies on evaluating functions of matrices whose size grows with the data set or the number of parameters. While the state of the art for evaluating these quantities is almost always based on Lanczos and Arnoldi iterations, the present work is the first to explain how to differentiate these workhorses of numerical linear algebra efficiently. To get there, we derive previously unknown adjoint systems for Lanczos and Arnoldi iterations, implement them in JAX, and show that the resulting code can compete with Diffrax when it comes to differentiating PDEs and with GPyTorch for selecting Gaussian process models, and that it beats standard factorisation methods for calibrating Bayesian neural networks. All this is achieved without any problem-specific code optimisation. Find the code at https://github.com/pnkraemer/experiments-lanczos-adjoints and install the library with pip install matfree.

## 1 Introduction

Automatic differentiation has dramatically altered the development of machine learning models by allowing us to forego laborious, application-dependent gradient derivations. The essence of this automation is to evaluate Jacobian-vector and vector-Jacobian products without ever instantiating the full Jacobian matrix, whose column count would match the number of parameters of the neural network. Nowadays, everyone can build algorithms around matrices of unprecedented sizes by exploiting this matrix-free implementation. However, differentiable linear algebra for Jacobian-vector products and similar operations has remained largely unexplored to this day.

We introduce a new matrix-free method for automatically differentiating functions of matrices. Our algorithm yields the exact gradients of the forward pass, all gradients are obtained with the same code, and said code runs with linear time- and memory-complexity. For a parametrised matrix $A = A(\theta) \in \mathbb{R}^{N \times N}$ and an analytic function $f: \mathbb{R} \to \mathbb{R}$, we call $f(A)$ a function of the matrix (different properties of $A$ imply different definitions of $f(A)$; one of them is applying $f$ to each eigenvalue of $A$ if $A$ is diagonalisable; see [1]). However, we assume that $A$ is the Jacobian of a large neural network or a matrix of similar size and never materialise $f(A)$. Instead, we only care about the values and gradients of the matrix-function-vector product

$$(\theta, v) \mapsto f[A(\theta)]\,v \quad (1)$$

assuming that $A$ is only accessed via differentiable matrix-vector products. Table 1 lists examples. Evaluating Equation 1 is crucial for building large machine learning models, e.g., Bayesian neural networks: a common hyperparameter-calibration loss of a (Laplace-approximated) Bayesian neural network involves the log-determinant of the generalised Gauss-Newton matrix [13]

$$\sum_{(x_i, y_i) \in \text{data}} [D_\theta g](x_i)^\top\, [D_g^2 \rho](y_i, g(x_i))\, [D_\theta g](x_i) + \alpha^2 I, \quad (2)$$

Table 1: Some applications for functions of matrices. Log-determinants apply by combining $\log\det(A) = \mathrm{trace}(\log(A))$ with stochastic trace estimation, which is why most vectors in this table are Rademacher samples. PDE/ODE = partial/ordinary differential equation.
| Application | Function $f$ | Matrix $A$ | Vector $v$ | Parameter $\theta$ |
|---|---|---|---|---|
| PDEs & flows [2-5] | $e^\lambda$ | PDE discretisation | PDE initial value | PDE |
| Gaussian process [6-8] | $\log(\lambda)$ | Kernel matrix | $v \sim$ Rademacher | Kernel |
| Invertible ResNets [9, 10] | $\log(1+\lambda)$ | Jacobian matrix | $v \sim$ Rademacher | Network |
| Gaussian sampler [11] | $\sqrt{\lambda}$ | Covariance matrix | $v \sim N(0, I)$ | Covariance |
| Neural ODE [12] | $\lambda^2$ | Jacobian matrix | $v \sim$ Rademacher | Network |

where $D_\theta g$ is the parameter-Jacobian of the neural network $g$, $D_g^2\rho$ is the Hessian of the loss function $\rho$ with respect to $g(x_i)$, and $\alpha$ is a to-be-tuned parameter. The matrix $A(\alpha)$ in Equation 2 has as many rows and columns as the network has parameters, which makes traditional, cubic-complexity linear algebra routines for log-determinant estimation entirely infeasible. To compute this log-determinant, one chooses between either (i) simplifying the problem by pretending that the Hessian matrix is more structured than it actually is, e.g., diagonal [14]; or (ii) approximating $\log\det(A)$ by combining stochastic trace estimation [15]

$$\mathrm{trace}(A) = \mathbb{E}_v\!\left[v^\top A v\right] \approx \frac{1}{L}\sum_{\ell=1}^{L} v_\ell^\top A v_\ell, \qquad \text{for } \mathbb{E}\!\left[v v^\top\right] = I, \quad (3)$$

with a Lanczos iteration $A(\theta) \approx Q H Q^\top$ [16], to reduce the log-determinant to [17, 18]

$$\log\det(A) = \mathrm{trace}(\log A) \approx \frac{1}{L}\sum_{\ell=1}^{L} v_\ell^\top \log(A)\, v_\ell \approx \frac{1}{L}\sum_{\ell=1}^{L} v_\ell^\top Q \log(H)\, Q^\top v_\ell. \quad (4)$$

The matrix $H$ in $A \approx QHQ^\top$ has as many rows/columns as we are willing to evaluate matrix-vector products with $A$; thus, it is small enough to evaluate the matrix-logarithm $\log(H)$ in cubic complexity.

Figure 1: Values (down) and gradients (up) of functions of large matrices. (Pipeline: parameter $\theta$ → large matrix-vector product → small factorisation via Arnoldi/Lanczos → function of the large matrix via cheap linear algebra → optimise $\theta$; the gradient path is our contribution.)

**Contributions** This article explains how to differentiate not just log-determinants but any Lanczos and Arnoldi iteration, so we can build loss functions for large models with such matrix-free algorithms (thereby completing the pipeline in Figure 1). This kind of functionality has been sorely missing from the toolbox of differentiable programming until now, even though the demand for functions of matrices is high in all of probabilistic and scientific machine learning [e.g., 2-12, 19-31].

## 2 Related work

Here, we focus on applications in machine learning and illustrate how prior work avoids differentiating matrix-free decomposition methods like the Lanczos and Arnoldi iterations. Golub and Meurant [32] discuss applications outside machine learning.

Generative models, e.g., normalising flows [27, 33], rely on the change-of-variables formula, which involves the log-determinant of the Jacobian matrix of a neural network. Behrmann et al. [9] and Chen et al. [10] combine stochastic trace estimation with a Taylor-series expansion for the matrix logarithm. Ramesh and LeCun [34] use Chebyshev expansions instead of Taylor expansions. That said, Ubaru et al. [17] demonstrate how both methods converge more slowly than the Lanczos iteration when combined with stochastic trace estimation.

Gaussian process model selection requires values and gradients of log-probability density functions of Gaussian distributions (which involve log-determinants), where the covariance matrix $A(\theta)$ has as many rows and columns as there are data points [35]. Recent work [6, 7, 11, 19, 23, 36] all uses some combination of stochastic trace estimation with the Lanczos iteration, and unanimously identifies gradients of log-determinants as ($\mathrm{d}$ shall be an infinitesimal perturbation; see Section 4)

$$\mu := \log\det(K(\theta)), \qquad \mathrm{d}\mu = \mathrm{trace}\!\left(K(\theta)^{-1}\,\mathrm{d}K(\theta)\right). \quad (5)$$

Another round of stochastic trace estimation then estimates $\mathrm{d}\mu$ [6, 23, 36].
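As a concrete illustration of Equation 3 (and of the way Equation 5 is typically estimated in practice), the following is a minimal sketch of Hutchinson's stochastic trace estimator in JAX; the helper name `hutchinson_trace` and the dense toy matrix are ours for illustration and are not the matfree library's API.

```python
import jax
import jax.numpy as jnp


def hutchinson_trace(matvec, dim, key, num_samples=10):
    """Estimate trace(A) from matrix-vector products with A (Equation 3)."""
    # Rademacher probe vectors satisfy E[v v^T] = I.
    probes = jax.random.rademacher(key, (num_samples, dim)).astype(jnp.float32)
    quadratic_forms = jax.vmap(lambda v: v @ matvec(v))(probes)
    return jnp.mean(quadratic_forms)


# Toy example with an explicit SPD matrix; in the paper's setting, `matvec`
# would be a matrix-free product with, e.g., a kernel or GGN matrix.
key = jax.random.PRNGKey(0)
A = jnp.eye(100) + 0.1 * jax.random.normal(key, (100, 100))
A = A @ A.T
estimate = hutchinson_trace(lambda v: A @ v, dim=100, key=jax.random.PRNGKey(1))
print(estimate, jnp.trace(A))
```

Feeding a matrix-free product with $K(\theta)^{-1}\mathrm{d}K(\theta)$ (or with $\log(A)$ via the Lanczos surrogate in Equation 4) into the same estimator is how the cited work evaluates Equation 5.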
In contrast, our contribution is more fundamental: not only do we derive the exact gradients of the forward pass, but our formulation also applies to, say, matrix exponentials, whereas Equation 5 only works for log-determinants. Section 5 shows how our black-box gradients match state-of-the-art code for Equation 5 [6].

Laplace approximations and neural tangent kernels face the same problem of computing derivatives of log-determinants, but with the generalised Gauss-Newton (GGN) matrix from Equation 2. In contrast to the Gaussian process literature, prior work on Laplace approximations prefers structured approximations of the GGN by considering subsets of network weights [37-39], or algebraic approximations of the GGN via diagonal, KFAC, or low-rank factors [31, 40-44]. All such approximations imply simple expressions for log-determinants, which are straightforward to differentiate automatically. Unfortunately, these approximations discard valuable information about the correlation between weights, so a linear-algebra-based approach leads to superior likelihood calibration (Section 7).

Linear differential equations, for instance $\dot y(t) = Ay(t)$, $y(0) = y_0$, are solved by matrix exponentials, $y(t) = \exp(At)\,y_0$. By this relation, matrix exponentials have frequent applications not just for the simulation of differential equations [e.g., 2, 45], but also for the construction of exponential integrators [3, 26, 29], state-space models [5, 46], and in generative modelling [4, 26, 28, 47]. There are many ways of computing matrix exponentials [48, 49], but only Al-Mohy and Higham [50] consider the problem of differentiating them, and only in forward mode. In contrast, differential equations have a rich history of adjoint methods [e.g., 51, 52] with high-performance open-source libraries [53-56]. Still, the (now differentiable) Arnoldi iteration can compete with state-of-the-art solvers in JAX (Section 6).

## 3 Problem statement

Recall $A = A(\theta) \in \mathbb{R}^{N\times N}$ from Section 1. The focus of this paper is on matrices that are too large to store in memory, like Jacobians of neural networks or discretised partial differential equations:

Assumption 3.1. $A(\theta)$ is only accessed via differentiable matrix-vector products $(\theta, v) \mapsto A(\theta)v$.

The de-facto standard for linear algebra under Assumption 3.1 are matrix-free algorithms [e.g., 57, Chapters 10 & 11], like the conjugate gradient method for solving large sparse linear systems [58]. But there is more to matrix-free linear algebra than conjugate gradient solvers:

Figure 2: Lanczos/Arnoldi iteration.

Matrix-free implementations of matrix decompositions usually revolve around variations of the Arnoldi iteration [59], which takes an initial vector $v \in \mathbb{R}^N$ and a prescribed number of iterations $K \in \mathbb{N}$ and produces a column-orthogonal $Q \in \mathbb{R}^{N\times K}$, a structured $H \in \mathbb{R}^{K\times K}$, a residual vector $r \in \mathbb{R}^N$, and a length $c \in \mathbb{R}$ such that

$$AQ = QH + r\,(e_K)^\top, \qquad Q e_1 = c\,v \quad (6)$$

hold (Figure 2; $e_1, e_K \in \mathbb{R}^K$ are the first and last unit vectors). If $A$ is symmetric, $H$ is tridiagonal, and the Arnoldi iteration becomes the Lanczos iteration [16]. Both iterations are popular for implementing matrix-function-vector products in a matrix-free fashion [1, 57], because the decomposition in Equation 6 implies $A \approx QHQ^\top$, thus

$$(\theta, v) \mapsto f(A(\theta))\,v \approx Q f(H)\, Q^\top v = c^{-1}\, Q f(H)\, e_1. \quad (7)$$

The last step, $Q^\top v = c^{-1} e_1$, is due to the orthogonality of $Q$. Since the number of matrix-vector products $K$ rarely exceeds a few hundreds or thousands, the following Assumption 3.2 is mild:

Assumption 3.2. The map $H \mapsto f(H)e_1$ is differentiable, and $Q$ fits into memory.
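To make Equations 6 and 7 concrete, here is a didactic sketch of a Lanczos iteration (the symmetric case) with full reorthogonalisation, followed by the small-matrix function evaluation. It uses the notation above but is our own re-implementation, not the matfree library's API; the default `f_small = expm` is an arbitrary choice.

```python
import jax.numpy as jnp
from jax.scipy.linalg import expm


def lanczos_fAv(matvec, v, krylov_depth, f_small=expm):
    """Approximate f(A) v via Equation 7, using K = krylov_depth matvecs with A."""
    c = 1.0 / jnp.linalg.norm(v)            # Q e_1 = c v, so the first column is v / ||v||
    q = c * v
    Q, alphas, betas = [q], [], []
    beta, q_prev = 0.0, jnp.zeros_like(v)
    for _ in range(krylov_depth):
        w = matvec(q) - beta * q_prev
        alpha = q @ w
        w = w - alpha * q
        w = w - jnp.stack(Q).T @ (jnp.stack(Q) @ w)   # full reorthogonalisation
        beta = jnp.linalg.norm(w)                      # breakdown (beta = 0) not handled
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
        Q.append(q)
    # Tridiagonal H from the recurrence coefficients (the Lanczos case of Equation 6).
    H = (jnp.diag(jnp.array(alphas))
         + jnp.diag(jnp.array(betas[:-1]), 1)
         + jnp.diag(jnp.array(betas[:-1]), -1))
    Q = jnp.stack(Q[:-1]).T                            # N x K, column-orthogonal
    return Q @ f_small(H)[:, 0] / c                    # Equation 7: c^{-1} Q f(H) e_1
```

For a non-symmetric $A$, the Arnoldi iteration would instead store a full upper-Hessenberg $H$. Backpropagating through a loop like this is exactly what Figure 3 shows to be too slow; the adjoint method in Section 4 delivers the same gradients without unrolling the iteration.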
In summary, we evaluate functions of large matrices by firstly decomposing a large matrix into a product of small matrices (with Lanczos or Arnoldi) and, secondly, using conventional linear algebra to evaluate functions of small matrices. Functions of small matrices can already be differentiated efficiently [60-62]. This work contributes gradients of the Lanczos and Arnoldi iterations under Assumptions 3.1 and 3.2, and thereby makes matrix-free implementations of matrix decompositions and functions of large matrices (reverse-mode) differentiable.

Figure 3: Backpropagation versus our adjoint method on a sparse matrix [63-65] (SuiteSparse matrix 'bcsstk18', N = 11,948 rows; wall time of the forward pass, the adjoint, and backpropagation as a function of the Krylov-space depth).

Automatic differentiation, i.e., backpropagating through the matrix decomposition, is far too inefficient to be a viable option (Figure 3; setup in Appendix A). Our approach via implicit differentiation or the adjoint method, respectively, leads to gradients that inherit the linear runtime and memory-complexity of the forward pass.

**Limitations and future work** The landscape of Lanczos- and Arnoldi-style matrix decompositions is vast, and some adjacent problems cannot be solved by this single article: (i) Forward-mode derivatives would require a derivation separate from what comes next. Yet, since functions of matrices map many to few parameters (matrices to vectors), reverse-mode is superior to forward-mode anyway [66, p. 153]. (ii) We only consider real-valued matrices (for their relevance to machine learning), even though the decompositions generalise to complex arithmetic with applications in physics [67]. (iii) We assume $Q$ fits into memory, which relates to combining Arnoldi/Lanczos with full reorthogonalisation [57, 68-71]. Relaxing this assumption requires gradients of partial reorthogonalisation (among other things), which we leave to future work.

## 4 The method: Adjoints of the Lanczos and Arnoldi iterations

Numerical algorithms are rarely differentiated automatically; usually, some form of what is known as implicit differentiation [66, 72] applies. The same is true for the Lanczos and Arnoldi iterations. However, and perhaps surprisingly, we differentiate the iterations like a dynamical system using the adjoint method [51, 73, 74], a variation of implicit differentiation that uses Lagrange multipliers [66], and not like a linear algebra routine [60, 61, 75]. To clarify this distinction, we briefly review implicit differentiation before the core contributions of this work in Sections 4.1 and 4.2.

**Notation** Let $\mathrm{d}x$ be an infinitesimal perturbation of some $x$. $D$ is the Jacobian operator, and $\langle\cdot,\cdot\rangle$ the Euclidean inner product between two equally-sized inputs. For a loss $\rho \in \mathbb{R}$ that depends on some $x$, the linearisation $\mathrm{d}\rho = D_x\rho\, \mathrm{d}x$ and the gradient identity $\mathrm{d}\rho = \langle \nabla_x\rho, \mathrm{d}x\rangle$ will be important [76, 77].

**Implicit differentiation** Let $a: \theta \mapsto x$ be a numerical algorithm that computes some $x$ from some $\theta$. Assume that the input and output of $a(\cdot)$ satisfy the constraint $c(\theta, a(\theta)) = 0$. For instance, if $a(\cdot)$ solves $Ax = b$, the constraint is $c(A, b; x) = Ax - b$ with $\theta := \{A, b\}$. We can use $c(\cdot)$ in combination with the chain rule to find the derivatives of $a(\cdot)$ ($c = 0$ implies $\mathrm{d}c = 0$),

$$0 = \mathrm{d}c(\theta, x) = D_x c(\theta, x)\,\mathrm{d}x + D_\theta c(\theta, x)\,\mathrm{d}\theta. \quad (8)$$

In other words, we linearise the constraint $c(\theta, x) = 0$. The adjoint method [78] proceeds by transposing this linearisation as follows. Let $\rho$ be a loss that depends on $x$ with gradient $\nabla_x\rho$ and recall the gradient identity from the Notation paragraph above. Then, for all Lagrange multipliers $\lambda$ with the same shape as the outputs of $c(\cdot)$, we know that since $c = 0$ implies $\mathrm{d}c = 0$,

$$\mathrm{d}\rho = \langle\nabla_x\rho, \mathrm{d}x\rangle = \langle\nabla_x\rho, \mathrm{d}x\rangle + \langle\lambda, \mathrm{d}c\rangle = \langle\nabla_x\rho + (D_x c)^\top\lambda, \mathrm{d}x\rangle + \langle(D_\theta c)^\top\lambda, \mathrm{d}\theta\rangle \quad (9)$$

must hold. By matching Equation 9 to $\mathrm{d}\rho = \langle\nabla_\theta\rho, \mathrm{d}\theta\rangle$ (this time, regarding $\rho$ as a function of $\theta$, not of $x$; recall the Notation paragraph), we conclude that if $\lambda$ solves the adjoint system

$$\nabla_x\rho + (D_x c)^\top\lambda = 0, \quad (10)$$

then $\nabla_\theta\rho := (D_\theta c)^\top\lambda$ must be the gradient of $\rho$ with respect to the input $\theta$. This is the adjoint method [66, Section 10.4]. In automatic differentiation frameworks like JAX [79], this gradient implements a vector-Jacobian product with the Jacobian of $a(\cdot)$ implicitly via the Lagrange multiplier $\lambda$, without differentiating through $a(\cdot)$ explicitly. In comparison to approaches that explicitly target vector-Jacobian products with implicit differentiation [like 66, Proposition 10.1], the adjoint method shines when applied to highly structured, non-vector-valued constraints, such as dynamical systems or the Lanczos and Arnoldi iterations. The reason is that the adjoint method does not change if $c(\cdot)$ becomes matrix- or function-space-valued, as long as we can define inner products and adjoint operators, whereas other approaches (like what Blondel et al. [72] use for numerical optimisers) would become increasingly laborious in these cases.

In summary, to reverse-mode differentiate a numerical algorithm with the adjoint method, we need four steps: (i) find a constraint, (ii) linearise it, (iii) introduce Lagrange multipliers, and (iv) solve the resulting adjoint system. Carrying out those four steps for the Lanczos and Arnoldi iterations is the main contribution of the paper: Section 4.1 states both adjoint systems and Section 4.2 covers a matrix-free implementation.
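As a warm-up for this recipe (and explicitly not the paper's Lanczos/Arnoldi adjoint itself), the sketch below applies the four steps to the linear-solve constraint $c(A, b; x) = Ax - b$: the adjoint system (10) becomes $A^\top\lambda = -\nabla_x\rho$, and the gradients are $\nabla_A\rho = \lambda x^\top$ and $\nabla_b\rho = -\lambda$. The function names are ours; only the generic jax.custom_vjp machinery is assumed.

```python
import jax
import jax.numpy as jnp


@jax.custom_vjp
def linsolve(A, b):
    """Solve Ax = b; the constraint is c(A, b; x) = Ax - b."""
    return jnp.linalg.solve(A, b)


def linsolve_fwd(A, b):
    x = linsolve(A, b)
    return x, (A, x)


def linsolve_bwd(residuals, x_bar):
    # Adjoint system (10): nabla_x rho + (D_x c)^T lam = x_bar + A^T lam = 0.
    A, x = residuals
    lam = -jnp.linalg.solve(A.T, x_bar)
    # Gradients via (D_theta c)^T lam:  nabla_A rho = lam x^T,  nabla_b rho = -lam.
    return lam[:, None] * x[None, :], -lam


linsolve.defvjp(linsolve_fwd, linsolve_bwd)

# The resulting vector-Jacobian product matches backpropagation through
# jnp.linalg.solve but never differentiates the solver's internals:
A = jnp.array([[3.0, 1.0], [1.0, 2.0]])
b = jnp.array([1.0, -1.0])
loss = lambda A, b: jnp.sum(linsolve(A, b) ** 2)
print(jax.grad(loss, argnums=(0, 1))(A, b))
```

The adjoints in Section 4.1 follow the same four steps, with the decomposition constraint in Equation 6 taking the place of $Ax - b = 0$.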
### 4.1 Adjoint system of the Arnoldi and Lanczos iterations

Let $e_j$ be the $j$th unit vector. Denote by $\odot$ the element-wise matrix product, and define the matrices $I := [\delta_{ij}]_{i,j=1}^{K}$, $I_< := [\delta_{i<}\ldots$

Let sym(M) := ... M) (50) be a symmetrisation operator. We define it for the sole purpose of reconstructing sym I= Γ + I Γ = I Γ + (I Γ) . (51) Let us use it: I Γ + (I Γ) = sym I [Q Qf Hf H + Q A Λ] c cf e1(e1) (52a) = sym I [Q Qf Hf H + Q A Λ] c cf e1(e1) . (52b)

This yields the next row/column of $\Gamma + \Gamma^\top$, and therefore the next row of $\Psi$. From there, we can assemble the next column of $\Lambda$ and iterate. Figure 8 (respectively Algorithms E.3 and E.4) compares pseudocode for the forward and adjoint passes. Altogether, the implementation of the adjoint pass is very similar to that of the forward pass. At the final step, we obtain not the last column of $\Lambda$ but $\lambda$, though this is a byproduct of solving the triangular linear system. It does not need further explanation.

Remark E.5 ($\Sigma$). Like for gradients of QR decompositions [61, 75], we never solve for $\Sigma$.

## F Setup for Table 2

Figure 9: Accuracy loss $\varepsilon$ when differentiating $\mathcal{I}$ for a Hilbert matrix of increasing size $N$ (adjoint with versus without re-projection). Uses double precision.

To create Table 2, we implement an operator

$$\mathcal{I}: A \mapsto (H, Q, r, c) \mapsto QHQ^\top, \quad (53)$$

where $(H, Q, r, c)$ are the result of a full-rank Arnoldi iteration (i.e., $K = N$). For $K = N$, $QHQ^\top = A$ and $\mathcal{I}$ must have an identity Jacobian; thus,

$$\varepsilon := \| I_{N^2} - D\mathcal{I} \|_{\mathrm{RMSE}} \quad (54)$$

measures the loss of accuracy when differentiating the Arnoldi iteration. A small $\varepsilon$ is desirable.
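A sketch of how such an identity check can be set up (our own illustration: `decompose_eigh` is an eigendecomposition-based stand-in satisfying $QHQ^\top = A$, not the re-orthogonalised Arnoldi forward pass that the experiment actually differentiates):

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # the text uses double precision


def accuracy_loss(decompose, A):
    """Equation 54: RMSE between the Jacobian of I(A) = Q H Q^T and the identity."""
    def reconstruct(A_flat):
        H, Q, r, c = decompose(A_flat.reshape(A.shape))
        return (Q @ H @ Q.T).reshape(-1)

    jacobian = jax.jacrev(reconstruct)(A.reshape(-1))   # (N^2, N^2) Jacobian of I
    residual = jacobian - jnp.eye(A.size)
    return jnp.sqrt(jnp.mean(residual**2))


def decompose_eigh(A):
    """Stand-in full-rank decomposition with Q H Q^T = A (not an Arnoldi iteration)."""
    eigvals, Q = jnp.linalg.eigh(A)
    return jnp.diag(eigvals), Q, jnp.zeros(A.shape[0]), 1.0


def hilbert(n):
    """Hilbert-type test matrix [1 / (i + j + 1)] from the text."""
    i = jnp.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)


print(accuracy_loss(decompose_eigh, hilbert(8)))
```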
Then, using double precision, we construct a Hilbert matrix $A = [1/(i+j+1)]_{i,j=1}^{N} \in \mathbb{R}^{N\times N}$, which is a famously ill-conditioned matrix and a common test-bed for the loss of orthogonality in methods like the Lanczos and Arnoldi iterations [e.g., 71, Table 7.1]. We evaluate three algorithms, all of which rely on the Arnoldi iteration with full reorthogonalisation on the forward pass: one algorithm does not re-project on the adjoint constraints, another one does, and for reference we compute $\varepsilon$ when backpropagating through the re-orthogonalised Arnoldi iteration as a third option. Figure 3 has demonstrated that the first two options beat the third one in terms of speed, but we consider numerical accuracy here. We evaluate $\varepsilon$ for $N = 1, \ldots, 8$ (see Figure 9), and show the values for $N = 8$ in Table 2. The numerical accuracy of the re-projected adjoint method matches that of differentiating through re-orthogonalisation, and outperforms not re-projecting by a margin.

## G Memory-efficient kernel-matrix-vector products in JAX

Matrix-free linear algebra requires efficient matrix-vector products. For a kernel function $k = k(x, x')$ and input data $x_1, \ldots, x_N$, Gaussian process covariance matrices are of the form $A = [k(x_i, x_j)]_{i,j=1}^N$. Matrix-vector products with $A$ thus look like $(Av)_i = \sum_{j=1}^{N} k(x_i, x_j)\, v_j$ and can be assembled row-wise, either sequentially or in parallel. The more rows we assemble in parallel, the faster the runtime but also the higher the memory requirements, so we follow Gardner et al. [6] and choose the largest number of rows of $A$ that still fits into memory, say $r$ such rows, and assemble $Av$ in blocks of $r$. In practice, we implement this in JAX by combining jax.lax.map and jax.vmap, but care has to be taken with reverse-mode automatic differentiation through $(v, \theta) \mapsto A(\theta)v$ because, by default, reverse-mode differentiation stores all intermediate results. To solve this problem, we place checkpoints around each such batch of rows, which reduces the memory requirements but roughly doubles the runtime. (We place another checkpoint around each stochastic trace-estimation sample, which roughly doubles the runtime again.) An alternative to doing this manually is the KeOps library [82], which GPyTorch [6] builds on. However, there currently exists no JAX-compatible interface to KeOps, which is why we have to implement the above solution.

Figure 10: For matrices with at least 10,000 rows/columns, KeOps remains the state of the art. This experiment uses a square-exponential kernel on an artificial dataset with d = 3 dimensions.

Figure 10 compares the runtime of our approach to that of KeOps' custom CUDA code. We see that we are competitive, but roughly 5× slower for medium to large datasets. Multiplying this with the 4× increase due to the checkpoints discussed above explains the 20× increase in runtime compared to GPyTorch. Being 20× slower than GPyTorch per epoch is only due to the matrix-vector products, and has nothing to do with the algorithmic contribution. Future work should explore closing this gap with a KeOps-to-JAX interface.
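A minimal sketch of such a blockwise kernel-matrix-vector product (our own illustration of the scheme described above; the block count, the square-exponential kernel, and all names are placeholders rather than the exact experiment code):

```python
import jax
import jax.numpy as jnp


def square_exponential(x, y, lengthscale=1.0):
    return jnp.exp(-0.5 * jnp.sum((x - y) ** 2) / lengthscale**2)


def kernel_matvec(xs, v, num_blocks):
    """Compute A v with A = [k(x_i, x_j)] row-block by row-block.

    Each block of rows is materialised, multiplied with v, and discarded;
    jax.checkpoint keeps reverse-mode AD from storing every block.
    """
    x_blocks = xs.reshape(num_blocks, -1, xs.shape[-1])  # assumes N % num_blocks == 0

    @jax.checkpoint
    def one_block(x_block):
        # Rows k(x_i, x_j) for all x_i in the block and all x_j in the dataset.
        rows = jax.vmap(lambda xi: jax.vmap(lambda xj: square_exponential(xi, xj))(xs))(x_block)
        return rows @ v

    # Sequential over blocks (jax.lax.map), vectorised within a block (jax.vmap).
    return jax.lax.map(one_block, x_blocks).reshape(-1)


# Example: N = 1000 points in d = 3 dimensions, split into 10 row-blocks.
xs = jax.random.normal(jax.random.PRNGKey(0), (1000, 3))
v = jax.random.normal(jax.random.PRNGKey(1), (1000,))
Av = kernel_matvec(xs, v, num_blocks=10)
```

The jax.checkpoint (rematerialisation) decorator is what trades the roughly doubled runtime for the reduced memory footprint discussed above.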
## H Experiment configurations for the Gaussian process study

**Data** For the experiments, we use the Protein, KEGG (undirected), KEGG (directed), Elevators, and Kin40K datasets (Table 7, adapted from Bartels et al. [95]). All are part of the UCI data repository and accessible through there. The data is subsampled to admit the train/test split of 80/20%, and to admit an even division into the number of row partitions. More specifically, we use 10 partitions for the kernel-matrix-vector products. This way, we have to discard less than 1% of the data; e.g., on KEGG (undir), we use 63,600 instead of the original 63,608 points.

Table 7: Datasets used in this study.

| Dataset | Source |
|---|---|
| Protein | Available here.³ |
| Elevators | Camacho [96] |
| Kin40K | Schwaighofer and Tresp [97] |
| KEGG (undir) | Shannon et al. [98] |
| KEGG (dir) | Shannon et al. [98] |

We calibrate a Matérn prior with smoothness ν = 1.5, using 10 matrix-vector products per Lanczos iteration, a conjugate-gradients tolerance of ε = 1, a rank-15 pivoted-Cholesky preconditioner, and 10 Rademacher samples. We evaluate all samples sequentially (rematerialising on the backward pass to save memory, as discussed in Appendix G). The conjugate-gradients tolerances are taken to be absolute (instead of relative), and the parametrisations of the Gaussian process models and loss functions match those of GPyTorch. For every model, we calibrate an independent lengthscale for each input dimension, as well as a scalar observation noise, a scalar output-scale, and the value of a constant prior mean. All parameters are initialised randomly. We use the Adam optimiser with learning rate 0.05 for 75 epochs. All experiments are repeated for three different seeds.

## I Partial differential equation data

We generate data for the differential equations as follows. Recall the problem setup of a partial differential equation

$$\partial_t^2 u(t; x_1, x_2) = \omega(x_1, x_2)^2 \left( \partial_{x_1}^2 u(t; x_1, x_2) + \partial_{x_2}^2 u(t; x_1, x_2) \right) \quad (56)$$

with Neumann boundary conditions. The coefficient field ω is space- but not time-dependent. First, we discretise the Laplacian operator with central differences on an equidistant, tensor-product mesh that consists of 128 points per dimension, which yields 128² grid points. The resulting second-order ordinary differential equation

$$\frac{\mathrm{d}^2}{\mathrm{d}t^2} w = \omega^2 M w, \quad (57)$$

where M is the discretised Laplacian, is then transformed into a first-order differential equation

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} w \\ \dot w \end{pmatrix} = \begin{pmatrix} 0 & I \\ \omega^2 M & 0 \end{pmatrix}\begin{pmatrix} w \\ \dot w \end{pmatrix} =: A \begin{pmatrix} w \\ \dot w \end{pmatrix}. \quad (58)$$

This equation is solved by the matrix exponential, and the system matrix A is asymmetric (by construction) and highly sparse because M is. Matrix-vector products with A are cheap because we can implement them with jax.scipy.signal.convolve2d. Then, we sample a true ω from a Gaussian process with a square-exponential covariance kernel, using lengthscale softplus(−0.75) and output-scale softplus(−10). We sample from this process with the Lanczos algorithm [11] using Krylov depth K = 32. Then, we use another Gaussian process with the same kernel, but lengthscale softplus(0) and output-scale softplus(0), to sample 256 initial distributions, again with the Lanczos algorithm [11]. These 256 initial conditions are solved with Diffrax's implementation of Dopri8 [99] using 128 timesteps. Some example input/output pairs are in Figure 11.

Figure 11: Three exemplary input/output pairs from the PDE dataset.

This setup is similar to that of the WaveBench dataset [92], with the main difference being that the WaveBench dataset uses a slightly different formulation of the wave equation.⁴ We use the one above because it lends itself more naturally to matrix exponentials, which are at the heart of this experiment.

³ Link: http://archive.ics.uci.edu/dataset/265/physicochemical+properties+of+protein+tertiary+structure
⁴ To ensure radiating boundary conditions, Liu et al. [92] follow Stanziola et al. [100]'s model of simulating the wave equation as a system of first-order equations.
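As an illustration of how a matrix-free product with the system matrix A from Equation 58 can look, here is a sketch under a plain five-point-stencil assumption; the stencil scaling and the Neumann boundary handling are simplified relative to the actual discretisation used in the experiment.

```python
import jax.numpy as jnp
from jax.scipy.signal import convolve2d

# Five-point Laplacian stencil on a unit-spaced grid (boundary handling simplified).
STENCIL = jnp.array([[0.0, 1.0, 0.0],
                     [1.0, -4.0, 1.0],
                     [0.0, 1.0, 0.0]])


def wave_matvec(state, omega):
    """Product with A = [[0, I], [omega^2 M, 0]] from Equation 58.

    `state` stacks (w, w_dot) as two 2d grids; `omega` is the coefficient field.
    """
    w, w_dot = state
    laplace_w = convolve2d(w, STENCIL, mode="same")  # M w realised as a convolution
    return jnp.stack([w_dot, omega**2 * laplace_w])


# Example on a 128 x 128 grid with a point source and constant wave speed.
grid = jnp.zeros((128, 128)).at[64, 64].set(1.0)
state = jnp.stack([grid, jnp.zeros_like(grid)])
omega = jnp.ones((128, 128))
new_state = wave_matvec(state, omega)
```

An Arnoldi-based approximation of exp(At) applied to the stacked initial state then only ever touches A through such a function.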
## J Implementation details for the Bayesian neural network study

### J.1 Bayesian neural networks with Laplace approximations

Another possible application of the gradients of matrix functions is marginal-likelihood-based optimisation of Bayesian neural networks. Suppose $g_\theta(x)$ is the output of a neural network with parameters $\theta \in \mathbb{R}^P$. The choice of the model shall be denoted by $M$ and consists of both continuous and discrete hyperparameters (such as network architecture, likelihood precision, prior precision, etc.). For some choice of prior given by

$$p(\theta \mid M) \quad (59)$$

and likelihood

$$p(y \mid x, \theta, M) = p(y \mid x, g_\theta(x), M), \quad (60)$$

we can specify a Bayesian model. The posterior distribution is then given by

$$p(\theta \mid y, x, M) \propto p(y \mid x, g_\theta(x), M)\, p(\theta \mid M). \quad (61)$$

The marginal likelihood is given by the normalising constant of this posterior, i.e.,

$$p(y \mid x, M) = \int p(y \mid x, g_\theta(x), M)\, p(\theta \mid M)\, \mathrm{d}\theta. \quad (62)$$

As suggested by MacKay [101], this marginal likelihood can be used for model selection in Bayesian neural networks. Immer et al. [30] use the Laplace approximation of the posterior to obtain access to the marginal likelihood of the Bayesian neural network and its stochastic gradients. The Laplace approximation of the marginal likelihood is given by

$$\log p(y \mid x, M) \approx \log p(y, \theta_{\mathrm{MAP}} \mid x, M) - \frac{1}{2}\log\det\left(\frac{1}{2\pi} H_{\theta_{\mathrm{MAP}}}\right), \quad (63)$$

where $H_{\theta_{\mathrm{MAP}}} = -\nabla^2_\theta \log p(y, \theta \mid x, M)\big|_{\theta=\theta_{\mathrm{MAP}}}$. Usual choices of the prior are $N(0, \alpha^{-1}I)$. Usually, this Hessian is approximated with the generalised Gauss-Newton (GGN) matrix [102]

$$H_{\theta_{\mathrm{MAP}}} \approx A(\alpha) := \sum_{j=1}^{N} [D_\theta g_{\theta_{\mathrm{MAP}}}](x_j)^\top\, [D_g^2\rho](y_j, g_{\theta_{\mathrm{MAP}}}(x_j))\, [D_\theta g_{\theta_{\mathrm{MAP}}}](x_j) + \alpha^2 I, \quad (64)$$

where $D_g^2\rho$ is the Hessian of the loss, and $D_\theta g$ the Jacobian of $g$ (recall Equation 2). This objective is used to optimise the prior precision of the model or any continuous model hyperparameters. Matrix-vector products with the GGN matrix can be accessed through automatic differentiation using Jacobian-vector and vector-Jacobian products. With these efficient matrix-vector products, one can estimate the log-determinant of the GGN using matrix-free techniques like the Lanczos iteration.

To make predictions using the Laplace approximation of the posterior, we also need to sample from the normal distribution $N(\theta_{\mathrm{MAP}}, A^{-1})$. Samples from this distribution can be written as

$$\theta = \theta_{\mathrm{MAP}} + A^{-1/2}\epsilon, \quad (65)$$

where $\epsilon \sim N(0, I)$. The main bottleneck in this computation is the inversion and matrix square root of the GGN matrix, and we implement it with a Lanczos iteration using $f(x) = x^{-1/2}$. Since the GGN is empirically known to have low rank [103], doing a few Lanczos iterations can get us close to an accurate estimation.

### J.2 Experiment setup

We estimate the diagonal of the GGN stochastically via ($\odot$ is the element-wise product) [104]

$$\mathrm{diagonal}(A) = \mathbb{E}[v \odot Av] \approx \frac{1}{L}\sum_{\ell=1}^{L} v_\ell \odot A v_\ell, \qquad \mathbb{E}[v v^\top] = I. \quad (66)$$

We use 150 matrix-vector products for both the diagonal calibration and our Lanczos-based estimation. We use 30 Monte-Carlo samples to estimate the log-likelihoods for evaluating the test metrics, and we use Places365 [91] as an out-of-distribution dataset to compute the OOD-AUROC. We also compute the expected calibration error (ECE) [105] of the model.

**Data:** We show scalability by doing a Laplace approximation on ImageNet1k image classification [89]. The training set consists of approximately 1.2 million images, each belonging to one of 1000 classes.
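A minimal sketch of such a GGN-vector product built from Jacobian-vector and vector-Jacobian products (a generic JAX illustration; `model_fn`, `loss_fn`, and the assumption of a flat parameter vector are ours, and the per-example sum in Equation 64 is realised as one batched evaluation with a summed loss):

```python
import jax
import jax.numpy as jnp


def ggn_matvec(model_fn, loss_fn, params, xs, ys, alpha, v):
    """Multiply the GGN from Equation 64 with a vector v shaped like params.

    model_fn(params, xs) -> logits and loss_fn(logits, ys) -> scalar are placeholders;
    params and v are assumed to be flat parameter vectors.
    """
    # (D_theta g) v via a Jacobian-vector product.
    logits, jac_v = jax.jvp(lambda p: model_fn(p, xs), (params,), (v,))
    # (D_g^2 rho) (D_theta g) v via a Hessian-vector product in output space.
    hess_jac_v = jax.jvp(jax.grad(lambda z: loss_fn(z, ys)), (logits,), (jac_v,))[1]
    # (D_theta g)^T (...) via a vector-Jacobian product.
    _, vjp_fn = jax.vjp(lambda p: model_fn(p, xs), params)
    (ggn_v,) = vjp_fn(hess_jac_v)
    return ggn_v + alpha**2 * v   # prior-precision term alpha^2 I v
```

Feeding this matvec into a Lanczos iteration with f(λ) = log(λ) or f(λ) = λ^{-1/2} gives, respectively, the log-determinant needed in Equation 63 and the posterior samples in Equation 65.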
We find that we can take small subsets of this dataset and still converge to the same prior precision. Our computational budget allows us to use 10 percent of the samples for each class. However, even for very small subsamples of the data, we converge to a very similar prior precision.

**Method:** To optimise the prior precision, we use the marginal likelihood as the objective. We use the RMSprop optimiser with a learning rate of 0.01 for 100 epochs for optimising both the diagonal and Lanczos approximations of the GGN.

## NeurIPS Paper Checklist

1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: The main contributions, adjoint systems for the Lanczos and Arnoldi iterations, are explained in Section 4. The three case studies are in Sections 5 to 7.

2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: The limitations are discussed in the paragraph titled "Limitations and future work" on page 4. All assumptions (i.e., Assumptions 3.1 and 3.2) are contextualised in the sentences before and after they have been introduced.
3. Theory Assumptions and Proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: The main contributions, Theorems 4.1 and 4.2 and Corollary 4.3, are proven in Appendices B to D. Appendix E discusses solving the adjoint system in full detail.

4. Experimental Result Reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: The most important information about the experiment setup is part of the main paper in Sections 5 to 7; further information can be found in Appendices G to J. Appendices A and F discuss the setup for Figure 3 and Table 2. Code will be published upon acceptance.
5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: Code has been submitted as a part of the supplementary material, and will be published upon acceptance.

6. Experimental Setting/Details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: See the answer to "4. Experimental Result Reproducibility" above.

7. Experiment Statistical Significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [Yes]

Justification: All case studies report means and standard deviations of multiple runs.
The only exception is the Bayesian neural network example, which uses a single training run (but evaluates test metrics on multiple seeds).

8. Experiments Compute Resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: The introductory part of the appendix contains a paragraph titled "Compute".

9. Code of Ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?

Answer: [Yes]

Justification: We use datasets that are either self-created (Section 6 and Appendix I) or common test cases for machine learning methods (UCI datasets, ImageNet) or numerical algorithms (SuiteSparse matrix collection).

10. Broader Impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [No]

Justification: This research provides a foundational algorithm for computational sciences, and societal impact is difficult if not impossible to predict.
11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: This paper does not contribute data or models that would require safeguards.

12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models) used in the paper properly credited, and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: All assets used in the experiments have been cited. See also the answer to "9. Code of Ethics".
13. New Assets

Question: Are new assets introduced in the paper well documented, and is the documentation provided alongside the assets?

Answer: [Yes]

Justification: The code (attached to this submission) is documented. The appendices contain all other information.

14. Crowdsourcing and Research with Human Subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Justification: This study does not involve crowdsourcing nor human subjects.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: This paper does not involve crowdsourcing nor research with human subjects.