# Generalization of Hamiltonian algorithms

Andreas Maurer
Computational Statistics and Machine Learning, Istituto Italiano di Tecnologia, 16163 Genoa, Italy
am@andreas-maurer.eu

**Abstract.** The paper proves generalization results for a class of stochastic learning algorithms. The method applies whenever the algorithm generates an absolutely continuous distribution relative to some a-priori measure and the Radon–Nikodym derivative has subgaussian concentration. Applications are bounds for the Gibbs algorithm and randomizations of stable deterministic algorithms, as well as PAC-Bayesian bounds with data-dependent priors.

## 1 Introduction

A stochastic learning algorithm $Q$ takes as input a sample $\mathbf{X}=(X_1,\dots,X_n)\in\mathcal{X}^n$, drawn from a distribution $\mu$ on a space $\mathcal{X}$ of data, and outputs a probability measure $Q_{\mathbf{X}}$ on a loss class $\mathcal{H}$ of functions $h:\mathcal{X}\to[0,\infty)$. A key problem in the study of these algorithms is to bound the generalization gap

$$\Delta(h,\mathbf{X}) = E[h(X)] - \frac{1}{n}\sum_{i=1}^n h(X_i) \quad (1)$$

between the expected and the empirical loss of a hypothesis $h$ drawn from $Q_{\mathbf{X}}$.

Here we want to generate $h$ only once and seek guarantees with high probability as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$. Alternatively one might want a bound on the expectation $E_{h\sim Q_{\mathbf{X}}}[\Delta(h,\mathbf{X})]$ with high probability in $\mathbf{X}\sim\mu^n$, corresponding to the use of a stochastic hypothesis, where a new $h\sim Q_{\mathbf{X}}$ is generated for every test point. We concentrate on the former question, but many of the techniques presented also apply to the latter, often easier, problem.

From Markov's inequality it follows that for $\lambda,\delta>0$, with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \frac{\ln E_{\mathbf{X}}E_{h\sim Q_{\mathbf{X}}}\left[e^{\lambda\Delta(h,\mathbf{X})}\right] + \ln(1/\delta)}{\lambda}, \quad (2)$$

which suggests to bound the log-moment generating function $\ln E_{\mathbf{X}}[E_{h\sim Q_{\mathbf{X}}}[\exp(\lambda\Delta(h,\mathbf{X}))]]$. With such a bound at hand one can optimize $\lambda$ to establish generalization of the algorithm $Q:\mathbf{X}\mapsto Q_{\mathbf{X}}$.

Inequality (2) is relevant to stochastic algorithms in general, and in particular to the Gibbs algorithm, where $dQ_{\mathbf{X}}(h)\propto\exp(-(\beta/n)\sum_i h(X_i))\,d\pi(h)$ for some inverse temperature parameter $\beta$ and some nonnegative a-priori measure $\pi$ on $\mathcal{H}$. The Gibbs algorithm has its origins in statistical mechanics (Gibbs [1902]). In the context of machine learning it can be viewed as a randomized version of empirical risk minimization, to which it converges as $\beta\to\infty$, whenever $\pi$ has full support. The distribution, often called the Gibbs posterior (Catoni [2007]), is a minimizer of the PAC-Bayesian bounds (McAllester [1999]). It is also the limiting distribution of stochastic gradient Langevin dynamics (Raginsky et al. [2017]) under rather general conditions. Generalization bounds in expectation are given by Raginsky et al. [2017] and Kuzborskij et al. [2019], and most recently by Aminian et al. [2021]. Bounds in probability are given by Lever et al. [2013], implicitly by Dziugaite and Roy [2018], and in Rivasplata et al. [2020] following the method of Kuzborskij et al. [2019]. There is also a bound by Aminian et al. [2023], improving on the one in Lever et al. [2013].

Bounding $\ln E_{\mathbf{X}}[E_{h\sim Q_{\mathbf{X}}}[\exp(\lambda\Delta(h,\mathbf{X}))]]$ is also the vehicle (and principal technical obstacle) to prove PAC-Bayesian bounds with data-dependent prior $Q_{\mathbf{X}}$, as pointed out by Rivasplata et al. [2020] (Theorem 1).
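To make inequality (2) concrete, here is a small Monte Carlo sketch, not from the paper: the finite hypothesis class, the uniform data model and all parameter values are illustrative assumptions. It estimates the log-moment generating function for a toy Gibbs algorithm and checks that the resulting high-probability bound holds at the nominal level.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, lam, delta, T = 50, 5.0, 8.0, 0.05, 20000
theta = np.linspace(0.0, 1.0, 21)            # finite hypothesis class h_theta
L_true = theta**2 - theta + 0.5              # L(h_theta) = E|X - theta|, X ~ U[0,1]

mgf, gaps = 0.0, []
for _ in range(T):
    X = rng.random(n)
    emp = np.abs(X[:, None] - theta[None, :]).mean(axis=0)  # empirical losses
    w = np.exp(-beta * emp); w /= w.sum()                   # Gibbs posterior Q_X
    gap = L_true - emp                                      # Delta(h, X) for each h
    mgf += (w * np.exp(lam * gap)).sum() / T                # E_X E_{h~Q_X} e^{lam Delta}
    gaps.append(gap[rng.choice(len(theta), p=w)])           # one draw h ~ Q_X

bound = (np.log(mgf) + np.log(1 / delta)) / lam             # right hand side of (2)
viol = np.mean(np.array(gaps) > bound)                      # should be at most delta
print(f"bound = {bound:.4f}, violation rate = {viol:.4f} (delta = {delta})")
```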
Such bounds with a data-independent prior $Q_{\mathbf{X}}=Q$ have an over twenty-year-old tradition in learning theory, starting with the seminal work of McAllester (McAllester [1999]) and of Langford and Seeger (Langford and Seeger [2001], Seeger [2002]); see also Guedj [2019]. If the prior is data-independent, the two expectations in $\ln E_{\mathbf{X}}[E_{h\sim Q}[\exp(\lambda\Delta(h,\mathbf{X}))]]$ can be exchanged, which reduces the analysis to classical Chernoff or Hoeffding inequalities. But a dominant term in these bounds, the KL-divergence $KL(P,Q):=E_{h\sim P}[\ln(dP/dQ)(h)]$, will be large unless $P$ is well aligned with $Q$, so the prior $Q$ should already put more weight on good hypotheses with small loss. This motivates the use of distribution-dependent priors, and, as the distribution is unknown, one is led to think about data-dependent priors.

Catoni already considered the data-dependent Gibbs distribution as a prior in a derivation departing from the distribution-dependent Gibbs measure $\propto\exp(-\beta E_{X\sim\mu}[h(X)])$ (Lemma 6.2 in Catoni [2003]). Dziugaite and Roy [2017] used a Gaussian prior with data-dependent width and minimized the PAC-Bayes bound for a Gaussian posterior on a multi-layer neural network, obtaining a good classifier accompanied by a non-vacuous bound. This significant advance raised interest in PAC-Bayes with data-dependent priors. The same authors introduced a method to control data-dependent priors based on differential privacy (Dziugaite and Roy [2018]). Recently Pérez-Ortiz et al. [2021] used Gaussian and Laplace priors, whose means were trained directly from one part of the sample, the remaining part being used to evaluate the PAC-Bayes bound. These developments further motivate the search for in-sample bounds on the log-moment generating function appearing above in (2).

We make the following contributions:

- An economical and general method to bound $\ln E_{\mathbf{X}}[E_{h\sim Q_{\mathbf{X}}}[\exp(\lambda\Delta(h,\mathbf{X}))]]$ whenever the logarithm of the density of $Q_{\mathbf{X}}$ concentrates exponentially about its mean. In particular, whenever $Q_{\mathbf{X}}$ has the Hamiltonian form $dQ_{\mathbf{X}}(h)\propto\exp(H(h,\mathbf{X}))\,d\pi(h)$, it is sufficient that the Hamiltonian $H$ satisfies a bounded difference condition. The method also extends to the case when $H$ is only sub-Gaussian in its arguments.
- Applications to the Gibbs algorithm yielding competitive generalization guarantees, both for bounded and sub-Gaussian losses. Despite its simplicity and generality, the method improves over existing results on this well-studied problem, removing unnecessary logarithmic factors and various superfluous terms.
- Generalization guarantees for hypotheses sampled once from stochastic kernels centered at the output of uniformly stable algorithms, considerably strengthening a previous result of Rivasplata et al. [2018].

## 2 Notation and Preliminaries

For $m\in\mathbb{N}$ we write $[m]:=\{1,\dots,m\}$. Random variables are written in upper case letters, like $X$, $Y$, $\mathbf{X}$, and primes indicate iid copies, like $X'$, $X''$, $\mathbf{X}'$. Vectors are bold, like $\mathbf{x}$, $\mathbf{X}$. Throughout, $\mathcal{X}$ is a measurable space of data with a probability measure $\mu$, and $\mathbf{X}$ will always be the iid vector $\mathbf{X}=(X_1,\dots,X_n)\sim\mu^n$, while $X\sim\mu$. $\mathcal{H}$ is a measurable space of measurable functions $h:\mathcal{X}\to[0,\infty)$, and there is a nonnegative a-priori measure $\pi$ on $\mathcal{H}$. The measure $\pi$ need not be a probability measure; it could be Lebesgue measure on the space $\mathbb{R}^d$ of parametrizations of $\mathcal{H}$. Averages over $\pi$ will be written as integrals. $\mathcal{P}(\mathcal{H})$ is the set of probability measures on $\mathcal{H}$. Unless otherwise specified, $E$ denotes the expectation in $X\sim\mu$ or $\mathbf{X}\sim\mu^n$.
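A tiny sketch fixing these objects in code, under illustrative assumptions ($\mathcal{X}=[0,1]$, absolute-deviation losses):

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x, theta=0.4):            # a hypothesis: a loss function h : X -> [0, inf)
    return np.abs(x - theta)

def L_hat(h, X):                # empirical loss (1/n) sum_i h(X_i)
    return h(X).mean()

def L(h, mu_sampler, m=10**6):  # expected loss E[h(X)], Monte Carlo under mu
    return h(mu_sampler(m)).mean()

X = rng.random(100)                             # sample X ~ mu^n with mu = U[0,1]
gap = L(h, rng.random) - L_hat(h, X)            # generalization gap Delta(h, X)
print(f"L_hat = {L_hat(h, X):.4f}, Delta = {gap:.4f}")
```

Everything that follows manipulates exactly these three quantities.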
All functions on $\mathcal{H}\times\mathcal{X}^n$ appearing in this paper are assumed to have exponential moments of all orders, with respect to both arguments. For $\mathbf{x}\in\mathcal{X}^n$, $k\in\{1,\dots,n\}$ and $y,y'\in\mathcal{X}$ we define the substitution operator $S^k_y$ acting on $\mathcal{X}^n$ and the partial difference operator $D^k_{y,y'}$ acting on functions $f:\mathcal{X}^n\to\mathbb{R}$ by

$$S^k_y\mathbf{x} = (x_1,\dots,x_{k-1},y,x_{k+1},\dots,x_n) \quad\text{and}\quad D^k_{y,y'}f(\mathbf{x}) = f(S^k_y\mathbf{x}) - f(S^k_{y'}\mathbf{x}).$$

For functions on $\mathcal{H}\times\mathcal{X}^n$, $D^k_{y,y'}$ always refers to the second argument. The generalization gap $\Delta(h,\mathbf{X})$ is defined as in (1). Sometimes we write $L(h)=E[h(X)]$ and $\hat L(h,\mathbf{X})=(1/n)\sum_{i=1}^n h(X_i)$, so that $\Delta(h,\mathbf{X})=L(h)-\hat L(h,\mathbf{X})$. A table of notation is provided in Appendix C.

### 2.1 Hamiltonian algorithms

A stochastic algorithm $Q:\mathbf{x}\in\mathcal{X}^n\mapsto Q_{\mathbf{x}}\in\mathcal{P}(\mathcal{H})$ will be called absolutely continuous if $Q_{\mathbf{x}}$ is absolutely continuous with respect to $\pi$ for every $\mathbf{x}\in\mathcal{X}^n$, and vice versa. We will only consider absolutely continuous algorithms in the sequel. A real function $H$ on $\mathcal{H}\times\mathcal{X}^n$ is called a Hamiltonian for $Q$ (a term taken from statistical physics) if for all $h\in\mathcal{H}$ and all $\mathbf{x}\in\mathcal{X}^n$

$$dQ_{\mathbf{x}}(h) = \frac{e^{H(h,\mathbf{x})}\,d\pi(h)}{Z(\mathbf{x})} \quad\text{with}\quad Z(\mathbf{x}) = \int_{\mathcal{H}} e^{H(h,\mathbf{x})}\,d\pi(h).$$

The normalizing function $Z$ is called the partition function. Every absolutely continuous $Q$ has the canonical Hamiltonian $H_Q(h,\mathbf{x})=\ln((dQ_{\mathbf{x}}/d\pi)(h))$ (the logarithm of the Radon–Nikodym derivative) with partition function $Z\equiv1$, but adding any function $\zeta:\mathcal{X}^n\to\mathbb{R}$ to $H_Q$ gives a Hamiltonian for the same algorithm, with partition function $Z(\mathbf{x})=\exp(\zeta(\mathbf{x}))$. In practice $Q$ is often defined by specifying some Hamiltonian $H$, so $H_Q(h,\mathbf{x})=H(h,\mathbf{x})-\ln Z(\mathbf{x})$ in general. If $\pi$ is a probability measure, then $E_{h\sim Q_{\mathbf{x}}}[H_Q(h,\mathbf{x})]$ is the KL-divergence $KL(Q_{\mathbf{x}},\pi)$.

A Hamiltonian for the Gibbs algorithm at inverse temperature $\beta$ is

$$H(h,\mathbf{x}) = -\beta\hat L(h,\mathbf{x}) = -\frac{\beta}{n}\sum_{i=1}^n h(x_i),$$

putting larger weights on hypotheses with small empirical loss. This is the simplest case covered by our proposed method. If there is a computational cost associated with each $h$, it may be included to promote hypotheses with faster execution. We could also add a negative multiple of other application-specific penalty terms in the same way.

## 3 Bounding the exponential moment

**Proposition 3.1.** Let $F:\mathcal{H}\times\mathcal{X}^n\to\mathbb{R}$ be measurable and for $h\in\mathcal{H}$ define

$$\psi_F(h) = \ln E_{\mathbf{X}\sim\mu^n}\left[e^{F(h,\mathbf{X})+H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')]}\right].$$

(i) $\ln E_{\mathbf{X}\sim\mu^n}E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right] \le \sup_{h\in\mathcal{H}}\psi_F(h)$.

(ii) Let $\delta>0$. Then with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$ we have $F(h,\mathbf{X}) \le \sup_{h'\in\mathcal{H}}\psi_F(h') + \ln(1/\delta)$.

(iii) Let $\delta>0$. Then with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$ we have $E_{h\sim Q_{\mathbf{X}}}[F(h,\mathbf{X})] \le \sup_{h\in\mathcal{H}}\psi_F(h) + \ln(1/\delta)$.

*Proof.* (i) With Jensen's inequality,

$$\ln E_{\mathbf{X}\sim\mu^n}E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right] = \ln E_{\mathbf{X}}\int_{\mathcal{H}} e^{F(h,\mathbf{X})+H_Q(h,\mathbf{X})}\,d\pi(h)$$
$$= \ln\int_{\mathcal{H}} E_{\mathbf{X}}\left[e^{F(h,\mathbf{X})+H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')]}\right]e^{E[H_Q(h,\mathbf{X}')]}\,d\pi(h) = \ln\int_{\mathcal{H}} e^{\psi_F(h)}e^{E[H_Q(h,\mathbf{X}')]}\,d\pi(h)$$
$$\le \ln\int_{\mathcal{H}} e^{\psi_F(h)}E\left[e^{H_Q(h,\mathbf{X}')}\right]d\pi(h) = \ln E\int_{\mathcal{H}} e^{\psi_F(h)}e^{H_Q(h,\mathbf{X}')}\,d\pi(h) = \ln E_{\mathbf{X}'}E_{h\sim Q_{\mathbf{X}'}}\left[e^{\psi_F(h)}\right] \le \sup_{h\in\mathcal{H}}\psi_F(h).$$

(ii) then follows from Markov's inequality (Section A.1). (iii) follows also by Markov's inequality, since

$$\ln E_{\mathbf{X}\sim\mu^n}\left[e^{E_{h\sim Q_{\mathbf{X}}}[F(h,\mathbf{X})]}\right] \le \ln E_{\mathbf{X}\sim\mu^n}E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right]$$

by Jensen's inequality. $\square$

To see the point of this proposition let $F(h,\mathbf{X})=\lambda\Delta(h,\mathbf{X})$. Since $\Delta(h,\mathbf{X})$ is centered, $\psi_\lambda$ is of the form $\ln E_{\mathbf{X}\sim\mu^n}[e^{f(\mathbf{X})-E[f(\mathbf{X}')]}]$. Many concentration inequalities in the literature (McDiarmid [1998], Boucheron et al. [2013]) follow the classical ideas of Bernstein and Chernoff and are derived from bounds on such moment generating functions. If $F(h,\mathbf{X})$ is not centered, we can use Hölder's or the Cauchy–Schwarz inequality to separate the contributions which $F$ and $H_Q$ make to $\psi_F$. The $F$-contribution can be treated separately, and the contribution of $H_Q$ can again be treated with the methods of concentration inequalities. The last conclusion of the proposition shows that we can always get bounds in expectation from bounds on the exponential moment.
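A numeric sanity check of Proposition 3.1 (i) for a toy Gibbs algorithm; all specifics (uniform data, a finite grid of hypotheses, counting measure as $\pi$) are illustrative assumptions. Both sides are estimated by Monte Carlo, with $H_Q(h,\mathbf{x})=H(h,\mathbf{x})-\ln Z(\mathbf{x})$ the canonical Hamiltonian.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, lam, T = 40, 4.0, 6.0, 5000
theta = np.linspace(0, 1, 21)
L_true = theta**2 - theta + 0.5

def emp(X):                                    # empirical losses, one entry per h
    return np.abs(X[:, None] - theta).mean(axis=0)

def HQ(X):                                     # canonical Hamiltonian H - ln Z
    H = -beta * emp(X)
    return H - np.log(np.exp(H).sum())         # pi = counting measure on the grid

samples = [rng.random(n) for _ in range(T)]
EHQ = np.mean([HQ(X) for X in samples], axis=0)            # E[H_Q(h, X')] per h

inner = np.array([lam * (L_true - emp(X)) + HQ(X) - EHQ for X in samples])
psi = np.log(np.exp(inner).mean(axis=0))                   # psi_F(h), F = lam*Delta
lhs = np.log(np.mean([(np.exp(HQ(X)) *                     # E_X E_{h~Q_X} e^F
                       np.exp(lam * (L_true - emp(X)))).sum() for X in samples]))
print(f"ln E_X E_h e^F = {lhs:.4f} <= sup_h psi_F = {psi.max():.4f}")
```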
In the sequel we only state the stronger un-averaged, or "disintegrated", results. Typically, exponential moment bounds for functions of independent variables depend on the function's stability with respect to changes in its arguments. In Section 4.2 a more advanced concentration inequality will be used, but all other results below depend only on the following classical exponential moment bounds. Most of them can be found in McDiarmid [1998], but since the results there are formulated as deviation inequalities, a proof is given in the appendix, Section A.2.

**Proposition 3.2.** Let $X,X_1,\dots,X_n$ be iid random variables with values in $\mathcal{X}$, $\mathbf{X}=(X_1,\dots,X_n)$ and $f:\mathcal{X}^n\to\mathbb{R}$ measurable.

(i) If $f$ is such that for all $k\in[n]$ and $\mathbf{x}\in\mathcal{X}^n$ we have $E_X\left[e^{f(S^k_X\mathbf{x})-E_{X'}[f(S^k_{X'}\mathbf{x})]}\right]\le e^{r^2}$, then $E\left[e^{f(\mathbf{X})-E[f(\mathbf{X}')]}\right]\le e^{nr^2}$.

(ii) If $D^k_{y,y'}f(\mathbf{x})\le c$ for all $k\in[n]$, $y,y'\in\mathcal{X}$ and $\mathbf{x}\in\mathcal{X}^n$, then $E\left[e^{f(\mathbf{X})-E[f(\mathbf{X}')]}\right]\le e^{nc^2/8}$.

(iii) If there is $b\in(0,2)$ such that for all $k\in[n]$ and $\mathbf{x}\in\mathcal{X}^n$ we have $f(\mathbf{x})-E_{X\sim\mu}[f(S^k_X\mathbf{x})]\le b$, then with $v_k = \sup_{\mathbf{x}\in\mathcal{X}^n}E_{X\sim\mu}\left[\left(f(S^k_X\mathbf{x})-E_{X'\sim\mu}[f(S^k_{X'}\mathbf{x})]\right)^2\right]$,

$$E_{\mathbf{X}}\left[e^{f(\mathbf{X})-E[f(\mathbf{X}')]}\right] \le \exp\left(\frac{\sum_{k=1}^n v_k}{2-b}\right).$$

Notice that the second conclusion is instrumental in the usual proof of McDiarmid's (or bounded-difference) inequality (McDiarmid [1998]).

### 3.1 Bounded differences

In the simplest case the Hamiltonian satisfies a bounded difference condition as in (ii) above. Then the only minor complication is to show that the logarithm of the partition function inherits this property. This is dealt with in the following lemma.

**Lemma 3.3.** Suppose $H:\mathcal{H}\times\mathcal{X}^n\to\mathbb{R}$ and that for all $h\in\mathcal{H}$, $k\in[n]$, $y,y'\in\mathcal{X}$ and $\mathbf{x}\in\mathcal{X}^n$ we have $D^k_{y,y'}H(h,\mathbf{x})\le c$. Let $H_Q(h,\mathbf{x})=H(h,\mathbf{x})-\ln Z(\mathbf{x})$ with $Z(\mathbf{x})=\int_{\mathcal{H}}e^{H(h,\mathbf{x})}\,d\pi(h)$. Then for all $h\in\mathcal{H}$, $k\in[n]$, $y,y'\in\mathcal{X}$ and $\mathbf{x}\in\mathcal{X}^n$ we have $D^k_{y,y'}H_Q(h,\mathbf{x})\le 2c$.

*Proof.* This follows from the linearity of the partial difference operator $D^k_{y,y'}$ and

$$-D^k_{y,y'}\ln Z(\mathbf{x}) = \ln\frac{Z(S^k_{y'}\mathbf{x})}{Z(S^k_y\mathbf{x})} = \ln\frac{\int_{\mathcal{H}}\exp\left(D^k_{y',y}H(h,\mathbf{x})\right)\exp\left(H(h,S^k_y\mathbf{x})\right)d\pi(h)}{\int_{\mathcal{H}}\exp\left(H(h,S^k_y\mathbf{x})\right)d\pi(h)} \le \ln\sup_{h\in\mathcal{H}}\exp\left(D^k_{y',y}H(h,\mathbf{x})\right) \le c. \square$$

**Theorem 3.4.** Suppose $H$ is a Hamiltonian for $Q$ and that for all $k\in[n]$, $h\in\mathcal{H}$, $y,y'\in\mathcal{X}$ and $\mathbf{x}\in\mathcal{X}^n$ we have $D^k_{y,y'}H(h,\mathbf{x})\le c$ and $h(y)\in[0,b]$. Then

(i) for any $\lambda>0$,

$$\ln E_{\mathbf{X}\sim\mu^n}E_{h\sim Q_{\mathbf{X}}}\left[e^{\lambda\Delta(h,\mathbf{X})}\right] \le \sup_{h\in\mathcal{H}}\psi_\lambda(h) \le \frac{n}{8}\left(\frac{\lambda b}{n}+2c\right)^2;$$

(ii) if $\delta>0$, then with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$ we have

$$\Delta(h,\mathbf{X}) \le bc + b\sqrt{\frac{\ln(1/\delta)}{2n}}.$$

*Proof.* Using the previous lemma, $D^k_{y,y'}(\lambda\Delta(h,\mathbf{x})+H_Q(h,\mathbf{x})) \le \lambda b/n + 2c$. Since $E[\lambda\Delta(h,\mathbf{X})]=0$, Proposition 3.2 (ii) gives, for any $h\in\mathcal{H}$,

$$\psi_\lambda(h) = \ln E\left[e^{\lambda\Delta(h,\mathbf{X})+H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')]}\right] \le \frac{n}{8}\left(\frac{\lambda b}{n}+2c\right)^2,$$

and the first conclusion follows from Proposition 3.1 (i) with $F(h,\mathbf{X})=\lambda\Delta(h,\mathbf{X})$. From Proposition 3.1 (ii) we get, with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \frac1\lambda\left(\frac{n}{8}\left(\frac{\lambda b}{n}+2c\right)^2 + \ln\frac1\delta\right) = \frac{\lambda b^2}{8n} + \frac{bc}{2} + \frac{nc^2/2+\ln(1/\delta)}{\lambda}.$$

Substitution of the optimal value $\lambda=\sqrt{(8n/b^2)(nc^2/2+\ln(1/\delta))}$ and subadditivity of $t\mapsto\sqrt t$ give the second conclusion. $\square$

In applications, such as the Gibbs algorithm, we typically have $c=O(1/n)$.
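A minimal sketch evaluating the closed form of Theorem 3.4 (ii); the inverse temperature and sample sizes are illustrative.

```python
import numpy as np

def thm_3_4_bound(n, c, b=1.0, delta=0.05):
    """Bounded-difference bound: holds w.p. >= 1 - delta for h ~ Q_X."""
    return b * c + b * np.sqrt(np.log(1 / delta) / (2 * n))

# Gibbs algorithm: c = beta / n, losses in [0, 1]
for n in (100, 1000, 10000):
    print(f"n = {n}: bound = {thm_3_4_bound(n, c=5.0 / n):.4f}")
```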
The previous result is simple and already gives competitive bounds for several methods, but it does not take into account the properties of the hypothesis chosen from $Q_{\mathbf{X}}$. To make the method more flexible we use the Cauchy–Schwarz inequality to write

$$\psi_F(h) = \ln E\left[e^{F(h,\mathbf{X})+H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')]}\right] \le \frac12\ln E\left[e^{2F(h,\mathbf{X})}\right] + \frac12\ln E\left[e^{2(H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')])}\right] \quad (3)$$

and treat the two terms separately. The disadvantage here is an increase of the constants in the bounds. The big advantage is that different types of bounds can be combined. The next result is based on this idea. It is similar to Bernstein's inequality for sums of independent variables.

**Theorem 3.5.** Under the conditions of Theorem 3.4 define for each $h\in\mathcal{H}$ its variance $v(h):=E[(h(X)-E[h(X')])^2]$. Then for $\delta>0$, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le 2\sqrt{\frac{v(h)\left(nc^2+\ln(1/\delta)\right)}{n}} + \frac{b\left(nc^2+\ln(1/\delta)\right)}{n}.$$

Similar to Bernstein's inequality, the above gives a better bound if the chosen hypothesis has a small variance. For the proof we use the following lemma, which is a direct consequence of Proposition 3.2 (iii).

**Lemma 3.6.** Assume that for all $h\in\mathcal{H}$ and $x\in\mathcal{X}$ we have $h(x)\in[0,b]$, with $v(h)$ as in Theorem 3.5 above. Let $\lambda$ be a function $\lambda:\mathcal{H}\to(0,n/b)$ and define $F_\lambda:\mathcal{H}\times\mathcal{X}^n\to\mathbb{R}$ by

$$F_\lambda(h,\mathbf{X}) = \lambda(h)\Delta(h,\mathbf{X}) - \frac{\lambda(h)^2}{1-b\lambda(h)/n}\cdot\frac{v(h)}{n}.$$

Then for all $h\in\mathcal{H}$ we have $E\left[e^{2F_\lambda(h,\mathbf{X})}\right]\le1$.

*Proof.* For every $h\in\mathcal{H}$ we have $2b\lambda(h)/n\in(0,2)$. Also, for all $\mathbf{x}\in\mathcal{X}^n$ and $h\in\mathcal{H}$,

$$\Delta(h,\mathbf{x}) - E_{X\sim\mu}\left[\Delta(h,S^k_X\mathbf{x})\right] = \frac1n\left(E[h(X')]-h(x_k)\right) \le \frac bn.$$

Thus for every $h\in\mathcal{H}$ we can apply Proposition 3.2 (iii) to $f(\mathbf{x})=2\lambda(h)\Delta(h,\mathbf{x})$, with the corresponding $v_k\le(2\lambda(h)/n)^2v(h)$, and obtain

$$E\left[e^{2\lambda(h)\Delta(h,\mathbf{X})}\right] \le \exp\left(\frac{n\,(2\lambda(h)/n)^2\,v(h)}{2-2b\lambda(h)/n}\right) = \exp\left(\frac{2\lambda(h)^2}{1-b\lambda(h)/n}\cdot\frac{v(h)}{n}\right),$$

which is exactly the claim. $\square$

*Proof of Theorem 3.5.* Let

$$\lambda(h) = \frac nb\cdot\frac{\sqrt{nc^2+\ln(1/\delta)}}{\sqrt{nc^2+\ln(1/\delta)}+\sqrt{n\,v(h)}/b}$$

and let $F_\lambda$ be as in Lemma 3.6. It follows from Lemma 3.3 and Proposition 3.2 (ii) that for all $h\in\mathcal{H}$ we have $\ln E\left[e^{2(H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')])}\right] \le 2nc^2$. Then (3) and Lemma 3.6 give $\psi_{F_\lambda}(h)\le nc^2$. Thus from Proposition 3.1 and division by $\lambda(h)$,

$$\Pr_{\mathbf{X}\sim\mu^n,\,h\sim Q_{\mathbf{X}}}\left\{\Delta(h,\mathbf{X}) > \frac{\lambda(h)}{1-b\lambda(h)/n}\cdot\frac{v(h)}{n} + \frac{nc^2+\ln(1/\delta)}{\lambda(h)}\right\} \le \delta.$$

Inserting the definition of $\lambda(h)$ and simplifying completes the proof. $\square$

At the expense of larger constants, the role of the variance in this result can be replaced by the empirical error, using $v(h)\le E[h(X)^2]\le bE[h(X)]$ and a simple algebraic inversion, which is given in the appendix, Section A.3.

**Corollary 3.7.** Under the conditions of Theorem 3.5, for $\delta>0$, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$L(h) \le \hat L(h,\mathbf{X}) + 2\sqrt{\frac{b\hat L(h,\mathbf{X})\left(nc^2+\ln(1/\delta)\right)}{n}} + \frac{6b\left(nc^2+\ln(1/\delta)\right)}{n}.$$

### 3.2 Subgaussian hypotheses

Some of the above extends to unbounded hypotheses. A real random variable $Y$ is $\sigma$-subgaussian for $\sigma>0$ if $E[\exp(\lambda(Y-E[Y]))]\le e^{\lambda^2\sigma^2/2}$ for every $\lambda\in\mathbb{R}$. The proof of the following result is given in the appendix (Section A.4) and uses ideas very similar to the proofs above.

**Theorem 3.8.** Let $Q$ have Hamiltonian $H$ and assume that for every $h\in\mathcal{H}$ there is $\rho(h)>0$ such that for all $\lambda\in\mathbb{R}$,

$$E\left[e^{\lambda(h(X)-E[h(X')])}\right] \le e^{\lambda^2\rho(h)^2/2}.$$

Let $\rho=\sup_{h\in\mathcal{H}}\rho(h)$ and suppose that for all $\lambda\in\mathbb{R}$, $k\in[n]$, $h\in\mathcal{H}$ and $\mathbf{x}\in\mathcal{X}^n$,

$$E\left[e^{\lambda(H(h,S^k_X\mathbf{x})-E[H(h,S^k_{X'}\mathbf{x})])}\right] \le e^{\lambda^2\sigma^2/2}.$$

(i) Then for any $h\in\mathcal{H}$ and $\lambda>0$,

$$\ln E_{\mathbf{X}\sim\mu^n}E_{h\sim Q_{\mathbf{X}}}\left[e^{\lambda\Delta(h,\mathbf{X})}\right] \le \sup_{h\in\mathcal{H}}\psi_\lambda(h) \quad\text{with}\quad \psi_\lambda(h)\le\left(\frac{\lambda\rho(h)}{\sqrt n}+2\sqrt n\,\sigma\right)^2,$$

and with probability at least $1-\delta$ we have, as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le 8\rho\sigma + 2\rho\sqrt{\frac{\ln(1/\delta)}{n}}.$$

(ii) With probability at least $1-\delta$ we have, as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \frac{\rho(h)}{\sqrt n}\sqrt{32n\sigma^2+4\ln(1/\delta)}.$$

The assumptions mean that every hypothesis has its own subgaussian parameter and that the Hamiltonian is subgaussian in every argument if all other arguments are fixed. The first conclusion parallels the bound for Hamiltonians with bounded differences in Theorem 3.4. The second conclusion has larger constants, but scales with the subgaussian parameter of the hypothesis actually chosen from $Q_{\mathbf{X}}$, which can be considerably smaller, similar to the Bernstein-type inequality, Theorem 3.5.
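To see how the variance term of Theorem 3.5 can sharpen the bounded-difference bound of Theorem 3.4, a small numeric comparison under illustrative parameters:

```python
import numpy as np

def thm_3_4(n, c, b=1.0, delta=0.05):                  # bounded-difference bound
    return b * c + b * np.sqrt(np.log(1 / delta) / (2 * n))

def thm_3_5(n, c, v, b=1.0, delta=0.05):               # Bernstein-type bound
    A = n * c**2 + np.log(1 / delta)
    return 2 * np.sqrt(v * A / n) + b * A / n

n, beta = 1000, 5.0
for v in (0.25, 0.01, 0.001):                          # v(h): variance of chosen h
    print(f"v={v}: Thm 3.4 {thm_3_4(n, beta/n):.4f}, Thm 3.5 {thm_3_5(n, beta/n, v):.4f}")
```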
## 4 Applications

### 4.1 The Gibbs algorithm

The Gibbs distribution for a sample $\mathbf{x}$ is $dQ_{\beta,\mathbf{x}}(h)=Z^{-1}\exp(-(\beta/n)\sum_{i=1}^n h(x_i))\,d\pi(h)$, so the Hamiltonian is $H(h,\mathbf{x})=-(\beta/n)\sum_{i=1}^n h(x_i)$. Since it is the minimizer of the PAC-Bayesian bounds (McAllester [1999]) as well as the limiting distribution of stochastic gradient Langevin dynamics (Raginsky et al. [2017]), generalization bounds for the Gibbs distribution translate to guarantees for these algorithms.

Let us first assume bounded hypotheses, for simplicity $h:\mathcal{X}\to[0,1]$. Then we can use Theorems 3.4 and 3.5 and Corollary 3.7 with $c=\beta/n$ and $b=1$. Theorem 3.4 gives, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\beta,\mathbf{X}}$, that

$$\Delta(h,\mathbf{X}) \le \frac\beta n + \sqrt{\frac{\ln(1/\delta)}{2n}}. \quad (4)$$

We were not able to find this simple bound in the literature. It improves over the bound obtained in (Rivasplata et al. [2020], Sec. 2.1 and Lemma 3), which carries a $\ln((1+e)/\delta)$ term, not only in constants but, more importantly, in its dependence on the confidence parameter $\delta$. The principal merit of (4), however, lies in the generality and simplicity of its proof (compare the proof of Lemma 3 in Rivasplata et al. [2020]). Upon the substitution $c=\beta/n$, Theorem 3.5 leads to a variance-dependent bound, for which we know of no comparable result. From Corollary 3.7 we get for the Gibbs algorithm, with probability at least $1-\delta$ in $\mathbf{X}$ and $h\sim Q_{\beta,\mathbf{X}}$,

$$L(h) \le \hat L(h,\mathbf{X}) + 2\sqrt{\hat L(h,\mathbf{X})\left(\frac{\beta^2}{n^2}+\frac{\ln(1/\delta)}{n}\right)} + 6\left(\frac{\beta^2}{n^2}+\frac{\ln(1/\delta)}{n}\right). \quad (5)$$

For hypotheses with small empirical error this approximates a "fast convergence rate" of $O(1/n)$.

Comparable bounds in the literature involve the so-called "little kl-divergence". For two numbers $s,t\in[0,1]$, the relative entropy of two Bernoulli variables with means $s$ and $t$ respectively is $\mathrm{kl}(s,t)=s\ln(s/t)+(1-s)\ln((1-s)/(1-t))$. Various authors give bounds on $E_{h\sim Q_{\beta,\mathbf{x}}}[\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))]$ with high probability in the sample. Rivasplata et al. [2020] give such a bound with leading term $2\beta^2/n^2$, and there is a similar bound in Dziugaite and Roy [2018] and a slightly weaker one in Lever et al. [2013]. The most useful form of these bounds is obtained using the following inversion rule (Tolstikhin and Seldin [2013], see also Alquier [2021]): if $\mathrm{kl}(\hat L(h,\mathbf{x}),L(h))\le B$, then

$$L(h) \le \hat L(h,\mathbf{x}) + \sqrt{2\hat L(h,\mathbf{x})B} + 2B.$$

If this rule is applied to the kl-bound above, it becomes clear that it is inferior to (5), not only because of the logarithmic dependence on $n$, but also because of artifact terms which are difficult to interpret, like the superfluous $\beta/n^{3/2}$.

If every $h(X)$ is $\rho(h)$-subgaussian and $\rho=\sup_h\rho(h)$, then by linearity of the subgaussian parameter $H(h,\mathbf{X})$ is $\rho\beta/n$-subgaussian in every argument for every $h$, and Theorem 3.8 gives, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\beta,\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \frac{\rho(h)}{\sqrt n}\sqrt{\frac{32\beta^2\rho^2}{n}+4\ln(1/\delta)}.$$

Recently Aminian et al. [2023] gave a very interesting bound in probability for sub-gaussian hypotheses, which however is not quite comparable to the above, as it bounds the posterior expectation of $\Delta$ and relies on a distribution-dependent prior.
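The inversion rule quoted above can also be carried out exactly by bisection; a sketch comparing the exact inversion of $\mathrm{kl}(\hat L,\cdot)\le B$ with the closed-form relaxation, for illustrative values:

```python
import numpy as np

def kl(s, t):                         # relative entropy of Bernoulli(s), Bernoulli(t)
    s, t = np.clip(s, 1e-12, 1 - 1e-12), np.clip(t, 1e-12, 1 - 1e-12)
    return s * np.log(s / t) + (1 - s) * np.log((1 - s) / (1 - t))

def kl_inverse(L_hat, B, iters=60):   # largest t with kl(L_hat, t) <= B, by bisection
    lo, hi = L_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if kl(L_hat, mid) <= B else (lo, mid)
    return lo

L_hat, B = 0.05, 0.01
print("exact :", kl_inverse(L_hat, B))
print("closed:", L_hat + np.sqrt(2 * L_hat * B) + 2 * B)   # relaxation from the text
```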
### 4.2 Randomization of stable algorithms

Suppose that $\mathcal{H}$ is parametrized by $\mathbb{R}^d$, with $\pi$ being Lebesgue measure. To simplify notation we identify a hypothesis $h\in\mathcal{H}$ with its parametrizing vector, so that $h$ is simultaneously a vector in $\mathbb{R}^d$ and a function $h:x\in\mathcal{X}\mapsto h(x)\in\mathbb{R}$. Following Rivasplata et al. [2018] we define the hypothesis sensitivity coefficient of a vector-valued algorithm $A:\mathcal{X}^n\to\mathbb{R}^d$ as

$$c_A = \max_{k\in[n]}\sup_{\mathbf{x}\in\mathcal{X}^n,\,y,y'\in\mathcal{X}}\left\|D^k_{y,y'}A(\mathbf{x})\right\|.$$

In typical applications $c_A=O(1/n)$ (compare the SVM application in Rivasplata et al. [2018], as derived originally from Bousquet and Elisseeff [2002]). Consider first the algorithm arising from the Hamiltonian

$$H(h,\mathbf{x}) = -G(h-A(\mathbf{x})), \quad (6)$$

where $G:\mathbb{R}^d\to[0,\infty)$ is any function with Lipschitz norm $\|G\|_{\mathrm{Lip}}$. One computes $A(\mathbf{x})$ and samples $h$ from the stochastic kernel proportional to $\exp(-G(h-A(\mathbf{x})))$. By the triangle inequality $H$ satisfies the bounded difference conditions of Theorems 3.4 and 3.5 with $c=\|G\|_{\mathrm{Lip}}\,c_A$. If every $h\in\mathcal{H}$ (as a function on $\mathcal{X}$) has range in $[0,1]$, then, although this algorithm is of a completely different nature, we immediately recover the same generalization guarantees as for the Gibbs algorithm, with $\beta/n$ replaced by $\|G\|_{\mathrm{Lip}}\,c_A$. An obvious example is $G(h)=\|h\|/\sigma$ for $\sigma>0$.

Another interesting algorithm arises from the Hamiltonian $H(h,\mathbf{x})=-\|h-A(\mathbf{x})\|^2/(2\sigma^2)$ for $\sigma>0$, corresponding to Gaussian randomization. For stochastic hypotheses there is an elegant treatment by Rivasplata et al. [2018] using the PAC-Bayesian theorem, resulting in the bound (with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$)

$$E_{h\sim Q_{\mathbf{X}}}\left[\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))\right] \le \frac{\frac{nc_A^2}{2\sigma^2}\left(1+\sqrt{\frac12\ln\frac{2\sqrt n}{\delta}}\right)^2 + \ln\frac{2\sqrt n}{\delta}}{n}. \quad (7)$$

Since the squared norm is not Lipschitz, the previous argument does not work, but with a slight variation of the method we can prove the following result (proof in Section B.1).

**Theorem 4.1.** Let $\mathcal{H}=\mathbb{R}^d$ with Lebesgue measure $\pi$. Suppose $Q$ has Hamiltonian $H(h,\mathbf{X})=-\|h-A(\mathbf{X})\|^2/(2\sigma^2)$, where $A$ has stability coefficient $c_A$. Let $\delta>0$, and assume that $12nc_A^2\le\sigma^2$ and that every $h\in\mathcal{H}$ (as a function on $\mathcal{X}$) has range in $[0,1]$. Denote the variance of $A$ by $V(A)=E\left[\|A(\mathbf{X})-E[A(\mathbf{X}')]\|^2\right]$. Then:

(i) If $n>8$, then

$$\ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{(n/2)\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))}\right]\right] \le \frac{3}{\sigma^2}V(A) + \frac12\ln(2\sqrt n).$$

(ii) If $n>8$, then with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$,

$$E_{h\sim Q_{\mathbf{X}}}\left[\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))\right] \le \frac{\frac6{\sigma^2}V(A) + \ln(2\sqrt n) + 2\ln(1/\delta)}{n}.$$

(iii) With probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \sqrt{\frac{(3/\sigma^2)V(A)+\ln(1/\delta)}{n}}.$$

(iv) Let $v(h)$ be the variance of $h$, defined as in Theorem 3.5. Then with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le 2\sqrt{\frac{v(h)\left((3/\sigma^2)V(A)+\ln(1/\delta)\right)}{n}} + \frac{(3/\sigma^2)V(A)+\ln(1/\delta)}{n}.$$

The expected kl-bound (ii) is given only for direct comparison with (7). (iii) and (iv) are stronger, not only by being disintegrated, but also by avoiding the logarithmic dependence on $n$. In comparison to (7), (ii) has slightly larger constants, and we require that $12nc_A^2\le\sigma^2$. The latter assumption is mild and holds for sufficiently large $n$ whenever $nc_A^2\to0$ as $n\to\infty$ (in applications of (7), $c_A=O(1/n)$), but $nc_A^2$ may even remain bounded away from zero for $12nc_A^2\le\sigma^2$ to hold. On the other hand, $V(A)=E[\|A(\mathbf{X})-E[A(\mathbf{X}')]\|^2]$ is always bounded above by $nc_A^2$ (see the proof of Lemma 6 in Rivasplata et al. [2018]), so we essentially recover (7) from (ii), while our bound can take advantage of benign distributions. In fortunate cases $V(A)$ can be arbitrarily close to zero, while the $nc_A^2$ in (7) is a consequence of the use of McDiarmid's inequality in the proof, and (7) remains a worst-case bound. The bound (iv) can be inverted as in Corollary 3.7 to give faster rates for small empirical errors, but without the logarithmic dependence on $n$ of the inverted version of (ii).
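A sketch of the Gaussian randomization analyzed in this subsection, for a toy stable algorithm; the empirical-mean algorithm $A$, the data model and $\sigma$ are illustrative assumptions, and for the sake of the sketch we ignore the requirement that each $h$ have range in $[0,1]$ as a loss function. It draws one $h\sim N(A(\mathbf{X}),\sigma^2 I)$, estimates $V(A)$ by Monte Carlo, and evaluates the bound of Theorem 4.1 (iii).

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, sigma, delta = 500, 10, 0.5, 0.05

def A(X):                      # a stable deterministic algorithm: the empirical mean
    return X.mean(axis=0)      # changing one point moves the mean by <= sqrt(d)/n

X = rng.random((n, d))         # sample from mu^n, mu = U[0,1]^d
h = A(X) + sigma * rng.standard_normal(d)       # one draw h ~ N(A(X), sigma^2 I)

V_A = np.mean([np.sum((A(rng.random((n, d))) - 0.5 * np.ones(d))**2)
               for _ in range(2000)])           # V(A) = E||A(X) - E[A(X')]||^2
bound = np.sqrt(((3 / sigma**2) * V_A + np.log(1 / delta)) / n)    # Thm 4.1 (iii)
print(f"V(A) = {V_A:.5f}, bound on Delta(h, X) = {bound:.4f}")
```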
### 4.3 PAC-Bayes bounds with data-dependent priors

We quote Theorem 1 (ii) of Rivasplata et al. [2020]. For the convenience of the reader we give a proof in the appendix (Section B.2).

**Theorem 4.2.** Let $F:\mathcal{H}\times\mathcal{X}^n\to\mathbb{R}$ be measurable. With probability at least $1-\delta$ in the draw of $\mathbf{X}\sim\mu^n$ we have, for all $P\in\mathcal{P}(\mathcal{H})$,

$$E_{h\sim P}[F(h,\mathbf{X})] \le KL(P,Q_{\mathbf{X}}) + \ln E_{\mathbf{X}'}\left[E_{h\sim Q_{\mathbf{X}'}}\left[e^{F(h,\mathbf{X}')}\right]\right] + \ln(1/\delta).$$

By substitution of our bounds on $\ln E_{\mathbf{X}}E_{h\sim Q_{\mathbf{X}}}[e^{F(h,\mathbf{X})}]$ we obtain raw forms of PAC-Bayesian bounds with prior $Q_{\mathbf{X}}$ for all the Hamiltonian algorithms considered above. But since the final form often involves optimizations, some care is needed. In the simplest case let $F(h,\mathbf{X})=(n/2)\,\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))$, substitute (i) of Theorem 4.1 above and divide by $n/2$, to prove the following.

**Theorem 4.3.** Under the conditions of Theorem 4.1 we have, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$, for all $P\in\mathcal{P}(\mathcal{H})$,

$$E_{h\sim P}\left[\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))\right] \le \frac{2KL(P,Q_{\mathbf{X}}) + \frac6{\sigma^2}V(A) + \ln(2\sqrt n) + 2\ln(1/\delta)}{n}.$$

It applies to the case when the prior is an isotropic Gaussian centered on the output of the algorithm $A$, a method related to the methods in Dziugaite and Roy [2018] and Pérez-Ortiz et al. [2021]. Section B.2 sketches how PAC-Bayesian bounds analogous to Theorem 4.1 (iii) and (iv) are obtained.

## 5 Conclusion and future directions

The paper presented a method to bound the generalization gap for randomly generated and deterministically executed hypotheses. By using Marton's coupling method, as for example in Paulin [2015], one can prove an analogue of Theorem 3.4 for non-iid data generated by a uniformly ergodic Markov chain. It also appears possible to apply the method to iterated stochastic algorithms, where the randomization of a stable "micro-algorithm" is repeated, as with stochastic gradient Langevin dynamics (SGLD). An obvious challenge is to give bounds for the Gibbs algorithm in the limit $\beta\to\infty$, or, more generally, in the regime $\beta>n$. It is unlikely that the methods of this paper can be successfully applied to this problem without considerable modifications.

## References

P. Alquier. User-friendly introduction to PAC-Bayes bounds. arXiv preprint arXiv:2110.11216, 2021.

G. Aminian, Y. Bu, L. Toni, M. Rodrigues, and G. Wornell. An exact characterization of the generalization error for the Gibbs algorithm. Advances in Neural Information Processing Systems, 34:8106–8118, 2021.

G. Aminian, Y. Bu, L. Toni, M. R. Rodrigues, and G. W. Wornell. Information-theoretic characterizations of generalization error for the Gibbs algorithm. IEEE Transactions on Information Theory, 2023.

M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.

S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities. Oxford University Press, 2013.

O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.

V. V. Buldygin and Y. V. Kozachenko. Sub-Gaussian random variables. Ukrainian Mathematical Journal, 32:483–489, 1980.

O. Catoni. A PAC-Bayesian approach to adaptive classification. Preprint, 840:2, 2003.

O. Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning. arXiv preprint arXiv:0712.0248, 2007.

G. K. Dziugaite and D. M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017.

G. K. Dziugaite and D. M. Roy. Data-dependent PAC-Bayes priors via differential privacy. Advances in Neural Information Processing Systems, 31, 2018.

J. W. Gibbs. Elementary Principles in Statistical Mechanics: Developed with Especial Reference to the Rational Foundations of Thermodynamics. C. Scribner's Sons, 1902.

B. Guedj. A primer on PAC-Bayesian learning. arXiv preprint arXiv:1901.05353, 2019.

I. Kuzborskij, N. Cesa-Bianchi, and C. Szepesvári. Distribution-dependent analysis of Gibbs-ERM principle. In Conference on Learning Theory, pages 2028–2054. PMLR, 2019.

J. Langford and M. Seeger. Bounds for averaging classifiers. School of Computer Science, Carnegie Mellon University, 2001.

G. Lever, F. Laviolette, and J. Shawe-Taylor. Tighter PAC-Bayes bounds through distribution-dependent priors. Theoretical Computer Science, 473:4–28, 2013.

A. Maurer. A note on the PAC-Bayesian theorem. arXiv preprint cs/0411099, 2004.
A. Maurer. Concentration inequalities for functions of independent variables. Random Structures & Algorithms, 29(2):121–138, 2006.

D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 164–170, 1999.

C. McDiarmid. Concentration. In Probabilistic Methods of Algorithmic Discrete Mathematics, pages 195–248, Berlin, 1998. Springer.

D. Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20(79):1–32, 2015.

M. Pérez-Ortiz, O. Rivasplata, J. Shawe-Taylor, and C. Szepesvári. Tighter risk certificates for neural networks. The Journal of Machine Learning Research, 22(1):10326–10365, 2021.

M. Raginsky, A. Rakhlin, and M. Telgarsky. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. In Conference on Learning Theory, pages 1674–1703. PMLR, 2017.

O. Rivasplata, E. Parrado-Hernández, J. S. Shawe-Taylor, S. Sun, and C. Szepesvári. PAC-Bayes bounds for stable algorithms with instance-dependent priors. Advances in Neural Information Processing Systems, 31, 2018.

O. Rivasplata, I. Kuzborskij, C. Szepesvári, and J. Shawe-Taylor. PAC-Bayes analysis beyond the usual bounds. Advances in Neural Information Processing Systems, 33:16833–16845, 2020.

M. Seeger. PAC-Bayesian generalisation error bounds for Gaussian process classification. Journal of Machine Learning Research, 3(Oct):233–269, 2002.

I. O. Tolstikhin and Y. Seldin. PAC-Bayes-empirical-Bernstein inequality. Advances in Neural Information Processing Systems, 26, 2013.

## A Remaining proofs of Section 3

### A.1 Markov's inequality

We use the following consequence of Markov's inequality.

**Lemma A.1.** For any real random variable $Y$ and $\delta>0$ we have

$$\Pr\left\{Y > \ln E\left[e^Y\right] + \ln(1/\delta)\right\} \le \delta.$$

*Proof.* From Markov's inequality, $\Pr\{e^Y > E[e^Y]/\delta\}\le\delta$. Take logarithms. $\square$

### A.2 Proof of Proposition 3.2

**Lemma A.2.** (i) Let $\phi(t)=(e^t-t-1)/t^2$ for $t\ne0$. Then the function $\phi$ is increasing, and if the random variable $X$ satisfies $E[X]=0$ and $X\le b$ for $b>0$, then $E[e^X]\le e^{\phi(b)E[X^2]}$.

(ii) $\phi(t)\le 1/(2-t)$ for $0\le t<2$.

*Proof.* Part (i) is Lemma 2.8 in McDiarmid [1998]. (ii) follows from a term-by-term comparison of the power series

$$\phi(t)=\sum_{k=0}^\infty\frac{t^k}{(k+2)!} \quad\text{and}\quad \frac1{2-t}=\sum_{k=0}^\infty\frac{t^k}{2^{k+1}},$$

since $(k+2)!\ge 2^{k+1}$. $\square$

**Proposition A.3** (Restatement of Proposition 3.2; the statement is as in Section 3).

*Proof.* (i) For $S\subseteq[n]$ we write $E_S[\,\cdot\,]=E[\,\cdot\mid\{X_i\}_{i\notin S}]$, so $E_S$ is integration over all variables in $S$. By independence, $\{E_S:S\subseteq[n]\}$ is a set of commuting projections and $E_{S_1}[E_{S_2}[\,\cdot\,]]=E_{S_1\cup S_2}[\,\cdot\,]$. $E_{[k]}$ is expectation in all variables up to $X_k$, and $E_{\{k\}}$ is expectation only in $X_k$. The assumption therefore reads $E_{\{k\}}\left[e^{f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]}\right]\le e^{r^2}$. Using $E_{[k-1]}[E_{[k]}[f(\mathbf{X})]]=E_{[k-1]}[E_{\{k\}}[f(\mathbf{X})]]$ we have the telescopic expansion

$$f(\mathbf{X})-E[f(\mathbf{X}')] = \sum_{k=1}^n\left(E_{[k-1]}[f(\mathbf{X})]-E_{[k]}[f(\mathbf{X})]\right) = \sum_{k=1}^n E_{[k-1]}\left[f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]\right].$$

We claim that for all $m$, $0\le m\le n$,

$$E\left[e^{f(\mathbf{X})-E[f(\mathbf{X}')]}\right] \le e^{mr^2}\,E\left[\exp\left(\sum_{k=m+1}^n E_{[k-1]}\left[f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]\right]\right)\right],$$

from which the proposition follows with $m=n$.
Because of the above telescopic expansion the claim is true for $m=0$, and we assume it to hold for $m-1$. Then

$$E\left[e^{f(\mathbf{X})-E[f(\mathbf{X}')]}\right] \le e^{(m-1)r^2}E\left[\exp\left(E_{[m-1]}\left[f(\mathbf{X})-E_{\{m\}}[f(\mathbf{X})]\right] + \sum_{k=m+1}^n E_{[k-1]}\left[f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]\right]\right)\right],$$

because the later terms depend only on the variables $X_{m+1},\dots,X_n$. By Jensen's inequality the last expression is bounded by

$$e^{(m-1)r^2}E\left[e^{f(\mathbf{X})-E_{\{m\}}[f(\mathbf{X})]}\exp\left(\sum_{k=m+1}^n E_{[k-1]}\left[f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]\right]\right)\right]$$
$$= e^{(m-1)r^2}E\left[E_{\{m\}}\left[e^{f(\mathbf{X})-E_{\{m\}}[f(\mathbf{X})]}\right]\exp\left(\sum_{k=m+1}^n E_{[k-1]}\left[f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]\right]\right)\right],$$

again because the later terms do not depend on $X_m$, and by assumption the last expression is bounded by

$$e^{mr^2}E\left[\exp\left(\sum_{k=m+1}^n E_{[k-1]}\left[f(\mathbf{X})-E_{\{k\}}[f(\mathbf{X})]\right]\right)\right],$$

which completes the induction and the proof of (i).

(ii) follows from (i) and Hoeffding's lemma (Lemma 2.2 in Boucheron et al. [2013]), which says that $E_{X\sim\mu}\left[e^{f(S^k_X\mathbf{x})-E_{X'\sim\mu}[f(S^k_{X'}\mathbf{x})]}\right]\le e^{c^2/8}$ if $f(S^k_y\mathbf{x})$, as a function of $y$, has range in a set of diameter $c$.

(iii) follows from (i) and Lemma A.2, since $E_{X\sim\mu}\left[e^{f(S^k_X\mathbf{x})-E_{X'\sim\mu}[f(S^k_{X'}\mathbf{x})]}\right]\le e^{\phi(b)v_k}$. $\square$

### A.3 Proof of Corollary 3.7

**Lemma A.4.** Let $L,\hat L,A\ge0$ and assume that

$$L \le \hat L + 2\sqrt{AL} + A.$$

Then

$$L \le \hat L + 2\sqrt{A(\hat L+2A)} + 3A \le \hat L + 2\sqrt{A\hat L} + 6A.$$

*Proof.* The assumption gives $(\sqrt L-\sqrt A)^2\le\hat L+2A$, so $\sqrt L\le\sqrt A+\sqrt{\hat L+2A}$, and the first inequality follows by squaring. The lemma then follows from $\sqrt{A(\hat L+2A)}\le\sqrt{A\hat L}+\sqrt2\,A$ and $3+2\sqrt2\le6$. $\square$

To get Corollary 3.7, apply this with $A=b(nc^2+\ln(1/\delta))/n$ to Theorem 3.5, using $v(h)\le E[h(X)^2]\le bE[h(X)]=bL(h)$.

### A.4 Subgaussian hypotheses and proof of Theorem 3.8

**Lemma A.5** (from Buldygin and Kozachenko [1980]). (i) If $Y$ is $\sigma$-subgaussian, then $E[(Y-E[Y])^2]\le\sigma^2$. (ii) If $Y_1$ and $Y_2$ are $\sigma_1$- and $\sigma_2$-subgaussian respectively, then $Y_1+Y_2$ is $(\sigma_1+\sigma_2)$-subgaussian.

The next lemma shows that the log-partition function $\ln Z(\mathbf{X})$ is exponentially concentrated whenever the Hamiltonian $H(h,\mathbf{X})$ is subgaussian uniformly in $h$.

**Lemma A.6.** Let $p\ge1$, suppose $H(h,\mathbf{X})$ is $\sigma$-subgaussian for every $h\in\mathcal{H}$, and let $Z(\mathbf{X})=\int_{\mathcal{H}}e^{H(h,\mathbf{X})}\,d\pi(h)$.

(i) Then $\ln E\left[e^{p(-\ln Z(\mathbf{X})+E[\ln Z(\mathbf{X}')])}\right]\le p^2\sigma^2$.

(ii) If $f:\mathcal{X}^n\to\mathbb{R}$ is $\rho$-subgaussian, then $\ln E\left[e^{p(f(\mathbf{X})-E[f(\mathbf{X}')]-\ln Z(\mathbf{X})+E[\ln Z(\mathbf{X}')])}\right]\le p^2(\rho+\sigma)^2$.

Since the inequalities are given only for $p\ge1$, they do not quite imply that $\ln Z$ itself is subgaussian.

*Proof.* We only need to prove (ii), which implies (i) by setting $f\equiv0$. By Jensen's inequality,

$$E_{\mathbf{X}}\left[e^{p(f(\mathbf{X})-E[f(\mathbf{X}')]-\ln Z(\mathbf{X})+E[\ln Z(\mathbf{X}')])}\right] \le E_{\mathbf{X}\mathbf{X}'}\left[e^{p(f(\mathbf{X})-f(\mathbf{X}')-\ln Z(\mathbf{X})+\ln Z(\mathbf{X}'))}\right] = E_{\mathbf{X}}\left[e^{pf(\mathbf{X})}Z(\mathbf{X})^{-p}\right]E_{\mathbf{X}}\left[e^{-pf(\mathbf{X})}Z(\mathbf{X})^{p}\right].$$

Define a probability measure $\nu$ on $\mathcal{H}$ by $\nu(A)=Z_\nu^{-1}\int_A e^{E[H(h,\mathbf{X})]}\,d\pi(h)$ for measurable $A\subseteq\mathcal{H}$, with $Z_\nu=\int_{\mathcal{H}}e^{E[H(h,\mathbf{X})]}\,d\pi(h)$. Then

$$Z(\mathbf{X})^{-p} = \left(Z_\nu\,E_{h\sim\nu}\left[e^{H(h,\mathbf{X})-E[H(h,\mathbf{X}')]}\right]\right)^{-p} \le E_{h\sim\nu}\left[e^{p(E[H(h,\mathbf{X}')]-H(h,\mathbf{X}))}\right]Z_\nu^{-p},$$

by Jensen's inequality, since $t\mapsto t^{-p}$ is convex on $(0,\infty)$. Similarly, $Z(\mathbf{X})^p\le E_{h\sim\nu}\left[e^{p(H(h,\mathbf{X})-E[H(h,\mathbf{X}')])}\right]Z_\nu^p$. Thus the above inequality can be written

$$E_{\mathbf{X}}\left[e^{p(f(\mathbf{X})-E[f(\mathbf{X}')]-\ln Z(\mathbf{X})+E[\ln Z(\mathbf{X}')])}\right] \le E_{\mathbf{X}}\left[E_{h\sim\nu}\left[e^{p(f(\mathbf{X})+E[H(h,\mathbf{X}')]-H(h,\mathbf{X}))}\right]\right]\cdot E_{\mathbf{X}}\left[E_{h\sim\nu}\left[e^{p(-f(\mathbf{X})+H(h,\mathbf{X})-E[H(h,\mathbf{X}')])}\right]\right].$$

The first factor can be bounded by

$$E_{h\sim\nu}\left[E_{\mathbf{X}}\left[e^{p(f(\mathbf{X})+E[H(h,\mathbf{X}')]-H(h,\mathbf{X}))}\right]\right] \le e^{pE[f(\mathbf{X}')]}e^{p^2(\rho+\sigma)^2/2},$$

by the subgaussian assumptions for $f$ and $H$ and Lemma A.5 (ii), and similarly the second factor is bounded by $e^{-pE[f(\mathbf{X}')]}e^{p^2(\rho+\sigma)^2/2}$. Putting the two bounds together completes the proof. $\square$

**Theorem A.7** (Restatement of Theorem 3.8; the statement is as in Section 3.2).

*Proof.* Let $h\in\mathcal{H}$ be any fixed hypothesis. By assumption and Proposition 3.2 (i), $H(h,\mathbf{X})$ is $\sqrt n\,\sigma$-subgaussian and $\lambda\Delta(h,\mathbf{X})$ is $\lambda\rho(h)/\sqrt n$-subgaussian.
Using the previous lemma (ii) with $p=1$ and $f(\mathbf{X})=\lambda\Delta(h,\mathbf{X})+H(h,\mathbf{X})-E[H(h,\mathbf{X}')]$, which is centered and $(\lambda\rho(h)/\sqrt n+\sqrt n\,\sigma)$-subgaussian, we get

$$\psi_\lambda(h) = \ln E_{\mathbf{X}}\left[e^{\lambda\Delta(h,\mathbf{X})+H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')]}\right] = \ln E\left[e^{f(\mathbf{X})-\ln Z(\mathbf{X})+E[\ln Z(\mathbf{X}')]}\right] \le \left(\frac{\lambda\rho(h)}{\sqrt n}+2\sqrt n\,\sigma\right)^2.$$

With $\rho(h)\le\rho$ we get from Proposition 3.1 that, with probability at least $1-\delta$,

$$\Delta(h,\mathbf{X}) \le \frac{\left(\lambda\rho/\sqrt n+2\sqrt n\,\sigma\right)^2+\ln(1/\delta)}{\lambda} = \frac{\lambda\rho^2}{n} + 4\rho\sigma + \frac{4n\sigma^2+\ln(1/\delta)}{\lambda}.$$

The optimal choice of $\lambda$ and subadditivity of $t\mapsto\sqrt t$ give the first conclusion.

(ii) We proceed as in the proof of Theorem 3.5. For $H_Q(h,\mathbf{X})=H(h,\mathbf{X})-\ln Z(\mathbf{X})$, the previous lemma yields, with $f(\mathbf{X})=H(h,\mathbf{X})-E[H(h,\mathbf{X}')]$ and $p=2$,

$$E_{\mathbf{X}}\left[e^{2(H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')])}\right]^{1/2} \le e^{8n\sigma^2}.$$

Also $E_{\mathbf{X}}\left[e^{2\lambda\Delta(h,\mathbf{X})}\right]\le e^{2\lambda^2\rho(h)^2/n}$. Let

$$\lambda(h)=\frac{\sqrt{n\left(8n\sigma^2+\ln(1/\delta)\right)}}{\rho(h)} \quad\text{and}\quad F(h,\mathbf{X})=\lambda(h)\Delta(h,\mathbf{X})-\lambda(h)^2\rho(h)^2/n.$$

Then, with Cauchy–Schwarz,

$$\psi_F(h) = \ln E\left[e^{F(h,\mathbf{X})+H_Q(h,\mathbf{X})-E[H_Q(h,\mathbf{X}')]}\right] \le \ln\left(E_{\mathbf{X}}\left[e^{2\lambda(h)\Delta(h,\mathbf{X})}\right]^{1/2}e^{-\lambda(h)^2\rho(h)^2/n}\,E_{\mathbf{X}}\left[e^{2(H_Q-E[H_Q])}\right]^{1/2}\right) \le 8n\sigma^2.$$

Proposition 3.1 and substitution of $\lambda(h)$ then give, with probability at least $1-\delta$,

$$\Delta(h,\mathbf{X}) \le \frac{\lambda(h)\rho(h)^2}{n} + \frac{8n\sigma^2+\ln(1/\delta)}{\lambda(h)} = \frac{\rho(h)}{\sqrt n}\sqrt{32n\sigma^2+4\ln(1/\delta)}. \square$$

## B Remaining proofs for Section 4

### B.1 Proof of Theorem 4.1

We need the following lemma.

**Lemma B.1.** Let $w,v\in\mathbb{R}^d$ and $\lambda\in[1,\infty)$. Then

$$E_{x\sim N(w,\sigma^2I)}\left[e^{-\frac{\lambda}{2\sigma^2}\left(\|x-v\|^2-\|x-w\|^2\right)}\right] = e^{\frac{\lambda^2-\lambda}{2\sigma^2}\|v-w\|^2}.$$

*Proof.* By the translation $u=x-w$, $u\sim N(0,\sigma^2I)$, and with $d=v-w$,

$$\|x-v\|^2-\|x-w\|^2 = \|u-d\|^2-\|u\|^2 = \|d\|^2-2\langle u,d\rangle,$$

so the expectation equals

$$e^{-\frac{\lambda}{2\sigma^2}\|d\|^2}\,E_{u\sim N(0,\sigma^2I)}\left[e^{\frac{\lambda}{\sigma^2}\langle u,d\rangle}\right].$$

Rotating $d$ to $\|d\|e_1$, where $e_1$ is the first basis vector, and using the Gaussian moment generating function $E[e^{tu_1}]=e^{\sigma^2t^2/2}$ with $t=\lambda\|d\|/\sigma^2$ gives

$$E_{u\sim N(0,\sigma^2I)}\left[e^{\frac{\lambda}{\sigma^2}\langle u,d\rangle}\right] = e^{\frac{\lambda^2}{2\sigma^2}\|d\|^2}.$$

Combination with the previous identity completes the proof. $\square$
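A quick Monte Carlo check of Lemma B.1 as reconstructed above; dimension, $\sigma$, $\lambda$ and the choice of $v$ and $w$ are arbitrary illustrative values, with $v$ close to $w$ to keep the estimate stable.

```python
import numpy as np

rng = np.random.default_rng(4)
d, sigma, lam, N = 3, 0.7, 2.0, 10**6
w = rng.standard_normal(d)
v = w + 0.1                        # keep ||v - w|| small so the MC estimate is stable

x = w + sigma * rng.standard_normal((N, d))              # x ~ N(w, sigma^2 I)
inner = -(lam / (2 * sigma**2)) * (((x - v)**2).sum(1) - ((x - w)**2).sum(1))
mc = np.exp(inner).mean()                                # Monte Carlo expectation
closed = np.exp((lam**2 - lam) / (2 * sigma**2) * ((v - w)**2).sum())
print(f"Monte Carlo {mc:.4f} vs closed form {closed:.4f}")
```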
Our proof of Theorem 4.1 uses the following exponential concentration inequality, implicit in the proof of Theorem 13 in Maurer [2006] and in the proof of Theorem 6.19 in Boucheron et al. [2013].

**Theorem B.2.** Let $f:\mathcal{X}^n\to\mathbb{R}$ and define an operator $D^2$ by

$$D^2f(\mathbf{x}) = \sum_{k=1}^n\left(f(\mathbf{x})-\inf_{y\in\mathcal{X}}f(S^k_y\mathbf{x})\right)^2.$$

If for some $a>0$ and all $\mathbf{x}\in\mathcal{X}^n$ we have $D^2f(\mathbf{x})\le af(\mathbf{x})$, then for $\lambda\in(0,2/a)$

$$\ln E\left[e^{\lambda(f(\mathbf{X})-E[f(\mathbf{X}')])}\right] \le \frac{\lambda^2a\,E[f(\mathbf{X})]}{2-a\lambda}, \quad\text{or equivalently}\quad \ln E\left[e^{\lambda f(\mathbf{X})}\right] \le \frac{2\lambda E[f(\mathbf{X})]}{2-a\lambda}.$$

**Theorem B.3** (Restatement of Theorem 4.1; the statement is as in Section 4.2).

*Proof of Theorem 4.1.* All Gaussians with covariance $\sigma^2I$ have the same normalizing factor, so the partition function for $H(h,\mathbf{x})=-\|h-A(\mathbf{x})\|^2/(2\sigma^2)$ is also the normalizing factor of $N(E[A(\mathbf{X})],\sigma^2I)$, whence, using the Cauchy–Schwarz inequality,

$$\ln E_{\mathbf{X}}E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right] = \ln E_{\mathbf{X}}E_{h\sim N(E[A(\mathbf{X})],\sigma^2I)}\left[e^{F(h,\mathbf{X})-\frac{\|h-A(\mathbf{X})\|^2}{2\sigma^2}+\frac{\|h-E[A(\mathbf{X})]\|^2}{2\sigma^2}}\right] \le C + B,$$

where $C=\sup_{h\in\mathcal{H}}\frac12\ln E_{\mathbf{X}}\left[e^{2F(h,\mathbf{X})}\right]$ and, using Lemma B.1, the second term is equal to $B=\frac12\ln E_{\mathbf{X}}\left[e^{(3/\sigma^2)\|A(\mathbf{X})-E[A(\mathbf{X})]\|^2}\right]$. The bound on $C$ depends on the respective choice of $F$ and will be treated below.

To apply Theorem B.2 to $f(\mathbf{x})=\|A(\mathbf{x})-E[A(\mathbf{X})]\|^2$, we fix $\mathbf{x}\in\mathcal{X}^n$ and $k\in[n]$, and let $y\in\mathcal{X}$ be a minimizer of $\|A(S^k_y\mathbf{x})-E[A(\mathbf{X})]\|^2$. Then

$$\left(f(\mathbf{x})-\inf_{y'}f(S^k_{y'}\mathbf{x})\right)^2 = \left(\|A(\mathbf{x})-E[A(\mathbf{X})]\|^2-\|A(S^k_y\mathbf{x})-E[A(\mathbf{X})]\|^2\right)^2$$
$$= \left\langle A(\mathbf{x})-A(S^k_y\mathbf{x}),\,A(\mathbf{x})-E[A(\mathbf{X})]+A(S^k_y\mathbf{x})-E[A(\mathbf{X})]\right\rangle^2$$
$$\le \left\|A(\mathbf{x})-A(S^k_y\mathbf{x})\right\|^2\left(\|A(\mathbf{x})-E[A(\mathbf{X})]\|+\|A(S^k_y\mathbf{x})-E[A(\mathbf{X})]\|\right)^2 \le 4c_A^2f(\mathbf{x}),$$

where the last step uses the definition of $c_A$ and $\|A(S^k_y\mathbf{x})-E[A(\mathbf{X})]\|\le\|A(\mathbf{x})-E[A(\mathbf{X})]\|$, which holds since $y$ is a minimizer. Summing over $k$ we get $D^2f(\mathbf{x})\le 4nc_A^2f(\mathbf{x})$. Since $12nc_A^2/\sigma^2\le1<2$ we can apply Theorem B.2 with $a=4nc_A^2$ and $\lambda=3/\sigma^2$ to obtain

$$B = \frac12\ln E\left[e^{(3/\sigma^2)\|A(\mathbf{X})-E[A(\mathbf{X}')]\|^2}\right] \le \frac{(3/\sigma^2)\,E\left[\|A(\mathbf{X})-E[A(\mathbf{X}')]\|^2\right]}{2-12nc_A^2/\sigma^2} \le \frac3{\sigma^2}V(A).$$

(i) Let $F(h,\mathbf{X})=(n/2)\,\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))$. Then, using (Maurer [2004]) for $n>8$, $C=\sup_h\frac12\ln E_{\mathbf{X}}\left[e^{2F(h,\mathbf{X})}\right]\le\frac12\ln(2\sqrt n)$, so from the above

$$\ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{(n/2)\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))}\right]\right] \le \frac3{\sigma^2}V(A) + \frac12\ln(2\sqrt n),$$

which is (i). By Jensen's inequality,

$$\ln E_{\mathbf{X}}\left[e^{(n/2)E_{h\sim Q_{\mathbf{X}}}[\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))]}\right] \le \ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{(n/2)\mathrm{kl}(\hat L(h,\mathbf{X}),L(h))}\right]\right],$$

so part (ii) then follows from Markov's inequality and division by $n/2$.

(iii) Let $F(h,\mathbf{X})=\lambda\Delta(h,\mathbf{X})$ for $\lambda>0$. Using Proposition 3.2 (ii) we get for all $h\in\mathcal{H}$ that $C=\frac12\ln E_{\mathbf{X}}\left[e^{2F(h,\mathbf{X})}\right]\le\lambda^2/(4n)$, so

$$\ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{\lambda\Delta(h,\mathbf{X})}\right]\right] \le \frac{\lambda^2}{4n} + \frac3{\sigma^2}V(A). \quad (8)$$

Markov's inequality gives, with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \frac{\lambda}{4n} + \frac{\frac3{\sigma^2}V(A)+\ln(1/\delta)}{\lambda}.$$

Optimization of $\lambda$ gives (iii).

(iv) We proceed as in the proof of Theorem 3.5. Let

$$\lambda(h) = n\,\frac{\sqrt{\frac3{\sigma^2}V(A)+\ln(1/\delta)}}{\sqrt{\frac3{\sigma^2}V(A)+\ln(1/\delta)}+\sqrt{n\,v(h)}} \quad\text{and}\quad F_\lambda(h,\mathbf{X}) = \lambda(h)\Delta(h,\mathbf{X})-\frac{\lambda(h)^2}{1-\lambda(h)/n}\cdot\frac{v(h)}{n}.$$

By Lemma 3.6, $2C=\ln E_{\mathbf{X}}\left[e^{2F_\lambda(h,\mathbf{X})}\right]\le0$, so $\ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{F_\lambda(h,\mathbf{X})}\right]\right]\le\frac3{\sigma^2}V(A)$, and with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ and $h\sim Q_{\mathbf{X}}$,

$$\Delta(h,\mathbf{X}) \le \frac{\lambda(h)}{1-\lambda(h)/n}\cdot\frac{v(h)}{n} + \frac{\frac3{\sigma^2}V(A)+\ln(1/\delta)}{\lambda(h)}.$$

Inserting the definition of $\lambda(h)$ completes the proof. $\square$

### B.2 PAC-Bayes bounds with data-dependent priors

**Theorem B.4** (Restatement of Theorem 4.2; the statement is as in Section 4.3).

*Proof.* Let $P\in\mathcal{P}(\mathcal{H})$ be arbitrary. Then

$$E_{h\sim P}[F(h,\mathbf{X})] - KL(P,Q_{\mathbf{X}}) = \ln\exp\left(E_{h\sim P}\left[F(h,\mathbf{X})-\ln(dP/dQ_{\mathbf{X}})(h)\right]\right)$$
$$\le \ln E_{h\sim P}\left[\exp\left(F(h,\mathbf{X})-\ln(dP/dQ_{\mathbf{X}})(h)\right)\right] = \ln E_{h\sim P}\left[e^{F(h,\mathbf{X})}\left(\frac{dP}{dQ_{\mathbf{X}}}(h)\right)^{-1}\right] = \ln E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right].$$

So we only need to bound the last expression, which is independent of $P$. But by Markov's inequality (Lemma A.1), with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$,

$$\ln E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right] \le \ln E_{\mathbf{X}}\left[\exp\left(\ln E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right]\right)\right] + \ln(1/\delta) = \ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{F(h,\mathbf{X})}\right]\right] + \ln(1/\delta). \square$$

We close with a method to deal with the problem of parameter optimization in the derivation of PAC-Bayesian bounds from our results. We apply it to the case of Gaussian priors centered on the output of stable algorithms, but analogous results can equally be derived for Hamiltonians with bounded differences or sub-Gaussian Hamiltonians. Our first result is a PAC-Bayesian bound analogous to part (iii) of Theorem 4.1.

**Theorem B.5.** Under the conditions of Theorem 4.1 we have, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$, for all $P\in\mathcal{P}(\mathcal{H})$,

$$E_{h\sim P}[\Delta(h,\mathbf{X})] \le \sqrt{\frac{\frac3{\sigma^2}V(A) + 2KL(P,Q_{\mathbf{X}}) + 1 + \ln\left(2(KL(P,Q_{\mathbf{X}})+1)/\delta\right)}{n}}.$$

To prove this we first establish the following intermediate result.

**Proposition B.6.** Under the conditions of Theorem 4.1, let $K>0$ and $\delta>0$. Then with probability at least $1-\delta$ as $\mathbf{X}\sim\mu^n$ we have, for any $P\in\mathcal{P}(\mathcal{H})$ with $KL(P,Q_{\mathbf{X}})\le K$,

$$E_{h\sim P}[\Delta(h,\mathbf{X})] \le \sqrt{\frac{\frac3{\sigma^2}V(A) + K + \ln(1/\delta)}{n}}.$$

*Proof.* If $KL(P,Q_{\mathbf{X}})\le K$ we get from Theorem 4.2 and inequality (8), with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$,

$$E_{h\sim P}[\lambda\Delta(h,\mathbf{X})] - K \le E_{h\sim P}[\lambda\Delta(h,\mathbf{X})] - KL(P,Q_{\mathbf{X}}) \le \ln E_{\mathbf{X}}\left[E_{h\sim Q_{\mathbf{X}}}\left[e^{\lambda\Delta(h,\mathbf{X})}\right]\right] + \ln(1/\delta) \le \frac{\lambda^2}{4n} + \frac3{\sigma^2}V(A) + \ln(1/\delta).$$

Bring $K$ to the other side, divide by $\lambda$, and optimize $\lambda$ to complete the proof. $\square$
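As an aside to the comparison with (7) in Section 4.2: a sketch, for the toy empirical-mean algorithm and illustrative distributions, showing that $V(A)$ can be far below the worst-case value $nc_A^2$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, trials = 200, 5, 5000

def V_of_A(sampler):           # V(A) = E||A(X) - E[A(X')]||^2 for A = empirical mean
    means = np.array([sampler((n, d)).mean(axis=0) for _ in range(trials)])
    return ((means - means.mean(axis=0))**2).sum(axis=1).mean()

c_A = np.sqrt(d) / n           # worst-case sensitivity of the mean on [0,1]^d
print("n c_A^2        :", n * c_A**2)
print("V(A), uniform  :", V_of_A(rng.random))
print("V(A), Bernoulli:", V_of_A(lambda s: (rng.random(s) < 0.5).astype(float)))
```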
To get rid of $K$ we use a model-selection lemma from Anthony and Bartlett [1999].

**Lemma B.7** (Lemma 15.6 in Anthony and Bartlett [1999]). Suppose $\Pr$ is a probability distribution and $\{E(\alpha_1,\alpha_2,\delta):0<\alpha_1,\alpha_2,\delta\le1\}$ is a set of events such that:

(i) for all $0<\alpha\le1$ and $0<\delta\le1$, $\Pr\{E(\alpha,\alpha,\delta)\}\le\delta$;

(ii) for all $0<\alpha_1\le\alpha\le\alpha_2\le1$ and $0<\delta_1\le\delta\le1$, $E(\alpha_1,\alpha_2,\delta_1)\subseteq E(\alpha,\alpha,\delta)$.

Then for $0<a,\delta<1$,

$$\Pr\left\{\bigcup_{\alpha\in(0,1]}E(\alpha a,\alpha,\delta\alpha(1-a))\right\} \le \delta.$$

*Proof of Theorem B.5.* Define the events

$$E(\alpha_1,\alpha_2,\delta) := \left\{\exists P,\;KL(P,Q_{\mathbf{X}})\le\alpha_2^{-1},\;E_{h\sim P}[\Delta(h,\mathbf{X})] > \sqrt{\frac{\frac3{\sigma^2}V(A)+\alpha_1^{-1}+\ln(1/\delta)}{n}}\right\}.$$

By Proposition B.6 they satisfy (i) of Lemma B.7, and it is easy to see that (ii) also holds. If we set $a=1/2$, the conclusion of Lemma B.7 becomes the statement of Theorem B.5. $\square$

To get a PAC-Bayesian bound analogous to Theorem 4.1 (iv) we proceed similarly and obtain the intermediate bound that, for $\delta>0$, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$, for all $P$ such that $E_{h\sim P}[v(h)]\le V$ and $KL(P,Q_{\mathbf{X}})\le K$,

$$E_{h\sim P}[\Delta(h,\mathbf{X})] \le 2\sqrt{\frac{V\left(\frac3{\sigma^2}V(A)+K+\ln(1/\delta)\right)}{n}} + \frac{\frac3{\sigma^2}V(A)+K+\ln(1/\delta)}{n}.$$

Then we proceed as above, except that we have to use Lemma B.7 twice, once for $K$ and once for $nV$. The result of this mechanical procedure is:

**Theorem B.8.** For $\delta>0$, with probability at least $1-\delta$ in $\mathbf{X}\sim\mu^n$, for all $P$,

$$E_{h\sim P}[\Delta(h,\mathbf{X})] \le 2\sqrt{\frac{\left(2E_{h\sim P}[v(h)]+1/n\right)\left(\frac3{\sigma^2}V(A)+C\right)}{n}} + \frac{\frac3{\sigma^2}V(A)+C}{n},$$

where

$$C = 2KL(P,Q_{\mathbf{X}}) + 1 + \ln\left(2\left(KL(P,Q_{\mathbf{X}})+1\right)\cdot2\left(nE_{h\sim P}[v(h)]+1\right)/\delta\right).$$

## C Table of notation

| Symbol | Meaning |
|---|---|
| $\mathcal{X}$ | space of data |
| $\mu$ | probability distribution of the data |
| $n$ | sample size |
| $\mathbf{x}$ | generic member $(x_1,\dots,x_n)$ of $\mathcal{X}^n$ |
| $\mathbf{X}$ | training set $\mathbf{X}=(X_1,\dots,X_n)\sim\mu^n$ |
| $\mathcal{H}$ | loss class (loss functions composed with hypotheses, $h:\mathcal{X}\to[0,\infty)$) |
| $\mathcal{P}(\mathcal{H})$ | probability measures on $\mathcal{H}$ |
| $\pi$ | nonnegative a-priori measure on $\mathcal{H}$ |
| $L(h)$ | $L(h)=E_{x\sim\mu}[h(x)]$, expected loss of $h\in\mathcal{H}$ |
| $\hat L(h,\mathbf{X})$ | $\hat L(h,\mathbf{X})=(1/n)\sum_{i=1}^n h(X_i)$, empirical loss of $h\in\mathcal{H}$ |
| $\Delta(h,\mathbf{X})$ | $L(h)-\hat L(h,\mathbf{X})$, generalization gap |
| $Q$ | $Q:\mathbf{x}\in\mathcal{X}^n\mapsto Q_{\mathbf{x}}\in\mathcal{P}(\mathcal{H})$, stochastic algorithm |
| $Q_{\mathbf{x}}(h)$ | density w.r.t. $\pi$ of $Q_{\mathbf{x}}$ evaluated at $h\in\mathcal{H}$, $Q_{\mathbf{x}}(h)=\exp(H_Q(h,\mathbf{x}))$ |
| $H$ | $H:\mathcal{H}\times\mathcal{X}^n\to\mathbb{R}$, Hamiltonian |
| $Z$ | $Z:\mathcal{X}^n\to\mathbb{R}$, $Z(\mathbf{x})=\int_{\mathcal{H}}\exp(H(h,\mathbf{x}))\,d\pi(h)$, partition function |
| $H_Q$ | $H_Q(h,\mathbf{x})=H(h,\mathbf{x})-\ln Z(\mathbf{x})=\ln Q_{\mathbf{x}}(h)$ |
| $S^k_y$ | $S^k_y\mathbf{x}=(x_1,\dots,x_{k-1},y,x_{k+1},\dots,x_n)$, substitution operator |
| $D^k_{y,y'}$ | $D^k_{y,y'}f(\mathbf{x})=f(S^k_y\mathbf{x})-f(S^k_{y'}\mathbf{x})$, partial difference operator |
| $\mathrm{kl}(p,q)$ | $p\ln\frac pq+(1-p)\ln\frac{1-p}{1-q}$, relative entropy of Bernoulli variables |
| $KL(\nu,\rho)$ | $E_{h\sim\nu}\left[\ln\frac{d\nu}{d\rho}(h)\right]$, KL-divergence of probability measures $\nu$ and $\rho$ |
| $\|\cdot\|$ | Euclidean norm on $\mathbb{R}^d$ |