# On Certified Generalization in Structured Prediction

Bastian Boll, Image & Pattern Analysis Group, Heidelberg University, bastian.boll@iwr.uni-heidelberg.de
Christoph Schnörr, Image & Pattern Analysis Group, Heidelberg University, schnoerr@math.uni-heidelberg.de

In structured prediction, target objects have rich internal structure which does not factorize into independent components and violates common i.i.d. assumptions. This challenge becomes apparent through the exponentially large output space in applications such as image segmentation or scene graph generation. We present a novel PAC-Bayesian risk bound for structured prediction wherein the rate of generalization scales not only with the number of structured examples but also with their size. The underlying assumption, conforming to ongoing research on generative models, is that data are generated by the Knothe-Rosenblatt rearrangement of a factorizing reference measure. This allows us to explicitly distill the structure between random output variables into a Wasserstein dependency matrix. Our work makes a preliminary step towards leveraging powerful generative models to establish generalization bounds for discriminative downstream tasks in the challenging setting of structured prediction.

1 Introduction

1.1 Overview

Structured prediction is the task of predicting an output which itself contains internal structure. As an example, consider the problem of image segmentation. The output to be predicted is an assignment of semantic classes to each image pixel. However, the segmentation problem is not merely a pixelwise classification, because class labels are not assigned to pixels independently: if two pixels are adjacent in the image plane, it is more likely that they belong to the same class than to different ones. A way to remedy this problem is to enumerate all possible segmentations of an image and treat the problem as classification. However, the output space of this classification is now exponentially large. This exponential (in the number of pixels) size of the output space indicates a much more difficult problem than the unstructured case of classification. In fact, it also presents an immediate challenge for any statistical learning theory which requires assumptions on the number of output classes [44].

From a practical perspective, structured prediction problems present a challenge of label acquisition. For instance, dense manual segmentation of an image is significantly more labour intensive than manual classification. In return, one would expect that pixelwise segmentation of an image contains much richer information to be exploited in supervised learning. However, this is not represented in typical statistical learning theory which assumes all data to be independently drawn from the true underlying distribution. In the extreme case of only a single structured example being available for training, these statistical learning theories cannot make any meaningful statement on generalization. Addressing this point in particular, [38] presents an analysis of the dependency structure in the output and proves a risk certificate which decays in both the number of structured examples m and their size d. Referring back to the example of image segmentation, this amounts to a high-probability bound on the fraction of mislabeled pixels which decays with the number of labeled pixels observed during training, as opposed to merely the number of segmented images.
37th Conference on Neural Information Processing Systems (NeurIPS 2023).

At the core of statistical learning theory lies the concentration of measure phenomenon, which posits that a stable function of a large number of weakly dependent random variables will take values close to its mean [37, 9]. This is relevant because model risk, the expected loss on unseen data, is the mean of empirical risk under the draw of the sample. Learning theories can thus be built on concentration of measure results. In the present work, we focus on PAC-Bayesian learning theory [51, 43, 42]. For an overview of PAC-Bayesian theory we refer to [13, 27, 1]. In addition to a concentration result such as a moment-generating function (MGF) bound, PAC-Bayesian arguments employ a change of measure, typically via Donsker and Varadhan's variational formula. In particular, stochastic predictors, i.e. distributions over a hypothesis class of models, are considered. A core objective is to construct generalization bounds which hold uniformly over all stochastic predictors, called PAC-Bayes posteriors. Model complexity is then measured as relative entropy to a reference stochastic predictor called the PAC-Bayes prior. As this terminology alludes to, the PAC-Bayes posterior is informed by more data than the PAC-Bayes prior. Note, however, that PAC-Bayesian theory generalizes Bayesian theory because prior and posterior are not typically connected via a likelihood. PAC-Bayesian theory has gained considerable attention in recent years due to the demonstration of non-vacuous risk certificates in deep learning [23]. Since then, a line of research has succeeded in tightening the risk bounds of deep classifiers [48, 47, 16]. In addition, multiple authors have studied ways to weaken the underlying assumptions of bounded loss [29, 28] and i.i.d. data [3]. The majority of recent works in this field focuses on classification or regression. Structured prediction has received comparatively little recent attention, an exception being the work [11] which provides a PAC-Bayesian perspective on the implicit loss embedding framework for structured prediction [15].

1.2 Related Work

Here, we continue a line of research started by [39, 40, 38] which aims to construct PAC-Bayesian risk bounds for structured prediction that account for generalization from a single (but large) structured datum. Instrumental to their analysis is the stability of inference and quantified dependence in the data distribution. The latter is expressed in terms of ϑ-mixing coefficients, the total variation distance between data distributions conditioned on fixed values of a subset of variables. For structured prediction with Hamming loss, a coupling of such conditional measures can be constructed [24] such that ϑ-mixing coefficients yield an upper bound that allows one to invoke concentration of measure arguments [36]. The result is an MGF bound which the authors employ in a subsequent PAC-Bayesian construction, achieving generalization from a single datum. The underlying assumption of these previous works is that data are generated by a Markov random field (MRF). This model assumption is somewhat limiting because Markov properties, i.e. certain conditional independences, likely do not hold for many real-world data distributions. In addition, MRFs are difficult to work with computationally. Exact inference in MRFs is NP-hard [57] and thus learning, which often contains inference as a subproblem, presents significant computational roadblocks.
Even once an MRF has been found which represents data reasonably well, numerical evaluation of the PAC-Bayesian risk certificate proposed in [38] will again be computationally difficult. Nevertheless, we share the sentiment of previous authors that some assumption on the data-generating process is required in structured prediction. This is because conditional data distributions, the distribution of data conditioned on a fixed set of values for a subset of variables, are central to establishing concentration of measure via the martingale method [35]. Consider again the example of image segmentation. Once we have fixed a sufficiently large number of pixels to arbitrary values (and class labels), even a large dataset will not contain an abundance of data which match these values and thus provide statistical power to learn the conditional distribution. This problem is well-known in conditional density estimation [56].

1.3 Contribution, Organization

In this work, we propose to instead assume a triangular and monotone transport, a Knothe-Rosenblatt (KR) rearrangement [32, 49, 12, 8, 41] of a reference measure, as the data model. This choice is attractive for multiple reasons. First, any data distribution which does not contain atoms can be represented uniquely in this way [6], which should suffice to represent many distributions of practical interest. With regard to conditional distributions, the KR-rearrangement has the convenient property that conditioning on a fixed value for a subset of variables can again be represented by a KR-rearrangement. We will use this property in our construction of coupling measures between conditional distributions. Specifically, we present a novel PAC-Bayesian risk bound for structured prediction wherein the rate of generalization scales not only with the number of structured examples but also with their size. Based on data generated by KR-rearrangement of a tractable reference measure, we distill relevant structure of the data distribution into a Wasserstein dependency matrix. Our analysis hinges on state-of-the-art results in concentration theory [35] which serve to bound moment-generating functions by properties of the Wasserstein dependency matrix. We subsequently invoke a PAC-Bayesian argument to derive the desired risk certificate. We also propose to leverage a construction of bad input data as a computational tool to find entries of the Wasserstein dependency matrix.

We stress the fact that many established approaches to generative modelling can be seen as instances of measure transport. This class includes, for instance, normalizing flows [54, 53, 33, 46, 50], diffusion models [52, 30], generative adversarial networks and variational autoencoders [10, 26]. While most measure transport models which currently enjoy empirical success are not KR-rearrangements, we hope that the methods presented here can lay the foundation for leveraging powerful generative models to build risk certificates for discriminative downstream tasks.

Organization The central concepts of concentration theory and measure transport are described in the preliminary Sections 2 and 3. The main results are presented in Section 4. In Section 5 we present a first discussion of computational aspects related to the presented framework. The paper closes with a discussion of limitations in Section 6 and a conclusion in Section 7.

Basic Notation For any $d \in \mathbb{N}$, denote $[d] = \{1, \dots, d\}$. If $z \in \mathcal{Z}^d$ is a vector, we refer to the subvector of entries with index in a set $I \subseteq [d]$ as $z_I$.
In particular, index sets of interest will be half-open and closed intervals $(i,d] \subseteq [d]$ and $[i,d] \subseteq [d]$. Analogously, we will index the output of vector-valued functions $f_I$ and marginal measures $\mu_I$. For a set $B \subseteq \mathcal{Z}^d$, we denote its complement in $\mathcal{Z}^d$ by $B^c = \mathcal{Z}^d \setminus B$ and, for a measure $\mu$ on $\mathcal{Z}^d$, we denote the conditional measure given $B^c$ by $\mu|_{B^c}$. If $Z$ is a random variable with distribution $\mu$ on $\mathcal{Z}^d$ and $I, J \subseteq [d]$ are disjoint index sets with $I \cup J = [d]$, we denote the conditional law of $Z_I$ given $Z_J = z_J$ by $\mu(dz_I \mid z_J)$.

2 Concentration of Measure and Generalization

Let $\mathcal{X}$ denote an input space and $\mathcal{Y}$ denote an output space. Let $\mu$ be a distribution on $\mathcal{Z}^d = (\mathcal{X} \times \mathcal{Y})^d$. There are two restrictions inherent to this setup. First, an input is always paired with an output and thus the number of inputs needs to match the number of outputs. Second, all structured data will be drawn from $\mu$ and thus the size of each structured datum will be the same. Otherwise, $\mathcal{X}$ and $\mathcal{Y}$ can in principle be arbitrary sets which admit metrics. For concreteness, think of $\mathcal{X} = [0,1] \subset \mathbb{R}$ as being a set of gray values and $\mathcal{Y} = \mathbb{R}$ as containing signed distances from a semantic boundary [45] in an image with $d$ pixels. In this case, $\mathcal{Z}^d$ contains all binary segmentations of gray-value images. The goal of learning is to find parameters $\theta$ which define a predictor $\phi_\theta\colon \mathcal{X}^d \to \mathcal{Y}^d$ such that the risk

$$R(\theta) = \mathbb{E}_{(X,Y)\sim\mu}[L(\phi_\theta(X), Y)] \tag{1}$$

with respect to some loss function $L\colon \mathcal{Y}^d \times \mathcal{Y}^d \to \mathbb{R}$ is minimized. However, the true risk (1) is typically intractable because $\mu$ is unknown. A related tractable quantity is the empirical risk

$$R_m(\theta, D_m) = \frac{1}{m} \sum_{k\in[m]} L(\phi_\theta(X^{(k)}), Y^{(k)}) \tag{2}$$

of a sample $D_m = (X^{(k)}, Y^{(k)})_{k\in[m]}$ drawn from $\mu^m$. We further assume that the loss of structured outputs is the mean of a bounded pointwise loss $\ell\colon \mathcal{Y} \times \mathcal{Y} \to [0,1]$,

$$L(\tilde{y}^{(k)}, y^{(k)}) = \frac{1}{d} \sum_{i\in[d]} \ell(\tilde{y}^{(k)}_i, y^{(k)}_i) . \tag{3}$$

In PAC-Bayesian constructions, we consider stochastic predictors $\zeta$, i.e. measures on a hypothesis space $\mathcal{H}$ of predictors $\phi_\theta\colon \mathcal{X}^d \to \mathcal{Y}^d$, identified with measures on the underlying parameter space from which $\theta$ is selected. We then take the above notions of risk and empirical risk in expectation over parameter draws,

$$R(\zeta) = \mathbb{E}_{\theta\sim\zeta}[R(\theta)], \qquad R_m(\zeta, D_m) = \mathbb{E}_{\theta\sim\zeta}[R_m(\theta, D_m)] . \tag{4}$$

The expected value of empirical risk with respect to the sample is the true risk. For this reason, a central tool for the study of generalization is the concentration of measure phenomenon. Informally, it states that a stable function of many weakly dependent random variables concentrates on its mean. We will invoke a line of reasoning put forward in [35] and propose a novel approach to structured prediction based on the measure-transport framework outlined in Section 1. To this end, we first define the following formal notions of stability and dependence. Let $\rho$ be a metric such that $\mathcal{Z}$ has finite diameter

$$\|\rho\|_\infty = \sup_{\xi,\xi'\in\mathcal{Z}} \rho(\xi, \xi') < \infty \tag{5}$$

and let $\rho_d(z, z') = \sum_{i\in[d]} \rho(z_i, z_i')$ denote the corresponding product metric on $\mathcal{Z}^d$.

Definition 1 (Local oscillation). Let $f\colon \mathcal{Z}^d \to \mathbb{R}$ be Lipschitz with respect to $\rho_d$. Then the quantities

$$\delta_i(f) = \sup_{\substack{z, z' \in \mathcal{Z}^d \\ z_{[d]\setminus\{i\}} = z'_{[d]\setminus\{i\}}}} \frac{|f(z) - f(z')|}{\rho(z_i, z_i')} , \quad i \in [d] \tag{6}$$

are called the local oscillations of $f$. The vector of local oscillations gives a granular account of stability.
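For intuition, the following minimal sketch (our own illustration, not from the paper) computes the local oscillations of Definition 1 by brute force on a toy finite space $\mathcal{Z} = \{0,1\}$ with the discrete metric and $d = 3$; the example function mimics the mean-of-pointwise-losses structure of (3). All names are hypothetical.

```python
# Minimal sketch (illustrative assumptions): brute-force local oscillations of
# Definition 1 for a finite space Z = {0, 1}, d = 3 coordinates, and the
# discrete metric rho(a, b) = 1{a != b}.
import itertools
import numpy as np

d = 3
Z = [0, 1]
rho = lambda a, b: float(a != b)

def f(z):
    # Example Lipschitz function: fraction of coordinates equal to 1,
    # mimicking a mean of pointwise losses as in equation (3).
    return sum(z) / d

def local_oscillations(f, d):
    """delta_i(f): sup over pairs differing only in coordinate i of
    |f(z) - f(z')| / rho(z_i, z'_i)."""
    delta = np.zeros(d)
    for z in itertools.product(Z, repeat=d):
        for i in range(d):
            for zi_new in Z:
                if zi_new == z[i]:
                    continue  # rho would vanish; pair not admissible
                z_prime = z[:i] + (zi_new,) + z[i + 1:]
                ratio = abs(f(z) - f(z_prime)) / rho(z[i], zi_new)
                delta[i] = max(delta[i], ratio)
    return delta

print(local_oscillations(f, d))  # each entry equals 1/d here
```

For this particular $f$, every local oscillation equals $1/d$, illustrating how a mean of bounded pointwise losses becomes increasingly stable in each individual coordinate as $d$ grows.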
In order to discuss interdependence of data in a probability space $(\mathcal{Z}^d, \Sigma, \mu)$, define the Markov kernels

$$K^{(i)}(z, dw) = \delta_{z_{[i-1]}}(dw_{[i-1]}) \otimes \mu_{[i,d]}(dw_{[i,d]} \mid z_{[i-1]}), \quad i \in [d] \tag{7}$$

as well as $K^{(d+1)}(z, dw) = \delta_z(dw)$, and their action on functions

$$K^{(i)}f(z) = \int f(w)\, K^{(i)}(z, dw) = \int f(z_{[i-1]} w_{[i,d]})\, \mu_{[i,d]}(dw_{[i,d]} \mid z_{[i-1]}) \tag{8}$$

where in the edge case $i = 1$ the conditioning on $z_{[i-1]}$ is removed. Here $K^{(i)}(z, dw)$ is a Borel measure for every $z$ and (8) computes the expected value of $f$ at $z$, conditioned on the fixed realization of the subvector $z_{[i-1]}$. It turns out that the effect of the kernel (7) on local oscillations serves to quantify dependence of data with joint distribution $\mu$.

Definition 2 (Wasserstein matrix). For $i \in [d+1]$, let $K^{(i)}$ denote the Markov kernel (7). A matrix $V^{(i)} \in \mathbb{R}^{d\times d}_{\geq 0}$ is called a Wasserstein matrix [25] for $K^{(i)}$ if

$$\delta_k(K^{(i)}f) \leq \sum_{j\in[d]} V^{(i)}_{kj}\, \delta_j(f), \quad k \in [d] \tag{9}$$

for any function $f\colon \mathcal{Z}^d \to \mathbb{R}$ which is Lipschitz with respect to $\rho_d$. The two concepts defined above will be used in Section 4 to construct a moment-generating function bound via the martingale method.

3 Triangular Measure Transport

Suppose a structured output is composed of $d > 0$ unstructured data in a space $\mathcal{Z}$. Then the target measure $\mu$ of interest is a measure on $\mathcal{Z}^d$ which does not factorize into simpler distributions. A popular method of representing complex joint distributions of interdependent random variables is to define a map $T\colon \mathcal{Z}^d \to \mathcal{Z}^d$ which transports a tractable factorizing reference measure $\nu^d$ to the target measure $\mu$, i.e. $T_\sharp \nu^d = \mu$. This abstract framework encompasses many generative models such as normalizing flows [54, 53, 33, 46, 50], diffusion models [52, 30], generative adversarial networks and variational autoencoders [10, 26]. Here, we focus on transport maps $T$ which are monotone and triangular in the sense that $T(z)_i$ only depends on the inputs $z_{[i]}$ and each $T(z_{[i-1]}, \cdot)_i$ is an increasing function. Such a map is called a Knothe-Rosenblatt (KR) rearrangement [32, 49, 12, 8, 41]. If both $\nu^d$ and $\mu$ have no atoms, then the KR rearrangement exists and is unique [6]. In particular, a normal distribution $\nu^d$ and any absolutely continuous (with respect to the Lebesgue measure) distribution $\mu$ meet these criteria. The KR rearrangement has the useful property that certain conditional distributions have a simple representation.

Lemma 3 (Lemma 1 of [41]). Let $T\colon \mathcal{Z}^d \to \mathcal{Z}^d$ be the KR-rearrangement which satisfies $T_\sharp \nu^d = \mu$. For arbitrary $i \in [d]$, let $z_{[i]} \in \mathcal{Z}^i$ be fixed. Then

$$\mu(dw_{(i,d]} \mid z_{[i]}) = T_{(i,d]}(\hat{z}_{[i]}, \cdot)_\sharp\, \nu^{d-i} \tag{10}$$

where $\hat{z}_{[i]}$ is the unique element of $\mathcal{Z}^i$ such that $T_{[i]}(\hat{z}_{[i]}) = z_{[i]}$.

Numerical realization of KR rearrangements has recently received attention [5] and, more broadly, a variety of triangular transport architectures exists [20, 21]. However, we do not focus on numerical considerations in the present theoretical work. One may wonder whether data generated by KR-rearrangement implicitly restricts the choice of possible input and output spaces, since the monotonicity requirement on $T$ can only be satisfied if the underlying set $\mathcal{Z}$ is ordered. However, many feature spaces of practical interest are still permissible for what follows. In particular, both $\mathcal{X}$ and $\mathcal{Y}$ may be Euclidean spaces or hypercubes. In keeping with the introductory image segmentation example, suppose $\mathcal{X} = [0,1]^3$ contains RGB color values and $\mathcal{Y} = [-1,1]$ contains signed distances from a semantic boundary in the image plane. Then we can interpret $\mathcal{Z}^d = ([0,1]^3 \times [-1,1])^d$ as a product of compact intervals in $\mathbb{R}^{4d}$, which is clearly permissible as underlying space for KR-rearrangement. All presented results hold irrespective of whether $\mathcal{Z}$ contains such internal dimensions, as long as a natural ordering of the set $\mathcal{Z}$ is induced.
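To illustrate the triangular structure and the conditioning property of Lemma 3, the following sketch (our own toy construction, not from the paper) builds a two-dimensional KR rearrangement pushing the uniform reference on $[0,1]^2$ to a correlated bivariate Gaussian and checks the conditional law empirically; the specific map and correlation value are arbitrary choices for illustration.

```python
# Minimal sketch (illustrative assumptions): a 2-D Knothe-Rosenblatt map T
# pushing the uniform reference nu^2 on [0,1]^2 to a correlated Gaussian,
# plus the conditioning property of Lemma 3. Requires numpy and scipy.
import numpy as np
from scipy.stats import norm

r = 0.8  # target correlation

def T1(z1):
    # first component depends on z1 only and is increasing
    return norm.ppf(z1)

def T2(z1, z2):
    # second component is increasing in z2 for fixed z1 (triangular, monotone)
    return r * T1(z1) + np.sqrt(1.0 - r ** 2) * norm.ppf(z2)

rng = np.random.default_rng(0)
z = rng.uniform(size=(100_000, 2))            # sample from the reference nu^2
x = np.stack([T1(z[:, 0]), T2(z[:, 0], z[:, 1])], axis=1)
print(np.corrcoef(x.T)[0, 1])                 # approx r: the push-forward is the target

# Lemma 3: condition on a value a for the first coordinate. The unique zhat
# with T1(zhat) = a is zhat = Phi(a); then w -> T2(zhat, w) pushes the
# one-dimensional reference to the conditional law mu(dx2 | x1 = a).
a = 0.5
zhat = norm.cdf(a)
cond = T2(zhat, rng.uniform(size=100_000))
print(cond.mean(), cond.var())                # approx r*a and 1 - r**2
```

The empirical mean and variance of the conditional samples match the known Gaussian conditional, which is exactly the mechanism exploited in the coupling construction of Section 4.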
4 PAC-Bayesian Risk Certificate

In this section we present a novel PAC-Bayesian risk bound for structured prediction which combines three main ingredients: (1) a concentration of measure theorem for dependent data (Theorem 4) which builds on the notion of a Wasserstein dependency matrix; (2) a simple construction of coupling measures between conditional distributions (Lemma 5) which serves to represent the Wasserstein dependency matrix; (3) a PAC-Bayesian argument (Theorem 7) employing Donsker-Varadhan's variational formula in concert with concentration of measure results.

The first theorem summarizes key results from [35] on the concentration of measure phenomenon for dependent random variables. We have slightly generalized them by augmenting the underlying Doob martingale construction with the inclusion of a set $B$ of bad inputs. For inputs in this set, data stability requirements do not necessarily hold. We call the complement $B^c = \mathcal{Z}^d \setminus B$ the set of good inputs. The concept of good and bad inputs as well as related proof techniques were originally proposed by [38]. Here, we incorporate them into the more general concentration of measure formalism of [35]. A full proof of the following theorem is deferred to Appendix A.

Theorem 4 (Moment-generating function (MGF) bound for good inputs). Let $B \subseteq \mathcal{Z}^d$ be a measurable set of bad inputs. Suppose for each $i \in [d+1]$, $V^{(i)}$ is a Wasserstein matrix for the Markov kernel $K^{(i)}$ defined in (7) on the set of good inputs, that is

$$\delta_k(K^{(i)}\tilde{f}) \leq \sum_{j\in[d]} V^{(i)}_{kj}\, \delta_j(\tilde{f}), \quad k \in [d] \tag{11}$$

for all Lipschitz (with respect to $\rho_d$) functions $\tilde{f}\colon B^c \to \mathbb{R}$. Define the Wasserstein dependency matrix $\Gamma \in \mathbb{R}^{d\times d}$,

$$\Gamma_{ij} = \|\rho\|_\infty\, V^{(i+1)}_{ij} . \tag{12}$$

Then for all Lipschitz functions $f\colon \mathcal{Z}^d \to \mathbb{R}$ and all $\lambda \in \mathbb{R}$, the following MGF bound holds:

$$\mathbb{E}_{z\sim\mu|_{B^c}}\Big[\exp\big(\lambda (f(z) - \mathbb{E}_{\mu|_{B^c}} f)\big)\Big] \leq \exp\Big(\frac{\lambda^2}{8}\, \big\|\Gamma\, \delta(f)\big\|_2^2\Big) . \tag{13}$$

An upper bound on the moment generating function will be used in the PAC-Bayesian argument concluding this section. The function $f$ in question will be the loss of a structured datum $z$. Regarding (13), our goal is to bound the norm $\|\Gamma\,\delta(f)\|_2^2$ through properties of the data distribution. We will use the fact that data are represented by measure transport to establish such a bound after the following preparatory lemma.

Lemma 5 (Coupling from transport). Let $\nu^d$ be a reference measure on $\mathcal{Z}^d$ and $F, G\colon \mathcal{Z}^d \to \mathcal{Z}^d$ be measurable maps. Define the map $(F, G)$ by

$$(F, G)\colon \mathcal{Z}^d \to \mathcal{Z}^d \times \mathcal{Z}^d, \quad z \mapsto (F(z), G(z)) . \tag{14}$$

Then $(F, G)_\sharp \nu^d$ is a coupling of $F_\sharp \nu^d$ and $G_\sharp \nu^d$.

Proof. Let $A \subseteq \mathcal{Z}^d$ be measurable. Then

$$(F, G)_\sharp \nu^d(A \times \mathcal{Z}^d) = \nu^d\big((F, G)^{-1}(A \times \mathcal{Z}^d)\big) = \nu^d(F^{-1}(A)) = F_\sharp \nu^d(A) , \tag{15}$$

which shows that $F_\sharp \nu^d$ is the first marginal of $(F, G)_\sharp \nu^d$. An analogous argument for the second marginal shows the assertion.

By assuming $\mu$ to be represented via KR-rearrangement of a factorizing reference measure, Lemma 3 gives an explicit representation of KR-rearrangement for conditional distributions. From there, we invoke Lemma 5 to construct a coupling between conditional distributions and subsequently follow a line of reasoning put forward in [35] to explicitly construct Wasserstein matrices for the kernels (7) which yield a bound on (13) by Theorem 4. This leads to the following proposition. A full proof is deferred to Appendix A.

Proposition 6 (Wasserstein dependency matrix from KR-rearrangement). Let $(\mathcal{Z}^d, \Sigma, \mu)$ be a probability space with $\mu = T_\sharp \nu^d$ for the KR-rearrangement $T\colon \mathcal{Z}^d \to \mathcal{Z}^d$ and a reference measure $\nu^d$ on $\mathcal{Z}^d$.
Let $\mathcal{Z}$ be equipped with a metric $\rho$ and have finite diameter $\|\rho\|_\infty < \infty$. Let $f\colon \mathcal{Z}^d \to \mathbb{R}$ be a Lipschitz function with respect to the product metric $\rho_d$. Let $B \subseteq \mathcal{Z}^d$ denote a set of bad inputs and define the corresponding set $A = T^{-1}(B) \subseteq \mathcal{Z}^d$. Let $\hat{T}$ be the unique KR-rearrangement that satisfies $\hat{T}_\sharp \nu^d = \nu^d|_{A^c}$ and denote $\tilde{T} = T \circ \hat{T}$. Suppose there exist constants $L_{ij}$ such that for all $v, z \in B^c$ with $v_{[d]\setminus\{i\}} = z_{[d]\setminus\{i\}}$ it holds that

$$\mathbb{E}_{\tau\sim\nu_{(i,d]}}\Big[\rho\big(\tilde{T}_{(i,d]}(\hat{v}_{[i]}, \tau)_j,\, \tilde{T}_{(i,d]}(\hat{z}_{[i]}, \tau)_j\big)\Big] \leq L_{ij}\, \rho(v_i, z_i) \tag{16}$$

where $\hat{v}_{[i]}$ and $\hat{z}_{[i]}$ are uniquely defined through $\tilde{T}_{[i]}(\hat{v}_{[i]}) = v_{[i]}$ and $\tilde{T}_{[i]}(\hat{z}_{[i]}) = z_{[i]}$. Then $\Gamma = \frac{\|\rho\|_\infty}{d} D$ is a Wasserstein dependency matrix for $\mu|_{B^c}$, where $D$ has entries

$$D_{ij} = \begin{cases} 0 & \text{if } i > j , \\ 1 & \text{if } i = j , \\ L_{ij} & \text{if } i < j . \end{cases} \tag{17}$$

We remark that $\Gamma$ indeed distills the dependency structure of $\mu$. To illustrate this, let $\mu_{\{i\}}$ be independent from $\mu_{\{j\}}$ for some $i \in [d]$, $j \in (i, d]$. Then conditioning on a different value of the $i$-th variable does not change the distribution $\mu_{\{j\}}$. Thus,

$$\tilde{T}_{(i,d]}(\hat{x}_{[i]}, \tau)_j = \tilde{T}_{(i,d]}(\hat{z}_{[i]}, \tau)_j, \qquad \tau \in \mathcal{Z}^{d-i}, \quad x_{[d]\setminus\{i\}} = z_{[d]\setminus\{i\}}, \quad j \in (i, d] \tag{18}$$

and the choice $L_{ij} = 0$ satisfies (16). It directly follows that $\Gamma_{ij} = 0$. The following theorem states the main result of the present work.

Theorem 7 (PAC-Bayesian risk certificate for structured prediction). Fix $\delta \in (0, \exp(-e^{-1}))$, let $\mu$ be a data distribution on $\mathcal{Z}^d$ with $T_\sharp \nu^d = \mu$ the Knothe-Rosenblatt rearrangement for a reference measure $\nu^d$ on $\mathcal{Z}^d$, and fix a measurable set $B \subseteq \mathcal{Z}^d$ of bad inputs with $\mu(B) \leq \xi$. Fix a PAC-Bayes prior $\pi$ on a hypothesis class $\mathcal{H}$ of functions $\phi\colon \mathcal{X}^d \to \mathcal{Y}^d$ and a loss function $\ell$ which assumes values in $[0,1]$. Define the oscillation vector $\tilde{\delta}$ by

$$\tilde{\delta}_i = \sup_{h\in\mathcal{H}} \delta_i\big(L(h, \cdot)|_{B^c}\big), \quad i \in [d] \tag{19}$$

where $L(h, \cdot)|_{B^c}$ denotes the restriction of $L(h, \cdot)$ to $\mathcal{Z}^d \setminus B$. Suppose all oscillations $\tilde{\delta}_i$ are finite, suppose the condition (16) is satisfied and denote by $D$ the matrix with entries (17). Then, with probability at least $1 - \delta$ over realizations of a training set $D_m = (Z^{(k)})_{k=1}^m$ drawn from $(\mu|_{B^c})^m$, it holds for all PAC-Bayes posteriors $\zeta$ on $\mathcal{H}$ that

$$R(\zeta) \leq R_m(\zeta, D_m) + \frac{2\|\rho\|_\infty}{d}\, \big\|D\tilde{\delta}\big\|_2\, \sqrt{\frac{\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]}{2m}} + \xi . \tag{20}$$

Note that the generalization gap on the right hand side of (20) decays with $d$. This accounts for generalization from a single example. In fact, if only $m = 1$ structured example is available, but $d \gg 1$, Theorem 7 still certifies risk. This effect can, however, be negated by the norm $\|D\tilde{\delta}\|_2$. If structured data contain strong global dependence, then $\|D\tilde{\delta}\|_2$ will not be bounded independently of $d$ and thus, in the worst case of $\|D\tilde{\delta}\|_2 \in O(d)$, the assertion is no stronger than PAC-Bayesian bounds for unstructured data. The same point was observed in [38].

The measure of the bad set under the data distribution $\mu$ is assumed to be bounded by $\xi$. This is to account for a small number of data which contain strong dependence. In order to prevent these bad data from dominating $D$, thereby negating the decay of the bound in $d$ as described above, it is preferable to exclude them from the sample, reduce the sample size $m$, and pay the penalty $\xi$ in (20).

To prove Theorem 7, we broadly follow the argument put forward in [38]. This augments typical PAC-Bayesian constructions in the literature by the inclusion of a set of bad inputs. We first reconcile the data being conditioned on $B^c$ with risk certification for unconditioned data, leading to the addition of $\xi$ on the right hand side of (20). The model complexity term $\mathrm{KL}[\zeta : \pi]$ is due to Donsker and Varadhan's variational formula. Subsequently, the moment generating function bound of Theorem 4 is instantiated through the Wasserstein dependency matrix constructed in Proposition 6. Markov's inequality then gives a pointwise risk bound for a fixed value of a free parameter. In order to optimize this parameter, the bound is made uniform over a discrete set of values through a union bound. A full proof is presented in Appendix A.
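To make the scaling behaviour of the certificate concrete, the following sketch evaluates the generalization-gap term of our reading of (20), i.e. $\frac{2\|\rho\|_\infty}{d}\|D\tilde\delta\|_2\sqrt{(\log\frac{1}{\delta}+\mathrm{KL})/(2m)} + \xi$, for a banded dependency matrix in which every variable depends on $c$ neighbours. All numerical inputs (the KL value, $\xi$, neighbourhood size) are hypothetical, and the closed form encodes our reconstruction of the bound rather than a verified implementation.

```python
# Minimal sketch (hypothetical numbers; formula follows our reading of (20)).
import numpy as np

def certificate_gap(D, delta_vec, rho_inf, m, kl, delta, xi):
    """Generalization-gap term: (2*rho_inf/d)*||D delta~||_2*sqrt((log(1/delta)+kl)/(2m)) + xi."""
    d = D.shape[0]
    dep_norm = np.linalg.norm(D @ delta_vec)   # ||D delta~||_2
    return 2.0 * rho_inf / d * dep_norm * np.sqrt((np.log(1.0 / delta) + kl) / (2.0 * m)) + xi

c, alpha = 5, 1.0   # each variable depends on c neighbours, oscillations delta~_i = alpha
for d in (100, 400, 1600):
    D = np.zeros((d, d))
    for i in range(d):
        D[i, i:min(i + c, d)] = 1.0            # banded, upper-triangular dependency
    gap = certificate_gap(D, alpha * np.ones(d), rho_inf=1.0, m=1, kl=10.0, delta=0.05, xi=0.01)
    # Values above 1 are vacuous; the point is the 1/sqrt(d) decay for localized dependence.
    print(d, round(gap, 3))
```

With localized dependence, $\|D\tilde\delta\|_2$ grows like $\sqrt{d}$ while the explicit $1/d$ factor shrinks the gap, so the printed values roughly halve when $d$ is quadrupled, matching the rate discussion below.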
In Theorem 7, we combine the PAC-Bayesian construction of [38] with the more general concentration of measure theory of [35]. Crucially, the concentration of measure results used in [38] are predicated on the assumption of data generated by a Markov random field [57, 34]. Our work is more flexible in two major ways. (1) Our assumption on the data-generating distribution is likely more representative of real-world data, as measure transport models have repeatedly been shown to yield convincing data generators. (2) Markov random fields are difficult to handle computationally because inference in general Markov random fields is NP-hard [57], such that one is forced to learn based on approximate inference procedures [31, 22, 55]. Our work also allows for more general metrics $\rho$, as opposed to the singular choice of the Hamming metric required in [38]. Additionally, the key results of [38] are constructed to ensure that all data drawn from the unconditioned distribution $\mu$ are in the good set. This reduces the probability of correctness $1 - \delta$ by $m\xi$. Instead, we assume data drawn from $\mu|_{B^c}$, effectively reducing the number of available samples by a factor of $1 - \xi$, but keeping the probability of correctness high. This allows the set of bad inputs to be used more effectively as a computational tool in Section 5.

Comparing the dependency of (20) on $d$ with the respective result in [38], it first appears as though our bound decays with a faster rate ($d$ instead of $\sqrt{d}$). However, this will not typically be the case in practice, because $\|D\tilde{\delta}\|_2$ grows with rate $\sqrt{d}$ in most situations. To see this, consider the case of local dependency in the sense that

$$L_{ij} = \begin{cases} 1 & \text{if } j \in N_i , \\ 0 & \text{else} \end{cases} \tag{21}$$

for local neighborhoods $N_i \subseteq [d]$ which contain a fixed number of $c$ elements, and let $\tilde{\delta}_i = \alpha$ for all $i \in [d]$ and some constant value $\alpha > 0$. Then

$$\|D\tilde{\delta}\|_2 = \sqrt{\sum_{i\in[d]} \big(\alpha |N_i|\big)^2} = c\alpha\sqrt{d} . \tag{22}$$

Clearly, if dependence is localized and the oscillations $\tilde{\delta}$ do not decay in $d$, then $\|D\tilde{\delta}\|_2$ contains a factor that grows with rate $\sqrt{d}$, leading to the same asymptotic rate of (20) as already observed in [38].

Note that [38] additionally allows for a set of bad hypotheses $B_{\mathcal{F}} \subseteq \mathcal{H}$ which do not conform to stability assumptions. In our construction, this means restricting the supremum in (19) to oscillations on the set of good hypotheses. We omit this extension for clarity of exposition, but do not expect it to necessitate major changes to the presented proofs. The same applies to the derandomization strategy proposed by [38], which is based on hypothesis stability. Further, [38] considers a large number of applicable orderings for random variables by introducing a filtration of their index set. This notion is not easily compatible with our assumption of KR-rearrangement, because triangularity of transport depends on the order of variables.

5 Bounding the Bad Set

With regard to numerical risk certificates, a key technical aspect of Section 4 concerns the quantities $L_{ij}$ in (16). Here, we propose a way to use the set of bad inputs as a computational tool to this end. Suppose we assign arbitrary fixed values to $L_{ij}$ and subsequently define $B \subseteq \mathcal{Z}^d$ as the set of inputs on which the condition (16) fails. Then we have fulfilled the prerequisites of Proposition 6 by construction and are left with bounding $\mu(B)$.
Note that

$$\mu(B) = \mathbb{P}_{Z\sim\mu}(Z \in B) = \mathbb{E}_{Z\sim\mu}\big[\mathbb{1}\{Z \in B\}\big] \tag{23}$$

and the indicator function $\mathbb{1}$ assumes values in the bounded set $\{0, 1\}$. Therefore, Hoeffding's inequality gives the following.

Proposition 8 (Upper bound on the bad set). Let $\mu$ be a data distribution and let $\tilde{D}_n \sim \mu^n$ be a sample of size $n$. Fix an error probability $\epsilon \in (0, 1)$. Then

$$\mu(B) \leq \frac{1}{n} \sum_{Z \in \tilde{D}_n} \mathbb{1}\{Z \in B\} + \sqrt{\frac{\log\frac{1}{\epsilon}}{2n}} \tag{24}$$

with probability at least $1 - \epsilon$ over the sample.

Checking the condition $Z \in B$ requires evaluating (16), which comes down to finding a Lipschitz constant for a one-dimensional function. If this is computationally feasible for the given data model, then the concentration argument (24) can be used to bound $\mu(B)$ with high probability. Because (24) decays only in the number of structured examples, at rate $O(1/\sqrt{n})$, it cannot be used to show generalization from a single structured example. However, PAC-Bayesian risk certificates are typically dominated by the KL complexity term in (20), which decays with the size of structured examples as well. Thus, Proposition 8 should still be useful in practice.

Note that Proposition 8 makes a pointwise statement about a fixed value of $L_{ij}$, which has limited utility for learning $L_{ij}$ from data. To remedy this problem, we can first define a discrete set $\mathcal{L} = (L^{(k)})_{k\in[l]}$ of candidate matrices and select error probabilities $\epsilon_k$ for each of the events

$$\mu\big(B(L^{(k)})\big) > \frac{1}{n} \sum_{Z \in \tilde{D}_n} \mathbb{1}\big\{Z \in B(L^{(k)})\big\} + \sqrt{\frac{\log\frac{1}{\epsilon_k}}{2n}} \tag{25}$$

such that $\sum_{k\in[l]} \epsilon_k = \epsilon$. Then, by a union bound, with probability at least $1 - \epsilon$ over the sample none of the events (25) occurs. We have thus constructed a uniform bound over the set of candidate matrices which allows us to select the one which minimizes the generalization gap in (20). In order to make this strategy most effective, domain knowledge on the application at hand should be applied when constructing candidate matrices and assigning error probabilities. For instance, the limited empirical findings of [14] on ImageNet [17] indicate that the majority of natural images contain mostly local signal. In image segmentation, this is conducive to concentration, because it can lead to many small values in an optimal Wasserstein dependency matrix. In particular, if dependency decays with distance in the image domain, one should select configurations $L^{(k)}$ in which $L^{(k)}_{ij}$ is small if $i$ is distant from $j$ in the image domain and allowed to assume larger values if $i$ is close to $j$ in the image domain.

We give an intuitive interpretation of the relationship between Proposition 8 and Theorem 7 as follows. Suppose the majority of samples from a structured data distribution contain mostly local signal. The locality of signal in samples indicates weak global dependence of random variables, which in turn manifests in small entries of a Wasserstein dependency matrix. However, a small number of bad data may contain only weak local signal. For instance, an image in which every pixel has the same value does not give more information to a learner if it is doubled in size. Even worse, a small (but not null) set of bad data will dominate the Wasserstein dependency matrix and prevent generalization that scales with $d$. Proposition 8 thus estimates an upper bound on the likelihood of bad data under $\mu$, which are then excluded from the concentration argument underlying the bound (20). This relationship is further illustrated in a numerical toy example in Appendix C.
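A minimal sketch of how Proposition 8 and the union bound over candidate matrices could be used is given below; the membership test standing in for condition (16), the data, and all numbers are hypothetical stand-ins rather than the paper's procedure.

```python
# Minimal sketch (hypothetical setup): empirical bad-set bound as in
# Proposition 8, made uniform over candidates L^(1), ..., L^(l) by a union
# bound. `in_bad_set` is a stand-in for checking condition (16).
import numpy as np

def hoeffding_bad_set_bound(samples, in_bad_set, eps):
    """Empirical bad-set frequency plus the Hoeffding slack sqrt(log(1/eps)/(2n));
    the bound holds with probability >= 1 - eps over the sample."""
    n = len(samples)
    freq = np.mean([in_bad_set(z) for z in samples])
    return freq + np.sqrt(np.log(1.0 / eps) / (2.0 * n))

def uniform_bad_set_bounds(samples, candidates, eps_total):
    """One bound per candidate; splitting eps_total equally gives a union
    bound holding simultaneously for all candidates."""
    eps_k = eps_total / len(candidates)
    return {name: hoeffding_bad_set_bound(samples, test, eps_k)
            for name, test in candidates.items()}

# Toy usage: scalar "structured" data and two candidate bad-set definitions.
rng = np.random.default_rng(1)
data = rng.normal(size=5000)
candidates = {"L1": lambda z: abs(z) > 3.0, "L2": lambda z: abs(z) > 2.5}
print(uniform_bad_set_bounds(data, candidates, eps_total=0.05))
```

Each returned value is a candidate-specific $\xi$ that could be plugged into (20), and the candidate minimizing the resulting generalization gap can then be selected after the fact, exactly because the bound is uniform over the candidate set.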
6 Limitations

We assume that structured data are drawn from a distribution $\mu$ which arises from a reference measure through KR-rearrangement. This is motivated by the fact that a large class of data distributions can be represented in this way and that some assumption on the data-generating distribution is required in order to make statements on the conditional distributions needed to quantify dependence. However, it is an open question how closely a measure transport distribution learned from data approximates the unknown distribution from which the data are drawn. A first step towards this goal is made in the recent work [4], but a non-asymptotic theory of generalization for generative models is left for future work. Accordingly, we do not present any empirical results on real-world data.

We describe how dependency in the data distribution which is relevant for generalization of discriminative models can be written as properties of the KR-rearrangement through a Wasserstein dependency matrix. Section 5 further outlines how the construction of bad data can be used as a computational tool to this end. We do not cover how to compute the involved Lipschitz constants $L_{ij}$ in (16). In practice, this will require nontrivial numerical machinery because a deterministic bound on the expected value under the reference distribution appears needed. One possible approach is to employ quasi-Monte-Carlo methods [19, 18] which come with deterministic error bounds and have recently been applied to PAC-Bayesian certification of ordinary (non-structured) classification risk [7]. This entails a fine-grained stability analysis of the involved integrands and is likely to be computationally expensive, because deterministic error bounds tend to be pessimistic compared to their stochastic counterparts.

7 Conclusion

We have presented a PAC-Bayesian risk certificate for structured prediction. Compared to earlier work, we make the assumption of data generated by KR-rearrangement of a reference measure. This approach is able to represent all atom-free data distributions and yields an explicit quantification of dependence via Wasserstein dependency matrices. It also indicates a way of leveraging powerful generative models to compute risk certificates for downstream discriminative tasks.

Acknowledgments and Disclosure of Funding

This work is funded by the Deutsche Forschungsgemeinschaft (DFG), grant SCHN 457/17-1, within the priority programme SPP 2298 "Theoretical Foundations of Deep Learning". This work is funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).

References

[1] Pierre Alquier. User-friendly introduction to PAC-Bayes bounds. arXiv preprint arXiv:2110.11216, 2021.
[2] Pierre Alquier. User-friendly introduction to PAC-Bayes bounds, 2023.
[3] Pierre Alquier and Benjamin Guedj. Simpler PAC-Bayesian bounds for hostile data. Machine Learning, 107(5):887-902, 2018.
[4] Ricardo Baptista, Bamdad Hosseini, Nikola B. Kovachki, Youssef M. Marzouk, and Amir Sagiv. An approximation theory framework for measure-transport sampling algorithms. arXiv preprint arXiv:2302.13965, 2023.
[5] Ricardo Baptista, Youssef Marzouk, and Olivier Zahm. On the representation and learning of monotone triangular transport maps, July 2022. arXiv:2009.10303 [cs, math, stat].
[6] V. I. Bogachev, A. V. Kolesnikov, and K. V. Medvedev. Triangular transformations of measures. Sbornik: Mathematics, 196(3):309, April 2005.
[7] B. Boll, A. Zeilmann, S. Petra, and C. Schnörr. Self-Certifying Classification by Linearized Deep Assignment. PAMM: Proc. Appl. Math. Mech., 23(1):e202200169, 2023.
[8] N.
Bonnotte. From Knothe's Rearrangement to Brenier's Optimal Transport Map. SIAM J. Math. Anal., 45(1):64-87, 2013.
[9] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, February 2013.
[10] O. Bousquet, S. Gelly, I. Tolstikhin, C.-J. Simon-Gabriel, and B. Schölkopf. From optimal transport to generative modeling: the VEGAN cookbook. arXiv preprint arXiv:1705.07642, 2017.
[11] Théophile Cantelobre, Benjamin Guedj, María Pérez-Ortiz, and John Shawe-Taylor. A PAC-Bayesian perspective on structured prediction with implicit loss embeddings. arXiv preprint arXiv:2012.03780, 2020.
[12] G. Carlier, A. Galichon, and F. Santambrogio. From Knothe's Transport to Brenier's Map and a Continuation Method for Optimal Transport. SIAM Journal on Mathematical Analysis, 41(6):2554-2576, 2010.
[13] O. Catoni. PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning. Institute of Mathematical Statistics, 2007.
[14] Carlo Ciliberto, Francis Bach, and Alessandro Rudi. Localized Structured Prediction. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[15] Carlo Ciliberto, Lorenzo Rosasco, and Alessandro Rudi. A general framework for consistent structured prediction with implicit loss embeddings. The Journal of Machine Learning Research, 21(1):3852-3918, 2020.
[16] Eugenio Clerico, George Deligiannidis, and Arnaud Doucet. Conditionally Gaussian PAC-Bayes. In International Conference on Artificial Intelligence and Statistics, pages 2311-2329. PMLR, 2022.
[17] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
[18] Josef Dick, Frances Y. Kuo, and Ian H. Sloan. High-dimensional integration: the quasi-Monte Carlo way. Acta Numerica, 22:133-288, 2013.
[19] Josef Dick and Friedrich Pillichshammer. Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, 2010.
[20] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear Independent Components Estimation, April 2015. arXiv:1410.8516 [cs].
[21] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP, 2017.
[22] J. Domke. Learning Graphical Model Parameters with Approximate Marginal Inference. IEEE Trans. Pattern Analysis and Machine Intelligence, 35(10):2454-2467, 2013.
[23] Gintare Karolina Dziugaite and Daniel M. Roy. Data-dependent PAC-Bayes priors via differential privacy. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[24] Doris Fiebig. Mixing properties of a class of Bernoulli processes. Transactions of the American Mathematical Society, 338(1):479-493, 1993.
[25] H. Föllmer. Tail structure of Markov chains on infinite product spaces. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 50(3):273-285, January 1979.
[26] A. Genevay, G. Peyré, and M. Cuturi. GAN and VAE from an Optimal Transport Point of View. arXiv preprint arXiv:1706.01807, 2017.
[27] B. Guedj. A Primer on PAC-Bayesian Learning. CoRR, abs/1901.05353, 2019.
[28] Benjamin Guedj and Louis Pujol. Still No Free Lunches: The Price to Pay for Tighter PAC-Bayes Bounds. Entropy, 23(11):1529, November 2021.
[29] Maxime Haddouche, Benjamin Guedj, Omar Rivasplata, and John Shawe-Taylor. PAC-Bayes unleashed: Generalisation bounds with unbounded losses. Entropy, 23(10):1330, 2021.
[30] J. Ho, C. Saharia, W. Chan, D. J. Fleet, M. Norouzi, and T. Salimans. Cascaded Diffusion Models for High Fidelity Image Generation. J. Machine Learning Research, 23:1-33, 2022.
[31] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An Introduction to Variational Methods for Graphical Models. Machine Learning, 37:183-233, 1999.
[32] Herbert Knothe. Contributions to the theory of convex bodies. Michigan Mathematical Journal, 4(1):39-52, 1957.
[33] I. Kobyzev, S. J. D. Prince, and M. A. Brubaker. Normalizing Flows: An Introduction and Review of Current Methods. IEEE Trans. Pattern Anal. Mach. Intell., 43(11):3964-3979, 2021.
[34] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[35] Aryeh Kontorovich and Maxim Raginsky. Concentration of Measure Without Independence: A Unified Approach Via the Martingale Method. In Eric Carlen, Mokshay Madiman, and Elisabeth M. Werner, editors, Convexity and Concentration, volume 161 of The IMA Volumes in Mathematics and its Applications, pages 183-210. Springer New York, New York, NY, 2017.
[36] Leonid (Aryeh) Kontorovich and Kavita Ramanan. Concentration inequalities for dependent random variables via the martingale method. The Annals of Probability, 36(6):2126-2158, 2008.
[37] M. Ledoux. The Concentration of Measure Phenomenon. Amer. Math. Soc., 2001.
[38] Ben London, Bert Huang, and Lise Getoor. Stability and generalization in structured prediction. Journal of Machine Learning Research, 17(221):1-52, 2016.
[39] Ben London, Bert Huang, Ben Taskar, and Lise Getoor. Collective stability in structured prediction: Generalization from one example. In International Conference on Machine Learning, pages 828-836. PMLR, 2013.
[40] Ben London, Bert Huang, Ben Taskar, and Lise Getoor. PAC-Bayesian collective stability. In Artificial Intelligence and Statistics, pages 585-594. PMLR, 2014.
[41] Youssef Marzouk, Tarek Moselhy, Matthew Parno, and Alessio Spantini. Sampling via Measure Transport: An Introduction. In Roger Ghanem, David Higdon, and Houman Owhadi, editors, Handbook of Uncertainty Quantification, pages 1-41. Springer International Publishing, Cham, 2016.
[42] David A. McAllester. PAC-Bayesian model averaging. In Proceedings of the twelfth annual conference on Computational learning theory, pages 164-170, 1999.
[43] David A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355-363, December 1999.
[44] Waleed Mustafa, Yunwen Lei, Antoine Ledent, and Marius Kloft. Fine-grained analysis of structured output prediction. International Joint Conferences on Artificial Intelligence, 2021.
[45] Stanley Osher and Ronald Fedkiw. Level Set Methods and Dynamic Implicit Surfaces. Springer, 2003.
[46] G. Papamakarios, E. Nalisnick, D. J. Rezende, S. Mohamed, and B. Lakshminarayanan. Normalizing Flows for Probabilistic Modeling and Inference. J. Machine Learning Research, 22(57):1-64, 2021.
[47] Maria Perez-Ortiz, Omar Rivasplata, Benjamin Guedj, Matthew Gleeson, Jingyu Zhang, John Shawe-Taylor, Miroslaw Bober, and Josef Kittler. Learning PAC-Bayes priors for probabilistic neural networks. arXiv preprint arXiv:2109.10304, 2021.
[48] María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, and Csaba Szepesvári. Tighter Risk Certificates for Neural Networks.
Journal of Machine Learning Research, 22(227):1-40, 2021.
[49] Murray Rosenblatt. Remarks on a multivariate transformation. The Annals of Mathematical Statistics, 23(3):470-472, 1952.
[50] L. Ruthotto and E. Haber. An Introduction to Deep Generative Modeling. GAMM-Mitteilungen, 44(2), 2021.
[51] John Shawe-Taylor and Robert C. Williamson. A PAC analysis of a Bayesian estimator. In Proceedings of the tenth annual conference on Computational learning theory, pages 2-9, 1997.
[52] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole. Score-Based Generative Modeling Through Stochastic Differential Equations. In ICLR, 2021.
[53] Esteban G. Tabak and Cristina V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164, 2013.
[54] Esteban G. Tabak and Eric Vanden-Eijnden. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217-233, 2010.
[55] Vera Trajkovska, Paul Swoboda, Freddie Åström, and Stefania Petra. Graphical model parameter learning by inverse linear programming. In François Lauze, Yiqiu Dong, and Anders Bjorholm Dahl, editors, Scale Space and Variational Methods in Computer Vision, pages 323-334, Cham, 2017. Springer International Publishing.
[56] Brian L. Trippe and Richard E. Turner. Conditional Density Estimation with Bayesian Normalising Flows, February 2018. arXiv:1802.04908 [stat].
[57] M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. Found. Trends Mach. Learn., 1(1-2):1-305, 2008.

A Full Proofs of Presented Results

In this appendix, we present full proofs of all results which are not already complete in the main text.

Proof of Theorem 4. For any $i \in [d]$ and $z \in \mathcal{Z}^d$ define

$$M^{(i)} = \mathbb{E}_{Z\sim\mu}[f(Z) \mid B^c, Z_{[i]} = z_{[i]}] - \mathbb{E}_{Z\sim\mu}[f(Z) \mid B^c, Z_{[i-1]} = z_{[i-1]}] \tag{26}$$

with the edge case

$$M^{(1)} = \mathbb{E}_{Z\sim\mu}[f(Z) \mid B^c, Z_1 = z_1] - \mathbb{E}_{Z\sim\mu}[f(Z) \mid B^c] . \tag{27}$$

Due to $\mathbb{E}_{Z\sim\mu}[f(Z) \mid B^c, Z = z] = f(z)$ for $z \in B^c$, we have

$$f(z) - \mathbb{E}_{Z\sim\mu}[f(Z) \mid B^c] = \sum_{i=1}^d M^{(i)} . \tag{28}$$

Since the conditions $Z_{[i]} = z_{[i]}$ generate a nested sequence of $\sigma$-algebras, the quantities $K^{(i+1)}f(z) = \mathbb{E}_\mu[f(Z) \mid B^c, Z_{[i]} = z_{[i]}]$ are a Doob martingale and (26) is a martingale difference sequence. In order to bound the moment generating function of $f$, we will bound every $M^{(i)}$ from above and below and apply the Azuma-Hoeffding theorem (Theorem 10). We have

$$\begin{aligned} M^{(i)} &= \mathbb{E}_\mu[f(Z) \mid B^c, Z_{[i]} = z_{[i]}] - \mathbb{E}_\mu[f(Z) \mid B^c, Z_{[i-1]} = z_{[i-1]}] \\ &= \mathbb{E}_\mu[f(Z) \mid B^c, Z_{[i]} = z_{[i]}] - \mathbb{E}_\mu\big[\mathbb{E}_\mu[f(Z) \mid B^c, Z_{[i-1]} = z_{[i-1]}, Z_i] \,\big|\, B^c, Z_{[i-1]} = z_{[i-1]}\big] \\ &= \int f(z_{[i]} w_{(i,d]})\, \mu(dw_{(i,d]} \mid z_{[i]}, B^c) - \int\!\!\int f(z_{[i-1]} w_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i-1]}, w_i, B^c)\, \mu(dw_{[i,d]} \mid z_{[i-1]}, B^c) \end{aligned} \tag{29}$$

by the tower property of conditional expectations. Because $\mu(dw_{[i,d]} \mid z_{[i-1]}, B^c)$ is a probability measure, it holds that

$$\int f(z_{[i]} w_{(i,d]})\, \mu(dw_{(i,d]} \mid z_{[i]}, B^c) = \int \left[ \int f(z_{[i-1]} z_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i]}, B^c) \right] \mu(dw_{[i,d]} \mid z_{[i-1]}, B^c) \tag{30}$$

and we find

$$M^{(i)} = \int \mu(dw_{[i,d]} \mid z_{[i-1]}, B^c) \left[ \int f(z_{[i-1]} z_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i]}, B^c) - \int f(z_{[i-1]} w_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i-1]}, w_i, B^c) \right] . \tag{31}$$

Now bound $A^{(i)} \leq M^{(i)} \leq B^{(i)}$ almost surely with

$$A^{(i)} = \int \mu(dw_{[i,d]} \mid z_{[i-1]}, B^c) \inf_{z_i \in B^c_i(z_{[i-1]})} \left[ \int f(z_{[i-1]} z_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i]}, B^c) - \int f(z_{[i-1]} w_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i-1]}, w_i, B^c) \right] \tag{32a}$$

$$B^{(i)} = \int \mu(dw_{[i,d]} \mid z_{[i-1]}, B^c) \sup_{z_i \in B^c_i(z_{[i-1]})} \left[ \int f(z_{[i-1]} z_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i]}, B^c) - \int f(z_{[i-1]} w_i u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i-1]}, w_i, B^c) \right] \tag{32b}$$

where $B^c_i(z_{[i-1]})$ contains all $z_i \in \mathcal{Z}$ such that there exists $z_{(i,d]} \in \mathcal{Z}^{d-i}$ with $(z_{[i-1]}, z_i, z_{(i,d]}) \in B^c$.
Because every realization of a random variable conditioned on $B^c$ is in the set of good inputs, the difference $B^{(i)} - A^{(i)}$ can be written as

$$\sup_{\substack{v, z \in B^c \\ v_{[d]\setminus\{i\}} = z_{[d]\setminus\{i\}}}} \left[ \int f(v_{[i]} u_{(i,d]})\, \mu(du_{(i,d]} \mid v_{[i]}, B^c) - \int f(z_{[i]} u_{(i,d]})\, \mu(du_{(i,d]} \mid z_{[i]}, B^c) \right] . \tag{33}$$

By seeing this expression in terms of the oscillation of the kernel action $K^{(i+1)}f$, we find

$$B^{(i)} - A^{(i)} \leq \|\rho\|_\infty\, \delta_i(K^{(i+1)}\tilde{f}) \leq \|\rho\|_\infty\, \big(V^{(i+1)} \delta(\tilde{f})\big)_i = \big(\Gamma\, \delta(\tilde{f})\big)_i \tag{34}$$

where $\tilde{f}\colon B^c \to \mathbb{R}$ is the restriction of $f$ to $B^c$. The assertion then follows from the Azuma-Hoeffding theorem [35, Theorem 4.1], which we recite as Theorem 10 to make this paper self-contained.

Proof of Proposition 6. For $z, z' \in \mathcal{Z}^d$ which differ only in coordinate $j$, it holds that

$$|f(z) - f(z')| \leq \delta_j(f)\, \rho(z_j, z_j'), \quad j \in [d] \tag{35}$$

and thus, by summing over all indices, we get

$$|f(z) - f(z')| \leq \frac{1}{d} \sum_{j\in[d]} \delta_j(f)\, \rho(z_j, z_j') . \tag{36}$$

Let $v, z \in \mathcal{Z}^d$ with $v_{[d]\setminus\{i\}} = z_{[d]\setminus\{i\}}$ be given for some $i \in [d]$. Recall that the action (8) of the Markov kernel $K^{(i+1)}$ is an expected value with respect to the conditional distribution $\mu_{(i,d]}(dw_{(i,d]} \mid v_{[i]})$. Because $\nu^d$ has no atoms, $\nu^d|_{A^c}$ also has no atoms. Therefore, there is a unique KR-rearrangement $\hat{T}$ with $\hat{T}_\sharp \nu^d = \nu^d|_{A^c}$. Then $\tilde{T} = T \circ \hat{T}$ is a KR-rearrangement with

$$\tilde{T}_\sharp \nu^d = \mu|_{B^c} \tag{37}$$

by Lemma 9, and we have $\tilde{T}(\hat{v}) = v$. Lemma 3 implies

$$\mu_{(i,d]}(dw_{(i,d]} \mid B^c, v_{[i]}) = \tilde{T}_{(i,d]}(\hat{v}_{[i]}, \cdot)_\sharp\, \nu_{(i,d]} . \tag{38}$$

An analogous expression holds for the distribution conditioned on $z$. We have therefore found two transport functions pushing the reference measure to the respective conditional distributions. By Lemma 5, a coupling of the conditional distributions is then given by

$$P^{[i]}_{v,z} = \big(\tilde{T}_{(i,d]}(\hat{v}_{[i]}, \cdot),\, \tilde{T}_{(i,d]}(\hat{z}_{[i]}, \cdot)\big)_\sharp\, \nu_{(i,d]} . \tag{39}$$

Using a change of measure, we find

$$\begin{aligned} K^{(i+1)}f(v) - K^{(i+1)}f(z) &= \int P^{[i]}_{v,z}(du_{(i,d]}, dv_{(i,d]}) \big[ f(v_{[i]} u_{(i,d]}) - f(z_{[i]} v_{(i,d]}) \big] \\ &= \int \big[ f\big(v_{[i]}\, \tilde{T}_{(i,d]}(\hat{v}_{[i]}, \tau)\big) - f\big(z_{[i]}\, \tilde{T}_{(i,d]}(\hat{z}_{[i]}, \tau)\big) \big]\, \nu_{(i,d]}(d\tau) \\ &\leq \frac{\delta_i(f)}{d}\, \rho(v_i, z_i) + \sum_{j\in(i,d]} \frac{\delta_j(f)}{d} \int \rho\big(\tilde{T}_{(i,d]}(\hat{v}_{[i]}, \tau)_j,\, \tilde{T}_{(i,d]}(\hat{z}_{[i]}, \tau)_j\big)\, \nu_{(i,d]}(d\tau) \\ &\leq \frac{\delta_i(f)}{d}\, \rho(v_i, z_i) + \sum_{j\in(i,d]} \frac{\delta_j(f)}{d}\, L_{ij}\, \rho(v_i, z_i) \end{aligned} \tag{40-43}$$

which shows

$$\delta_i(K^{(i+1)}f) \leq \frac{1}{d}\Big(\delta_i(f) + \sum_{j\in(i,d]} L_{ij}\, \delta_j(f)\Big) \tag{44}$$

for good inputs. We have thus found a Wasserstein matrix $V^{(i+1)}$ for $K^{(i+1)}$ with entries

$$V^{(i+1)}_{ij} = \begin{cases} 0 & \text{if } i > j , \\ d^{-1} & \text{if } i = j , \\ d^{-1} L_{ij} & \text{if } i < j \end{cases} \tag{45}$$

in row $i$, which shows the assertion.

Proof of Theorem 7. For any hypothesis $h \in \mathcal{H}$, we have

$$\begin{aligned} R(h) - R_m(h, D_m) &= \mathbb{E}_{Z\sim\mu}[L(h, Z) - R_m(h, D_m)] \\ &= \mathbb{E}_{Z\sim\mu}\big[(L(h, Z) - R_m(h, D_m))\, \mathbb{1}\{Z \notin B\}\big] + \mathbb{E}_{Z\sim\mu}\big[(L(h, Z) - R_m(h, D_m))\, \mathbb{1}\{Z \in B\}\big] \\ &\leq \mathbb{E}_{Z\sim\mu}\big[(L(h, Z) - R_m(h, D_m))\, \mathbb{1}\{Z \notin B\}\big] + \xi \\ &\leq \mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m) + \xi \end{aligned} \tag{46}$$

where in the third line we have used that the pointwise loss is in $[0, 1]$ and $\mu(B) \leq \xi$. Note that the underlying distribution of the risk $R(h)$ is $\mu$, while $D_m$ is drawn from $\mu|_{B^c}$. The above inequality reconciles this, such that a concentration argument for the conditional distribution becomes applicable. For any PAC-Bayes posterior distribution $\zeta$ and any $\beta > 0$, this implies

$$\begin{aligned} R(\zeta) - R_m(\zeta, D_m) &= \mathbb{E}_{h\sim\zeta}\mathbb{E}_{Z\sim\mu}[L(h, Z) - R_m(h, D_m)] \\ &\leq \mathbb{E}_{h\sim\zeta}\big[\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m)\big] + \xi \\ &= \frac{1}{\beta}\, \mathbb{E}_{h\sim\zeta}\big[\beta\big(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m)\big)\big] + \xi \\ &\leq \frac{1}{\beta}\Big( \log \mathbb{E}_{h\sim\pi}\Big[\exp\big(\beta(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big)\Big] + \mathrm{KL}[\zeta : \pi] \Big) + \xi \end{aligned} \tag{47}$$

by Donsker and Varadhan's variational formula [2, Lemma 2.2]. Focusing on the first term, we find

$$\exp\big(\beta(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big) = \prod_{k\in[m]} \exp\Big(\frac{\beta}{m}\big(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - L(h, Z^{(k)})\big)\Big) . \tag{48}$$

Each structured datum $Z^{(k)}$ is drawn independently from $\mu|_{B^c}$. By Proposition 6, there exists a Wasserstein dependency matrix $\Gamma = \frac{\|\rho\|_\infty}{d} D$ for $\mu|_{B^c}$, where $D$ has entries (17).
Then

$$\begin{aligned} \mathbb{E}_{D_m\sim(\mu|_{B^c})^m}\Big[\prod_{k\in[m]} \exp\Big(\frac{\beta}{m}\big(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - L(h, Z^{(k)})\big)\Big)\Big] &= \prod_{k\in[m]} \mathbb{E}_{Z^{(k)}\sim\mu|_{B^c}}\Big[\exp\Big(\frac{\beta}{m}\big(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - L(h, Z^{(k)})\big)\Big)\Big] \\ &\leq \prod_{k\in[m]} \exp\Big(\frac{\beta^2}{8m^2}\, \big\|\Gamma\, \delta(\tilde{L}(h, \cdot))\big\|_2^2\Big) \quad \text{by Theorem 4} \\ &= \exp\Big(\frac{\beta^2}{8m}\, \big\|\Gamma\, \delta(\tilde{L}(h, \cdot))\big\|_2^2\Big) \end{aligned} \tag{49}$$

where $\tilde{L}(h, \cdot)$ denotes the restriction of $L(h, \cdot)$ to $B^c$. Denote the shorthand

$$U = \mathbb{E}_{D_m\sim(\mu|_{B^c})^m}\, \mathbb{E}_{h\sim\pi}\Big[\exp\big(\beta(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big)\Big] . \tag{50}$$

By Markov's inequality it holds that

$$\mathbb{P}_{D_m\sim(\mu|_{B^c})^m}\Big( \mathbb{E}_{h\sim\pi}\Big[\exp\big(\beta(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big)\Big] \geq \frac{U}{\delta} \Big) \leq \delta \tag{51}$$

and, combining this with (49), we have

$$\mathbb{E}_{h\sim\pi}\Big[\exp\big(\beta(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big)\Big] \leq \frac{1}{\delta}\, \exp\Big(\frac{\beta^2}{8m}\, \|\Gamma\tilde{\delta}\|_2^2\Big) \tag{52}$$

with probability at least $1 - \delta$ over the sample. Using (47), we thus have

$$\begin{aligned} R(\zeta) - R_m(\zeta, D_m) &\leq \frac{1}{\beta}\Big( \log \mathbb{E}_{h\sim\pi}\Big[\exp\big(\beta(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big)\Big] + \mathrm{KL}[\zeta : \pi] \Big) + \xi \\ &\leq \frac{\beta}{8m}\, \|\Gamma\tilde{\delta}\|_2^2 + \frac{1}{\beta}\Big( \log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi] \Big) + \xi . \end{aligned} \tag{53}$$

Ideally, we would minimize the right hand side with respect to $\beta$. However, this would mean having $\beta$ depend on $\zeta$, and we thus would not have a uniform bound for all posterior distributions. Instead, [38] approaches the problem by defining a sequence of constants $(\delta_j, \beta_j)_{j\in\mathbb{N}_0}$ and bounding the probability that the bound does not hold for any sequence element. Since in the opposite (high-probability) case the bound holds for all sequence elements, an optimal one can subsequently be chosen dependent on the posterior. For all $j \in \mathbb{N}_0$, define

$$\delta_j = \delta\, 2^{-(j+1)}, \qquad \beta_j = 2^j \sqrt{\frac{8m \log\frac{1}{\delta}}{\|\Gamma\tilde{\delta}\|_2^2}} \tag{54}$$

which are independent of $\zeta$. Now consider the event $E_j$ that

$$\mathbb{E}_{h\sim\pi}\Big[\exp\big(\beta_j(\mathbb{E}_{Z\sim\mu|_{B^c}}[L(h, Z)] - R_m(h, D_m))\big)\Big] > \frac{1}{\delta_j}\, \exp\Big(\frac{\beta_j^2}{8m}\, \|\Gamma\tilde{\delta}\|_2^2\Big) . \tag{55}$$

By the above argument leading up to (52), the probability of $E_j$ under a random sample of the conditioned data distribution $\mu|_{B^c}$ is at most $\delta_j$. Therefore, the probability that any $E_j$ occurs is bounded by

$$\mathbb{P}\Big(\bigcup_{j\in\mathbb{N}_0} E_j\Big) \leq \sum_{j\in\mathbb{N}_0} \mathbb{P}(E_j) \leq \sum_{j\in\mathbb{N}_0} \delta_j = \delta . \tag{56}$$

Thus, for all posteriors $\zeta$, with probability at least $1 - \delta$ none of the events (55) occurs. We may therefore select an index $j$ dependent on $\zeta$ to obtain a sharper risk certificate which still holds with probability at least $1 - \delta$ over the sample conditioned on the good set. For a fixed posterior $\zeta$, the optimizer of (53) would be

$$\beta^\ast = \sqrt{\frac{8m\, \big(\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]\big)}{\|\Gamma\tilde{\delta}\|_2^2}} . \tag{57}$$

Equating this to (54) and rounding down to the nearest integer gives

$$j = \Big\lfloor \frac{1}{2} \log_2\Big( 1 + \frac{\mathrm{KL}[\zeta : \pi]}{\log\frac{1}{\delta}} \Big) \Big\rfloor . \tag{58}$$

Denote this number before rounding by $r$, i.e. $j = \lfloor r \rfloor$. For any real number $r$ it holds that $r - 1 \leq \lfloor r \rfloor \leq r$. Therefore

$$\frac{1}{2}\sqrt{1 + \frac{\mathrm{KL}[\zeta : \pi]}{\log\frac{1}{\delta}}} = 2^{r-1} \leq 2^j \leq 2^r = \sqrt{1 + \frac{\mathrm{KL}[\zeta : \pi]}{\log\frac{1}{\delta}}} \tag{59}$$

which gives the following bounds on $\beta_j$:

$$\sqrt{\frac{2m\, \big(\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]\big)}{\|\Gamma\tilde{\delta}\|_2^2}} \leq \beta_j \leq \sqrt{\frac{8m\, \big(\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]\big)}{\|\Gamma\tilde{\delta}\|_2^2}} . \tag{60}$$

Likewise, we bound

$$\begin{aligned} \mathrm{KL}[\zeta : \pi] + \log\frac{1}{\delta_j} &= \mathrm{KL}[\zeta : \pi] + \log\frac{2}{\delta} + j \log 2 \\ &\leq \mathrm{KL}[\zeta : \pi] + \log\frac{2}{\delta} + \frac{1}{2} \log\Big( 1 + \frac{\mathrm{KL}[\zeta : \pi]}{\log\frac{1}{\delta}} \Big) \\ &= \mathrm{KL}[\zeta : \pi] + \log\frac{1}{\delta} + \log 2 + \frac{1}{2} \log\Big( \log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi] \Big) - \frac{1}{2} \log\log\frac{1}{\delta} . \end{aligned} \tag{61}$$

The assumption $\delta \leq \exp(-e^{-1})$ yields $\log\log\frac{1}{\delta} \geq -1$ and, because $x + 1 \leq \exp(x)$ for all $x \in \mathbb{R}$, we find

$$\mathrm{KL}[\zeta : \pi] + \log\frac{1}{\delta_j} \leq \frac{3}{2}\Big( \log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi] \Big) . \tag{62}$$

We can now use the bounds (62) and (60) in (53) to bound the expected generalization error:

$$\begin{aligned} R(\zeta) - R_m(\zeta, D_m) &\leq \frac{\beta_j}{8m}\, \|\Gamma\tilde{\delta}\|_2^2 + \frac{1}{\beta_j}\Big( \log\frac{1}{\delta_j} + \mathrm{KL}[\zeta : \pi] \Big) + \xi \\ &\leq \frac{\beta_j}{8m}\, \|\Gamma\tilde{\delta}\|_2^2 + \frac{3}{2\beta_j}\Big( \log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi] \Big) + \xi \\ &\leq \frac{1}{2}\, \|\Gamma\tilde{\delta}\|_2 \sqrt{\frac{\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]}{2m}} + \frac{3}{2}\, \|\Gamma\tilde{\delta}\|_2 \sqrt{\frac{\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]}{2m}} + \xi \\ &= 2\, \|\Gamma\tilde{\delta}\|_2 \sqrt{\frac{\log\frac{1}{\delta} + \mathrm{KL}[\zeta : \pi]}{2m}} + \xi . \end{aligned} \tag{63}$$

Note that the optimal choice $\beta = \beta^\ast$ would attain the value

$$R(\zeta) - R_m(\zeta, D_m) \leq \|\Gamma\tilde{\delta}\|_2 \sqrt{\frac{\mathrm{KL}[\zeta : \pi] + \log\frac{1}{\delta}}{2m}} + \xi \tag{64}$$

which only differs from the above uniform bound by a factor of two. Finally, recall that $\Gamma = \frac{\|\rho\|_\infty}{d} D$ where $D$ has entries (17).

B Additional Lemmata

Lemma 9. Let $T\colon \Omega \to \Omega$ be a measurable function on a measurable space $(\Omega, \Sigma)$ and let $\nu, \mu$ be measures on $\Omega$ with $T_\sharp \nu = \mu$. Let $B \in \Sigma$ be a fixed set with $\mu(B) > 0$ and $A = T^{-1}(B)$ its preimage under $T$. Then

$$T_\sharp(\nu|_A) = \mu|_B . \tag{65}$$

Proof. Let $S \in \Sigma$ be arbitrary and let $\tilde{\mu} = T_\sharp(\nu|_A)$.
Then

$$\tilde{\mu}(S) = (\nu|_A)(T^{-1}(S)) = \frac{\nu(T^{-1}(S) \cap A)}{\nu(A)} \tag{66}$$

and

$$(\mu|_B)(S) = \frac{\mu(S \cap B)}{\mu(B)} = \frac{\nu(T^{-1}(S \cap B))}{\nu(T^{-1}(B))} . \tag{67}$$

Note that

$$z \in T^{-1}(S) \cap T^{-1}(B) \;\Leftrightarrow\; T(z) \in S \wedge T(z) \in B \;\Leftrightarrow\; T(z) \in S \cap B \;\Leftrightarrow\; z \in T^{-1}(S \cap B) , \tag{68}$$

thus $T^{-1}(S) \cap T^{-1}(B) = T^{-1}(S \cap B)$ and consequently $\tilde{\mu}(S) = (\mu|_B)(S)$. Since $S$ was arbitrary, this shows the assertion.

The following theorem exists in various forms in the literature. To make this paper self-contained, we recite the version in [35] which is used to bound moment-generating functions in Theorem 4. Note that we only use the MGF bound (69) in our analysis. However, the concentration inequality (70) also holds analogously under the assumptions of Theorem 4, which may be of independent interest.

Theorem 10 (Azuma-Hoeffding [35, Theorem 4.1]). Let $(M^{(i)})_{i\in[m]}$ be a martingale difference sequence with respect to a filtration $(\Sigma_i)_{i\in[m]}$ of sigma algebras. Suppose that for each $i \in [m]$ there exist $\Sigma_{i-1}$-measurable random variables $A^{(i)}, B^{(i)}$ such that $A^{(i)} \leq M^{(i)} \leq B^{(i)}$ almost surely. Then for all $\lambda \in \mathbb{R}$ it holds that

$$\mathbb{E}\Big[\exp\Big(\lambda \sum_{i\in[m]} M^{(i)}\Big)\Big] \leq \exp\Big(\frac{\lambda^2}{8} \sum_{i\in[m]} \big\|B^{(i)} - A^{(i)}\big\|_\infty^2\Big) \tag{69}$$

and consequently, for any $t \geq 0$,

$$\mathbb{P}\Big(\Big|\sum_{i\in[m]} M^{(i)}\Big| \geq t\Big) \leq 2 \exp\Big(\frac{-2t^2}{\sum_{i\in[m]} \|B^{(i)} - A^{(i)}\|_\infty^2}\Big) . \tag{70}$$

C Numerical Toy Example

Figure 1: KR rearrangement $T(z) = (T_1(z_1), T_2(z_1, z_2))$ transports a uniform reference measure $\nu^2$ on the unit cube $[0,1]^2$ to a multimodal distribution $\mu$ (right).

Figure 2: Construction of a good set in the toy example. Left: $L_{12}$ for all inputs. Center: size of an excluded bad set under the data distribution for corresponding largest values of $L_{12}$. Right: $L_{12}$ for good inputs.

To illustrate the concept of transport stability discussed in Proposition 6, as well as the bad set construction of Section 5, we devised a toy example of two-dimensional transport. As reference distribution, we choose the uniform distribution on the unit cube $[0,1]^2$, i.e. $\mathcal{Z} = [0,1]$. The metric $\rho_d$, $d = 2$, is chosen as the $\ell_1$-distance $\rho_2(z, z') = \|z - z'\|_1 = |z_1 - z_1'| + |z_2 - z_2'|$. The KR transport map has component functions $T(z_1, z_2) = (T_1(z_1), T_2(z_1, z_2))$. We make a polynomial ansatz, defining

$$T_1(z_1) = \int_0^{z_1} \sum_{i\in[5]} \beta_i\, B^5_i(\tau)\, d\tau, \qquad T_2(z_1, z_2) = \int_0^{z_2} \sum_i \tilde{\beta}_i(z_1)\, B^2_i(\tau)\, d\tau \tag{71}$$

where $B^m_i$ denotes the $i$-th Bernstein polynomial of degree $m$, $\beta_i \geq 0$ are parameters and the (positive) coefficient functions $\tilde{\beta}_i(z_1)$ are again a parameterized combination of Bernstein polynomials (degree 8). Because Bernstein polynomials assume non-negative values on $[0,1]$, the components in (71) define a valid KR-rearrangement. The resulting measure transport is illustrated in Figure 1. In order to evaluate the bound of Theorem 7, we need to compute the Lipschitz constant $L_{12}$ according to equation (16),

$$\mathbb{E}_{\tau\sim\nu_2}\big[|T_2(z_1, \tau) - T_2(z_1', \tau)|\big] \leq L_{12}\, |z_1 - z_1'| \tag{72}$$

for all good inputs $z_1, z_1'$. Figure 2 (left) shows the values of $L_{12}$ satisfying (72) for each input $(z_1, z_1') \in [0,1]^2$. The sought Lipschitz constant is the largest of these values. In order to avoid regions with large values, Theorem 7 allows for the exclusion of bad inputs. The probability $\xi$ of this bad set under the data distribution is incurred as a penalty in equation (20). We thus construct bad sets to exclude large values and consider their probability under the data distribution, Figure 2 (middle). Finally, the exclusion of such a bad set is shown on the right of Figure 2.
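As a rough computational companion to Figure 2 (left), the following sketch estimates the map $(z_1, z_1') \mapsto L_{12}$ of (72) by Monte Carlo on a grid. The triangular map used here is a simplified hypothetical stand-in for the Bernstein-polynomial ansatz (71), not the map used in the paper, and the grid resolution and sample size are arbitrary.

```python
# Minimal sketch (simplified stand-in for the ansatz (71)): Monte-Carlo
# estimation of the smallest constant L12 satisfying (72) on a grid of
# input pairs (z1, z1') for an illustrative monotone triangular map.
import numpy as np

def T2(z1, z2):
    # illustrative second component, increasing in z2; coefficient depends on z1
    return (0.5 + 0.4 * z1 ** 2) * z2 + 0.1 * z1

rng = np.random.default_rng(2)
tau = rng.uniform(size=20_000)               # tau ~ nu_2 = Uniform[0, 1]
grid = np.linspace(0.0, 1.0, 51)

L12_map = np.zeros((len(grid), len(grid)))
for a, z1 in enumerate(grid):
    for b, z1p in enumerate(grid):
        if z1 == z1p:
            continue
        # E_tau |T2(z1, tau) - T2(z1', tau)| / |z1 - z1'|
        L12_map[a, b] = np.mean(np.abs(T2(z1, tau) - T2(z1p, tau))) / abs(z1 - z1p)

print(L12_map.max())   # largest value over all pairs; excluding regions with
                       # large values corresponds to choosing a bad set as in Fig. 2
```

Thresholding `L12_map` at a candidate value of $L_{12}$ and measuring the excluded region under the data distribution mirrors the bad-set construction of Section 5 and the middle panel of Figure 2.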