# Improved Sampling via Learned Diffusions

Published as a conference paper at ICLR 2024

Lorenz Richter (Zuse Institute Berlin, dida Datenschmiede GmbH) richter@zib.de
Julius Berner (Caltech) jberner@caltech.edu
*Equal contribution (the author order was determined by numpy.random.rand(1)).

ABSTRACT

Recently, a series of papers proposed deep learning-based approaches to sample from target distributions using controlled diffusion processes, being trained only on the unnormalized target densities without access to samples. Building on previous work, we identify these approaches as special cases of a generalized Schrödinger bridge problem, seeking a stochastic evolution between a given prior distribution and the specified target. We further generalize this framework by introducing a variational formulation based on divergences between path space measures of time-reversed diffusion processes. This abstract perspective leads to practical losses that can be optimized by gradient-based algorithms and includes previous objectives as special cases. At the same time, it allows us to consider divergences other than the reverse Kullback-Leibler divergence, which is known to suffer from mode collapse. In particular, we propose the so-called log-variance loss, which exhibits favorable numerical properties and leads to significantly improved performance across all considered approaches.

1 INTRODUCTION

Given a function $\rho\colon \mathbb{R}^d \to [0, \infty)$, we consider the task of sampling from the density

$$p_{\mathrm{target}} := \frac{\rho}{Z} \quad \text{with} \quad Z := \int_{\mathbb{R}^d} \rho(x) \, \mathrm{d}x,$$

where the normalizing constant $Z$ is typically intractable. This represents a crucial and challenging problem in various scientific fields, such as Bayesian statistics, computational physics, chemistry, or biology, see, e.g., Liu & Liu (2001); Stoltz et al. (2010). Fueled by the success of denoising diffusion probabilistic modeling (Song & Ermon, 2020; Ho et al., 2020; Kingma et al., 2021; Vahdat et al., 2021; Nichol & Dhariwal, 2021) and deep learning approaches to the Schrödinger bridge (SB) problem (De Bortoli et al., 2021; Chen et al., 2021a; Koshizuka & Sato, 2022), there is significant interest in tackling the sampling problem using stochastic differential equations (SDEs) which are controlled with learned neural networks to transport a given prior density $p_{\mathrm{prior}}$ to the target $p_{\mathrm{target}}$. Recent works include the Path Integral Sampler (PIS) and variations thereof (Tzen & Raginsky, 2019; Richter, 2021; Zhang & Chen, 2022; Vargas et al., 2023b), the Time-Reversed Diffusion Sampler (DIS) (Berner et al., 2024), as well as the Denoising Diffusion Sampler (DDS) (Vargas et al., 2023a). While the ideas for such sampling approaches based on controlled diffusion processes date back to earlier work, see, e.g., Dai Pra (1991); Pavon (1989), the development of corresponding numerical methods based on deep learning has become popular only in the last few years. However, up to now, more focus has been put on generative modeling, where samples from $p_{\mathrm{target}}$ are available. As a consequence, it seems that for the classical sampling problem, i.e., having only an analytical expression for $\rho \propto p_{\mathrm{target}}$, but no samples, diffusion-based methods cannot reach state-of-the-art performance yet. Potential drawbacks might be stability issues during training, the need to differentiate through SDE solvers, or mode collapse due to the usage of objectives based on reverse Kullback-Leibler (KL) divergences, see, e.g., Zhang & Chen (2022); Vargas et al. (2023a).
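To fix ideas, the following minimal sketch (our own illustration, not code from the paper) sets up an unnormalized two-mode density $\rho$ and approximates the normalizing constant $Z$ by quadrature; this is only feasible in very low dimensions, which is precisely why learned samplers are of interest.

```python
import numpy as np
from scipy import integrate

def rho(x, y):
    """Unnormalized 2D density with two well-separated modes (illustrative choice)."""
    return np.exp(-((x - 2) ** 2 + y ** 2) / 0.5) + np.exp(-((x + 2) ** 2 + y ** 2) / 0.5)

# In d = 2, Z = \int rho dx can still be approximated by quadrature; in high
# dimensions this is intractable, so Z must be treated as unknown.
Z, _ = integrate.dblquad(lambda y, x: rho(x, y), -10, 10, lambda x: -10, lambda x: 10)

def log_p_target(x, y):
    return np.log(rho(x, y)) - np.log(Z)  # p_target = rho / Z
```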
In this work, we overcome these issues and advance the potential of sampling via learned diffusion processes toward more challenging problems. Our contributions can be summarized as follows:

- We provide a unifying framework for sampling based on learned diffusions from the perspective of measures on path space and time-reversals of controlled diffusions, which for the first time connects methods such as SB, DIS, DDS, and PIS.
- This path space perspective, in consequence, allows us to consider arbitrary divergences for the optimization objective, whereas existing methods solely rely on minimizing a reverse KL divergence, which is prone to mode collapse.
- In particular, we propose the log-variance divergence, which avoids differentiation through the SDE solver and allows us to balance exploration and exploitation, resulting in significantly improved numerical stability and performance, see Figure 1.

Figure 1: Improved convergence of our proposed log-variance loss for a double well problem, see Section 4 for further details.

1.1 RELATED WORK

There exists a myriad of Monte Carlo-based methods for sampling from unnormalized densities, e.g., Annealed Importance Sampling (AIS) (Neal, 2001), Sequential Monte Carlo (SMC) (Del Moral et al., 2006; Doucet et al., 2009), or Markov chain Monte Carlo (MCMC) (Kass et al., 1998). Note, however, that MCMC methods are usually only guaranteed to reach the target density asymptotically, and the convergence speed might be too slow in many practical settings (Robert et al., 1999). Variational methods such as mean-field approximations (Wainwright et al., 2008) and normalizing flows (Papamakarios et al., 2021; Wu et al., 2020; Midgley et al., 2023; Vaitl et al., 2024) provide an alternative. Similar to our setting, the problem of density estimation is cast into an optimization problem by fitting a parametric family of tractable distributions to the target density.

We build our theoretical foundation on the variational formulation of bridge problems proposed by Chen et al. (2021a). We recall that the underlying ideas were established decades ago (Nelson, 1967; Anderson, 1982; Haussmann & Pardoux, 1986; Föllmer, 1988); however, they were only recently applied to diffusion models (Song et al., 2020) and SBs (Vargas, 2021; Liu et al., 2022). While the numerical treatment of SB problems has classically been approached via iterative nested schemes, the approach in Chen et al. (2021a) uses backward SDEs (BSDEs) to arrive at a single objective based on a KL divergence. This objective includes the (continuous-time) ELBO of diffusion models (Huang et al., 2021) as a special case, which can also be approached from the perspective of optimal control (Berner et al., 2024). For additional previous work on optimal control in the context of generative modeling, we refer to De Bortoli et al. (2021); Tzen & Raginsky (2019); Pavon (2022); Holdijk et al. (2022).

Crucially, we note that our path space perspective on the variational formulation of bridges has not been known before. Our novel derivation relies only on time-reversals of diffusion processes and shows that, in general, the corresponding losses (in particular the one in Chen et al. (2021a)) do not have a unique solution, as they lack the entropy constraint of classical SB problems. However, in special cases, we recover unique objectives corresponding to recently developed sampling methods, e.g., DIS, DDS, and PIS.
Moreover, the path space perspective allows us to extend the variational formulation of bridges to different divergences, in particular to the log-variance divergence that was originally introduced in Nüsken & Richter (2021). Variants of this loss have previously only been analyzed in the context of variational inference (Richter et al., 2020) and neural solvers for partial differential equations (PDEs) (Richter & Berner, 2022). Extending these works, we prove that the beneficial properties of the log-variance loss also hold for the general bridge objective, which incorporates two instead of only one controlled stochastic process. Finally, we refer to Vargas et al. (2024) for concurrent work on the path space perspective on diffusion-based sampling.

1.2 OUTLINE OF THE ARTICLE

The rest of the article is organized as follows. In Section 2, we provide an introduction to diffusion-based sampling from the perspective of path space measures and time-reversed SDEs. This can be understood as a unifying framework allowing for generalizations to divergences other than the KL divergence. We propose the log-variance divergence and prove that it exhibits superior properties. In Section 3, we subsequently outline multiple novel connections to known methods, such as SBs in Section 3.1, diffusion-based generative modeling (i.e., DIS) in Section 3.2, and approaches based on reference processes (i.e., PIS and DDS) in Section 3.3. For all considered methods, we find compelling numerical evidence for the superiority of the log-variance divergence, see Section 4.

2 DIFFUSION-BASED SAMPLING

In this section, we reformulate our sampling problem as a time-reversal of diffusion processes from the perspective of measures on path space. Let us first define our setting and notation.

2.1 NOTATION AND SETTING

We denote the density of a random variable $X$ by $p_X$. For a suitable $\mathbb{R}^d$-valued stochastic process $X = (X_t)_{t \in [0,T]}$, we define its density $p_X$ w.r.t. the Lebesgue measure by $p_X(\cdot, t) := p_{X_t}$ for $t \in [0, T]$. For suitable functions $f \in C(\mathbb{R}^d \times [0, T], \mathbb{R})$ and $w \in C(\mathbb{R}^d \times [0, T], \mathbb{R}^d)$, we further define the deterministic and stochastic integrals

$$R_f(X) := \int_0^T f(X_s, s) \, \mathrm{d}s \quad \text{and} \quad S_w(X) := \int_0^T w(X_s, s) \cdot \mathrm{d}W_s, \tag{1}$$

where $W$ is a standard $d$-dimensional Brownian motion. We denote by $\mathcal{P}$ the set of all probability measures on the space of continuous functions $C([0, T], \mathbb{R}^d)$ and define the path space measure $P_X \in \mathcal{P}$ as the law of $X$. For a time-dependent function $\mu$, we denote by $\overleftarrow{\mu}$ the time-reversal given by $\overleftarrow{\mu}(t) := \mu(T - t)$. We refer to Appendix A.1 for technical assumptions.

2.2 SAMPLING AS TIME-REVERSAL PROBLEM

The goal of diffusion-based sampling is to sample from the density $p_{\mathrm{target}} = \frac{\rho}{Z}$ by transporting a prior density $p_{\mathrm{prior}}$ via controlled stochastic processes. We consider two processes given by the SDEs

$$\mathrm{d}X^u_s = (\mu + \sigma u)(X^u_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s, \quad X^u_0 \sim p_{\mathrm{prior}}, \tag{2}$$
$$\mathrm{d}Y^v_s = (-\overleftarrow{\mu} + \overleftarrow{\sigma v})(Y^v_s, s) \, \mathrm{d}s + \overleftarrow{\sigma}(s) \, \mathrm{d}W_s, \quad Y^v_0 \sim p_{\mathrm{target}}, \tag{3}$$

where we aim to identify control functions $u, v \in \mathcal{U}$ in a suitable space of admissible controls $\mathcal{U} \subset C(\mathbb{R}^d \times [0, T], \mathbb{R}^d)$ in order to achieve $X^u_T \sim p_{\mathrm{target}}$ and $Y^v_T \sim p_{\mathrm{prior}}$. Specifically, we seek controls transporting $p_{\mathrm{prior}}$ to $p_{\mathrm{target}}$ via $X^u$ and back via $Y^v$, in the sense that $Y^v$ is the time-reversed process of $X^u$ and vice versa, i.e., $p_{X^u} = \overleftarrow{p}_{Y^v}$.
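For intuition, the controlled process in (2) can be rolled out with a simple Euler-Maruyama scheme. The sketch below is a minimal illustration (assuming a scalar diffusion coefficient and controls given as callables; it is not the paper's actual implementation); learning then amounts to choosing $u$ such that the terminal samples match $p_{\mathrm{target}}$.

```python
import torch

def simulate_sde(u, mu, sigma, x0, T=1.0, n_steps=200):
    """Euler-Maruyama discretization of dX = (mu + sigma * u)(X, s) ds + sigma(s) dW.

    u, mu: callables (x, s) -> drift terms; sigma: callable s -> scalar diffusion.
    x0: (batch, d) tensor of initial samples from p_prior.
    """
    dt = T / n_steps
    x = x0.clone()
    for n in range(n_steps):
        s = n * dt
        drift = mu(x, s) + sigma(s) * u(x, s)
        noise = torch.randn_like(x) * (sigma(s) * dt**0.5)
        x = x + drift * dt + noise
    return x  # samples approximating X^u_T

# Example: uncontrolled scaled Brownian motion (mu = 0, u = 0) started at the origin
x_T = simulate_sde(u=lambda x, s: torch.zeros_like(x),
                   mu=lambda x, s: torch.zeros_like(x),
                   sigma=lambda s: 1.0,
                   x0=torch.zeros(1024, 2))
```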
In this context, we recall the following well-known result on the time-reversal of stochastic processes (Nelson, 1967; Anderson, 1982; Haussmann & Pardoux, 1986; Föllmer, 1988).

Lemma 2.1 (Time-reversed SDEs). The time-reversed process $\overleftarrow{Y}^v$, given by the SDE

$$\mathrm{d}\overleftarrow{Y}^v_s = \left(\mu - \sigma v + \sigma\sigma^\top \nabla \log \overleftarrow{p}_{Y^v}\right)(\overleftarrow{Y}^v_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s, \quad \overleftarrow{Y}^v_0 \sim Y^v_T, \tag{4}$$

satisfies $p_{\overleftarrow{Y}^v} = \overleftarrow{p}_{Y^v}$.

Proof. The result can be derived by comparing the Fokker-Planck equations governing $p_{\overleftarrow{Y}^v}$ and $\overleftarrow{p}_{Y^v}$, see, e.g., Chen et al. (2021a); Huang et al. (2021); Berner et al. (2024).

Let us now define the problem of identifying the desired control functions $u$ and $v$ from the perspective of path space measures on the space of trajectories $C([0, T], \mathbb{R}^d)$, as detailed in the sequel.

Problem 2.2 (Time-reversal). Let $P_{X^u}$ be the path space measure of the process $X^u$ defined in (2) and let $P_{\overleftarrow{Y}^v}$ be the path space measure of $\overleftarrow{Y}^v$, the time-reversal of $Y^v$, given in (4). Further, let $D\colon \mathcal{P} \times \mathcal{P} \to \mathbb{R}_{\geq 0}$ be a divergence, i.e., a non-negative function satisfying that $D(P, Q) = 0$ if and only if $P = Q$. We aim to find optimal controls $u^*, v^*$ s.t.

$$u^*, v^* \in \operatorname*{arg\,min}_{u, v \in \mathcal{U} \times \mathcal{U}} D\left(P_{X^u} \,\middle|\, P_{\overleftarrow{Y}^v}\right). \tag{5}$$

We note that Problem 2.2 aims to reverse the processes $X^u$ and $Y^v$ with respect to each other while obeying the respective initial values $X^u_0 \sim p_{\mathrm{prior}}$ and $Y^v_0 \sim p_{\mathrm{target}}$. For the actual computation of suitable divergences, we derive the following fundamental identity.

Proposition 2.3 (Likelihood of path measures). Let $X^w$ be a process as defined in (2) with $u$ being replaced by $w \in \mathcal{U}$ and let $S$ and $R$ be as in (1). We can compute the Radon-Nikodym derivative as

$$\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{\overleftarrow{Y}^v}}(X^w) = Z \exp\left(\left(R_{f^{\mathrm{Bridge}}_{u,v,w}} + S_{u+v} + B\right)(X^w)\right) \tag{6}$$

with

$$B(X^w) := \log \frac{p_{\mathrm{prior}}(X^w_0)}{\rho(X^w_T)} \quad \text{and} \quad f^{\mathrm{Bridge}}_{u,v,w} := (u + v) \cdot \left(w + \frac{v - u}{2}\right) + \nabla \cdot (\sigma v - \mu).$$

Proof. The proof combines Girsanov's theorem, Itô's lemma, the HJB equation governing $\log \overleftarrow{p}_{Y^v}$, and the fact that $\rho = Z p_{\mathrm{target}}$, see Appendix A.2. Note that we can remove the divergence $\nabla \cdot (\sigma v - \mu)$ in (6) by resorting to backward stochastic integrals, see Remark A.1 in the appendix.

Using the path space perspective and the representation of the Radon-Nikodym derivative in Proposition 2.3, we may now, in principle, choose any suitable divergence $D$ in order to approach Problem 2.2. Using our path space formulation, we are, to the best of our knowledge, the first to study this problem in such generality. In the following, we demonstrate that this general framework unifies previous approaches and allows us to derive new methods easily.

2.3 COMPARISON OF THE KL AND LOG-VARIANCE DIVERGENCE

Most works in the context of diffusion-based sampling rely on the KL divergence. Choosing $D = D_{\mathrm{KL}}$, which implies $w = u$ in (6), we can readily compute

$$D_{\mathrm{KL}}\left(P_{X^u} \middle| P_{\overleftarrow{Y}^v}\right) = \mathbb{E}\left[\left(R_{f^{\mathrm{Bridge}}_{u,v,u}} + B\right)(X^u)\right] + \log Z \quad \text{with} \quad f^{\mathrm{Bridge}}_{u,v,u} = \frac{\|u + v\|^2}{2} + \nabla \cdot (\sigma v - \mu),$$

where we used the fact that the stochastic integral $S_{u+v}$ has vanishing expectation. Note that in practice we minimize the objective

$$L_{\mathrm{KL}}(u, v) := D_{\mathrm{KL}}\left(P_{X^u} \middle| P_{\overleftarrow{Y}^v}\right) - \log Z. \tag{7}$$

This objective is analogous to the one derived in Chen et al. (2021a) for the bridge problem, see also Section 3.1 and Appendix A.4.
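In practice, (7) is estimated by Monte Carlo along discretized trajectories of $X^u$. The following sketch is our own illustrative PyTorch code under simplifying assumptions (scalar $\sigma(s)$ and a user-supplied callable `div_sv_mu` for the divergence term $\nabla \cdot (\sigma v - \mu)$); note that gradients flow through the entire SDE rollout.

```python
import torch

def kl_loss(u, v, mu, sigma, div_sv_mu, log_rho, log_p_prior, x0, T=1.0, n_steps=200):
    """Monte Carlo estimate of L_KL(u, v) = E[(R_f + B)(X^u)], cf. (7).

    Assumes sigma(s) is scalar and div_sv_mu(x, s) returns div(sigma * v - mu)(x, s).
    Note: the trajectory is NOT detached, so backprop goes through the SDE solver.
    """
    dt = T / n_steps
    x = x0
    running_cost = torch.zeros(x.shape[0])
    for n in range(n_steps):
        s = n * dt
        us, vs = u(x, s), v(x, s)
        # f_Bridge(u, v, u) = ||u + v||^2 / 2 + div(sigma * v - mu)
        running_cost = running_cost + (0.5 * (us + vs).pow(2).sum(-1) + div_sv_mu(x, s)) * dt
        x = x + (mu(x, s) + sigma(s) * us) * dt + sigma(s) * dt**0.5 * torch.randn_like(x)
    boundary = log_p_prior(x0) - log_rho(x)  # B(X^u) = log(p_prior(X_0) / rho(X_T))
    return (running_cost + boundary).mean()
```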
Unfortunately, however, the KL divergence is known to have some evident drawbacks, such as mode collapse (Minka et al., 2005; Midgley et al., 2023) or a potentially high variance of Monte Carlo estimators (Roeder et al., 2017). To address these issues, we propose another divergence that was originally suggested in Nüsken & Richter (2021) and extend it to the setting of two controlled stochastic processes.

Definition 2.4 (Log-variance divergence). Let $\widetilde{P} \in \mathcal{P}$ be a reference measure. The log-variance divergence between the measures $P$ and $Q$ w.r.t. the reference $\widetilde{P}$ is defined as

$$D^{\widetilde{P}}_{\mathrm{LV}}(P, Q) := \mathbb{V}_{\widetilde{P}}\left[\log \frac{\mathrm{d}P}{\mathrm{d}Q}\right].$$

Note that the log-variance divergence is symmetric in $P$ and $Q$ and actually corresponds to a family of divergences, parametrized by the reference measure $\widetilde{P}$. Obvious choices in our setting are $\widetilde{P} := P_{X^w}$, $P := P_{X^u}$, and $Q := P_{\overleftarrow{Y}^v}$, resulting in the log-variance loss

$$L^w_{\mathrm{LV}}(u, v) := D^{P_{X^w}}_{\mathrm{LV}}\left(P_{X^u}, P_{\overleftarrow{Y}^v}\right) = \mathbb{V}\left[\left(R_{f^{\mathrm{Bridge}}_{u,v,w}} + S_{u+v} + B\right)(X^w)\right]. \tag{8}$$

Since the variance is shift-invariant, we can omit $\log Z$ in the above objective. Compared to the KL-based loss (7), the log-variance loss (8) exhibits the following beneficial properties.

First, by the choice of the reference measure $P_{X^w}$, one can balance exploitation and exploration. To exploit the current control $u$, one can set $w = u$, but one can also choose another control or another initial condition $X^w_0$. We can leverage this to counteract mode collapse by optimizing the loss in (8) along (sub-)trajectories where $P_{X^u}$ has low probability, see Appendix A.7.

Next, note that the log-variance loss in (8) does not require the derivative of the process $X^w$ w.r.t. the control $w$ (which, for the case $w = u$, is implemented by detaching or stopping the gradient, see Appendix A.6). While we still need to simulate the process $X^w$, we can rely on any (black-box) SDE solver and do not need to track the computation of $X^w$ for automatic differentiation. This implies that the log-variance loss does not require derivatives of the unnormalized target density $\rho$, which is crucial for problems where the target is only available as a black box. (While, by default, the samplers presented in the following use $\nabla \log \rho$ in their parametrization of the control $u$, we present promising results for the derivative-free regime in Appendix A.7.) In contrast, the KL-based loss in (7) demands to differentiate $X^u$ w.r.t. $u$, requiring to differentiate through the SDE solver and resulting in higher computational costs.

Particularly interesting is the following property, sometimes referred to as sticking-the-landing (Roeder et al., 2017). It states that the gradients of the log-variance loss have zero variance at the optimal solution. This property does, in general, not hold for the KL-based loss, such that variants of gradient descent might oscillate around the optimum.

Proposition 2.5 (Robustness at the solution). Let $\widehat{L}_{\mathrm{LV}}$ be the Monte Carlo estimator of the log-variance loss in (8) and let the controls $u = u_\theta$ and $v = v_\gamma$ be parametrized by $\theta$ and $\gamma$. The variances of the respective derivatives vanish at the optimal solution $(u^*, v^*) = (u_{\theta^*}, v_{\gamma^*})$, i.e.,

$$\mathbb{V}\left[\nabla_\theta \widehat{L}^w_{\mathrm{LV}}(u_{\theta^*}, v_{\gamma^*})\right] = 0 \quad \text{and} \quad \mathbb{V}\left[\nabla_\gamma \widehat{L}^w_{\mathrm{LV}}(u_{\theta^*}, v_{\gamma^*})\right] = 0, \quad \text{for all } w \in \mathcal{U}.$$

For the estimator $\widehat{L}_{\mathrm{KL}}$ of the KL-based loss (7), the variances do not vanish.

Proof. The proof is based on a technical calculation and Proposition 2.3, see Appendix A.2.

For the case $w = u$, we can further interpret the log-variance loss as a control variate version of the KL-based loss, see Remark A.2 in the appendix. We can empirically observe the variance reduction for the loss and its gradient in Figure 5 in the appendix.
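In contrast to the KL sketch above, the log-variance loss (8) can be estimated without differentiating through the SDE solver. The following sketch (again our own illustration under the same assumptions) freezes the rollout of $X^w$ with $w = u$ detached and only differentiates the integrands.

```python
import torch

def log_variance_loss(u, v, mu, sigma, div_sv_mu, log_rho, log_p_prior, x0,
                      T=1.0, n_steps=200):
    """Monte Carlo estimate of the log-variance loss (8) with reference w = u (detached)."""
    dt = T / n_steps
    xs, noises = [x0], []
    with torch.no_grad():  # black-box rollout: no differentiation through the SDE solver
        x = x0
        for n in range(n_steps):
            s = n * dt
            dW = dt**0.5 * torch.randn_like(x)
            x = x + (mu(x, s) + sigma(s) * u(x, s)) * dt + sigma(s) * dW
            xs.append(x)
            noises.append(dW)
    # Re-evaluate the integrands with gradients enabled along the frozen trajectory
    rnd = log_p_prior(x0) - log_rho(xs[-1])  # boundary term B(X^w)
    for n in range(n_steps):
        s, x, dW = n * dt, xs[n], noises[n]
        us, vs = u(x, s), v(x, s)
        w = us.detach()  # reference control w = u with stopped gradient
        f = (us + vs) * (w + 0.5 * (vs - us))  # f_Bridge without the divergence term
        rnd = rnd + (f.sum(-1) + div_sv_mu(x, s)) * dt + ((us + vs) * dW).sum(-1)
    return rnd.var()  # variance over the batch; log Z can be omitted (shift-invariance)
```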
3 CONNECTIONS AND EQUIVALENCES OF DIFFUSION-BASED SAMPLING APPROACHES

In general, there are infinitely many solutions to Problem 2.2 and, in particular, to our objectives in (7) and (8). In fact, Girsanov's theorem shows that the objectives only enforce Nelson's identity (Nelson, 1967), i.e.,

$$u + v = \sigma^\top \nabla \log p_{X^u} = \sigma^\top \nabla \log \overleftarrow{p}_{Y^v}, \tag{9}$$

see also the proof of Proposition 2.3. In this section, we show how our setting generalizes existing diffusion-based sampling approaches, which in turn ensure unique solutions to Problem 2.2. Moreover, with our framework, we can readily derive the corresponding versions of the log-variance loss (8). We refer to Appendix A.3 for a high-level overview of previous diffusion-based sampling methods.

3.1 SCHRÖDINGER BRIDGE PROBLEM (SB)

Out of all solutions $u$ fulfilling (9), the Schrödinger bridge problem considers the solution $u^*$ that minimizes the KL divergence $D_{\mathrm{KL}}(P_{X^{u^*}} | P_{X^r})$ to a given reference process $X^r$, defined as in (2) with $u$ replaced by $r \in \mathcal{U}$, see Appendix A.4 for further details. Traditionally, $r = 0$, i.e., the uncontrolled process $X^0$, is chosen. Defining

$$f^{\mathrm{ref}}_{u,r,w} := (u - r) \cdot \left(w - \frac{u + r}{2}\right), \tag{10}$$

Girsanov's theorem shows that

$$\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{X^r}}(X^w) = \exp\left(\left(R_{f^{\mathrm{ref}}_{u,r,w}} + S_{u-r}\right)(X^w)\right), \quad \text{which implies that} \quad D_{\mathrm{KL}}\left(P_{X^u} | P_{X^r}\right) = \mathbb{E}\left[R_{f^{\mathrm{ref}}_{u,r,u}}(X^u)\right], \tag{11}$$

see, e.g., Nüsken & Richter (2021, Lemma A.1) and the proof of Proposition 2.3. The SB objective can thus be written as

$$\min_{u \in \mathcal{U}} \mathbb{E}\left[R_{f^{\mathrm{ref}}_{u,r,u}}(X^u) \,\middle|\, X^u_T \sim p_{\mathrm{target}}\right], \tag{12}$$

see De Bortoli et al. (2021); Caluya & Halder (2021); Pavon & Wakolbinger (1991); Benamou & Brenier (2000); Chen et al. (2021b); Bernton et al. (2019). We note that the above can also be interpreted as an entropy-regularized optimal transport problem (Léonard, 2014). The entropy constraint in (11) could now be combined with our objective in (5) by considering, for instance,

$$\min_{u, v \in \mathcal{U} \times \mathcal{U}} \left\{ \mathbb{E}\left[R_{f^{\mathrm{ref}}_{u,r,u}}(X^u)\right] + \lambda D\left(P_{X^u} \middle| P_{\overleftarrow{Y}^v}\right) \right\},$$

where $\lambda \in (0, \infty)$ is a sufficiently large Lagrange multiplier. In Appendix A.4 we show how the SB problem (12) can be reformulated as a system of coupled PDEs or BSDEs, which can alternatively be used to regularize Problem 2.2, see also Liu et al. (2022); Koshizuka & Sato (2022). Interestingly, the BSDE system recovers our KL-based objective in (7), as originally derived in Chen et al. (2021a).

Note that via Nelson's identity (9), an optimal solution $u^*$ to the SB problem uniquely defines an optimal control $v^*$ and vice versa. For special cases of SBs, we can calculate such $v^*$ or an approximation $\widetilde{v} \approx v^*$. Fixing $v = \widetilde{v}$ in (5) and only optimizing for $u$ appearing in the generative process (2) then allows us to attain unique solutions to (an approximation of) Problem 2.2. We note that the approximation $\widetilde{v} \approx v^*$ incurs an irreducible loss given by

$$\frac{\mathrm{d}P_{X^{u^*}}}{\mathrm{d}P_{\overleftarrow{Y}^{\widetilde{v}}}}(X^w) = \frac{\mathrm{d}P_{\overleftarrow{Y}^{v^*}}}{\mathrm{d}P_{\overleftarrow{Y}^{\widetilde{v}}}}(X^w), \tag{13}$$

thus requiring an informed choice of $\widetilde{v}$ and $p_{\mathrm{prior}}$, such that $\overleftarrow{Y}^{\widetilde{v}} \approx \overleftarrow{Y}^{v^*}$. We will consider two such settings in the following sections.

3.2 DIFFUSION-BASED GENERATIVE MODELING (DIS)

We may set $v := 0$, which can be interpreted as an SB with $u^* = r = \sigma^\top \nabla \log \overleftarrow{p}_{Y^0}$ and $p_{\mathrm{prior}} = p_{Y^0_T}$, such that the entropy constraint (11) can be minimized to zero. Note, though, that this only leads to feasible sampling approaches if the functions $\mu$ and $\sigma$ in the SDEs are chosen such that the distribution $p_{Y^0_T}$ is (approximately) known and such that we can easily sample from it. In practice, one chooses functions $\mu$ and $\sigma$ such that $p_{Y^0_T} \approx p_{\mathrm{prior}} := \mathcal{N}(0, \nu^2 I)$, see Appendix A.6. Related approaches are often called diffusion-based generative modeling or denoising diffusion probabilistic modeling since the (optimally controlled) generative process $X^{u^*}$ can be understood as the time-reversal of the process $Y^0$ that moves samples from the target density to Gaussian noise (Ho et al., 2020; Pavon, 1989; Huang et al., 2021; Song et al., 2020). Let us recall the notation from Proposition 2.3 and define

$$f^{\mathrm{DIS}}_{u,w} := f^{\mathrm{Bridge}}_{u,0,w} = u \cdot w - \frac{\|u\|^2}{2} - \nabla \cdot \mu.$$
Setting $v = 0$ in (7), we readily get the loss

$$L_{\mathrm{KL}}(u) = \mathbb{E}\left[\left(R_{f^{\mathrm{DIS}}_{u,u}} + B\right)(X^u)\right],$$

which corresponds to the Time-Reversed Diffusion Sampler (DIS) derived in Berner et al. (2024). Analogously, our path space perspective and (8) yield the corresponding log-variance loss

$$L^w_{\mathrm{LV}}(u) = \mathbb{V}\left[\left(R_{f^{\mathrm{DIS}}_{u,w}} + S_u + B\right)(X^w)\right]. \tag{14}$$

3.3 TIME-REVERSAL OF REFERENCE PROCESSES (PIS & DDS)

In general, we may also set $v := \sigma^\top \nabla \log p_{X^r} - r$. Via Lemma 2.1 this implies that $P_{X^r} = P_{\overleftarrow{Y}^{v,\mathrm{ref}}}$, where $Y^{v,\mathrm{ref}}$ is the process $Y^v$ as in (3), however, with $p_{\mathrm{target}}$ replaced by $p_{\mathrm{ref}} := p_{X^r_T}$, i.e.,

$$\mathrm{d}Y^{v,\mathrm{ref}}_s = (-\overleftarrow{\mu} + \overleftarrow{\sigma v})(Y^{v,\mathrm{ref}}_s, s) \, \mathrm{d}s + \overleftarrow{\sigma}(s) \, \mathrm{d}W_s, \quad Y^{v,\mathrm{ref}}_0 \sim p_{\mathrm{ref}}.$$

In other words, $\overleftarrow{Y}^{v,\mathrm{ref}}$ is the time-reversal of the reference process $X^r$. Using (6) with $p_{\mathrm{ref}}$ instead of $p_{\mathrm{target}} = \frac{\rho}{Z}$, we thus obtain that

$$1 = \frac{\mathrm{d}P_{X^r}}{\mathrm{d}P_{\overleftarrow{Y}^{v,\mathrm{ref}}}}(X^w) = \frac{p_{\mathrm{prior}}(X^w_0)}{p_{\mathrm{ref}}(X^w_T)} \exp\left(\left(R_{f^{\mathrm{Bridge}}_{r,v,w}} + S_{r+v}\right)(X^w)\right). \tag{15}$$

This identity leads to the following alternative representation of Proposition 2.3.

Lemma 3.1 (Likelihood w.r.t. reference process). Assuming $P_{X^r} = P_{\overleftarrow{Y}^{v,\mathrm{ref}}}$, it holds that

$$\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{\overleftarrow{Y}^v}}(X^w) = Z \exp\left(\left(R_{f^{\mathrm{ref}}_{u,r,w}} + S_{u-r} + B^{\mathrm{ref}}\right)(X^w)\right),$$

where $f^{\mathrm{ref}}_{u,r,w}$ is defined as in (10) and $B^{\mathrm{ref}}(X^w) := \log \frac{p_{\mathrm{ref}}(X^w_T)}{\rho(X^w_T)}$.

Proof. The result follows from dividing $\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{\overleftarrow{Y}^v}}(X^w)$ in (6) by $\frac{\mathrm{d}P_{X^r}}{\mathrm{d}P_{\overleftarrow{Y}^{v,\mathrm{ref}}}}(X^w)$ in (15). We also refer to Remark A.3 for an alternative derivation that does not rely on the concept of time-reversals.

Table 1: Comparison of the objectives with $R_f$, $S_u$, $B$, $B^{\mathrm{ref}}$, $f^{\mathrm{Bridge}}_{u,v,w}$, $f^{\mathrm{ref}}_{u,r,w}$ as defined in the text.

| Method | $L_{\mathrm{KL}}$ | $L^w_{\mathrm{LV}}$ (ours) | $p_{\mathrm{prior}}$ | $v$ | $r$ |
|---|---|---|---|---|---|
| Bridge | $\mathbb{E}[(R_{f^{\mathrm{Bridge}}_{u,v,u}} + B)(X^u)]$ | $\mathbb{V}[(R_{f^{\mathrm{Bridge}}_{u,v,w}} + S_{u+v} + B)(X^w)]$ | arbitrary | learned | — |
| DIS | $\mathbb{E}[(R_{f^{\mathrm{Bridge}}_{u,0,u}} + B)(X^u)]$ | $\mathbb{V}[(R_{f^{\mathrm{Bridge}}_{u,0,w}} + S_u + B)(X^w)]$ | $p_{Y^0_T}$ | $0$ | — |
| PIS | $\mathbb{E}[(R_{f^{\mathrm{ref}}_{u,0,u}} + B^{\mathrm{ref}})(X^u)]$ | $\mathbb{V}[(R_{f^{\mathrm{ref}}_{u,0,w}} + S_u + B^{\mathrm{ref}})(X^w)]$ | $\delta_{x_0}$ | $\sigma^\top \nabla \log p_{X^0}$ | $0$ |
| DDS | $\mathbb{E}[(R_{f^{\mathrm{ref}}_{u,r,u}} + B^{\mathrm{ref}})(X^u)]$ | $\mathbb{V}[(R_{f^{\mathrm{ref}}_{u,r,w}} + S_{u-r} + B^{\mathrm{ref}})(X^w)]$ | $p_{\overleftarrow{Y}^{0,\mathrm{ref}}_T}$ | $0$ | $\sigma^\top \nabla \log p_{\overleftarrow{Y}^{0,\mathrm{ref}}}$ |

Note that computing the Radon-Nikodym derivative in Lemma 3.1 requires choosing $r$, $p_{\mathrm{prior}}$, $\mu$, and $\sigma$ such that $p_{\mathrm{ref}} = p_{X^r_T}$ is tractable (in general, it suffices to be able to compute $p_{X^r_T}$ up to its normalizing constant). For suitable choices of $r$ (see below), one can, for instance, use the SDEs with tractable densities stated in Appendix A.5 with $p_{\mathrm{prior}} = \delta_{x_0}$, $p_{\mathrm{prior}} = \mathcal{N}(0, \nu^2 I)$, or a mixture of such distributions. Recalling (13) and the choice $v := \sigma^\top \nabla \log p_{X^r} - r$, we also need to guarantee that $\overleftarrow{Y}^v \approx \overleftarrow{Y}^{v^*}$. Let us outline two such cases in the following.

PIS: We first consider the case $r := 0$. Lemma 3.1 and taking $D = D_{\mathrm{KL}}$ in Problem 2.2 then yields

$$L_{\mathrm{KL}}(u) = D_{\mathrm{KL}}\left(P_{X^u} \middle| P_{\overleftarrow{Y}^v}\right) - \log Z = \mathbb{E}\left[\left(R_{f^{\mathrm{ref}}_{u,0,u}} + B^{\mathrm{ref}}\right)(X^u)\right].$$

This objective has previously been considered by Tzen & Raginsky (2019); Dai Pra (1991), and corresponding numerical algorithms, referred to as Path Integral Sampler (PIS) in Zhang & Chen (2022), have been independently presented in Richter (2021); Zhang & Chen (2022); Vargas et al. (2023b). Choosing $D = D_{\mathrm{LV}}$, we get the corresponding log-variance loss

$$L^w_{\mathrm{LV}}(u) = \mathbb{V}\left[\left(R_{f^{\mathrm{ref}}_{u,0,w}} + S_u + B^{\mathrm{ref}}\right)(X^w)\right],$$

which has already been stated by Richter (2021, Example 7.1). Typically, the objectives are used with $p_{\mathrm{prior}} := \delta_{x_0}$, since Doob's $h$-transform guarantees that $v = v^*$, i.e., we can solve the SB exactly, see Rogers & Williams (2000) and also Appendix A.4.1. In this special case, the SB is often referred to as a Schrödinger half-bridge.
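For PIS with $\mu = 0$, $r = 0$, and $p_{\mathrm{prior}} = \delta_{x_0}$, the reference process is a (scaled) Brownian motion started at $x_0$, so $p_{\mathrm{ref}} = p_{X^0_T}$ is an explicit Gaussian and the terminal cost $B^{\mathrm{ref}}$ is available in closed form. A minimal sketch, assuming a constant diffusion coefficient $\sigma(s) = \sigma_0$ (our own illustration):

```python
import torch

def pis_terminal_cost(x_T, log_rho, x0, sigma0=1.0, T=1.0):
    """B_ref(X^w) = log(p_ref(X^w_T) / rho(X^w_T)) for PIS with mu = 0, r = 0.

    With constant sigma(s) = sigma0 and X^0_0 = x0, the uncontrolled process is
    Brownian motion, so p_ref = N(x0, sigma0^2 * T * I) in closed form.
    """
    d = x_T.shape[-1]
    var = sigma0**2 * T
    log_p_ref = -0.5 * ((x_T - x0) ** 2).sum(-1) / var - 0.5 * d * torch.log(
        torch.tensor(2 * torch.pi * var)
    )
    return log_p_ref - log_rho(x_T)
```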
DDS: Next, we consider the choices $r := \sigma^\top \nabla \log p_{\overleftarrow{Y}^{0,\mathrm{ref}}}$, $v := 0$, and $p_{\mathrm{prior}} := p_{\overleftarrow{Y}^{0,\mathrm{ref}}_T}$, which in turn yields a special case of the setting from Section 3.2. Using Lemma 3.1, we obtain the objective

$$L_{\mathrm{KL}}(u) = \mathbb{E}\left[\left(R_{f^{\mathrm{ref}}_{u,r,u}} + B^{\mathrm{ref}}\right)(X^u)\right].$$

This corresponds to the Denoising Diffusion Sampler (DDS) objective stated by Vargas et al. (2023a) when choosing $\mu$ and $\sigma$ such that $Y^0$ is a VP SDE, see Appendix A.5. Choosing the invariant distribution $p_{\mathrm{ref}} := \mathcal{N}(0, \nu^2 I)$ of the VP SDE, see (26) in the appendix, we have that

$$p_{X^r}(\cdot, t) = p_{\overleftarrow{Y}^{0,\mathrm{ref}}}(\cdot, t) = p_{\mathrm{ref}} = p_{\mathrm{prior}} \quad \text{for } t \in [0, T], \quad \text{and, in particular,} \quad r(x, t) = -\frac{\sigma(t)^\top x}{\nu^2}.$$

Finally, with our general framework, the corresponding log-variance loss can now readily be computed as

$$L^w_{\mathrm{LV}}(u) = \mathbb{V}\left[\left(R_{f^{\mathrm{ref}}_{u,r,w}} + S_{u-r} + B^{\mathrm{ref}}\right)(X^w)\right].$$

We refer to Table 1 for a comparison of all our different objectives.

4 NUMERICAL EXPERIMENTS

In this section, we compare the KL-based loss with the log-variance loss on the three different approaches introduced in Sections 2.3, 3.2, and 3.3, i.e., the general bridge, DIS, and PIS. As DDS can be seen as a special case of DIS (both with $v = 0$, see also Berner et al., 2024, Appendix A.10.1), we do not consider it separately. We demonstrate that the appealing properties of the log-variance loss can indeed lead to remarkable performance improvements for all considered approaches. Note that we always compare the same settings, in particular, the same number of target evaluations, for both the log-variance and KL-based losses and use sufficiently many gradient steps to reach convergence. See Appendix A.6 and Algorithm 1 for computational details; the repository can be found at https://github.com/juliusberner/sde_sampler. Still, we observe that the qualitative differences between the two losses are consistent across various hyperparameter settings. We refer to Appendix A.8 for additional experiments.

Figure 2: KDE plots of (1) samples from the groundtruth distribution, (2 & 3) PIS with KL divergence and log-variance loss, and (4 & 5) DIS with KL divergence and log-variance loss for the GMM problem (from left to right). One can see that the log-variance loss does not suffer from mode collapse such as the reverse KL divergence, which only recovers the mode of $p_{\mathrm{prior}} = \mathcal{N}(0, I)$.

Figure 3: Marginals of the first coordinate of samples from PIS and DIS (left and right) for the DW problem with $d = 5$, $m = 5$, $\delta = 4$. Again, one observes the mode coverage of the log-variance loss as compared to the reverse KL divergence. Similar behavior can also be observed for the other marginals (see Figure 6) and higher-dimensional settings (see Figure 7 for an example in $d = 1000$).

4.1 BENCHMARK PROBLEMS

We evaluate the different methods on the following three numerical benchmark examples.

Gaussian mixture model (GMM): We consider $\rho(x) = \frac{1}{m} \sum_{i=1}^m \mathcal{N}(x; \mu_i, \Sigma_i)$ and choose $m = 9$, $\Sigma_i = 0.3\, I$, $(\mu_i)_{i=1}^9 = \{-5, 0, 5\} \times \{-5, 0, 5\} \subset \mathbb{R}^2$ to obtain well-separated modes, see Figure 2.

Funnel: The 10-dimensional Funnel distribution (Neal, 2003) is a challenging example often used to test MCMC methods. It is given by the density $\rho(x) = p_{\mathrm{target}}(x) = \mathcal{N}(x_1; 0, \eta^2) \prod_{i=2}^{10} \mathcal{N}(x_i; 0, e^{x_1})$ for $x = (x_i)_{i=1}^{10} \in \mathbb{R}^{10}$ with $\eta = 3$.

Double well (DW): A typical problem in molecular dynamics considers sampling from the stationary distribution of a Langevin dynamics. In our example, we consider a $d$-dimensional double well potential, corresponding to the (unnormalized) density

$$\rho(x) = \exp\left(-\sum_{i=1}^m \left(x_i^2 - \delta\right)^2 - \frac{1}{2} \sum_{i=m+1}^d x_i^2\right)$$

with $m \in \mathbb{N}$ combined double wells (i.e., $2^m$ modes) and a separation parameter $\delta \in (0, \infty)$, see also Wu et al. (2020) and Figure 3. We choose a large value of $\delta$ to make sampling particularly challenging due to high energy barriers. Since $\rho$ factorizes over the dimensions, we obtain reference solutions by numerical integration and ground truth samples using rejection sampling with a Gaussian mixture proposal distribution, see also Midgley et al. (2023).
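The three benchmark targets admit compact (unnormalized) log-density implementations; the following sketch is one way to write them (our own code, dropping additive constants that are irrelevant for sampling):

```python
import torch

def log_rho_gmm(x, means, scale=0.3):
    """Unnormalized GMM log-density: log sum_i N(x; mu_i, scale * I) (constants dropped)."""
    sq = ((x[:, None, :] - means[None]) ** 2).sum(-1)  # (batch, modes)
    return torch.logsumexp(-0.5 * sq / scale, dim=1)

def log_rho_funnel(x, eta=3.0):
    """Funnel: N(x_1; 0, eta^2) * prod_i N(x_i; 0, exp(x_1)), up to constants."""
    x1, rest = x[:, 0], x[:, 1:]
    d = x.shape[1]
    return (-0.5 * x1**2 / eta**2
            - 0.5 * (rest**2 * torch.exp(-x1[:, None])).sum(-1)
            - 0.5 * (d - 1) * x1)  # log-det term from the x_1-dependent variances

def log_rho_double_well(x, m=5, delta=4.0):
    """Double well: -(sum_{i<=m} (x_i^2 - delta)^2) - 0.5 * sum_{i>m} x_i^2."""
    return -((x[:, :m] ** 2 - delta) ** 2).sum(-1) - 0.5 * (x[:, m:] ** 2).sum(-1)
```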
4.2 RESULTS

Let us start with the bridge approach and the general losses in (7) and (8). Table 5 (in the appendix) shows that the log-variance loss can improve our considered metrics significantly. However, the general bridge framework still suffers from reduced efficiency and numerical instabilities. For high-dimensional problems, it can be prohibitive to compute the divergence of $v$ using automatic differentiation, and relying on Hutchinson's trace estimator introduces additional variance. We refer to Remark A.1 for further discussion. The instabilities might be rooted in the non-uniqueness of the optimal control (which follows from our analysis, cf. Section 3). Furthermore, such issues are also commonly observed in the context of SBs (De Bortoli et al., 2021; Chen et al., 2021a; Fernandes et al., 2021), where two controls need to be optimized. Therefore, for the more challenging problems, we focus on DIS and PIS, which do not incur the described pathologies.

Table 2: PIS and DIS metrics for the benchmark problems of various dimensions $d$. We report the median over five independent runs, see Figure 9 for a corresponding boxplot. Specifically, we report errors for estimating the log-normalizing constant $\log Z$ as well as the standard deviations (std) of the marginals. Furthermore, we report the normalized effective sample size (ESS) and the Sinkhorn distance $\mathcal{W}^2_\gamma$ (Cuturi, 2013), see Appendix A.6 for details. The arrows ↑ and ↓ indicate whether we want to maximize or minimize a given metric.

| Problem | Method | Loss | $\Delta \log Z$ ↓ | $\mathcal{W}^2_\gamma$ ↓ | ESS ↑ | $\Delta$std ↓ |
|---|---|---|---|---|---|---|
| GMM ($d = 2$) | PIS | KL (Zhang & Chen, 2022) | 1.094 | 0.467 | 0.0051 | 1.937 |
| | | LV (ours) | 0.046 | 0.020 | 0.9093 | 0.023 |
| | DIS | KL (Berner et al., 2024) | 1.551 | 0.064 | 0.0226 | 2.522 |
| | | LV (ours) | 0.056 | 0.020 | 0.8660 | 0.004 |
| Funnel ($d = 10$) | PIS | KL (Zhang & Chen, 2022) | 0.288 | 5.639 | 0.1333 | 6.921 |
| | | LV (ours) | 0.277 | 5.593 | 0.0746 | 6.850 |
| | DIS | KL (Berner et al., 2024) | 0.433 | 5.120 | 0.1383 | 5.254 |
| | | LV (ours) | 0.430 | 5.062 | 0.2261 | 5.220 |
| DW ($d = 5$, $m = 5$, $\delta = 4$) | PIS | KL (Zhang & Chen, 2022) | 3.567 | 1.699 | 0.0004 | 1.409 |
| | | LV (ours) | 0.214 | 0.121 | 0.6744 | 0.001 |
| | DIS | KL (Berner et al., 2024) | 1.462 | 1.175 | 0.0012 | 0.431 |
| | | LV (ours) | 0.375 | 0.120 | 0.4519 | 0.001 |
| DW ($d = 50$, $m = 5$, $\delta = 2$) | PIS | KL (Zhang & Chen, 2022) | 0.101 | 6.821 | 0.8172 | 0.001 |
| | | LV (ours) | 0.087 | 6.823 | 0.8453 | 0.000 |
| | DIS | KL (Berner et al., 2024) | 1.785 | 6.854 | 0.0225 | 0.009 |
| | | LV (ours) | 1.783 | 6.855 | 0.0227 | 0.009 |

We observe that the log-variance loss significantly improves both DIS and PIS across our considered benchmark problems and metrics, see Table 2. The improvements are especially remarkable considering that we only replaced the KL-based loss $L_{\mathrm{KL}}$ by the log-variance loss $L_{\mathrm{LV}}$ without tuning the hyperparameters for the latter loss. In the few cases where the KL divergence performs better, the difference seems rather insignificant. In particular, Figures 2 and 3 show that the log-variance loss successfully counteracts mode collapse, leading to quite substantial improvements. The benefit of the log-variance loss can also be observed for the benchmark posed in Wu et al. (2020), which aims to sample a target distribution resembling a picture of a Labrador, see Figure 8 in the appendix.
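For reference, the reported $\log Z$ and ESS metrics can be computed from the per-trajectory negative costs, i.e., the log importance weights $\log w_k = -\left(R_f + S + B\right)(X^{(k)})$, using the standard importance sampling estimators (a hedged sketch of the usual formulas, not an excerpt from the paper's evaluation code):

```python
import torch

def log_z_and_ess(neg_costs):
    """Importance-sampling estimates from log-weights log w_k = -(R_f + S + B)(X^{(k)}).

    log Z_hat = log mean_k w_k; normalized ESS = (sum_k w_k)^2 / (K * sum_k w_k^2).
    """
    K = neg_costs.shape[0]
    log_z_hat = torch.logsumexp(neg_costs, dim=0) - torch.log(torch.tensor(float(K)))
    log_w = neg_costs - neg_costs.max()  # stabilize before exponentiating
    w = log_w.exp()
    ess = w.sum() ** 2 / (K * (w**2).sum())
    return log_z_hat, ess
```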
In Appendix A.8, we present results for further (high-dimensional) targets, showing that diffusion-based samplers with the log-variance loss are competitive with other state-of-the-art sampling methods.

5 CONCLUSION

In this work, we provide a unifying perspective on diffusion-based generative modeling that is based on path space measures of time-reversed diffusion processes and that, for the first time, connects methods such as SB, DIS, PIS, and DDS. Our novel framework also allows us to consider arbitrary divergences between path measures as objectives for the corresponding task of interest. While the KL divergence yields known methods, we find that choosing the log-variance divergence leads to novel algorithms that are particularly useful for the task of sampling from (unnormalized) densities. Specifically, this divergence exhibits beneficial properties, such as lower variance, computational efficiency, and exploration-exploitation trade-offs. We demonstrate in multiple numerical examples that the log-variance loss greatly improves sampling quality across a range of metrics. We believe that problem- and approach-specific finetuning might further enhance the performance of the log-variance loss, thereby paving the way for competitive diffusion-based sampling approaches.

ACKNOWLEDGMENTS

We thank Guan-Horng Liu for many useful discussions. The research of L.R. was funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 "Scaling Cascades in Complex Systems" (project A05, project number 235221301). J.B. acknowledges support from the Wally Baer and Jeri Weiss Postdoctoral Fellowship.

REFERENCES

Brian D. O. Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982.

Michael Arbel, Alex Matthews, and Arnaud Doucet. Annealed flow transport Monte Carlo. In International Conference on Machine Learning, pp. 318-330. PMLR, 2021.

Ludwig Arnold. Stochastic Differential Equations: Theory and Applications. A Wiley-Interscience publication. Wiley, 1974.

Paolo Baldi. Stochastic Calculus: An Introduction Through Theory and Exercises. Universitext. Springer International Publishing, 2017.

Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375-393, 2000.

Julius Berner, Lorenz Richter, and Karen Ullrich. An optimal control perspective on diffusion-based generative modeling. Transactions on Machine Learning Research, 2024.

Espen Bernton, Jeremy Heng, Arnaud Doucet, and Pierre E. Jacob. Schrödinger bridge samplers. arXiv preprint arXiv:1912.13170, 2019.

Kenneth F. Caluya and Abhishek Halder. Wasserstein proximal algorithms for the Schrödinger bridge problem: Density control with nonlinear drift. IEEE Transactions on Automatic Control, 67(3):1163-1178, 2021.

Tianrong Chen, Guan-Horng Liu, and Evangelos A. Theodorou. Likelihood training of Schrödinger bridge using forward-backward SDEs theory. arXiv preprint arXiv:2110.11291, 2021a.

Yongxin Chen, Tryphon T. Georgiou, and Michele Pavon. Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge. SIAM Review, 63(2):249-313, 2021b.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, pp. 2292-2300, 2013.
Paolo Dai Pra. A stochastic control approach to reciprocal diffusion processes. Applied Mathematics and Optimization, 23(1):313-329, 1991.

Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021.

Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411-436, 2006.

Arnaud Doucet, Adam M. Johansen, et al. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12(656-704):3, 2009.

David Lopes Fernandes, Francisco Vargas, Carl Henrik Ek, and Neill D. F. Campbell. Shooting Schrödinger's cat. In Fourth Symposium on Advances in Approximate Bayesian Inference, 2021.

Wendell H. Fleming and Halil Mete Soner. Controlled Markov Processes and Viscosity Solutions, volume 25. Springer Science & Business Media, 2006.

Hans Föllmer. Random fields and diffusion processes. In École d'Été de Probabilités de Saint-Flour XV-XVII, 1985-87, pp. 101-203. Springer, 1988.

Carsten Hartmann, Lorenz Richter, Christof Schütte, and Wei Zhang. Variational characterization of free energy: Theory and algorithms. Entropy, 19(11):626, 2017.

Ulrich G. Haussmann and Étienne Pardoux. Time reversal of diffusions. The Annals of Probability, pp. 1188-1205, 1986.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.

Lars Holdijk, Yuanqi Du, Ferry Hooft, Priyank Jaini, Bernd Ensing, and Max Welling. Path integral stochastic optimal control for sampling transition paths. arXiv preprint arXiv:2207.02149, 2022.

Chin-Wei Huang, Jae Hyun Lim, and Aaron C. Courville. A variational perspective on diffusion-based generative models and score matching. Advances in Neural Information Processing Systems, 34, 2021.

Robert E. Kass, Bradley P. Carlin, Andrew Gelman, and Radford M. Neal. Markov chain Monte Carlo in practice: a roundtable discussion. The American Statistician, 52(2):93-100, 1998.

Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. Advances in Neural Information Processing Systems, 34:21696-21707, 2021.

Takeshi Koshizuka and Issei Sato. Neural Lagrangian Schrödinger bridge. arXiv preprint arXiv:2204.04853, 2022.

Hiroshi Kunita. Stochastic Flows and Jump-Diffusions. Springer, 2019.

Christian Léonard. Some properties of path measures. In Séminaire de Probabilités XLVI, pp. 207-230. Springer, 2014.

Guan-Horng Liu, Tianrong Chen, Oswin So, and Evangelos A. Theodorou. Deep generalized Schrödinger bridge. arXiv preprint arXiv:2209.09893, 2022.

Jun S. Liu. Monte Carlo Strategies in Scientific Computing, volume 10. Springer, 2001.

Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in GFlowNets. Advances in Neural Information Processing Systems, 35:5955-5967, 2022a.

Nikolay Malkin, Salem Lahlou, Tristan Deleu, Xu Ji, Edward Hu, Katie Everett, Dinghuai Zhang, and Yoshua Bengio. GFlowNets and variational inference. arXiv preprint arXiv:2210.00580, 2022b.
Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, and José Miguel Hernández-Lobato. Flow annealed importance sampling bootstrap. In The Eleventh International Conference on Learning Representations, 2023.

Tom Minka et al. Divergence measures and message passing. Technical report, Citeseer, 2005.

Radford M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125-139, 2001.

Radford M. Neal. Slice sampling. The Annals of Statistics, 31(3):705-767, 2003.

Edward Nelson. Dynamical Theories of Brownian Motion. Princeton University Press, Princeton, NJ, 1967.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162-8171. PMLR, 2021.

Nikolas Nüsken and Lorenz Richter. Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Partial Differential Equations and Applications, 2(4):1-48, 2021.

Bernt Øksendal. Stochastic Differential Equations. Springer, 2003.

George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1-64, 2021.

Michele Pavon. Stochastic control and nonequilibrium thermodynamical systems. Applied Mathematics and Optimization, 19(1):187-202, 1989.

Michele Pavon. On local entropy, stochastic control and deep neural networks. arXiv preprint arXiv:2204.13049, 2022.

Michele Pavon and Anton Wakolbinger. On free energy, stochastic control, and Schrödinger processes. In Modeling, Estimation and Control of Systems with Uncertainty, pp. 334-348. Springer, 1991.

Huyên Pham. Continuous-time Stochastic Control and Optimization with Financial Applications. Stochastic Modelling and Applied Probability. Springer Berlin Heidelberg, 2009.

Lorenz Richter. Solving high-dimensional PDEs, approximation of path space measures and importance sampling of diffusions. PhD thesis, BTU Cottbus-Senftenberg, 2021.

Lorenz Richter and Julius Berner. Robust SDE-based variational formulations for solving linear PDEs via deep learning. In International Conference on Machine Learning, pp. 18649-18666. PMLR, 2022.

Lorenz Richter, Ayman Boustati, Nikolas Nüsken, Francisco Ruiz, and Ömer Deniz Akyildiz. VarGrad: low-variance gradient estimator for variational inference. Advances in Neural Information Processing Systems, 33:13481-13492, 2020.

Christian P. Robert and George Casella. Monte Carlo Statistical Methods, volume 2. Springer, 1999.

Geoffrey Roeder, Yuhuai Wu, and David K. Duvenaud. Sticking the landing: Simple, lower-variance gradient estimators for variational inference. Advances in Neural Information Processing Systems, 30, 2017.

L. Chris G. Rogers and David Williams. Diffusions, Markov Processes and Martingales: Volume 2, Itô Calculus, volume 2. Cambridge University Press, 2000.

Abul Hasan Siddiqi and Sudarsan Nanda. Functional Analysis with Applications. Springer, 1986.

Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. Advances in Neural Information Processing Systems, 33:12438-12448, 2020.

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020.
Vincent Stimper, Bernhard Schölkopf, and José Miguel Hernández-Lobato. Resampling base distributions of normalizing flows. In International Conference on Artificial Intelligence and Statistics, pp. 4915-4936, 2022.

Gabriel Stoltz, Mathias Rousset, et al. Free Energy Computations: A Mathematical Perspective. World Scientific, 2010.

Belinda Tzen and Maxim Raginsky. Theoretical guarantees for sampling and inference in generative models with latent diffusions. In Conference on Learning Theory, pp. 3084-3114. PMLR, 2019.

Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. Advances in Neural Information Processing Systems, 34:11287-11302, 2021.

Lorenz Vaitl, Ludwig Winkler, Lorenz Richter, and Pan Kessel. Fast and unified path gradient estimators for normalizing flows. In The Twelfth International Conference on Learning Representations, 2024.

Francisco Vargas. Machine-learning approaches for the empirical Schrödinger bridge problem. Technical report, University of Cambridge, Computer Laboratory, 2021.

Francisco Vargas, Will Grathwohl, and Arnaud Doucet. Denoising diffusion samplers. In International Conference on Learning Representations, 2023a.

Francisco Vargas, Andrius Ovsianas, David Fernandes, Mark Girolami, Neil D. Lawrence, and Nikolas Nüsken. Bayesian learning via neural Schrödinger-Föllmer flows. Statistics and Computing, 33(1):1-22, 2023b.

Francisco Vargas, Shreyas Padhy, Denis Blessing, and Nikolas Nüsken. Transport meets variational inference: Controlled Monte Carlo diffusions. In The Twelfth International Conference on Learning Representations, 2024.

Martin J. Wainwright, Michael I. Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.

Hao Wu, Jonas Köhler, and Frank Noé. Stochastic normalizing flows. Advances in Neural Information Processing Systems, 33:5933-5944, 2020.

Dinghuai Zhang, Ricky Tian Qi Chen, Cheng-Hao Liu, Aaron Courville, and Yoshua Bengio. Diffusion generative flow samplers: Improving learning signals through partial trajectory optimization. arXiv preprint arXiv:2310.02679, 2023.

Qinsheng Zhang and Yongxin Chen. Path Integral Sampler: a stochastic control approach for sampling. In International Conference on Learning Representations, 2022.

A.1 ASSUMPTIONS

In our proofs, we assume that the coefficient functions of all appearing SDEs are sufficiently regular such that Novikov's condition is satisfied and such that the SDEs admit unique strong solutions with smooth and strictly positive densities $p_{X_t}$ for $t \in (0, T)$, see, for instance, Arnold (1974); Øksendal (2003); Baldi (2017).

A.2 PROOFS

Proof of Proposition 2.3. Let us define the path space measures $P_{X^{u,x}}$ and $P_{\overleftarrow{Y}^{v,x}}$ as the measures of $X^u$ and $\overleftarrow{Y}^v$ conditioned on $X^u_0 = x$ and $\overleftarrow{Y}^v_0 = x$ with $x \in \mathbb{R}^d$, respectively. We can then compute

$$\log \frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{\overleftarrow{Y}^v}}(X^w) = \log \frac{\mathrm{d}P_{X^{u,x}}}{\mathrm{d}P_{\overleftarrow{Y}^{v,x}}}(X^w) + \log \frac{\mathrm{d}P_{X^u_0}}{\mathrm{d}P_{\overleftarrow{Y}^v_0}}(X^w_0) = \log \frac{\mathrm{d}P_{X^{u,x}}}{\mathrm{d}P_{\overleftarrow{Y}^{v,x}}}(X^w) + \log \frac{p_{\mathrm{prior}}(X^w_0)}{p_{\overleftarrow{Y}^v}(X^w_0, 0)}. \tag{16}$$

We follow Liu et al. (2022) and first note that the time-reversal of the process $Y^v$ defined in (3) is given by

$$\mathrm{d}\overleftarrow{Y}^v_s = \left(\mu + \sigma\sigma^\top \nabla g - \sigma v\right)(\overleftarrow{Y}^v_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s,$$

where we abbreviate $g := \log p_{\overleftarrow{Y}^v}$, see Lemma 2.1. Let us further define the short-hand notations $h := u + v - \sigma^\top \nabla g$ and $b := \mu + \sigma(u - h)$. Then, we can write the SDEs in (2) and (4) as

$$\mathrm{d}X^u_s = (b + \sigma h)(X^u_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s, \qquad \mathrm{d}\overleftarrow{Y}^v_s = b(\overleftarrow{Y}^v_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s.$$
We can now apply Girsanov's theorem (see, e.g., Nüsken & Richter, 2021, Lemma A.1) to rewrite the logarithm of the Radon-Nikodym derivative $\mathcal{R} := \log \frac{\mathrm{d}P_{X^{u,x}}}{\mathrm{d}P_{\overleftarrow{Y}^{v,x}}}(X^w)$ in (16) as

$$\begin{aligned}
\mathcal{R} &= \int_0^T \left(\sigma^{-\top} h\right)(X^w_s, s) \cdot \mathrm{d}X^w_s - \int_0^T \left(\sigma^{-1} b \cdot h\right)(X^w_s, s) \, \mathrm{d}s - \frac{1}{2} \int_0^T \left\|h(X^w_s, s)\right\|^2 \mathrm{d}s \\
&= \int_0^T \left((w - u) \cdot h + \frac{1}{2} \|h\|^2\right)(X^w_s, s) \, \mathrm{d}s + S_h(X^w) \\
&= \int_0^T \left((w - u) \cdot \left(u + v - \sigma^\top \nabla g\right) + \frac{1}{2} \left\|u + v - \sigma^\top \nabla g\right\|^2\right)(X^w_s, s) \, \mathrm{d}s + S_h(X^w) \\
&= R_{f^{\mathrm{Bridge}}_{u,v,w}}(X^w) - \int_0^T \left(\nabla \cdot (\sigma v - \mu) + (v + w) \cdot \sigma^\top \nabla g - \frac{1}{2} \left\|\sigma^\top \nabla g\right\|^2\right)(X^w_s, s) \, \mathrm{d}s + S_h(X^w). \tag{17}
\end{aligned}$$

Further, we may apply Itô's lemma to the function $g$ to get

$$g(X^w_T, T) - g(X^w_0, 0) = \int_0^T \left(\partial_s g + \nabla g \cdot (\mu + \sigma w) + \frac{1}{2} \operatorname{Tr}\left(\sigma\sigma^\top \nabla^2 g\right)\right)(X^w_s, s) \, \mathrm{d}s + S_{\sigma^\top \nabla g}(X^w).$$

Noting that $g = \log p_{\overleftarrow{Y}^v}$ fulfills the Hamilton-Jacobi-Bellman equation (see, e.g., Berner et al., 2024)

$$\partial_s g + \frac{1}{2} \operatorname{Tr}\left(\sigma\sigma^\top \nabla^2 g\right) + (\mu - \sigma v) \cdot \nabla g + \nabla \cdot (\mu - \sigma v) + \frac{1}{2} \left\|\sigma^\top \nabla g\right\|^2 = 0,$$

we obtain

$$g(X^w_T, T) - g(X^w_0, 0) = \int_0^T \left(\nabla \cdot (\sigma v - \mu) + (v + w) \cdot \sigma^\top \nabla g - \frac{1}{2} \left\|\sigma^\top \nabla g\right\|^2\right)(X^w_s, s) \, \mathrm{d}s + S_{\sigma^\top \nabla g}(X^w).$$

Finally, combining this with (16) and (17) and noting that

$$g(X^w_T, T) = \log p_{\overleftarrow{Y}^v}(X^w_T, T) = \log p_{Y^v}(X^w_T, 0) = \log p_{\mathrm{target}}(X^w_T)$$

yields the desired expression.

Remark A.1 (Divergence-free objectives). One can remove the divergence from the Radon-Nikodym derivative (6), and thus from the corresponding losses, by noting the identity

$$\int_0^T \nabla \cdot (\sigma v - \mu)(X^w_s, s) \, \mathrm{d}s = \int_0^T \left(v - \sigma^{-1}\mu\right)(X^w_s, s) \cdot \left(\overleftarrow{\mathrm{d}}W_s - \mathrm{d}W_s\right),$$

where for a suitable function $\phi \in C(\mathbb{R}^d \times [0, T], \mathbb{R}^d)$ the backward integration w.r.t. Brownian motion is defined as

$$\int_0^T \phi(X^w_s, s) \cdot \overleftarrow{\mathrm{d}}W_s := \lim_{\Delta t \to 0} \sum_{n=0}^{N-1} \phi(X^w_{t_{n+1}}, t_{n+1}) \cdot \left(W_{t_{n+1}} - W_{t_n}\right),$$

which, in contrast to the definition of the usual Itô integral,

$$\int_0^T \phi(X^w_s, s) \cdot \mathrm{d}W_s := \lim_{\Delta t \to 0} \sum_{n=0}^{N-1} \phi(X^w_{t_n}, t_n) \cdot \left(W_{t_{n+1}} - W_{t_n}\right),$$

considers the right endpoint when discretizing the integral on a time grid $0 = t_0 < t_1 < \cdots < t_N = T$ with step size $\Delta t := t_{n+1} - t_n$. The above definitions via refined partitions readily yield implementation schemes for both integrals when choosing a fixed step size $\Delta t > 0$. Divergence-free objectives might be particularly beneficial in higher dimensions, where it is typically expensive to compute the divergence using automatic differentiation. For further details, we refer to Vargas et al. (2024) and Kunita (2019).
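Numerically, the two integrals differ only in whether the integrand is evaluated at the left or the right endpoint of each subinterval, as the following minimal sketch illustrates (our own code; the grid, states, and increments are assumed to come from a previously simulated trajectory):

```python
import torch

def ito_integrals(phi, xs, ts, dWs):
    """Forward and backward Ito integrals of phi along a discretized trajectory.

    xs: (n_steps + 1, batch, d) states; ts: (n_steps + 1,) grid; dWs: (n_steps, batch, d).
    """
    fwd = sum((phi(xs[n], ts[n]) * dWs[n]).sum(-1) for n in range(len(dWs)))
    bwd = sum((phi(xs[n + 1], ts[n + 1]) * dWs[n]).sum(-1) for n in range(len(dWs)))
    return fwd, bwd  # their difference estimates the quadratic covariation term
```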
Proof of Proposition 2.5. Let us first recall the notion of Gâteaux derivatives, see Siddiqi & Nanda (1986, Section 5.2). We say that $L\colon \mathcal{U} \times \mathcal{U} \to \mathbb{R}_{\geq 0}$ is Gâteaux differentiable at $u \in \mathcal{U}$ if for all $v, \phi \in \mathcal{U}$ the mapping $\varepsilon \mapsto L(u + \varepsilon\phi, v)$ is differentiable at $\varepsilon = 0$. The Gâteaux derivative of $L$ w.r.t. $u$ in direction $\phi$ is then defined as

$$\frac{\delta}{\delta u} L(u, v; \phi) := \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\bigg|_{\varepsilon = 0} L(u + \varepsilon\phi, v).$$

The derivative of $L$ w.r.t. $v$ is defined analogously. Let now $u = u_\theta$ and $v = v_\gamma$ be parametrized by $\theta \in \mathbb{R}^p$ and $\gamma \in \mathbb{R}^p$ (we only assume that $\theta$ and $\gamma$ are in the same space $\mathbb{R}^p$ for notational simplicity). Relating the Gâteaux derivatives to partial derivatives w.r.t. $\theta$ and $\gamma$, respectively, let us note that we are particularly interested in the directions $\phi = \partial_{\theta_i} u_\theta$ and $\phi = \partial_{\gamma_i} v_\gamma$ for $i \in \{1, \ldots, p\}$. This choice is motivated by the chain rule of the Gâteaux derivative, which, under suitable assumptions, states that

$$\partial_{\theta_i} L(u_\theta, v_\gamma) = \frac{\delta}{\delta u}\bigg|_{u = u_\theta} L(u, v_\gamma; \partial_{\theta_i} u_\theta) \quad \text{and} \quad \partial_{\gamma_i} L(u_\theta, v_\gamma) = \frac{\delta}{\delta v}\bigg|_{v = v_\gamma} L(u_\theta, v; \partial_{\gamma_i} v_\gamma).$$

Analogous to the computations in Nüsken & Richter (2021), the Gâteaux derivative of the Monte Carlo estimator $\widehat{L}^w_{\mathrm{LV}}$ of the log-variance loss $L^w_{\mathrm{LV}}$ in (8) with $K \in \mathbb{N}$ samples is given by

$$\frac{\delta}{\delta u} \widehat{L}^w_{\mathrm{LV}}(u, v; \phi) = \frac{2}{K} \sum_{k=1}^K A^{(k)}_{u,v,w} \left(B^{(k)}_{u,w,\phi} - \frac{1}{K} \sum_{i=1}^K B^{(i)}_{u,w,\phi}\right), \tag{18}$$

where the superscript $(k)$ denotes the index of the $k$-th i.i.d. sample in the Monte Carlo estimator $\widehat{L}^w_{\mathrm{LV}}$ and we define the short-hand notations

$$A^{(k)}_{u,v,w} := \left(R_{f^{\mathrm{Bridge}}_{u,v,w}} + S^{(k)}_{u+v} + B\right)(X^{w,(k)}) + \log Z \quad \text{and} \quad B^{(k)}_{u,w,\phi} := \left(R_{f^{\mathrm{gen}}_{u,w,\phi}} + S^{(k)}_\phi\right)(X^{w,(k)}) \quad \text{with} \quad f^{\mathrm{gen}}_{u,w,\phi} = (w - u) \cdot \phi.$$

Now, note that the definition of the log-variance loss and Proposition 2.3 imply that for the optimal choices $u = u^*$, $v = v^*$ it holds that $A^{(k)}_{u^*,v^*,w} = 0$ almost surely for every $k \in \{1, \ldots, K\}$ and $w \in \mathcal{U}$. This readily implies the statement for the derivative w.r.t. the control $u_\theta$. The analogous statement holds true for the derivative w.r.t. $v_\gamma$, as we can compute

$$\frac{\delta}{\delta v} \widehat{L}^w_{\mathrm{LV}}(u, v; \phi) = \frac{2}{K} \sum_{k=1}^K A^{(k)}_{u,v,w} \left(C^{(k)}_{v,w,\phi} - \frac{1}{K} \sum_{i=1}^K C^{(i)}_{v,w,\phi}\right), \quad \text{where} \quad C^{(k)}_{v,w,\phi} := \left(R_{f^{\mathrm{inf}}_{v,w,\phi}} + S^{(k)}_\phi\right)(X^{w,(k)}) \quad \text{with} \quad f^{\mathrm{inf}}_{v,w,\phi} = (v + w) \cdot \phi + \nabla \cdot (\sigma\phi).$$

For the derivative of the Monte Carlo version of the loss $L_{\mathrm{KL}}$ as defined in (7) w.r.t. $v$ we may compute

$$\frac{\delta}{\delta v} \widehat{L}_{\mathrm{KL}}(u, v; \phi) = \frac{1}{K} \sum_{k=1}^K \int_0^T \left((u + v) \cdot \phi + \nabla \cdot (\sigma\phi)\right)(X^{u,(k)}_s, s) \, \mathrm{d}s.$$

We note that even for $u = u^*$ and $v = v^*$ we can usually not expect the variance of the corresponding Monte Carlo estimator to be zero. For the computation of the derivative w.r.t. $u$ we refer to Nüsken & Richter (2021, Proposition 5.3).

Remark A.2 (Control variate interpretation). For the gradient of the loss $L_{\mathrm{KL}}$ w.r.t. $u$ we may compute

$$\frac{\delta}{\delta u} L_{\mathrm{KL}}(u, v; \phi) = \mathbb{E}\left[\int_0^T \left((u + v) \cdot \phi\right)(X^u_s, s) \, \mathrm{d}s + \left(R_{f^{\mathrm{Bridge}}_{u,v,u}} + B\right)(X^u) \, S_\phi(X^u)\right] = \mathbb{E}\left[A_{u,v,u} \, S_\phi(X^u)\right],$$

where we used Girsanov's theorem and the Itô isometry. Comparing with (18), we realize that the derivative of $L_{\mathrm{LV}}$ w.r.t. $u$ for the choice $w = u$ can be interpreted as a control variate version of the derivative of $L_{\mathrm{KL}}$, thereby promising reduced variance of the corresponding Monte Carlo estimators, cf. Nüsken & Richter (2021); Richter et al. (2020). In the context of reinforcement learning, such a control variate is also known as a local baseline. As an alternative, global baselines have been proposed, where the batch-dependent scaling of the local baseline is replaced by an exponentially moving average. This corresponds to replacing the variance in the loss with the second moment and additionally optimizing an approximation of the log-normalizing constant (with a specific learning rate), see Malkin et al. (2022b). The resulting loss is then known as the (second) moment loss (Nüsken & Richter, 2021; Richter et al., 2020) or trajectory balance objective (Malkin et al., 2022a).

Remark A.3 (Alternative derivations of Lemma 3.1). The expression in Lemma 3.1 can also be derived via

$$\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{\overleftarrow{Y}^v}}(X^w) = \frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{X^r}}(X^w) \, \frac{\mathrm{d}P_{X^r}}{\mathrm{d}P_{\overleftarrow{Y}^v}}(X^w) = \frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{X^r}}(X^w) \, \frac{p_{X^r_T}}{p_{\mathrm{target}}}(X^w_T),$$

where the first factor can be computed via recalling

$$\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{X^r}}(X^w) = \exp\left(\left(R_{f^{\mathrm{ref}}_{u,r,w}} + S_{u-r}\right)(X^w)\right). \tag{19}$$

Yet another viewpoint is based on importance sampling in path space, see, e.g., Hartmann et al. (2017). Since our goal is to find an optimal control $u^*$ such that we get samples $X^{u^*}_T \sim p_{\mathrm{target}}$, we may define our target path space measure $P_{X^{u^*}}$ via

$$\frac{\mathrm{d}P_{X^{u^*}}}{\mathrm{d}P_{X^r}}(X^w) = \frac{p_{\mathrm{target}}}{p_{X^r_T}}(X^w_T).$$

We can then compute

$$\frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{X^{u^*}}}(X^w) = \frac{\mathrm{d}P_{X^u}}{\mathrm{d}P_{X^r}}(X^w) \, \frac{\mathrm{d}P_{X^r}}{\mathrm{d}P_{X^{u^*}}}(X^w),$$

which, together with (19), is equivalent to the expression in Lemma 3.1. Note that in the importance sampling perspective we do not need the concept of time-reversals.

A.3 SAMPLING VIA LEARNED DIFFUSIONS

In the following, we provide a high-level overview of sampling methods based on controlled diffusion processes. We base our explanation on the general KL-based loss stated in (7), since most previous methods are special cases of this formulation, see Table 1. Let us recall that we want to learn a control $u$ such that $X^u_T \sim p_{\mathrm{target}}$.
We first observe that the terminal costs $B(X^u)$ or $B^{\mathrm{ref}}(X^u)$ contain the term $-\log \rho = -\log p_{\mathrm{target}} - \log Z$, which penalizes $X^u_T$ for ending up at regions with low probability w.r.t. the target density. The other terms of the terminal cost, together with the running costs $R_{f^{\mathrm{Bridge}}_{u,v,u}}$, enforce additional constraints on the trajectories of our process $X^u$. In our formulation, they generally enforce $X^u$ to be the time-reversal of $Y^v$. For special choices of $v$, this yields the following settings:

- For the PIS method, we minimize the reverse KL divergence of the controlled process $X^u$ to the uncontrolled process $X^0$, promoting $u$ to be as close to zero as possible. This corresponds to a classical Schrödinger bridge problem, see Appendix A.4, which, for the simple initial condition $X^0_0 \sim \delta_{x_0}$, can be solved without sequential optimization routines, see also Section 1.1 and Appendix A.4.1.
- The DIS and DDS methods are motivated by diffusion-based generative modeling (Ho et al., 2020; Kingma et al., 2021; Nichol & Dhariwal, 2021; Vahdat et al., 2021; Song & Ermon, 2020). In particular, they minimize the reverse KL divergence to the time-reversed noising process $Y^0$. In other words, $X^u$ is enforced to denoise the samples $Y^0_T$ in order to yield samples from $Y^0_0 \sim p_{\mathrm{target}}$.

While we base our unifying framework in Section 2 on the perspective of path measures, the respective methods for the KL divergence can also be derived from the underlying PDEs or BSDE systems, see Appendix A.4 and Berner et al. (2024).

A.4 THE SCHRÖDINGER BRIDGE PROBLEM

In this section, we provide some background information on the classical Schrödinger bridge problem. Recall from Section 3.1 that out of all solutions $u$ fulfilling the general bridge problem stated in Problem 2.2, which can be characterized by Nelson's identity in (9), the Schrödinger bridge problem considers the solution $u^*$ that minimizes the KL divergence $D_{\mathrm{KL}}(P_{X^{u^*}} | P_{X^r})$ to a given reference process $X^r$, defined as in (2) with $u$ replaced by $r \in \mathcal{U}$, i.e.,

$$\mathrm{d}X^r_s = (\mu + \sigma r)(X^r_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s, \quad X^r_0 \sim p_{\mathrm{prior}}.$$

Traditionally, the uncontrolled process $X^0$ with $r = 0$ is chosen, i.e.,

$$\mathrm{d}X^0_s = \mu(X^0_s, s) \, \mathrm{d}s + \sigma(s) \, \mathrm{d}W_s, \quad X^0_0 \sim p_{\mathrm{prior}}.$$

In the following, we formulate optimality conditions for the Schrödinger bridge problem defined in (12) for this standard case $r = 0$. Moreover, we outline how the associated BSDE system leads to the same losses as given in (7) and (8), respectively. The ideas are based on Chen et al. (2021a); Vargas (2021); Liu et al. (2022); Caluya & Halder (2021).

First, we can define the value function

$$\varphi(x, t) := \min_{u \in \mathcal{U}} \mathbb{E}\left[\frac{1}{2}\int_t^T \|u(X^u_s, s)\|^2 \, \mathrm{d}s \,\middle|\, X^u_t = x, \, X^u_T \sim p_{\mathrm{target}}\right].$$

By the dynamic programming principle, it holds that $\varphi$ solves the Hamilton-Jacobi-Bellman (HJB) equation

$$\partial_t \varphi + \mu \cdot \nabla\varphi + \frac{1}{2} \operatorname{Tr}\left(\sigma\sigma^\top \nabla^2 \varphi\right) - \frac{1}{2}\left\|\sigma^\top \nabla\varphi\right\|^2 = 0 \tag{20}$$

(with unknown boundary conditions) and that the optimal control satisfies $u^* = -\sigma^\top \nabla\varphi$. Together with the corresponding Fokker-Planck equation for $X^{u^*}$, this yields necessary and sufficient conditions for the solution to (11). Now, we can transform the Fokker-Planck equation and the HJB equation (20) into a system of linear equations, using the exponential transform

$$\psi := \exp(-\varphi) \quad \text{and} \quad \widehat{\psi} := p_{X^{u^*}} \exp(\varphi) = \frac{p_{X^{u^*}}}{\psi}, \tag{21}$$
u := σ ϕ, where p Xu and ϕ are the unique solutions to the coupled PDEs ( tp Xu = p Xu (µ σσ ϕ) + 1 2 Tr σσ 2p Xu 2 Tr σσ 2ϕ + 1 with boundary conditions p Xu ( , 0) = pprior, p Xu ( , T) = ptarget. 2. u := σ log ψ, where ψ and bψ are the unique solutions to the PDEs ( tψ = ψ µ 1 2 Tr σσ 2ψ , t bψ = bψµ + 1 2 Tr σσ 2 bψ , (22) with coupled boundary conditions ( ψ( , 0) bψ( , 0) = pprior, ψ( , T) bψ( , T) = ptarget. (23) The optimal control v is given by Nelson s identity (9), i.e., v = σ log p Xu u = σ log bψ. (24) Using Itˆo s lemma, we now derive a BSDE system corresponding to the PDE system in (22). Proposition A.5 (BSDEs for the SB problem). Let us assume ψ and bψ fulfill the PDEs (22) with boundary conditions (23) and let us define the processes Yw s = log ψ(Xw s , s), b Yw s = log bψ(Xw s , s), Zw s = σ log ψ(Xw s , s) = u (Xw s , s), b Zw s = σ log bψ(Xw s , s) = v (Xw s , s), where the process Xw is given by d Xw s = (µ + σw)(Xw s , s) ds + σ(s) d Ws with w U being an arbitrary control function. We then get the BSDE system ( d Yw s = Zw s w(Xw s , s) 1 2 Zw s 2 ds + Zw s d Ws, d b Yw s = 1 2 b Zw s 2 + (σ b Zw s µ(Xw s , s)) + b Zw s w(Xw s , s) ds + b Zw s d Ws. Furthermore, it holds Yw s + b Yw s = log p Xu (Xw s , s) = log p Y v (Xw s , s). (25) Proof. The proof is similar to the one in Chen et al. (2021a). For brevity, we define D = 1 2σσ . We can apply Itˆo s lemma to the stochastic process Yw s = log ψ(Xw s , s) and get d Yw s = s log ψ + log ψ (µ + σw) + Tr D 2 log ψ (Xw s , s) ds+σ log ψ(Xw s , s) d Ws. Further, via (22) it holds s log ψ = 1 ψ ψ µ Tr D 2ψ = log ψ µ Tr D 2ψ Published as a conference paper at ICLR 2024 and we note the identity 2 log ψ = 2ψ Combining the previous three equations, we get σ log ψ w Tr (Xw s , s) ds + σ log ψ(Xw s , s) d Ws = Zw s w(Xw s , s) 1 2 Zw s 2 ds + Zw s d Ws. Similarly, we may apply Itˆo s lemma to b Yw s = log bψ(Xw s , s) and get d b Yw s = s log bψ + log bψ (µ + σw) + Tr D 2 log bψ (Xw s , s) ds + b Zw s d Ws. Now, via (22) it holds that s log bψ = 1 bψµ + Tr D 2 bψ = log bψ µ µ + Tr Combining the previous two equations, we get bψ + D 2 log bψ µ + σ log bψ w (Xw s , s) ds + b Zw s d Ws. Now, noting the identity bψ + D 2 log bψ 2 σ log bψ 2 2 σ log bψ 2 + σσ log bψ , we can get the relation d b Yw s = 1 2 σ log bψ 2 + σσ log bψ µ + σ log bψ w (Xw s , s) ds + b Zw s d Ws 2 b Zw s 2 + (σ b Zw s µ) + b Zw s w (Xw s , s) ds + b Zw s d Ws, which concludes the proof. Note that the BSDE system is slightly more general than the one introduced in Chen et al. (2021a), which can be recovered with the choice w(Xw s , s) = Zw s . Also, the roles of pprior and ptarget are interchanged in Chen et al. (2021a) since they consider generative modeling instead of sampling from densities. A valid loss can now be derived by adding the two BSDEs and recalling relation (25), which yields B(Xw) log(Z) = log ptarget(Xw T ) pprior(Xw 0 ) = Yw T + b Yw T Yw 0 + b Yw 0 = Rf Bridge u ,v ,w + Su +v (Xw) almost surely. Analogous to Berner et al. (2024); Huang et al. (2021) in generative modeling, the above equality suggests a parameterized lower bound of the log-likelihood log pprior when replacing the optimal controls in Zw s = u (Xw, s) and b Zw s = v (Xw s , s) with their approximations u and v, see Chen et al. (2021a). This lower bound exactly recovers the loss given in (7). Further, note that the variance of the left-hand minus the right-hand side is zero, which readily yields our log-variance loss as defined in (8). 
A.4.1 SCHRÖDINGER HALF-BRIDGES (PIS)

For the Schrödinger half-bridge, also referred to as PIS, introduced in Section 3.3, we can find an alternative derivation, motivated by the PDE perspective outlined in Appendix A.4. For this derivation, it is crucial that we assume the prior density to be concentrated at a single point, i.e., p_prior := δ_{x_0} for some x_0 ∈ R^d (typically x_0 = 0), see Tzen & Raginsky (2019); Dai Pra (1991). We can recover the corresponding objectives by noting that, in the case p_prior = δ_{x_0}, the system of PDEs in (22) can be decoupled. More precisely, we observe that the second equation in (22) is the Fokker-Planck equation of X^0, and we have that

$\widehat\psi = p_{X^{u^*}}\exp(\phi) = p_{X^0} \qquad \text{and} \qquad \widehat\psi(\cdot, 0) = p_{X^0_0} = \delta_{x_0}.$

In view of (24), we note that this defines $v^* = \sigma^\top\nabla\log p_{X^0}$. By (21), we observe that $\psi = p_{X^{u^*}}/p_{X^0}$, which yields the boundary condition

$\phi(\cdot, T) = -\log\psi(\cdot, T) = -\log\frac{p_{\mathrm{target}}}{p_{X^0_T}} = \log\frac{Z\,p_{X^0_T}}{\rho}$

to the HJB equation in (20). By the verification theorem (Dai Pra, 1991; Pavon, 1989; Nüsken & Richter, 2021; Fleming & Soner, 2006; Pham, 2009), we thus obtain the PIS objective

$\mathbb{E}\bigg[\frac{1}{2}\int_0^T \|u(X^u_s, s)\|^2\,\mathrm{d}s + \log\frac{p_{X^0_T}(X^u_T)}{\rho(X^u_T)}\bigg] = \mathbb{E}\Big[\big(R^{f_{\mathrm{ref}}}_{u,0,u} + B_{\mathrm{ref}}\big)(X^u)\Big].$

Moreover, the optimal control is given by $u^* = -\sigma^\top\nabla\phi = \sigma^\top\nabla\log\psi$. We can also derive this objective from the BSDE system in Proposition A.5. Since $\widehat\psi(\cdot, 0) = \delta_{x_0}$, we may focus on the process $Y^w_s = \log\psi(X^w_s, s)$ only, and get

$Y^w_T - Y^w_0 = \int_0^T \Big(Z^w_s\cdot w(X^w_s, s) - \frac{1}{2}\|Z^w_s\|^2\Big)\,\mathrm{d}s + \int_0^T Z^w_s\cdot\mathrm{d}W_s.$

The PIS objective now follows by choosing $w(X^w_s, s) = Z^w_s$ and noting that

$Y^w_T = \log\psi(X^w_T, T) = \log\frac{p_{\mathrm{target}}}{p_{X^0_T}}(X^w_T).$

Recalling our notation in (1), this also shows that the log-variance loss can be written as

$\mathcal{L}^w_{\mathrm{LV}}(u) = \mathbb{V}\Big[\big(R^{f_{\mathrm{ref}}}_{u,0,w} + S^u + B_{\mathrm{ref}}\big)(X^w)\Big].$

A.5 TRACTABLE SDES

Let us present some commonly used SDEs of the form $\mathrm{d}X_s = \mu(X_s, s)\,\mathrm{d}s + \sigma(s)\,\mathrm{d}W_s$ with affine drifts that have tractable marginals conditioned on their initial value, see Song et al. (2020). For notational convenience, let us define

$\alpha(t) := \int_0^t \beta(s)\,\mathrm{d}s$

with suitable β ∈ C([0, T], (0, ∞)).

Variance-preserving (VP) SDE: This Ornstein-Uhlenbeck process is given by

$\sigma(t) := \nu\sqrt{2\beta(t)}\,\mathrm{I} \qquad \text{and} \qquad \mu(x, t) := -\beta(t)\,x$

with ν ∈ (0, ∞). Then, we have that

$X_t \,|\, X_0 \sim \mathcal{N}\Big(e^{-\alpha(t)}X_0,\ \nu^2\big(1 - e^{-2\alpha(t)}\big)\mathrm{I}\Big).$

This shows that for α(T) sufficiently large, X_T is approximately distributed as $\mathcal{N}(0, \nu^2\mathrm{I})$. For $X_0 \sim \mathcal{N}(m, \Sigma)$, we further have that

$X_t \sim \mathcal{N}\Big(e^{-\alpha(t)}m,\ e^{-2\alpha(t)}\big(\Sigma - \nu^2\mathrm{I}\big) + \nu^2\mathrm{I}\Big). \qquad (26)$
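The tractable marginal above can be used to sample X_t | X_0 directly, without simulating the SDE. The following is a minimal PyTorch-style sketch for the linear schedule β(t) = (1 − t)β_min + tβ_max on [0, 1] used for DIS in Appendix A.6; the function name and interface are illustrative choices, not the paper's exact implementation.

```python
import torch

# A minimal sketch of sampling from the tractable VP-SDE marginal, cf. (26),
# for the linear schedule beta(t) = (1 - t) * beta_min + t * beta_max on [0, 1].
# Defaults (beta_min = 0.05, beta_max = 5, nu = 1) follow the DIS setting in
# Appendix A.6; the interface itself is an illustrative assumption.
def vp_marginal_sample(x0, t, beta_min=0.05, beta_max=5.0, nu=1.0):
    """Sample X_t | X_0 = x0 without simulating the SDE."""
    # alpha(t) = int_0^t beta(s) ds for the linear schedule
    alpha = beta_min * t + 0.5 * (beta_max - beta_min) * t**2
    mean = torch.exp(-alpha) * x0
    std = nu * torch.sqrt(1.0 - torch.exp(-2.0 * alpha))
    return mean + std * torch.randn_like(x0)

# For alpha(T) sufficiently large, X_T is approximately N(0, nu^2 I):
x0 = torch.randn(128, 2)                       # a batch of initial values in d = 2
xT = vp_marginal_sample(x0, torch.tensor(1.0))
```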
Variance-exploding (VE) SDE / scaled Brownian motion: This SDE is given by a scaled Brownian motion, i.e., µ := 0 and σ as defined above. It holds that

$X_t \,|\, X_0 \sim \mathcal{N}\big(X_0,\ 2\nu^2\alpha(t)\,\mathrm{I}\big).$

For $X_0 \sim \mathcal{N}(m, \Sigma)$, we thus have that $X_t \sim \mathcal{N}\big(m,\ 2\nu^2\alpha(t)\,\mathrm{I} + \Sigma\big)$.

A.6 COMPUTATIONAL DETAILS

For convenience, we first outline our method in Algorithm 1. Recall that the methods DIS, PIS, and DDS can be recovered when making particular choices for v, r, and p_prior, see Table 1. We specify the corresponding setting and further computational details in the following.

Algorithm 1 Training of a generalized time-reversed diffusion sampler
Input: neural networks u_θ, v_γ with initial parameters θ, γ; optimizer method step for updating the parameters; number of steps K; batch size m
Output: optimized parameters θ, γ
for k = 1, . . . , K do
    ▷ Setup
    L ← choose KL-based loss L_KL in (7) or log-variance loss L_LV in (8)
    if L = L_KL then
        w ← u_θ
        p ← p_prior
    else
        w ← choose (detached) control for the forward process
        p ← choose initial distribution for the forward process
    end if
    ▷ Approximate cost (batched in practice)
    for i = 1, . . . , m do
        x ← sample from p
        (W, X^w) ← simulate discretizations of the Brownian motion W and the SDE X^w with X^w_0 = x
        (R^{f_Bridge}_{u_θ,v_γ,w}, B) ← compute approximations of the running and terminal costs using X^w
        rnd_i ← R^{f_Bridge}_{u_θ,v_γ,w} + B
        if L = L_LV then
            S^{u_θ+v_γ} ← compute approximation of the stochastic integral using W
            rnd_i ← rnd_i + S^{u_θ+v_γ}
        end if
    end for
    ▷ Compute loss
    mean ← (1/m) Σ_{i=1}^m rnd_i
    if L = L_KL then
        L̂ ← mean
    else
        L̂ ← (1/(m − 1)) Σ_{i=1}^m (rnd_i − mean)²
    end if
    ▷ Gradient descent
    θ ← step(θ, ∇_θ L̂)
    γ ← step(γ, ∇_γ L̂)
end for
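The following PyTorch-style sketch shows one gradient step of Algorithm 1 for the log-variance loss; the helpers simulate_sde, running_cost, terminal_cost, and stochastic_integral are hypothetical stand-ins for the discretized quantities R, B, and S from (1), not the paper's actual interface.

```python
import torch

# One gradient step of Algorithm 1 with L = L_LV (a sketch; simulate_sde,
# running_cost, terminal_cost, and stochastic_integral are hypothetical helpers
# approximating the discretized quantities of (1)).
def lv_gradient_step(u_theta, v_gamma, optimizer, prior, m=2048):
    x0 = prior.sample((m,))                    # x ~ p
    # Default choice w := u_theta with detached trajectories: the log-variance
    # loss does not require differentiating through the SDE solver.
    with torch.no_grad():
        xs, dW = simulate_sde(u_theta, x0)     # discretized (W, X^w)
    rnd = (running_cost(u_theta, v_gamma, xs)  # R^f_Bridge
           + terminal_cost(xs)                 # B
           + stochastic_integral(u_theta, v_gamma, xs, dW))  # S^{u+v}
    loss = rnd.var()                           # unbiased sample variance, cf. Algorithm 1
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# For the KL-based loss, one would instead keep the trajectories of X^u in the
# computational graph, drop the stochastic integral, and take rnd.mean().
```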
General setting: Every experiment is executed on a single GPU and, in our PyTorch implementation, we generally follow the settings and hyperparameters of DIS and PIS as presented in Berner et al. (2024), which itself is based on the implementation of Zhang & Chen (2022). In particular, we use the Fourier MLPs of Zhang & Chen (2022), a batch size of 2048, and the Adam optimizer. To facilitate the comparisons, we use a fixed number of 200 steps for the Euler-Maruyama scheme. A difference to Berner et al. (2024) is that we observed better performance (for all considered methods and losses) by using an exponentially decaying learning rate, starting at 0.005 and decaying every 100 steps to a final learning rate of 10^{-4}. We use 60000 gradient steps for the experiments with d ≤ 10 and 120000 gradient steps otherwise to approximately achieve convergence. However, we observed that the differences between the losses are already visible before convergence, see, e.g., Figure 1.

PIS: We follow Zhang & Chen (2022) and use a Brownian motion starting at δ_0 for the uncontrolled SDE X^0. Furthermore, we also leverage the score of the target density, ∇log ρ (typically given in closed form or evaluated via automatic differentiation), for the parametrization of the control u, see Zhang & Chen (2022); Berner et al. (2024).

DIS: We use the VP-SDE in Song et al. (2020) for the SDE Y^0. Specifically, we use ν := 1 and β(t) := (1 − t)β_min + tβ_max, t ∈ [0, 1], with β_min = 0.05 and β_max = 5, see Appendix A.5. Moreover, we employ a linear interpolation of ∇log ρ and ∇log p_prior for the parametrization of the control u, see Berner et al. (2024).

Bridge: For the general bridge, we consider the loss (7), which corresponds to the setting in Chen et al. (2021a) adapted to unnormalized densities. We use an analogous setting to DIS; however, we additionally employ a Fourier MLP to control the process Y^v. Since Y^0_T is already close to p_prior by construction of the VP-SDE, we use a lower initial learning rate of 10^{-4} for the control v. While these choices already provide better results for the KL divergence, see Table 5, we note that more sophisticated, potentially problem-specific choices might be investigated in future studies. In particular, for the general bridge, we would be free to choose the prior density p_prior as well as the drift function µ in the SDEs (2) and (3).

Log-variance loss: For the log-variance loss, we only change the objective from L_KL to L^w_LV, where we use the default choice w := u, i.e., X^w := X^u. We emphasize that we do not need to differentiate w.r.t. w, which results in reduced training times, see Figure 9. In practice, we can thus detach X^w from the computational graph without introducing any bias. This can be achieved by the detach and stop_gradient operations in PyTorch and TensorFlow, respectively. We leave other choices of w to future research and anticipate that choosing noisy versions of u in the initial phase of training might lead to even better exploration and performance. Furthermore, we use the same hyperparameters for the log-variance loss as for the KL-based loss. As these settings originate from Berner et al. (2024) and have been tuned for the KL-based loss, we suspect that optimizing the hyperparameters for the log-variance loss can lead to further improvements.

Evaluation: To evaluate our metrics, we consider n = 10^5 samples (x^{(i)})_{i=1}^n and use the ELBO as an approximation to the log-normalizing constant log Z, see Appendix A.6.1. We further compute the (normalized) effective sample size

$\mathrm{ESS} := \frac{\big(\sum_{i=1}^n w^{(i)}\big)^2}{n\sum_{i=1}^n \big(w^{(i)}\big)^2},$

where (w^{(i)})_{i=1}^n are the importance weights of the samples (x^{(i)})_{i=1}^n in path space. Finally, we estimate the Sinkhorn distance⁵ W2γ (Cuturi, 2013) and report the error for estimating the average standard deviation across the marginals, i.e., the average of $\sqrt{\mathbb{V}[G_k]}$ over the coordinates k, where G ∼ p_target.

⁵Our implementation of the Sinkhorn distance is based on https://github.com/fwilliams/scalable-pytorch-sinkhorn with the default parameters.

A.6.1 COMPUTATION OF LOG-NORMALIZING CONSTANTS

For the computation of the log-normalizing constant log Z in the general bridge setting, we note that for any u, v ∈ U the expectation of the corresponding Radon-Nikodym derivative from Proposition 2.3 equals one, i.e.,

$\mathbb{E}\bigg[\frac{\mathrm{d}\mathbb{P}_{Y^v}}{\mathrm{d}\mathbb{P}_{X^u}}(X^u)\bigg] = 1.$

Together with Proposition 2.3, this shows that

$\log Z = \log \mathbb{E}\Big[\exp\Big(-\big(R^{f_{\mathrm{Bridge}}}_{u,v,u} + S^{u+v} + B\big)(X^u)\Big)\Big]. \qquad (27)$

If u = u* and v = v*, the expression inside the expectation is almost surely constant, which implies

$\log Z = -\big(R^{f_{\mathrm{Bridge}}}_{u^*,v^*,u^*} + S^{u^*+v^*} + B\big)(X^{u^*}) \qquad (28)$

almost surely. If we only have approximations of u* and v*, Jensen's inequality shows that the expectation of the right-hand side in (28) yields a lower bound to log Z. For PIS and DIS, the log-normalizing constants can be computed analogously, see Zhang & Chen (2022); Berner et al. (2024). If not further specified, we use the lower bound as an estimator for log Z in our experiments.
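A minimal sketch of both log Z estimators and of the normalized ESS is given below; the variable costs is our placeholder for the per-trajectory values $(R^{f_{\mathrm{Bridge}}}_{u,v,u} + S^{u+v} + B)(X^u)$ computed as in Algorithm 1, not the paper's interface.

```python
import math
import torch

# A sketch of the log Z estimators in (27) and (28) and of the normalized ESS
# (our reconstruction; `costs` holds the per-trajectory values
# (R^f_Bridge + S + B)(X^u) computed as in Algorithm 1).
def log_z_and_ess(costs):
    n = costs.shape[0]
    elbo = (-costs).mean()                                # lower bound, cf. (28) and Jensen
    log_z_rw = torch.logsumexp(-costs, 0) - math.log(n)   # reweighted estimator, cf. (27)
    log_w = -costs - torch.logsumexp(-costs, 0)           # normalized log importance weights
    ess = 1.0 / (n * torch.exp(2.0 * log_w).sum())        # (sum w)^2 / (n sum w^2)
    return elbo, log_z_rw, ess
```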
A.7 PARTIAL TRAJECTORY OPTIMIZATION

In this section, we present a method that does not need the simulation of entire trajectories but can rely on subtrajectories only. On the one hand, this promises faster computations; on the other hand, it can be used for exploration strategies, since subtrajectories can be started at arbitrary prescribed locations, independent of the control u. Crucially, this strategy only works for the log-variance loss and not for the KL-based loss.

Let us recall that the log-variance loss in (8) is defined for any process X^w. In particular, in addition to the control w, we can also freely choose the initial condition of X^w, see the proof of Proposition 2.3. Motivated by Zhang et al. (2023), we can leverage this fact to train the DIS or DDS methods on smaller time intervals [t, T]. Specifically, recall that the log-variance loss for the DIS method in (14) is given by

$\mathcal{L}^w_{\mathrm{LV}}(u) = \mathbb{V}\Big[\big(R^{f_{\mathrm{DIS}}}_{u,w} + S^u + B\big)(X^w)\Big], \qquad (29)$

where the optimal control is defined by

$u^* = \sigma^\top\nabla\log p_{Y^0}, \qquad (30)$

see Section 3.2. Also, recall that

$B(X^w) = \log\frac{p_{\mathrm{prior}}(X^w_0)}{\rho(X^w_T)}, \qquad (31)$

where $p_{\mathrm{prior}} \approx p_{Y^0_T}$. Now, assuming an approximation Φ(·, t) ≈ log p_{Y^0_t} for t ∈ [0, T], we can replace log p_prior in (31) by Φ(·, t) and consider the corresponding sub-problems on time intervals [t, T]. For the log-variance loss, we can then sample t ∼ Unif([0, T]), choose an arbitrary initial condition X^w_t, and minimize the loss (29) on the time interval [t, T].

In order to obtain an approximation Φ(·, t) ≈ log p_{Y^0_t}, we could train a separate network, e.g., by using the underlying optimality PDEs in Appendix A.4. However, for the DIS method, this is not needed if we parametrize the control u as

$u := \sigma^\top\nabla\Phi, \qquad (32)$

where Φ is a neural network. Based on (30), we can then use Φ(·, t) as an approximation of log p_{Y^0_t} during training. We can therefore optimize the loss

$\mathcal{L}^w_{\mathrm{LV,sub}}(\Phi) := \mathbb{V}\Big[\big(R^{f_{\mathrm{DIS}}}_{\sigma^\top\nabla\Phi,w} + S^{\sigma^\top\nabla\Phi} + B_{\Phi,\mathrm{sub}}\big)(X^w)\Big]$

w.r.t. the function Φ. In the above, we can pick t ∼ Unif([0, T]) and X^w_t ∼ ν with ν being a suitable probability measure on R^d, and we define

$B_{\Phi,\mathrm{sub}}(X^w) := \Phi(X^w_t, t) - \log\rho(X^w_T),$

where (with slight abuse of notation) the integrals R and S defined in (1) now run from t to T.

The subtrajectory training procedure has three potential benefits (a code sketch of the resulting training step is given at the end of this section). First, training may be accelerated since we consider smaller time intervals [t, T], leading to faster simulation of the SDEs. Second, we can choose X^w_t in a suitable way to prevent mode collapse, e.g., we can sample X^w_t from a distribution ν with sufficiently large variance. Third, in (32), we consider a parametrization of u that does not rely on ∇log ρ, which may not be available or may be expensive to compute. In addition to these benefits, we show in Table 3 that subtrajectory training can achieve competitive performance compared to the results in Table 2. On the contrary, the performance of DIS with the KL-based loss deteriorates when not using parametrizations that contain the term ∇log ρ. Note that subtrajectory training cannot be used with the KL-based loss since, for this loss, we need to sample the initial condition according to X^u_t.

Table 3: We compare the performance of DIS without using the target information ∇log ρ in the parametrization. In this case, the performance of the KL-based loss generally decreases, as also observed in Zhang & Chen (2022). For the log-variance loss, we can counteract this decrease by relying on subtrajectories starting at random t ∼ Unif([0, T]) and x ∼ Unif([−a, a]^d) (for sufficiently large a ∈ (0, ∞)) in order to facilitate exploration, see Appendix A.7. This allows us to obtain competitive results without using any gradient information of the target.

Problem                   | Loss                          | log Z  | W2γ   | ESS    | std
GMM (d = 2)               | KL-DIS (Berner et al., 2024)  | 2.291  | 3.661 | 0.8089 | 3.566
                          | LV-DIS-Subtraj. (ours)        | 0.059  | 0.020 | 0.8613 | 0.008
DW (d = 5, m = 5, δ = 4)  | KL-DIS (Berner et al., 2024)  | 3.983  | 5.517 | 0.3430 | 1.795
                          | LV-DIS-Subtraj. (ours)        | 0.394  | 0.121 | 0.4378 | 0.002

Figure 4: Contour plots of a Gaussian mixture model p_target with 40 modes, analogous to the problem proposed in Midgley et al. (2023). We plot samples of the prior p_prior = N(0, I) and of the DIS method trained with the KL-based loss, the log-variance loss, and partial trajectory optimization (panels from left to right: Prior p_prior, KL-DIS, LV-DIS (ours), LV-DIS-Subtraj (ours)). For all methods, we use T = 2 to guarantee p_{Y^0_T} ≈ p_prior. Using the setting from Table 3, subtrajectory training can recover all modes without gradient information from the target, whereas the other methods suffer from mode collapse despite making use of ∇log ρ. Midgley et al. (2023) report mode collapse on this benchmark for several state-of-the-art methods. We remark that LV-DIS (unlike KL-DIS) recovers all modes when slightly increasing the prior variance.
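The following is a minimal PyTorch-style sketch of one subtrajectory training step under the assumptions of this section; simulate_sde_from, running_cost, and stochastic_integral are hypothetical stand-ins for the discretized quantities of (1) restricted to [t, T], and phi plays the role of the network Φ from (32).

```python
import torch

# One subtrajectory step on L_LV,sub (a sketch; simulate_sde_from, running_cost,
# and stochastic_integral are hypothetical helpers for the quantities of (1)
# restricted to [t, T], and phi is the network from (32)).
def subtrajectory_lv_step(phi, log_rho, optimizer, a=10.0, d=2, T=1.0, m=2048):
    t = torch.rand(()) * T                    # t ~ Unif([0, T])
    x = (2.0 * torch.rand(m, d) - 1.0) * a    # exploratory start X^w_t ~ Unif([-a, a]^d)
    with torch.no_grad():                     # w detached, as for the log-variance loss
        xs, dW = simulate_sde_from(phi, x, t)
    r = running_cost(phi, xs, t)              # R on [t, T]
    s = stochastic_integral(phi, xs, dW, t)   # S on [t, T]
    b = phi(x, t) - log_rho(xs[-1])           # B_{Phi,sub} = Phi(X^w_t, t) - log rho(X^w_T)
    loss = (r + s + b).var()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```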
In Figure 4, we compare the performance on the benchmark proposed in Midgley et al. (2023). We show that partial trajectory optimization can identify all 40 modes of the Gaussian mixture model, significantly outperforming the DIS method, even when the latter uses the log-variance loss and the derivative of the target in the parametrization. We note that Midgley et al. (2023) report mode collapse on this benchmark for other state-of-the-art methods, such as Stochastic Normalizing Flows (Wu et al., 2020), Continual Repeated Annealed Flow Transport Monte Carlo (Arbel et al., 2021), and flows with Resampled Base Distributions (Stimper et al., 2022).

A.8 FURTHER EXPERIMENTS AND COMPARISONS

In this section, we present further results and ablation studies. In Table 4, we show that the log-variance loss also leads to improvements for smaller batch sizes. This can be motivated by its variance-reducing effect, see Proposition 2.5, Remark A.2, and Figure 5. In Figures 6 and 7, we show that the log-variance loss can counteract mode collapse in both moderate as well as very high dimensions. Moreover, we present the results for the general bridge approach in Table 5, and we consider a problem from Wu et al. (2020) in Figure 8. In Figure 9, we present boxplots to show that our results from Table 2 are robust w.r.t. different seeds. Finally, in Table 6, we show that our methods are competitive with other state-of-the-art sampling baselines. However, we want to emphasize that the focus of our work is not to extensively compare against MCMC methods or normalizing flows. Our goal is to show that recently developed methods, such as PIS, DDS, and DIS, can be unified under a common framework, which enables the usage of different divergences. We then propose the log-variance divergence, which makes diffusion-based samplers even more competitive and mitigates potential downsides compared to other methods. The fact that there are general trade-offs between the considered diffusion-based samplers and variants of MCMC has already been discussed and numerically analyzed in the papers introducing the respective methods, see Zhang & Chen (2022); Vargas et al. (2023a).

Table 4: The same setting as in Table 2 is considered, however, with a smaller batch size of 512 instead of 2048. We again observe that the log-variance loss consistently yields better performance. This can be motivated by the inherent variance reduction of its gradient estimators (particularly helpful for smaller batch sizes), see Proposition 2.5, Remark A.2, and Figure 5.

Problem                   | Method | Loss                      | log Z  | W2γ   | ESS    | std
GMM (d = 2)               | PIS    | KL (Zhang & Chen, 2022)   | 2.201  | 2.708 | 0.0002 | 3.576
                          |        | LV (ours)                 | 2.200  | 2.629 | 0.0002 | 3.564
                          | DIS    | KL (Berner et al., 2024)  | 1.725  | 0.088 | 0.0045 | 2.711
                          |        | LV (ours)                 | 0.063  | 0.020 | 0.8517 | 0.004
DW (d = 5, m = 5, δ = 4)  | PIS    | KL (Zhang & Chen, 2022)   | 3.693  | 4.949 | 0.0001 | 1.793
                          |        | LV (ours)                 | 0.285  | 0.124 | 0.5957 | 0.008
                          | DIS    | KL (Berner et al., 2024)  | 4.047  | 5.068 | 0.0015 | 1.797
                          |        | LV (ours)                 | 0.447  | 0.121 | 0.3917 | 0.002

Figure 5: We compare the standard deviations of the loss and (average) gradient estimators using either the KL-based loss or the log-variance loss (left and right panels: loss stddev. and avg. gradient stddev. over the gradient steps, for KL-DIS and LV-DIS (ours)). Each standard deviation is computed over 40 simulations of the loss without updating the parameters. We show results for the DIS method on the 5-dimensional DW target. As predicted by our theory, the log-variance loss exhibits significantly smaller standard deviations for both the loss and its gradient.
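The measurement underlying Figure 5 can be sketched as follows (our reconstruction of the described protocol, not the paper's exact code; loss_fn is a placeholder for a stochastic loss evaluation with fresh trajectories at fixed parameters):

```python
import torch

# A sketch of the measurement in Figure 5: re-estimate the stochastic loss
# 40 times at fixed parameters and record the standard deviation of the loss
# and of the (averaged) gradient. No optimizer step is taken.
def loss_and_grad_std(loss_fn, params, num_repeats=40):
    losses, grads = [], []
    for _ in range(num_repeats):
        loss = loss_fn()                          # fresh trajectories each call
        grad = torch.autograd.grad(loss, params)  # gradient at fixed parameters
        losses.append(loss.detach())
        grads.append(torch.cat([g.flatten() for g in grad]))
    loss_std = torch.stack(losses).std()
    grad_std = torch.stack(grads).std(dim=0).mean()  # avg. gradient stddev.
    return loss_std, grad_std
```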
Figure 6: Marginals of samples from PIS and DIS (left and right panels, LV-PIS (ours) and LV-DIS (ours), showing histograms of X^u_T) for the DW problem with d = 5, m = 5, and δ = 4. The mode coverage of the log-variance loss is superior to that of the KL-based loss for all marginals (from top to bottom).

Figure 7: Marginals of the first four coordinates of samples from DIS (histograms of X^u_T, including LV-DIS (ours)) for a high-dimensional shifted double well problem in dimension d = 1000 with m = 3 and δ = 2 (see Section 4.1), using the KL or the log-variance loss, respectively. Again, one observes the better mode coverage of the log-variance loss as compared to the reverse KL divergence.

Table 5: Metrics of the general bridge approach in Section 2.3 for selected benchmark problems. We observe a clear improvement using the log-variance loss. Moreover, for the KL divergence, we note that the general bridge framework can obtain better results than DIS or PIS, see Table 2. As in Table 2, we report the median over five independent runs. We show errors for estimating the log-normalizing constant log Z as well as the standard deviations std of the marginals. Furthermore, we report the normalized effective sample size ESS and the Sinkhorn distance W2γ (Cuturi, 2013). The arrows ↑ and ↓ indicate whether we want to maximize or minimize a given metric.

Problem                   | Method | Loss                     | log Z ↓ | W2γ ↓ | ESS ↑  | std ↓
GMM (d = 2)               | Bridge | KL (Chen et al., 2021a)  | 0.328   | 0.393 | 0.0180 | 0.698
                          |        | LV (ours)                | 0.084   | 0.020 | 0.9669 | 0.010
DW (d = 5, m = 5, δ = 4)  | Bridge | KL (Chen et al., 2021a)  | 0.872   | 0.132 | 0.0561 | 0.099
                          |        | LV (ours)                | 0.215   | 0.119 | 0.5940 | 0.002

Figure 8: Comparison of samples for the target in Wu et al. (2020) (panels from left to right: Groundtruth, KL-PIS, LV-PIS (ours), KL-DIS, LV-DIS (ours)). For the KL-based losses, a large fraction of the samples (PIS: 86%, DIS: 67%) lies outside of the domain despite the low density values. On the other hand, the log-variance loss significantly improves performance, yielding competitive results compared to the stochastic normalizing flows presented in Wu et al. (2020).

Table 6: We compare our methods to Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), see Arbel et al. (2021). We adapt the proposed configurations⁶ to use the same batch size and number of iterations as our methods and evaluate all methods using 10^5 samples. We see that diffusion-based sampling, in combination with the log-variance loss, can provide competitive performance across a range of metrics. We report the median over five independent runs and compare the log-normalizing constant (using the reweighted estimator in (27)), the Sinkhorn distance to ground-truth samples, and the error in estimating the average standard deviation.
Problem                    | Method                      | log Z (rw) | W2γ   | std
GMM (d = 2)                | CRAFT (Arbel et al., 2021)  | 0.012      | 0.020 | 0.019
                           | LV-PIS (ours)               | 0.001      | 0.020 | 0.023
                           | LV-DIS (ours)               | 0.017      | 0.020 | 0.004
Funnel (d = 10)            | CRAFT (Arbel et al., 2021)  | 0.123      | 5.517 | 6.139
                           | LV-PIS (ours)               | 0.097      | 5.593 | 6.852
                           | LV-DIS (ours)               | 0.028      | 5.075 | 5.224
DW (d = 5, m = 5, δ = 4)   | CRAFT (Arbel et al., 2021)  | 0.001      | 0.118 | 0.000
                           | LV-PIS (ours)               | 0.000      | 0.121 | 0.001
                           | LV-DIS (ours)               | 0.043      | 0.120 | 0.001
DW (d = 50, m = 5, δ = 2)  | CRAFT (Arbel et al., 2021)  | 0.000      | 6.821 | 0.001
                           | LV-PIS (ours)               | 0.001      | 6.823 | 0.000
                           | LV-DIS (ours)               | 0.422      | 6.855 | 0.009

⁶The configuration files can be found at https://github.com/deepmind/annealed_flow_transport/blob/master/configs.

Figure 9: Boxplots for five independent runs for each problem and method (KL-PIS, LV-PIS (ours), KL-DIS, LV-DIS (ours), from left to right in each plot) in the settings of Table 2, together with the corresponding ground-truth or optimal values (dashed lines); panels: GMM (d = 2), DW (d = 5, m = 5, δ = 4), DW (d = 50, m = 5, δ = 2), and time per gradient step [s] for PIS and DIS. It can be seen that the performance improvements of the log-variance loss are robust across different seeds. At the same time, the log-variance loss reduces the average time per gradient step by circumventing differentiation through the SDE solver.