Published as a conference paper at ICLR 2024

GENERALIZED SCHRÖDINGER BRIDGE MATCHING

Guan-Horng Liu¹, Yaron Lipman²,³, Maximilian Nickel³, Brian Karrer³, Evangelos A. Theodorou¹, Ricky T. Q. Chen³
¹Georgia Institute of Technology  ²Weizmann Institute of Science  ³FAIR, Meta
{ghliu, evangelos.theodorou}@gatech.edu  {ylipman, maxn, briankarrer, rtqichen}@meta.com

*Work done in part as a research intern at FAIR, Meta.

ABSTRACT

Modern distribution matching algorithms for training diffusion or flow models directly prescribe the time evolution of the marginal distributions between two boundary distributions. In this work, we consider a generalized distribution matching setup, where these marginals are only implicitly described as a solution to some task-specific objective function. The problem setup, known as the Generalized Schrödinger Bridge (GSB), appears prevalently in many scientific areas both within and beyond machine learning. We propose Generalized Schrödinger Bridge Matching (GSBM), a new matching algorithm inspired by recent advances, generalizing them beyond kinetic energy minimization and to account for task-specific state costs. We show that such a generalization can be cast as solving conditional stochastic optimal control, for which efficient variational approximations can be used, and further debiased with the aid of path integral theory. Compared to prior methods for solving GSB problems, our GSBM algorithm better preserves a feasible transport map between the boundary distributions throughout training, thereby enabling stable convergence and significantly improved scalability. We empirically validate our claims on an extensive suite of experimental setups, including crowd navigation, opinion depolarization, LiDAR manifolds, and image domain transfer. Our work brings new algorithmic opportunities for training diffusion models enhanced with task-specific optimality structures.

1 INTRODUCTION

The distribution matching problem of learning transport maps that match specific distributions is a ubiquitous problem setup that appears in many areas of machine learning, with extensive applications in optimal transport (Peyré & Cuturi, 2017), domain adaptation (Wang & Deng, 2018), and generative modeling (Sohl-Dickstein et al., 2015; Chen et al., 2018). The tasks are often cast as learning mappings, $X_0 \mapsto X_1$, such that $X_0 \sim \mu$, $X_1 \sim \nu$ follow the (unknown) laws of two distributions $\mu, \nu$. For instance, diffusion models¹ construct the mapping as the solution to a stochastic differential equation (SDE) whose drift $u^\theta_t(X_t): \mathbb{R}^d \times [0,1] \to \mathbb{R}^d$ is parameterized by $\theta$:

$$dX_t = u^\theta_t(X_t)\,dt + \sigma\,dW_t, \quad X_0 \sim \mu, \; X_1 \sim \nu. \quad (1)$$

The marginal density $p_t$ induced by (1) evolves according to the Fokker-Planck equation (FPE; Risken (1996)),

$$\partial_t p_t(X_t) = -\nabla\cdot\big(u^\theta_t(X_t)\,p_t(X_t)\big) + \frac{\sigma^2}{2}\Delta p_t(X_t), \quad p_0 = \mu, \; p_1 = \nu, \quad (2)$$

and prescribing $p_t$ with fixed $\sigma$ uniquely determines a family of parametric SDEs in (1). Indeed, modern successes of diffusion models in synthesizing high-fidelity data (Song et al., 2021; Dhariwal & Nichol, 2021) are attributed, partly, to constructing $p_t$ as a mixture of tractable conditional probability paths (Liu et al., 2023b; Albergo et al., 2023; Tong et al., 2023). These tractabilities enable scalable algorithms that match $u^\theta_t$ given $p_t$, hereafter referred to as matching algorithms.

¹We adopt constant $\sigma \in \mathbb{R}$ throughout the paper but note that all analyses generalize to time-dependent $\sigma_t$.

Alternatively, one may also specify $p_t$ implicitly as the optimal solution to some objective function, with examples such as optimal transport (OT; Villani et al. (2009)) or the Schrödinger Bridge
problem (SB; Schrödinger (1931); Fortet (1940); De Bortoli et al. (2021)). SB generalizes standard diffusion models to arbitrary $\mu$ and $\nu$ with fully nonlinear stochastic processes and, among all possible SDEs that match between $\mu$ and $\nu$, seeks the unique $u_t$ that minimizes the kinetic energy. While finding the transport with minimal kinetic energy can be motivated from statistical physics or entropic optimal transport (Peyré & Cuturi, 2019; Vargas et al., 2021), with kinetic energy often being correlated with sampling efficiency in generative modeling (Chen et al., 2022; Shaul et al., 2023), it nevertheless limits the flexibility in the design of optimality. Indeed, kinetic energy corresponds to the squared-Euclidean cost in OT, which, despite its popularity, is merely one among numerous alternatives available for deployment (Di Marino et al., 2017). On one hand, it remains debatable whether the $\ell_2$ distance defined in the original data space (e.g., pixel space for images), as opposed to other metrics, is best suited for quantifying the optimality of transport maps. On the other hand, distribution matching in general scientific domains, such as population modeling (Ruthotto et al., 2020), robot navigation (Liu et al., 2018), or molecule simulation (Noé et al., 2020), often involves more complex optimality conditions that require more general algorithms to handle. To this end, we advocate a generalized setup for distribution matching, previously introduced as the Generalized Schrödinger Bridge problem (GSB; Chen et al. (2015); Chen (2023); Liu et al. (2022)):

$$\min_\theta \int_0^1 \mathbb{E}_{p_t}\left[\frac{1}{2}\|u^\theta_t(X_t)\|^2 + V_t(X_t)\right] dt \quad \text{subject to (1) or, equivalently, (2).} \quad (3)$$

GSB is a distribution matching problem, as it still seeks a diffusion model (1) that transports $\mu$ to $\nu$. Yet, in contrast to standard SB, the objective of GSB involves an additional state cost $V_t$, which affects the solution by quantifying a penalty, or equivalently a reward, similar to general decision-making problems. This state cost can also include distributional properties of the random variable $X_t$. Examples of $V_t$ include the mean-field interaction in opinion propagation (Gaitonde et al., 2021), the quantum potential (Philippidis et al., 1979), or a geometric prior.

Solving GSB problems involves addressing two distinct aspects: optimality (3) and feasibility (2). Within the set of feasible solutions that satisfy (2), GSB considers the one with the lowest objective in (3) to be optimal. Therefore, it is essential to develop algorithms that search within the feasible set for the optimal solution. Unfortunately, existing methods that approximate solutions to (3) either require relaxing feasibility (Koshizuka & Sato, 2023) or, following the design of Sinkhorn algorithms (Cuturi, 2013), prioritize optimality over feasibility. While an exciting line of new matching algorithms (Peluchetti, 2023; Shi et al., 2023) has been developed for SB, i.e., when $V_t := 0$, to ensure that the solutions remain in proximity to the feasible set throughout training, it remains unclear whether, or how, these SB Matching (SBM) algorithms can be extended to handle nontrivial $V_t$. We propose Generalized Schrödinger Bridge Matching (GSBM), a new matching algorithm that generalizes SBM to nontrivial $V_t$.
We discuss how such a generalization can be tied to a conditional stochastic optimal control (CondSOC) problem, from which existing SBM algorithms can be derived as special cases. We develop scalable solvers for the CondSOC using a Gaussian path approximation, further debiased with path integral theory (Kappen, 2005). GSBM inherits a similar algorithmic characterization to its ancestors (Liu et al., 2023b; Shi et al., 2023), in that, during the optimization process, the learned $p^\theta_0$ and $p^\theta_1$ induced by the subsequent solutions of $u^\theta_t$ remain close to the boundary marginals $(\mu, \nu)$, and preserve them exactly under stricter theoretical conditions; see Sec. 3.1 for details. This distinguishes GSBM from prior methods (e.g., Liu et al. (2022)) that learn approximate solutions to the same problem (3) but whose subsequent solutions only approach $(\mu, \nu)$ after final convergence. This further results in a framework that relies solely on samples from $\mu, \nu$, without knowing their densities, and enjoys stable convergence, making it suitable for high-dimensional applications. A note on the connection to stochastic optimal control and related works (Liu et al., 2022) can be found in Appendix A. Summarizing, we present the following contributions:

- We propose GSBM, a new matching algorithm for learning diffusion models between two distributions that also respect some task-specific optimality structure in (3) via specifying $V_t$.
- GSBM casts recent matching methods as conditional stochastic optimal control problems, from which nontrivial $V_t$ can be incorporated and solved at scale via a Gaussian path approximation.
- Compared to prior methods, e.g., Liu et al. (2022), that also provide approximate solutions to (3), GSBM enjoys stable convergence, improved scalability, and, crucially, maintains a transport map that remains much closer to the distribution boundaries throughout the entire training.
- Through extensive experiments, we showcase GSBM's remarkable capabilities across a variety of distribution matching problems, ranging from standard crowd navigation and 3D navigation over LiDAR manifolds, to high-dimensional opinion modeling and unpaired image translation.

Algorithm 1 match (implicit)
  Require: $p_t$ s.t. $p_0 = \mu$, $p_1 = \nu$
  repeat
    Sample $X_0 \sim p_0$, $X_t \sim p_t$, $X_1 \sim p_1$
    Take gradient step w.r.t. $\mathcal{L}_{\text{implicit}}(\theta)$
  until converged
  return $s^\theta_t$

Algorithm 2 match (explicit)
  Require: $p_t := \mathbb{E}_{p_{0,1}}[p_{t|0,1}]$ s.t. $p_0 = \mu$, $p_1 = \nu$, and $u_{t|0,1}$
  repeat
    Sample $X_0, X_1 \sim p_{0,1}$, $X_t \sim p_{t|0,1}$
    Compute $u_{t|0,1}$ given $(X_0, X_t, X_1)$
    Take gradient step w.r.t. $\mathcal{L}_{\text{explicit}}(\theta)$
  until converged
  return $u^\theta_t$

2 PRELIMINARIES: MATCHING DIFFUSION MODELS GIVEN PROB. PATHS

As mentioned in Sec. 1, the goal of matching algorithms is to learn an SDE parametrized with $u^\theta_t$ such that its FPE (2) matches some prescribed marginal $p_t$ for all $t \in [0,1]$. In this section, we review two classes of matching algorithms that will play crucial roles in the development of our GSBM.

Entropic Action Matching (implicit). This is a recently proposed matching method (Neklyudov et al., 2023) that learns the unique gradient field governing an FPE prescribed by $p_t$. Specifically, let $s^\theta_t(X_t): \mathbb{R}^d \times [0,1] \to \mathbb{R}$ be a parametrized function; then the unique gradient field $\nabla s^\theta_t(X_t)$, by which the probability density of the pushforward from $\mu$ at time 0 to $t$ matches $p_t$, minimizes

$$\mathcal{L}_{\text{implicit}}(\theta) := \mathbb{E}_\mu[s^\theta_0(X_0)] - \mathbb{E}_\nu[s^\theta_1(X_1)] + \int_0^1 \mathbb{E}_{p_t}\left[\frac{1}{2}\|\nabla s^\theta_t\|^2 + \partial_t s^\theta_t + \frac{\sigma^2}{2}\Delta s^\theta_t\right] dt. \quad (4)$$
Neklyudov et al. (2023) showed that $\mathcal{L}_{\text{implicit}}(\theta)$ implicitly matches the unique gradient field $\nabla s^*_t$ with the least kinetic energy, i.e., it is equivalent to minimizing $\int_0^1 \mathbb{E}_{p_t}\big[\frac{1}{2}\|\nabla s^\theta_t - \nabla s^*_t\|^2\big]\,dt$, where

$$\nabla s^*_t = \arg\min_{u_t} \int_0^1 \mathbb{E}_{p_t}\left[\frac{1}{2}\|u_t(X_t)\|^2\right] dt \quad \text{subject to (2).}$$

Bridge & Flow Matching (explicit). If $p_t$ can be factorized into $p_t := \mathbb{E}_{p_{0,1}}[p_t(X_t|x_0,x_1)]$, where the conditional density $p_t(X_t|x_0,x_1)$, denoted with the shorthand $p_{t|0,1}$, is associated with an SDE, $dX_t = u_t(X_t|x_0,x_1)\,dt + \sigma\,dW_t$, then it can be shown (Lipman et al., 2023; Albergo & Vanden-Eijnden, 2023; Liu et al., 2023a) that the minimizer of (see Appendix C.1 for the derivation)

$$\mathcal{L}_{\text{explicit}}(\theta) := \int_0^1 \mathbb{E}_{p_{0,1}}\mathbb{E}_{p_{t|0,1}}\left[\frac{1}{2}\|u^\theta_t(X_t) - u_t(X_t|x_0,x_1)\|^2\right] dt, \quad (5)$$

similar to $s^\theta_t$, also satisfies the FPE prescribed by $p_t$. In other words, $u^\theta_t$ preserves $p_t$ for all $t$.

Implicit vs. explicit matching losses. While Entropic Action Matching presents a general matching method with the fewest assumptions, its implicit matching loss $\mathcal{L}_{\text{implicit}}$ (4) scales unfavorably to high-dimensional applications, due to the need to approximate the Laplacian (using, e.g., Hutchinson (1989)), and also introduces unquantifiable bias when optimizing over a restricted family of functions such as deep neural networks. The explicit matching loss $\mathcal{L}_{\text{explicit}}$ (5) offers a computationally efficient alternative but requires additional information, namely $u_t(X_t|x_0,x_1) \equiv u_{t|0,1}$. Remarkably, in both cases, the minimizers preserve the prescribed marginal $p_t$. Hence, if $p_0 = \mu$ and $p_1 = \nu$, these matching algorithms always return a feasible solution: a diffusion model that matches between $\mu$ and $\nu$. We summarize the two methods in Alg. 1 and 2.

3 GENERALIZED SCHRÖDINGER BRIDGE MATCHING (GSBM)

We propose Generalized Schrödinger Bridge Matching (GSBM), a novel matching algorithm that, in contrast to those in Sec. 2 assuming a prescribed $p_t$, concurrently optimizes $p_t$ to minimize (3) subject to the feasibility constraint (2). All proofs can be found in Appendix B.

3.1 ALTERNATING OPTIMIZATION SCHEME

Let us revisit the GSB problem (3), particularly its FPE constraint in (2). Recent advances in dynamic optimal transport (Liu et al., 2023b) and SB (Peluchetti, 2023) propose a decomposition of this dynamical constraint into two components: the marginals $p_t$, $t \in (0,1)$, and the joint coupling between boundaries $p_{0,1}$, and employ alternating optimization between $u^\theta_t$ and $p_t$. Specifically, these optimization methods, which largely inspired recent variants of SB Matching (Shi et al., 2023) and our GSBM, generally obey the following recipe, alternating between two stages:

Stage 1: Optimize the objective, in our case (3), w.r.t. the drift $u^\theta_t$, given fixed $p_{t\in[0,1]}$.
Stage 2: Optimize (3) w.r.t. the marginals $p_{t\in(0,1)}$, given the coupling $p^\theta_{0,1}$ defined by $u^\theta_t$.

Notice particularly that the optimization posed in Stage 1 resembles the matching algorithms in Sec. 2. We make the connection concrete in the following proposition:

Proposition 1 (Stage 1). The unique minimizer to Stage 1 coincides with $\nabla s^*_t(X_t)$.

This may seem counter-intuitive at first glance, as both the kinetic energy and $V_t$ show up in (3). This is due to the fact that $u^\theta_t$ no longer affects the value of $\mathbb{E}_{p_t}[V_t(X_t)]$ once $p_t$ is fixed, and, out of all $u^\theta_t$ whose probability density matches $p_t$, the gradient field is the unique minimizer of the kinetic energy.² We emphasize that the role of Stage 1 is to learn a $u^\theta_t$ given the prescribed $p_t$, which then provides a better coupling $p^\theta_{0,1}$ that is used to refine $p_t$ in Stage 2.
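To make Stage 1 concrete, the snippet below sketches one gradient step on the explicit matching loss (5) in the simplest setting where $V_t := 0$ and the conditional path is a Brownian bridge (the DSBM case). This is a minimal PyTorch sketch, not the paper's implementation: `u_theta` stands for any drift network taking $(X_t, t)$, and `x0, x1` are batches drawn from the current coupling.

```python
import torch

def explicit_matching_step(u_theta, x0, x1, sigma, optimizer):
    """One gradient step on L_explicit (5), specialized to V_t := 0, where
    p_{t|0,1} is the Brownian bridge N((1-t) x0 + t x1, sigma^2 t(1-t) I)
    and the conditional drift is u_{t|0,1} = (x1 - X_t) / (1 - t)."""
    B = x0.shape[0]
    t = torch.rand(B, 1) * 0.98 + 0.01          # avoid the singular endpoints
    mu_t = (1 - t) * x0 + t * x1
    gamma_t = sigma * torch.sqrt(t * (1 - t))
    xt = mu_t + gamma_t * torch.randn_like(x0)  # X_t ~ p_{t|0,1}
    u_target = (x1 - xt) / (1 - t)              # conditional drift u_{t|0,1}
    loss = 0.5 * ((u_theta(xt, t) - u_target) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```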
Therefore, the explicit matching loss (5) serves just the same algorithmic purpose. Though it only upper-bounds the objective of Stage 1, its solution often converges stably from a measure-theoretic perspective (see Appendix C.1 for more explanations). In practice, we find that $\mathcal{L}_{\text{explicit}}$ performs as effectively as $\mathcal{L}_{\text{implicit}}$, while exhibiting significantly improved scalability, and is hence better suited for high-dimensional applications.

We now present our main result, which shows that Stage 2 of GSBM can be cast as a variational problem, where $V_t$ appears through optimizing a conditional controlled process.

Proposition 2 (Stage 2; Conditional stochastic optimal control; CondSOC). Let the minimizer to Stage 2 be factorized by $p_t(X_t) = \int p_t(X_t|x_0,x_1)\,p^\theta_{0,1}(x_0,x_1)\,dx_0\,dx_1$, where $p^\theta_{0,1}$ is the boundary coupling induced by solving (1) with $u^\theta_t$. Then, $p_t(X_t|x_0,x_1) \equiv p_{t|0,1}$ solves:

$$\min_{u_{t|0,1}} \; J := \int_0^1 \mathbb{E}_{p_t(X_t|x_0,x_1)}\left[\frac{1}{2}\|u_t(X_t|x_0,x_1)\|^2 + V_t(X_t)\right] dt \quad (6a)$$
$$\text{s.t.} \quad dX_t = u_t(X_t|x_0,x_1)\,dt + \sigma\,dW_t, \quad X_0 = x_0, \; X_1 = x_1. \quad (6b)$$

Table 1: Solutions to (6) w.r.t. different $V_t$, and how they link to different methods, including Rectified Flow (Liu et al., 2023b), DSBM (Shi et al., 2023), and our GSBM.

Method | $V_t(x)$ | $p_t(X_t|x_0,x_1)$
RecFlow ($\sigma = 0$) | 0 | straight line (Lemma 3 with $\alpha, \sigma \to 0$)
DSBM ($\sigma > 0$) | 0 | Brownian bridge (Lemma 3 with $\alpha \to 0$)
GSBM ($\sigma \ge 0$) | quadratic | Lemma 3
GSBM ($\sigma \ge 0$) | arbitrary | Sec. 3.2

Note that (6) differs from (3) only in the boundary conditions, where the original distributions $\mu, \nu$ are replaced by the two end-points $(x_0, x_1)$ drawn from the coupling induced by $u^\theta_t$. Generally, the solution to (6) is not known in closed form, except in special cases. In Lemma 3, we show such a case, when $V(x)$ is quadratic and $\sigma > 0$. Note that as $V(x)$ vanishes, $p_{t|0,1}$ in Lemma 3 collapses to the Brownian bridge, and GSBM recovers the matching algorithms appearing in prior works (Liu, 2022; Shi et al., 2023) that approximate OT/SB, as summarized in Table 1. This suggests that our Prop. 2 directly generalizes them to nontrivial $V_t$.

Lemma 3 (Analytic solution to (6) for quadratic $V$ and $\sigma > 0$). Let $V(x) := \alpha\|\sigma x\|^2$, $\alpha, \sigma > 0$. Then the optimal solution to (6) follows a Gaussian path $X^*_t \sim \mathcal{N}(c_t x_0 + e_t x_1, \gamma_t^2 I_d)$, where

$$c_t = \frac{\sinh(\eta(1-t))}{\sinh\eta}, \quad e_t = \cosh(\eta(1-t)) - c_t\cosh\eta, \quad \gamma_t = \sigma\sqrt{\frac{\sinh(\eta(1-t))}{\eta}\,e_t}, \quad \eta = \sigma\sqrt{2\alpha}.$$

These coefficients recover Brownian bridges as $\alpha \to 0$ and, if further $\sigma \to 0$, straight lines.

²Notably, the absence of $V_t$ in the Stage 1 optimization was previously viewed as an issue for Sinkhorn methods aimed at solving (3) (Liu et al., 2022). Yet, in GSBM, it appears naturally from how the optimization is decomposed.

3.2 SOLVING CONDITIONAL STOCHASTIC OPTIMAL CONTROL

Variational approximation with Gaussian probability paths. For general $V_t$, we approximate the solution to (6) with Gaussian probability paths,

$$X_t = \mu_t + \gamma_t Z, \quad Z \sim \mathcal{N}(0, I_d), \qquad \mu_0 = x_0, \; \mu_1 = x_1, \; \gamma_0 = \gamma_1 = 0, \quad (7)$$

whose conditional drift admits the analytic form (see Appendix C.2 for the derivation)

$$u_t(X_t|x_0,x_1) = \partial_t\mu_t + \left(\frac{\partial_t\gamma_t}{\gamma_t} - \frac{\sigma^2}{2\gamma_t^2}\right)(X_t - \mu_t). \quad (8)$$

We parameterize the mean $\mu_t$ and standard deviation $\gamma_t$ with splines whose control points are optimized w.r.t. the CondSOC objective (6a); this optimization is simulation-free and parallelizable over the coupling samples $(x_0, x_1)$.

Figure 1: Example of spline optimization (Alg. 3) for $\mu_t \in \mathbb{R}^2$, $\gamma_t \in \mathbb{R}$, and the resulting CondSOC (6) solution.

Algorithm 3 SplineOpt
  Require: $x_0, x_1$, control points $\{X_{t_k}\}$ where $0 < t_1 < \cdots < t_K < 1$
  Parameterize $\mu_t$ and $\gamma_t$ as splines over the control points
  repeat
    Take gradient step on the CondSOC objective (6a)
  until converged
  return $\mu_t, \gamma_t$

Path integral resampling. While the Gaussian paths alone already perform well in practice (see Sec. 4.4), they can be further debiased with the aid of path integral theory (Kappen, 2005):

Proposition 4 (Path integral solution to (6)). Let $r(\bar X|x_0,x_1)$ be a distribution absolutely continuous w.r.t. the Brownian motion, denoted $q(\bar X|x_0)$, where we shorthand $\bar X \equiv X_{t\in[0,1]}$. Suppose $r$ is associated with an SDE $dX_t = v_t(X_t)\,dt + \sigma\,dW_t$, $X_0 = x_0$, $X_1 = x_1$, $\sigma > 0$. Then the optimal, path-integral solution to (6) can be obtained via

$$p^*(\bar X|x_0,x_1) = \frac{1}{Z}\,\omega(\bar X|x_0,x_1)\,r(\bar X|x_0,x_1), \quad (10)$$

where $Z$ is the normalization constant and $\omega$ is the importance weight:

$$\omega(\bar X|x_0,x_1) := \exp\left(-\int_0^1 \left[\frac{1}{\sigma^2}V_t(X_t) + \frac{1}{2\sigma^2}\|v_t(X_t)\|^2\right] dt - \int_0^1 \frac{1}{\sigma}v_t(X_t)^\top dW_t\right). \quad (11)$$

Algorithm 4 ImptSample
  Require: $p_{t|0,1}$, $\mu_t$, $\gamma_t$, $\sigma > 0$
  Sample $X^i_{t\in[0,1]}$ from (6b) given (8)
  Compute $\omega^i$ with (11) and $Z = \sum_i \omega^i$
  return Resampled $p_{t|0,1}$ with (10)
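The following is a minimal sketch of Alg. 4 under stated assumptions: it simulates the proposal SDE with plain Euler-Maruyama (rather than the covariance trick of Appendix C.2), accumulates the log importance weights of (11), and resamples proportionally to (10). The callables `v(x, t)` and `V(x, t)` are placeholders for the proposal drift, e.g., (8), and the state cost; none of these names come from the paper's code.

```python
import torch

def importance_resample(v, V, x0, x1, sigma, n_paths=256, n_steps=100):
    """Simulate dX_t = v_t(X_t) dt + sigma dW_t (whose drift should pin
    X_1 = x1), weight paths by (11), and resample proportionally to (10)."""
    dt = 1.0 / n_steps
    x = x0.unsqueeze(0).expand(n_paths, -1).clone()
    log_w = torch.zeros(n_paths)
    paths = [x]
    for k in range(n_steps):
        t = k * dt
        drift = v(x, t)
        dW = dt ** 0.5 * torch.randn_like(x)
        # log of (11): -int [V/sigma^2 + ||v||^2/(2 sigma^2)] dt - int v.dW / sigma
        log_w = log_w - (V(x, t) / sigma**2
                         + 0.5 * (drift**2).sum(-1) / sigma**2) * dt
        log_w = log_w - (drift * dW).sum(-1) / sigma
        x = x + drift * dt + sigma * dW
        paths.append(x)
    w = torch.softmax(log_w, dim=0)                # self-normalized: Z = sum_i w_i
    idx = torch.multinomial(w, n_paths, replacement=True)
    return torch.stack(paths, dim=1)[idx]          # (n_paths, n_steps + 1, d)
```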
In practice, $r(\bar X|x_0,x_1)$ can be any distribution, but the closer it is to the optimal $p^*$, the lower the variance of the importance weights $\omega$ (Kappen & Ruiz, 2016). It is therefore natural to use the aforementioned Gaussian probability paths as $r(\bar X|x_0,x_1)$, properly optimized with Alg. 3, then resampled proportionally to (10). This is equivalent in expectation to performing self-normalized importance sampling, but it is algorithmically simpler and helps reduce variance when many samples have low weight values. While path integral resampling may make $\mathcal{L}_{\text{explicit}}$ less suitable, due to the change in the conditional drift governing $p^*(\bar X|x_0,x_1)$ away from (8), we still observe empirical, sometimes significant, improvement (see Sec. 4.4). Overall, we propose path integral resampling as an optional step in our GSBM algorithm, as it requires sequential simulation.⁴ In practice, we also find empirically that the Gaussian probability paths alone perform sufficiently well and are easy to use at scale.

3.3 ALGORITHM OUTLINE & CONVERGENCE ANALYSIS

We summarize our GSBM in Alg. 5, which, as previously sketched in Sec. 3.1, alternates between Stage 1 (line 3) and Stage 2 (lines 4-9). In contrast to DSBM (Shi et al., 2023), which constructs analytic $p_{t|0,1}$ and $u_{t|0,1}$ when $V_t = 0$, our GSBM solves the underlying CondSOC (6) with nontrivial $V_t$. We stress that these computations are easily parallelizable and converge quickly due to our efficient parameterization, hence inducing little computational overhead (see Sec. 4.4). We note that GSBM (Alg. 5) remains functional even when $\sigma = 0$, although the GSB problem (3) was originally stated with $\sigma > 0$ (Chen et al., 2015). In such cases, the implicit and explicit matching still preserve $p_t$ and correspond, respectively, to action matching (Neklyudov et al., 2023) and flow matching (Lipman et al., 2023), and the CondSOC corresponds to a generalized geodesic path accounting for $V_t$. Finally, we provide convergence analysis in the following theorems.

Theorem 5 (Local convergence). Let the intermediate result of Stages 1 and 2, after repeating $n$ times, be $\theta_n$. Then its objective value in (3), $\mathcal{L}(\theta_n)$, is monotonically non-increasing as $n$ grows: $\mathcal{L}(\theta_n) \ge \mathcal{L}(\theta_{n+1})$.

Theorem 6. The optimal solution to the GSB problem (3) is a fixed point of GSBM, Alg. 5.

⁴We note that simulation of trajectories $\bar X \equiv X_{t\in[0,1]}$ from (6a, 8) can be done efficiently by computing the covariance function, which requires merely solving a 1D ODE; see Appendix C.2 for a detailed explanation.

Figure 2: Feasibility vs. optimality on three crowd navigation tasks with mean-field cost.

Figure 3: Simulation of SDEs with the $u^\theta_t$ after long training. Notice how DeepGSB diverges drastically from our GSBM, which satisfies feasibility at all times.

Figure 4: Crowd navigation over a LiDAR surface; height is denoted by the grayscale color. Panels: the LiDAR surface and a tangent plane; GSBM, objective value (3) = 6209 (left: 3D view, right: 2D view); DeepGSB, objective value = 7747 (2D view); $p_0$ and $p_1$ (2D view).

4 EXPERIMENT

We test GSBM on a variety of distribution matching tasks, each entailing its own state cost $V_t(x)$. By default, we use the explicit matching loss (5) without path integral resampling, mainly due to its scalability, but ablate their relative performance in Sec. 4.4. Our GSBM is compared primarily to DeepGSB (Liu et al., 2022), a Sinkhorn-inspired machine learning method that outperforms existing deep methods (Ruthotto et al., 2020; Lin et al., 2021). Other details are in Appendix D.

4.1 CROWD NAVIGATION WITH MEAN-FIELD AND GEOMETRIC STATE COSTS

We first validate our GSBM in solving crowd navigation, a canonical example of the GSB problem (3). Specifically, we consider the following two classes of tasks (see Appendix D.1 for details):
Mean-field interactions. These are synthetic datasets in $\mathbb{R}^2$ introduced in DeepGSB, where the state cost $V_t$ consists of two components: an obstacle cost that assesses the physical constraints, and a mean-field interaction cost between individual agents and the population density $p_t$:

$$V_t(x) = L_{\text{obstacle}}(x) + L_{\text{interact}}(x; p_t), \quad L_{\text{interact}}(x; p_t) = \begin{cases} \log p_t(x) & \text{(entropy)} \\ \mathbb{E}_{y\sim p_t}\left[\dfrac{2}{\|x-y\|^2+1}\right] & \text{(congestion)} \end{cases} \quad (12)$$

Both entropy and congestion costs are fundamental elements of mean-field games. They measure the costs incurred for individuals to stay in densely crowded regions with high population density.

Geometric surfaces defined by LiDAR. A more realistic scenario involves navigation through a complex geometric surface. In particular, we consider surfaces observed through LiDAR scans of Mt. Rainier (Legg & Anderson, 2013), thinned to 34,183 points (see Fig. 4). We adopt the state cost

$$V(x) = L_{\text{manifold}}(x) + L_{\text{height}}(x), \quad L_{\text{manifold}}(x) = \|\pi(x) - x\|^2, \quad L_{\text{height}}(x) = \exp\big(\pi^{(z)}(x)\big), \quad (13)$$

where $\pi(x)$ projects $x$ onto an approximate tangent plane fitted by its $k$-nearest neighbors (see Fig. 4 and Appendix D.3), and $\pi^{(z)}(x)$ refers to the $z$-coordinate of $\pi(x)$, i.e., its height.

Figure 2 tracks the feasibility and optimality, measured respectively by $W_2(p^\theta_1, \nu)$ and (3), on three mean-field tasks (Stunnel, Vneck, GMM). On all tasks, our GSBM maintains feasible solutions throughout training while gradually improving optimality. In contrast, due to the lack of convergence analysis, training DeepGSB exhibits relative instability and occasional divergence (see Fig. 3). As for the geometric state costs, Fig. 4 demonstrates how our GSBM faithfully recovers the desired multi-modal distribution: it successfully identifies two viable pathways with low state cost, one of which bypasses the saddle point. In contrast, DeepGSB generates only uni-modal distributions with samples scattered over tall mountain regions with high state cost, yielding a higher objective value (3).

Figure 5: Comparison between DSBM (Shi et al., 2023) and our GSBM on (leftmost 5 columns) the generation processes and (rightmost 3 columns) their couplings $p^\theta(X_1|X_0)$ during training. By constructing $V_t$ via a latent space, GSBM exhibits faster convergence and yields better couplings.

Figure 6: Mean of $p_t(X_t|x_0,x_1)$. Instead of using linear interpolation as in DSBM, GSBM optimizes $p_{t|0,1}$ w.r.t. (6), where $V_t$ is defined via a latent space, thereby exhibiting more semantically meaningful interpolations.

Table 2: FID values of dog → cat for DSBM and our GSBM.
DSBM: 14.16 | GSBM: 12.39

4.2 IMAGE DOMAIN INTERPOLATION AND UNPAIRED TRANSLATION

Next, we consider unpaired translation between dogs and cats from AFHQ (Choi et al., 2020). We aim to explore how appropriate choices of state cost $V_t$ can help encourage more natural interpolations and more semantically meaningful couplings. While the design of $V_t$ itself is an interesting open question, here we exploit the geometry of a learned latent space. To this end, we use a pretrained variational autoencoder (Kingma & Welling, 2014), then define $V_t$ conditioned on an interpolation of the two end points (notice that $x$ and $z$ respectively belong to image and latent spaces):

$$V_t(x|x_0,x_1) = \|x - \text{Decoder}(z_t)\|^2, \quad z_t := I(t, \text{Encoder}(x_0), \text{Encoder}(x_1)). \quad (14)$$

Though $I(t, z_0, z_1)$ can be any appropriate interpolation, we find spherical linear interpolation (Shoemake, 1985) to be particularly effective, due to the Gaussian geometry of the latent space.
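For concreteness, below is a minimal sketch of the latent-space state cost (14) with spherical linear interpolation. The `encoder` and `decoder` callables stand in for the pretrained VAE and, together with the function names, are assumptions of this sketch rather than the paper's API.

```python
import torch

def slerp(t, z0, z1, eps=1e-7):
    """Spherical linear interpolation I(t, z0, z1) (Shoemake, 1985)."""
    z0_n = z0 / z0.norm(dim=-1, keepdim=True).clamp_min(eps)
    z1_n = z1 / z1.norm(dim=-1, keepdim=True).clamp_min(eps)
    cos = (z0_n * z1_n).sum(-1, keepdim=True).clamp(-1 + eps, 1 - eps)
    omega = torch.acos(cos)            # angle between the latent codes
    s = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / s) * z0 + (torch.sin(t * omega) / s) * z1

def latent_state_cost(x, t, x0, x1, encoder, decoder):
    """State cost (14): squared distance between x and the decoded slerp of
    the endpoints' latent codes."""
    z_t = slerp(t, encoder(x0), encoder(x1))
    return ((x - decoder(z_t)) ** 2).flatten(1).sum(-1)
```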
As the high dimensionality greatly impedes DeepGSB, we mainly compare with DSBM (Shi et al., 2023), the special case of our GSBM with degenerate $V_t$. As in DSBM, we resize images to 64 × 64. Figure 5 reports the qualitative comparison between the generation processes of DSBM and our GSBM, along with their couplings during training. It is clear that, with the aid of a semantically meaningful $V_t$, GSBM typically converges to a near-optimal coupling early in training and, as expected, yields more interpretable generation processes. Interestingly, despite being subject to the same noise level ($\sigma = 0.5$ in this case), GSBM's generation processes are generally less noisy than DSBM's. This is due to the optimization of the CondSOC problem (6) with our specific choice of $V_t$. As shown in Fig. 6, the conditional density $p_t(X_t|x_0,x_1)$ used in GSBM appears to eliminate the unnatural artifacts observed in Brownian bridges, which rely on simple linear interpolation in pixel space. Quantitatively, our GSBM also achieves a lower FID value as a measure of feasibility, as shown in Table 2. Finally, we note that the inclusion of $V_t$ and the solving of the CondSOC increase wallclock time by a mere 0.5% compared to DSBM (see Table 5).

4.3 HIGH-DIMENSIONAL OPINION DEPOLARIZATION

Figure 7: Distribution of terminal opinions $X_1 \in \mathbb{R}^{1000}$ and their directional similarities (DS; see Appendix D.2).

Finally, we consider high-dimensional opinion depolarization, initially introduced in DeepGSB, where an opinion $X_t \in \mathbb{R}^{1000}$ is influenced by a polarizing effect (Schweighofer et al., 2020) when evolving through interactions with the population (see Appendix D.2):

$$dX_t = f_{\text{polarize}}(X_t; p_t)\,dt + \sigma\,dW_t. \quad (15)$$

Figure 8: Our GSBM reveals how the optimal solution drastically changes w.r.t. the level of noise ($\sigma$) when navigating through a narrow passageway surrounded by obstacles (shown in gray).

Table 3: Relative runtime for combinations of matching losses and PI resampling on solving Stunnel, relative to $\mathcal{L}_{\text{explicit}}$.

 | $\mathcal{L}_{\text{explicit}}$ | $\mathcal{L}_{\text{implicit}}$
without PI | 100% | 276%
with PI | 108% | 284%

Table 4: How the objective value (3) changes when enabling PI resampling, for each matching loss and $\sigma$ on Stunnel (mean ± std). Performance is improved by decreasing (↓) the objective values.

$\sigma$ | 0.5 | 1.0 | 2.0
$\mathcal{L}_{\text{explicit}}$ | 44.6 ± 5.9 | 5.2 ± 2.6 | 1.5 ± 3.5
$\mathcal{L}_{\text{implicit}}$ | 112.4 ± 53.7 | 2.0 ± 8.0 | 0.7 ± 6.0

Table 5: Percentage of time spent in different stages of Alg. 5, measured on the AFHQ task.

match (line 3) | Simulate $u^\theta_t$ (line 4) | Solve (6) (lines 5-6)
64.3% | 35.2% | 0.5%

Without any intervention, the opinion dynamics in (15) tend to segregate into groups with diametrically opposed views (first column of Fig. 7), as opposed to the desired unimodal distribution $p_1$ (second column of Fig. 7). To adapt our GSBM to this task, we treat (15) as a base drift, specifically defining $u^\theta_t(X_t) := f_{\text{polarize}}(X_t; p_t) + v^\theta_t(X_t)$, and then solve the CondSOC (6) by replacing the kinetic energy with $\int \frac{1}{2}\|v^\theta_t\|^2\,dt$. Similar to DeepGSB, we consider the same congestion cost $V_t$ defined in (12). As shown in Fig. 7, both DeepGSB and our GSBM demonstrate the capability to mitigate opinion segregation. However, our GSBM achieves closer proximity to the target $p_1$, indicating stronger feasibility, and achieves almost half the objective value (3) relative to DeepGSB.
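The two task-specific ingredients above, the congestion cost of (12) and the base-drift decomposition $u^\theta_t = f_{\text{polarize}} + v^\theta_t$, are straightforward to realize in code. The sketch below shows one plausible implementation under stated assumptions: `pop` is a batch drawn from $p_t$, and `f_polarize` and `v_theta` are placeholder callables, not the paper's implementation.

```python
import torch

def congestion_cost(x, pop):
    """Batch Monte-Carlo estimate of the congestion cost in (12),
    E_{y ~ p_t}[2 / (||x - y||^2 + 1)], with `pop` a batch drawn from p_t."""
    d2 = torch.cdist(x, pop).pow(2)     # (B, N) pairwise squared distances
    return (2.0 / (d2 + 1.0)).mean(dim=1)

class ControlledOpinion(torch.nn.Module):
    """Base-drift parameterization for (15): u_t = f_polarize + v_theta, so
    only the learned correction v_theta enters the kinetic term
    int 1/2 ||v_theta||^2 dt."""
    def __init__(self, f_polarize, v_theta):
        super().__init__()
        self.f_polarize = f_polarize
        self.v_theta = v_theta

    def forward(self, x, t, pop):
        return self.f_polarize(x, pop) + self.v_theta(x, t)
```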
4.4 DISCUSSIONS

Effect of noise ($\sigma$). In the stochastic control setting (Theodorou et al., 2010), the task-specific value of $\sigma$ plays a crucial role in representing the uncertainty from the environment or the error in executing control. The optimal control thus changes drastically depending on $\sigma$. Figure 8 demonstrates how our GSBM correctly resolves this phenomenon on an example of the famous drunken-spider problem discussed in Kappen (2005). In the absence of noise ($\sigma = 0$), it is very easy to steer through the narrow passage. When a large amount of noise is present ($\sigma = 1.0$), there is a high chance of colliding with the obstacles, so the optimal solution is to steer around the obstacles completely.

Ablation study on path integral (PI) resampling. In Table 4, we ablate how the objective value changes when enabling PI resampling for different matching losses and noise levels $\sigma$, and report performance (averaged over 5 independent trials) on the Stunnel task. We observe that PI resampling tends to enhance overall performance, particularly in low-noise conditions, at the expense of a slightly increased runtime of roughly 8%. Meanwhile, as shown in Table 3, implicit matching (4) typically requires a longer time overall (≈2.7× even in two dimensions) compared to its explicit counterpart.

Profiling GSBM. The primary algorithmic distinction between our GSBM and previous SB matching methods (Shi et al., 2023; Peluchetti, 2023) lies in how $p_{t|0,1}$ and $u_{t|0,1}$ are computed (see Lemma 3), where GSBM involves solving an additional CondSOC problem, i.e., lines 5-6 in Alg. 5. Table 5 suggests that these computations, uniquely attached to GSBM, induce little computational overhead compared to other components in Alg. 5. This computational efficiency is due to our efficient variational approximation admitting simulation-free, parallelizable optimization.

5 CONCLUSION AND LIMITATION

We developed GSBM, a new matching algorithm that provides approximate solutions to Generalized Schrödinger Bridge (GSB) problems. We demonstrated strong capabilities of GSBM over prior methods in solving crowd navigation, opinion modeling, and interpretable domain transfer. It should be noted that GSBM requires differentiability of $V_t$ and relies on the quadratic control cost to establish its convergence analysis; despite GSBM notably improving over prior methods, these remain necessary conditions. We acknowledge these limitations and leave them for future work.

ACKNOWLEDGEMENTS

We acknowledge the Python community (Van Rossum & Drake Jr, 1995; Oliphant, 2007) and the core set of tools that enabled this work, including PyTorch (Paszke et al., 2019), functorch (Horace He, 2021), torchdiffeq (Chen, 2018), JAX (Bradbury et al., 2018), Flax (Heek et al., 2020), Hydra (Yadan, 2019), Jupyter (Kluyver et al., 2016), Matplotlib (Hunter, 2007), NumPy (Oliphant, 2006; Van Der Walt et al., 2011), and SciPy (Jones et al., 2014).

REFERENCES

Michael S Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants. In International Conference on Learning Representations (ICLR), 2023.

Michael S Albergo, Nicholas M Boffi, and Eric Vanden-Eijnden. Stochastic interpolants: A unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797, 2023.

Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows: In metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.

Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang.
JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Ricky T. Q. Chen. torchdiffeq, 2018. URL https://github.com/rtqichen/torchdiffeq.

Ricky T. Q. Chen and Yaron Lipman. Riemannian flow matching on general geometries. arXiv preprint arXiv:2302.03660, 2023.

Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems (NeurIPS), 2018.

Tianrong Chen, Guan-Horng Liu, and Evangelos A Theodorou. Likelihood training of Schrödinger bridge using forward-backward SDEs theory. In International Conference on Learning Representations (ICLR), 2022.

Yongxin Chen. Density control of interacting agent systems. IEEE Transactions on Automatic Control, 2023.

Yongxin Chen and Tryphon Georgiou. Stochastic bridges of linear systems. IEEE Transactions on Automatic Control, 61(2):526-531, 2015.

Yongxin Chen, Tryphon Georgiou, and Michele Pavon. Optimal steering of inertial particles diffusing anisotropically with losses. In American Control Conference (ACC). IEEE, 2015.

Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. StarGAN v2: Diverse image synthesis for multiple domains. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems (NeurIPS), 2013.

Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion Schrödinger bridge with applications to score-based generative modeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Simone Di Marino, Augusto Gerolin, and Luca Nenna. Optimal transportation theory with repulsive costs. Topological Optimization and Optimal Transport, 17:204-256, 2017.

Robert Fortet. Résolution d'un système d'équations de M. Schrödinger. Journal de Mathématiques Pures et Appliquées, 19(1-4):83-105, 1940.

Jason Gaitonde, Jon Kleinberg, and Eva Tardos. Polarization in geometric opinion dynamics. In ACM Conference on Economics and Computation, pp. 499-519, 2021.

Jonathan Heek, Anselm Levskaya, Avital Oliver, Marvin Ritter, Bertrand Rondepierre, Andreas Steiner, and Marc van Zee. Flax: A neural network library and ecosystem for JAX, 2020. URL http://github.com/google/flax.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations (ICLR), 2016.

Horace He and Richard Zou. functorch: JAX-like composable function transforms for PyTorch. https://github.com/pytorch/functorch, 2021.

John D Hunter. Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3):90, 2007.

Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 18(3):1059-1076, 1989.

Kiyosi Itô. On stochastic differential equations, volume 4. American Mathematical Soc., 1951.

Eric Jones, Travis Oliphant, and Pearu Peterson. SciPy: Open source scientific tools for Python. 2014.

Hilbert J Kappen.
Path integrals and symmetry breaking for optimal control theory. Journal of Statistical Mechanics: Theory and Experiment, 2005(11):P11011, 2005.

Hilbert Johan Kappen and Hans Christian Ruiz. Adaptive importance sampling for control and inference. Journal of Statistical Physics, 162(5):1244-1266, 2016.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.

Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian E Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica B Hamrick, Jason Grout, Sylvain Corlay, et al. Jupyter notebooks - a publishing format for reproducible computational workflows. In ELPUB, pp. 87-90, 2016.

Takeshi Koshizuka and Issei Sato. Neural Lagrangian Schrödinger bridge: Diffusion modeling for population dynamics. In International Conference on Learning Representations (ICLR), 2023.

Nicholas Legg and Scott Anderson. Southwest flank of Mt. Rainier, WA, 2013. URL https://opentopography.org/meta/OT.052013.26910.1. Accessed on 2023-09-12.

Christian Léonard. A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete and Continuous Dynamical Systems, 2013.

Christian Léonard, Sylvie Rœlly, and Jean-Claude Zambrini. Reciprocal processes. A measure-theoretical point of view. Probability Surveys, 2014.

David Levin. The approximation power of moving least-squares. Mathematics of Computation, 67(224):1517-1531, 1998.

Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.

Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, and David Duvenaud. Scalable gradients for stochastic differential equations. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.

Alex Tong Lin, Samy Wu Fung, Wuchen Li, Levon Nurbekyan, and Stanley J Osher. Alternating the population and control neural networks to solve high-dimensional stochastic mean-field games. Proceedings of the National Academy of Sciences, 118(31), 2021.

Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In International Conference on Learning Representations (ICLR), 2023.

Guan-Horng Liu, Tianrong Chen, Oswin So, and Evangelos A Theodorou. Deep generalized Schrödinger bridge. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A Theodorou, Weili Nie, and Anima Anandkumar. I²SB: Image-to-image Schrödinger bridge. In International Conference on Machine Learning (ICML), 2023a.

Qiang Liu. Rectified flow: A marginal preserving approach to optimal transport. arXiv preprint arXiv:2209.14577, 2022.

Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In International Conference on Learning Representations (ICLR), 2023b.

Zhiyu Liu, Bo Wu, and Hai Lin. A mean field game approach to swarming robots control. In American Control Conference (ACC). IEEE, 2018.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR), 2019.

Alain Mazzolo and Cécile Monthus. Conditioning diffusion processes with killing rates. Journal of Statistical Mechanics: Theory and Experiment, 2022(8):083207, 2022.

Kirill Neklyudov, Daniel Severo, and Alireza Makhzani.
Action matching: A variational method for learning stochastic dynamics from samples. In International Conference on Machine Learning (ICML), 2023.

Frank Noé, Alexandre Tkatchenko, Klaus-Robert Müller, and Cecilia Clementi. Machine learning for molecular simulation. Annual Review of Physical Chemistry, 71:361-390, 2020.

Masashi Okada and Tadahiro Taniguchi. Variational inference MPC for Bayesian model-based reinforcement learning. In Conference on Robot Learning (CoRL), 2020.

Travis E Oliphant. A guide to NumPy, volume 1. Trelgol Publishing USA, 2006.

Travis E Oliphant. Python for scientific computing. Computing in Science & Engineering, 9(3):10-20, 2007.

Etienne Pardoux and Shige Peng. Backward stochastic differential equations and quasilinear parabolic partial differential equations. In Stochastic Partial Differential Equations and Their Applications. Springer, 2005.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8026-8037, 2019.

Stefano Peluchetti. Non-denoising forward-time diffusions, 2022. URL https://openreview.net/forum?id=oVfIKuhqfC.

Stefano Peluchetti. Diffusion bridge mixture transports, Schrödinger bridge problems and generative modeling. arXiv preprint arXiv:2304.00917, 2023.

Gabriel Peyré and Marco Cuturi. Computational optimal transport. Center for Research in Economics and Statistics Working Papers, 2017.

Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019.

Chris Philippidis, Chris Dewdney, and Basil J Hiley. Quantum interference and the quantum potential. Nuovo Cimento B, 52(1):15-28, 1979.

Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In International Joint Conference on Artificial Intelligence (IJCAI), 2013.

Hannes Risken. Fokker-Planck equation. In The Fokker-Planck Equation, pp. 63-95. Springer, 1996.

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015.

Lars Ruthotto, Stanley J Osher, Wuchen Li, Levon Nurbekyan, and Samy Wu Fung. A machine learning framework for solving high-dimensional mean field game and mean field control problems. Proceedings of the National Academy of Sciences, 117(17):9183-9193, 2020.

Simo Särkkä and Arno Solin. Applied stochastic differential equations, volume 10. Cambridge University Press, 2019.

Erwin Schrödinger. Über die Umkehrung der Naturgesetze, volume IX. Sitzungsberichte der Preuss. Akad. Wissen. Phys. Math. Klasse, Sonderausgabe, 1931.

Simon Schweighofer, David Garcia, and Frank Schweitzer. An agent-based model of multi-dimensional opinion dynamics and opinion alignment. Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(9):093139, 2020.

Neta Shaul, Ricky T. Q. Chen, Maximilian Nickel, Matthew Le, and Yaron Lipman. On kinetic optimal probability paths for generative models. In International Conference on Machine Learning (ICML), 2023.

Yuyang Shi, Valentin De Bortoli, Andrew Campbell, and Arnaud Doucet. Diffusion Schrödinger bridge matching.
In Advances in Neural Information Processing Systems (NeurIPS), 2023.

Ken Shoemake. Animating rotation with quaternion curves. In Conference on Computer Graphics and Interactive Techniques, 1985.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML), 2015.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations (ICLR), 2021.

Evangelos Theodorou. Nonlinear stochastic control and information theoretic dualities: Connections, interdependencies and thermodynamic interpretations. Entropy, 17(5):3352-3375, 2015.

Evangelos Theodorou, Jonas Buchli, and Stefan Schaal. A generalized path integral control approach to reinforcement learning. Journal of Machine Learning Research (JMLR), 11(Nov):3137-3181, 2010.

Emanuel Todorov. Linearly-solvable Markov decision problems. In Advances in Neural Information Processing Systems (NeurIPS), 2007.

Alexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian Fatras, Guy Wolf, and Yoshua Bengio. Improving and generalizing flow-based generative models with minibatch optimal transport. In International Conference on Machine Learning (ICML), Workshop Track, 2023.

Stefan Van Der Walt, S Chris Colbert, and Gael Varoquaux. The NumPy array: A structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22, 2011.

Guido Van Rossum and Fred L Drake Jr. Python reference manual. Centrum voor Wiskunde en Informatica, Amsterdam, 1995.

Francisco Vargas, Pierre Thodoroff, Neil D Lawrence, and Austen Lamacraft. Solving Schrödinger bridges via maximum likelihood. Entropy, 2021.

Cédric Villani et al. Optimal transport: Old and new, volume 338. Springer, 2009.

Mei Wang and Weihong Deng. Deep visual domain adaptation: A survey. Neurocomputing, 312:135-153, 2018.

Holger Wendland. Scattered data approximation, volume 17. Cambridge University Press, 2004.

Omry Yadan. Hydra - a framework for elegantly configuring complex applications. GitHub, 2019. URL https://github.com/facebookresearch/hydra.

Jiongmin Yong and Xun Yu Zhou. Stochastic controls: Hamiltonian systems and HJB equations, volume 43. Springer Science & Business Media, 1999.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223-2232, 2017.

A REVIEWS ON STOCHASTIC OPTIMAL CONTROL AND RELATED WORKS

Solving the Generalized Schrödinger Bridge by reframing it as stochastic optimal control. The solution to a Generalized Schrödinger Bridge (GSB) problem (3) can also be expressed as the solution to a stochastic optimal control (SOC) problem, typically structured as

$$\min_{u_t} \int_0^1 \mathbb{E}_{p_t}\left[\frac{1}{2}\|u_t(X_t)\|^2 + V_t(X_t)\right] dt + \mathbb{E}_{p_1}[\phi(X_1)], \quad (16a)$$
$$\text{s.t.} \quad dX_t = u_t(X_t)\,dt + \sigma\,dW_t, \quad X_0 \sim \mu, \quad (16b)$$

where we see that the terminal-distribution hard constraint in GSB (3) is instead relaxed into a soft terminal cost $\phi(\cdot): \mathbb{R}^d \to \mathbb{R}$.
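For intuition, the following minimal sketch (not from the paper; `u_theta`, `V`, and `phi` are placeholder callables) estimates the relaxed SOC objective (16) by rolling out the controlled SDE, which makes explicit why this formulation requires differentiating through an entire SDE simulation, as discussed next.

```python
import torch

def soc_objective(u_theta, V, phi, x0_batch, sigma, n_steps=100):
    """Monte-Carlo estimate of the relaxed SOC objective (16): roll out
    dX_t = u_theta(X_t, t) dt + sigma dW_t from X_0 ~ mu and accumulate the
    running cost 1/2 ||u||^2 + V plus the soft terminal cost phi(X_1).
    Backpropagating through this loss differentiates through the whole
    simulation, which is exactly the memory bottleneck discussed below."""
    dt = 1.0 / n_steps
    x = x0_batch
    cost = torch.zeros(x0_batch.shape[0])
    for k in range(n_steps):
        t = k * dt
        u = u_theta(x, t)
        cost = cost + (0.5 * (u ** 2).sum(-1) + V(x, t)) * dt
        x = x + u * dt + sigma * dt ** 0.5 * torch.randn_like(x)
    return (cost + phi(x)).mean()
```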
Problems of the form of either (16) or (3) are known to be tied to linearly-solvable Markov decision processes (Todorov, 2007; Rawlik et al., 2013), corresponding to a tractable class of SOC problems whose optimality conditions, the Hamilton-Jacobi-Bellman equations, admit efficient approximation. This is attributed to the presence of the $\ell_2$-norm control cost, which can be interpreted as the KL divergence between controlled and uncontrolled processes. This interpretation bridges SOC problems to probabilistic inference (Levine, 2018; Okada & Taniguchi, 2020), from which machine learning algorithms, such as our GSBM, can be developed.

However, naïvely transforming GSB problems (3) into SOC problems can introduce many potential issues. The design of the terminal cost is extremely important, and in many cases it is an intractable cost function, as we do not have access to the densities $\mu$ and $\nu$. Prior works have mainly stuck to simple terminal costs (Ruthotto et al., 2020), used biased approximations based on batch estimates (Koshizuka & Sato, 2023), or used an adversarial approach to learn the cost function (Zhu et al., 2017; Lin et al., 2021). Furthermore, this approach necessitates differentiating through an SDE simulation, requiring high memory usage. Though memory-efficient adjoint methods have been developed (Chen et al., 2018; Li et al., 2020), they remain computationally expensive at scale. In contrast, our GSBM approach only requires samples from $\mu$ and $\nu$. By enforcing the boundary distributions as a hard constraint instead of a soft one, our algorithm finds solutions that satisfy feasibility much better in practice. This further allows us to consider higher-dimensional problems without the need to introduce additional hyperparameters for tuning a terminal cost. Finally, our algorithm drastically reduces the number of SDE simulations required, as both the matching algorithm (Stage 1) and the CondSOC variational formulation (Stage 2) of GSBM can be done simulation-free.

Comparison to related works. Table 6 compares prior learning-based methods that also provide approximate solutions to (3) with our GSBM. Meanwhile, Fig. 9 summarizes different classes of matching algorithms.

Table 6: Our GSBM features better feasibility, requires only samples from $\mu, \nu$, has local convergence analysis, and exhibits much better scalability.

Method | Feasibility (2) | Requirement on $\mu, \nu$ | Convergence analysis | Dimension d
NLSB (Koshizuka & Sato, 2023) | ✗ (relaxed) | samples & densities | ✓ (local) | 5
DeepGSB (Liu et al., 2022) | ✗ (only in limit) | samples & densities | ✗ | 1000
GSBM (this work) | ✓ | only samples | ✓ (local) | 12K

Figure 9: Summary of different matching algorithms. Notice that OT and SB are w.r.t. $\ell_2$ costs.

B PROOFS

Proposition 1 (Stage 1). The unique minimizer to Stage 1 coincides with $\nabla s^*_t(X_t)$.

Proof. By keeping $p_t$ fixed in (3), the value of $\mathbb{E}_{p_t}[V_t(X_t)]$ no longer relies on $u^\theta_t$, rendering (3) the same optimization problem as in implicit matching; see (4) and Sec. 2. As implicit matching (4) admits a unique minimizer $\nabla s^*_t$, we conclude the proof.

Proposition 2 (Stage 2; Conditional stochastic optimal control; CondSOC). Let the minimizer to Stage 2 be factorized by $p_t(X_t) = \int p_t(X_t|x_0,x_1)\,p^\theta_{0,1}(x_0,x_1)\,dx_0\,dx_1$, where $p^\theta_{0,1}$ is the boundary coupling induced by solving (1) with $u^\theta_t$. Then, $p_t(X_t|x_0,x_1) \equiv p_{t|0,1}$ solves:

$$\min_{u_{t|0,1}} \; J := \int_0^1 \mathbb{E}_{p_t(X_t|x_0,x_1)}\left[\frac{1}{2}\|u_t(X_t|x_0,x_1)\|^2 + V_t(X_t)\right] dt \quad (6a)$$
$$\text{s.t.} \quad dX_t = u_t(X_t|x_0,x_1)\,dt + \sigma\,dW_t, \quad X_0 = x_0, \; X_1 = x_1. \quad (6b)$$

Proof.
Let us recall the GSB problem (3) in the form of the FPE constraint (2):

$$\min \int_0^1 \mathbb{E}_{p_t}\left[\frac{1}{2}\|u_t(X_t)\|^2 + V_t(X_t)\right] dt \quad (17a)$$
$$\text{s.t.} \quad \partial_t p_t(X_t) = -\nabla\cdot(u_t(X_t)\,p_t(X_t)) + \frac{\sigma^2}{2}\Delta p_t(X_t), \quad p_0 = \mu, \; p_1 = \nu. \quad (17b)$$

Under mild regularity assumptions (Anderson, 1982; Yong & Zhou, 1999), such that the Leibniz rule and Fubini's theorem apply, we can separate $p_{0,1}$, as it is assumed to be fixed, out of the marginal $p_t$. Specifically, the objective value (17a) can be decomposed into

$$\int_0^1 \mathbb{E}_{p_t}\left[\tfrac{1}{2}\|u_t\|^2 + V_t\right] dt = \int_0^1 \mathbb{E}_{p_{0,1}}\mathbb{E}_{p_{t|0,1}}\left[\tfrac{1}{2}\|u_t\|^2 + V_t\right] dt, \quad \text{by the law of total expectation,}$$
$$= \mathbb{E}_{p_{0,1}}\left[\int_0^1 \mathbb{E}_{p_{t|0,1}}\left[\tfrac{1}{2}\|u_t\|^2 + V_t\right] dt\right], \quad \text{by Fubini's theorem,}$$

which recovers (6a). Similarly, each term in the FPE (17b) can be decomposed into

$$\partial_t p_t(X_t) = \partial_t \int p_{t|0,1}\,p_{0,1}\,dx_{0,1} = \mathbb{E}_{p_{0,1}}\left[\partial_t p_{t|0,1}\right],$$
$$\nabla\cdot(u_t(X_t)\,p_t(X_t)) = \nabla\cdot\left(u_t(X_t)\int p_{t|0,1}\,p_{0,1}\,dx_{0,1}\right) = \mathbb{E}_{p_{0,1}}\left[\nabla\cdot(u_t\,p_{t|0,1})\right],$$
$$\Delta p_t(X_t) = \Delta\int p_{t|0,1}\,p_{0,1}\,dx_{0,1} = \mathbb{E}_{p_{0,1}}\left[\Delta p_{t|0,1}\right],$$

and, finally, $p_0(x_0) = \mathbb{E}_{p_{0,1}}[\delta_{x_0}(x)]$ and $p_1(x_1) = \mathbb{E}_{p_{0,1}}[\delta_{x_1}(x)]$. Collecting all related terms yields

$$\partial_t p_{t|0,1} = -\nabla\cdot(u_t\,p_{t|0,1}) + \frac{\sigma^2}{2}\Delta p_{t|0,1}, \quad p_0 = \delta_{x_0}, \; p_1 = \delta_{x_1},$$

which is equivalent to the conditional SDE in (6b). Note that we denote $u_t$ as $u_t(X_t|x_0,x_1)$ to emphasize the fact that when $p_{t|0,1}(X_t|x_0,x_1)$ is factorized out of $p_t(X_t)$, the GSB problem (3) can be factorized into a mixture of SOC problems, each with an end-point constraint $(x_0, x_1)$.

Remark (PDE interpretation of Proposition 2 and (6)). The optimal control to (6) is given by $u^*_t(X_t|x_0,x_1) = \sigma^2\nabla\log\Psi_t(X_t|x_1)$, where the time-varying potential $\Psi_t(X_t|x_1)$ solves a partial differential equation (PDE) known as the Hamilton-Jacobi-Bellman (HJB) PDE:

$$\partial_t\Psi_t(x|x_1) = -\frac{\sigma^2}{2}\Delta\Psi_t(x|x_1) + \frac{1}{\sigma^2}V_t(x)\,\Psi_t(x|x_1), \quad \Psi_1(x|x_1) = \delta_{x_1}(x). \quad (18)$$

In general, (18) lacks closed-form solutions, except in specific instances. For example, when $V$ is quadratic, the solution is provided in (19). Additionally, when $V$ is degenerate, (18) simplifies to a heat kernel, and its solution corresponds to the drift of the Brownian bridge utilized in DSBM (Shi et al., 2023). Otherwise, one can approximate its solution with the aid of path integral theory, as shown in Prop. 4.

Lemma 3 (Analytic solution to (6) for quadratic $V$ and $\sigma > 0$). Let $V(x) := \alpha\|\sigma x\|^2$, $\alpha, \sigma > 0$. Then the optimal solution to (6) follows a Gaussian path $X^*_t \sim \mathcal{N}(c_t x_0 + e_t x_1, \gamma_t^2 I_d)$, where

$$c_t = \frac{\sinh(\eta(1-t))}{\sinh\eta}, \quad e_t = \cosh(\eta(1-t)) - c_t\cosh\eta, \quad \gamma_t = \sigma\sqrt{\frac{\sinh(\eta(1-t))}{\eta}\,e_t}, \quad \eta = \sigma\sqrt{2\alpha}.$$

These coefficients recover Brownian bridges as $\alpha \to 0$ and, if further $\sigma \to 0$, straight lines.

Proof. This is a direct consequence of conditioned diffusion processes with quadratic killing rates (Mazzolo & Monthus, 2022). Specifically, Mazzolo & Monthus (2022, Eq. (84)) give the analytic expression for the optimal control of (6) when $V_t := \alpha\|\sigma x\|^2$:

$$u_t(X_t|x_0,x_1) = \frac{\eta}{\sinh(\eta(1-t))}\,x_1 - \eta\coth(\eta(1-t))\,X_t, \quad \eta := \sigma\sqrt{2\alpha}. \quad (19)$$

As (19) suggests a linear SDE, its mean $\mu_t \in \mathbb{R}^d$ solves an ODE (Särkkä & Solin, 2019):

$$\frac{d\mu_t}{dt} = \frac{\eta}{\sinh(\eta(1-t))}\,x_1 - \eta\coth(\eta(1-t))\,\mu_t, \quad (20)$$

whose analytic solution is given by $\mu_t = g_t\left(x_1\int_0^t \frac{\eta}{g_\tau\,\sinh(\eta(1-\tau))}\,d\tau + K\right)$, where $K$ depends on the initial condition and

$$g_t := \exp\left(-\int_0^t \eta\coth(\eta(1-\tau))\,d\tau\right) \stackrel{(*)}{=} \exp\left(\ln\frac{\sinh(\eta(1-t))}{\sinh\eta}\right) = \frac{\sinh(\eta(1-t))}{\sinh\eta},$$

where (*) is due to the change of variable $v_\tau := \sinh(\eta(1-\tau))$. Substituting $g_t$ back into (20), and noticing that $x_0 = \mu_0 = x_1 \cdot 0 + K \cdot 1 = K$, yields the desired coefficients:

$$\mu_t = c_t x_0 + e_t x_1, \quad c_t = \frac{\sinh(\eta(1-t))}{\sinh\eta}, \quad e_t = \cosh(\eta(1-t)) - c_t\cosh\eta. \quad (21)$$

Similarly, the variance $\Sigma_t \in \mathbb{R}^{d\times d}$ of the SDE in (19) solves an ODE

$$\frac{d\Sigma_t}{dt} = -2\eta\coth(\eta(1-t))\,\Sigma_t + \sigma^2 I_d.$$
Repeating a similar derivation as in solving $\mu_t$ leads to

$$\Sigma_t = \frac{\sigma^2}{\eta}\sinh^2(\eta(1-t))\big(\coth(\eta(1-t)) - \coth\eta\big)\,I_d = \frac{\sigma^2}{\eta}\sinh(\eta(1-t))\,e_t\,I_d, \quad (22)$$

which gives the desired $\gamma_t$ since $\Sigma_t = \gamma_t^2 I_d$. We now show how these coefficients (21) and (22) recover the Brownian bridge as $\alpha$, or equivalently $\eta := \sigma\sqrt{2\alpha}$, approaches the zero limit:

$$\lim_{\eta\to 0} c_t = \lim_{\eta\to 0}\frac{e^{\eta(1-t)} - e^{-\eta(1-t)}}{e^\eta - e^{-\eta}} \stackrel{(*)}{=} \frac{1 + \eta(1-t) - (1 - \eta(1-t))}{1 + \eta - (1 - \eta)} = 1-t,$$
$$\lim_{\eta\to 0} e_t = \lim_{\eta\to 0}\big(\cosh(\eta(1-t)) - c_t\cosh\eta\big) = 1 - (1-t)\cdot 1 = t,$$
$$\lim_{\eta\to 0} \gamma_t = \lim_{\eta\to 0}\sigma\sqrt{\frac{\sinh(\eta(1-t))}{\eta}\,e_t} \stackrel{(**)}{=} \sigma\sqrt{t(1-t)},$$

where (*) and (**) are respectively due to $\exp(x) \approx 1 + x$ and

$$\lim_{\eta\to 0}\frac{\sinh(\eta(1-t))}{\eta} = \lim_{\eta\to 0}\frac{e^{\eta(1-t)} - e^{-\eta(1-t)}}{2\eta} = 1-t.$$

Hence, we recover the analytic solution of the Brownian bridge. It can readily be seen that, if further $\sigma\to 0$, the solution collapses to linear interpolation between $x_0$ and $x_1$.

Remark (Lemma 3 satisfies the boundary conditions in (6b)). One can verify that $c_0 = 1$, $e_0 = 0$, and $\gamma_0 = 0$. Furthermore, we have $c_1 = 0$, $e_1 = 1$, and $\gamma_1 = 0$. Hence, as expected, Lemma 3 satisfies the boundary conditions in (6b).

Proposition 4 (Path integral solution to (6)). Let $r(\bar X|x_0,x_1)$ be a distribution absolutely continuous w.r.t. the Brownian motion, denoted $q(\bar X|x_0)$, where we shorthand $\bar X \equiv X_{t\in[0,1]}$. Suppose $r$ is associated with an SDE $dX_t = v_t(X_t)\,dt + \sigma\,dW_t$, $X_0 = x_0$, $X_1 = x_1$, $\sigma > 0$. Then the optimal, path-integral solution to (6) can be obtained via

$$p^*(\bar X|x_0,x_1) = \frac{1}{Z}\,\omega(\bar X|x_0,x_1)\,r(\bar X|x_0,x_1), \quad (10)$$

where $Z$ is the normalization constant and $\omega$ is the importance weight:

$$\omega(\bar X|x_0,x_1) := \exp\left(-\int_0^1 \left[\frac{1}{\sigma^2}V_t(X_t) + \frac{1}{2\sigma^2}\|v_t(X_t)\|^2\right] dt - \int_0^1 \frac{1}{\sigma}v_t(X_t)^\top dW_t\right). \quad (11)$$

Proof. As the terminal boundary condition of (6b) is pinned at $x_1$, we can transform (6) into:

$$\min \int_0^1 \mathbb{E}_{p_t(X_t|x_0,x_1)}\left[\frac{1}{2\sigma^2}\|u_t(X_t|x_0,x_1)\|^2 + \frac{1}{\sigma^2}V_t(X_t)\right] dt - \mathbb{E}\left[\log\mathbb{1}_{x_1}(X_1)\right], \quad (23a)$$
$$\text{s.t.} \quad dX_t = u_t(X_t|x_0,x_1)\,dt + \sigma\,dW_t, \quad X_0 = x_0, \quad (23b)$$

where $\mathbb{1}_A(\cdot)$ is the indicator function of the set $A$. Hence, the terminal cost $-\log\mathbb{1}_{x_1}(x)$ vanishes at $x = x_1$ but otherwise explodes. Equation (23) is a valid stochastic optimal control (SOC) problem, and its optimal solution can be obtained in the form of a path integral (Kappen, 2005):

$$p^*(\bar X|x_0,x_1) = \frac{1}{Z}\exp\left(-\int_0^1 \frac{1}{\sigma^2}V_t(X_t)\,dt\right)\mathbb{1}_{x_1}(X_1)\,q(\bar X|x_0), \quad (24)$$

where $q(\cdot|x_0)$ is the density of the Brownian motion conditioned on $X_0 = x_0$, the normalization constant $Z$ is such that (24) remains a proper distribution, and we shorthand $\bar X \equiv X_{t\in[0,1]}$. We highlight that Equations (23) and (24) are essential transformations that allow us to recover the conditional distribution used in prior works (Shi et al., 2023) when $V_t := 0$ (see the remark below).

Directly re-weighting samples from $q$ using (24) may have poor sample complexity, as most samples are assigned zero weights due to $\mathbb{1}_{x_1}(\cdot)$. A more efficient alternative is to rebase the sampling distribution in (24) from $q$ to an importance distribution $r(\bar X|x_0,x_1)$ such that $X_0 = x_0$, $X_1 = x_1$, and which is absolutely continuous with respect to $q$. Then, remarkable results from information-theoretic SOC (Theodorou et al., 2010; Theodorou, 2015) suggest

$$p^*(\bar X|x_0,x_1) = \frac{1}{Z}\exp\left(-\int_0^1 \frac{1}{\sigma^2}V_t(X_t)\,dt\right)\frac{q(\bar X|x_0)}{r(\bar X|x_0,x_1)}\,r(\bar X|x_0,x_1), \quad (25)$$

where the Radon-Nikodym derivative $\frac{q}{r}$ can be computed via Girsanov's theorem (Särkkä & Solin, 2019):

$$\frac{q(\bar X|x_0)}{r(\bar X|x_0,x_1)} = \exp\left(-\int_0^1 \frac{1}{2\sigma^2}\|v_t(X_t)\|^2\,dt - \int_0^1 \frac{1}{\sigma}v_t(X_t)^\top dW_t\right). \quad (26)$$

Substituting (26) back into (25) concludes the proof.

Remark (Equation (24) recovers the Brownian bridge when $V_t := 0$). One can verify that, when $V_t := 0$, the optimal solution $p^*(\bar X|x_0,x_1) \propto q(\bar X|x_0)\,\mathbb{1}_{x_1}(X_1) = q(\bar X|x_0,x_1)$ is simply the Brownian motion conditioned on the end point being $x_1$, which is precisely the Brownian bridge.
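The coefficients of Lemma 3 and the limits just derived are easy to check numerically. The snippet below, a sketch with no counterpart in the paper, evaluates $(c_t, e_t, \gamma_t)$ and confirms they approach the Brownian-bridge values $(1-t,\, t,\, \sigma\sqrt{t(1-t)})$ as $\alpha \to 0$.

```python
import numpy as np

def gsb_coefficients(t, alpha, sigma):
    """Coefficients of the Gaussian path in Lemma 3 for V(x) = alpha ||sigma x||^2."""
    eta = sigma * np.sqrt(2.0 * alpha)
    c = np.sinh(eta * (1 - t)) / np.sinh(eta)
    e = np.cosh(eta * (1 - t)) - c * np.cosh(eta)   # equals sinh(eta t)/sinh(eta) >= 0
    gamma = sigma * np.sqrt(np.sinh(eta * (1 - t)) / eta * e)
    return c, e, gamma

t, sigma = 0.3, 1.0
print(gsb_coefficients(t, alpha=1e-10, sigma=sigma))  # ~ (0.7, 0.3, 0.4583)
print(1 - t, t, sigma * np.sqrt(t * (1 - t)))         # Brownian-bridge limit
```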
Theorem 5 (Local convergence). Let the intermediate result of Stages 1 and 2, after repeating $n$ times, be $\theta_n$. Then its objective value in (3), $\mathcal{L}(\theta_n)$, is monotonically non-increasing as $n$ grows: $\mathcal{L}(\theta_n) \ge \mathcal{L}(\theta_{n+1})$.

Proof. Let $p^{\theta_n}_t$ and $u^{\theta_n}_t$ be the marginal distribution and vector field, respectively, after $n$ steps of alternating optimization according to Alg. 5, i.e., $p^{\theta_n}_t$ and $u^{\theta_n}_t$ satisfy the FPE. Furthermore, let $p^{\theta_n}_{0,1}$ and $p^{\theta_n}_{t|0,1}$ be the coupling and conditional distribution, respectively, defined by $u^{\theta_n}_t$, and let $p^*_{t|0,1}$ be the solution to CondSOC (6). This implies that the marginal distributions at step $n+1$ are given by

$$p^{\theta_{n+1}}_t := \int p^{\theta_n}_{0,1}\,p^*_{t|0,1}\,dx_{0,1}. \quad (27)$$

Now, under mild assumptions (Anderson, 1982; Yong & Zhou, 1999), such that all distributions approach zero at a sufficient speed as $\|x\|\to\infty$ and all integrands are bounded, we have (inputs to all functions are dropped for notational simplicity):

$$\mathcal{L}(\theta_n) = \int_0^1\!\!\int p^{\theta_n}_t\left(\tfrac{1}{2}\|u^{\theta_n}_t\|^2 + V_t\right) dx_t\,dt$$
$$= \int p^{\theta_n}_{0,1}\left(\int_0^1\!\!\int p^{\theta_n}_{t|0,1}\left(\tfrac{1}{2}\|u^{\theta_n}_t\|^2 + V_t\right) dx_t\,dt\right) dx_{0,1}, \quad \text{by Fubini's theorem,}$$
$$\ge \int p^{\theta_n}_{0,1}\left(\int_0^1\!\!\int p^*_{t|0,1}\left(\tfrac{1}{2}\|u^*_{t|0,1}\|^2 + V_t\right) dx_t\,dt\right) dx_{0,1}, \quad \text{by optimizing (6),}$$
$$= \int_0^1\!\!\int p^{\theta_{n+1}}_t\left(\tfrac{1}{2}\,\mathbb{E}\big[\|u^*_{t|0,1}\|^2 \,\big|\, X_t\big] + V_t\right) dx_t\,dt, \quad \text{by Fubini's theorem and (27),}$$
$$\ge \int_0^1\!\!\int p^{\theta_{n+1}}_t\left(\tfrac{1}{2}\|u^{\theta_{n+1}}_t\|^2 + V_t\right) dx_t\,dt = \mathcal{L}(\theta_{n+1}), \quad (28)$$

where we factorize the solution of Stage 1 by $p^{\theta_n}_t := \int p^{\theta_n}_{0,1}\,p^{\theta_n}_{t|0,1}\,dx_{0,1}$. The inequality in (28) follows from the fact that $u^{\theta_{n+1}}_t$, as the solution to the subsequent Stage 1, is the unique minimizer that yields the same marginal as $p^{\theta_{n+1}}_t$ while minimizing the kinetic energy, i.e.,

$$\int p^{\theta_{n+1}}_t\,\mathbb{E}\big[\|u^*_{t|0,1}\|^2 \,\big|\, X_t\big]\,dx_t \ge \int p^{\theta_{n+1}}_t\,\|u^{\theta_{n+1}}_t\|^2\,dx_t.$$

Theorem 6. The optimal solution to the GSB problem (3) is a fixed point of GSBM, Alg. 5.

Proof. Let $p^*_t$ and $u^*_t$ be the optimal solution to the GSB problem in (3), and suppose $p^*_t := \int p^*_{t|0,1}\,p^*_{0,1}\,dx_0\,dx_1$. It suffices to show that

1. Given $x_0, x_1 \sim p^*_{0,1}$, the conditional distribution $p^*_{t|0,1}$ is the optimal solution to (6).
2. Given $p^*_t$, both matching algorithms (Alg. 1 and 2) return $u^*_t$.

The first statement follows directly from the Forward-Backward SDE (FBSDE) representation of (3), initially derived in Liu et al. (2022, Theorem 2). The FBSDE theory suggests that the (conditional) optimal control bridging any $x_0, x_1 \sim p^*_{0,1}$ satisfies a BSDE (Pardoux & Peng, 2005) that is uniquely associated with the HJB PDE of (6), i.e., (18). This readily implies that $p^*_{t|0,1}$ is the solution to (6).

Since $u^*_t$ is known to be a gradient field (Liu et al., 2022, Eq. (3)), it must coincide with the solution returned by the implicit matching algorithm (Alg. 1), as $\mathcal{L}_{\text{implicit}}$ returns the unique gradient field that matches $p^*_t$. On the other hand, the explicit matching loss (5) can be interpreted as a Markovian projection (Shi et al., 2023), i.e., it returns the closest (in the KL sense) Markovian process to the reciprocal path measure (Léonard, 2013; Léonard et al., 2014) defined by the solution to (6).
C ADDITIONAL DERIVATIONS AND DISCUSSIONS

C.1 EXPLICIT MATCHING LOSS

How (5) preserves the prescribed p_t. A rigorous derivation of (5), called Markovian projection, can be found in, e.g., Shi et al. (2023, Proposition 2). Here, we provide an alternative derivation that follows more closely the one from flow matching (Lipman et al., 2023).

Lemma 7. Let the marginal p_t be constructed from a mixture of conditional probability paths, i.e., p_t(x) := E_{p_{0,1}}[p_t(x|x_0, x_1)], where p_t(x|x_0, x_1) ≡ p_{t|0,1} is the time marginal of the SDE

dX_t = u_t(X_t|x_0, x_1)dt + σdW_t, X_0 = x_0, X_1 = x_1.

Then the SDE drift that satisfies the FPE prescribed by p_t is given by

u★_t(x) = E_{p_{0,1}}[ u_t(x|x_0, x_1) p_{t|0,1}(x|x_0, x_1) ] / p_t(x). (29)

Proof. It suffices to check that u★_t(x) satisfies the FPE prescribed by p_t:

∂_t p_t = ∫ ∂_t p_{t|0,1} p_{0,1} dx_{0,1}
 = ∫ [ −∇·(u_{t|0,1} p_{t|0,1}) + (σ²/2) Δp_{t|0,1} ] p_{0,1} dx_{0,1}
 = −∇·( ∫ u_{t|0,1} p_{t|0,1} p_{0,1} dx_{0,1} ) + (σ²/2) Δp_t
 (29)= −∇·(u★_t p_t) + (σ²/2) Δp_t.

An immediate consequence of Lemma 7 is that

L_explicit(θ) = E_{p_t}[ ½‖u^θ_t(X_t) − u★_t(X_t)‖² ] + O(1), (30)

where the O(1) term is independent of θ. Hence, the minimizer u^θ_t preserves the prescribed p_t.

Relation to implicit matching (4). Both the explicit and implicit matching losses are regression objectives, except w.r.t. different targets, i.e., u★_t in (30) vs. s★_t in Sec. 2. As pointed out in Neklyudov et al. (2023), the two targets relate to each other via the Helmholtz decomposition (Ambrosio et al., 2008, Lemma 8.4.2), which suggests that u★_t = s★_t + w_t, where w_t is a divergence-free vector field, i.e., ∇·(w_t p_t) = 0. Though this implies that L_explicit only upper-bounds the kinetic energy, the sequence of its solutions, obtained by alternating with solving (6) in Stage 2, remains well-defined from the measure perspective (Peluchetti, 2022; 2023; Shi et al., 2023). Specifically, the sequence performs alternate projections between the reciprocal path measure defined by the solution to (6), in the form p_t := E_{p_{0,1}}[p_{t|0,1}], and the Markovian path measure, and admits convergence to the optimal solution for standard SB problems.

C.2 GAUSSIAN PROBABILITY PATH

Derivation of the analytic conditional drift in (8). Recall the Gaussian path approximation in (7),

X_t = µ_t + γ_t Z, Z ∼ N(0, I_d),

which immediately implies the velocity vector field and the score function (Särkkä & Solin, 2019; Albergo et al., 2023):

∂_t X_t = ∂_t µ_t + (∂_t γ_t / γ_t)(X_t − µ_t), ∇log p_t(X_t) = −(1/γ_t²)(X_t − µ_t).

We can then construct the conditional drift:

u_t(X_t|x_0, x_1) = ∂_t X_t + (σ²/2) ∇log p_t(X_t) = ∂_t µ_t + ( ∂_t γ_t / γ_t − σ²/(2γ_t²) )(X_t − µ_t). (31)

Notice that (31) is of the form of a gradient field due to its linearity in X_t. One can verify that substituting the Brownian bridge, µ_t := (1−t)x_0 + t x_1 and γ_t := σ√(t(1−t)), into (31) indeed yields the desired drift (x_1 − X_t)/(1 − t).
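As a quick sanity check on (31), the following short PyTorch sketch (our own illustrative code, not the paper's) evaluates the conditional drift for the Brownian bridge coefficients; the scalar coefficient of (X_t − µ_t) simplifies to −1/(1−t), so the returned drift equals (x_1 − X_t)/(1 − t).

```python
import torch

def cond_drift_bb(xt, t, x0, x1, sigma):
    """Conditional drift (31) instantiated with Brownian bridge coefficients.
    xt, x0, x1: (B, d) tensors; t: tensor in (0, 1) broadcastable to (B, 1)."""
    mu = (1 - t) * x0 + t * x1
    dmu = x1 - x0                                       # ∂_t µ_t
    gamma = sigma * torch.sqrt(t * (1 - t))             # γ_t
    dgamma = sigma * (1 - 2 * t) / (2 * torch.sqrt(t * (1 - t)))
    a = dgamma / gamma - sigma**2 / (2 * gamma**2)      # coefficient in (31); equals -1/(1-t)
    return dmu + a * (xt - mu)                          # simplifies to (x1 - xt)/(1 - t)
```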
Efficient simulation with analytic covariance function (Footnote 4). Proposition 4 requires samples from the joint distribution in path space, which in general entails sequential simulation and can scale poorly to higher-dimensional applications. Fortunately, (6b) with the (linear) conditional drift u_{t|0,1} given by (8) admits efficient computation, as its analytic solution reads

X_t = e^{g_t} ( x_0 + ∫₀ᵗ e^{−g_τ} b_τ dτ + ∫₀ᵗ e^{−g_τ} σ dW_τ ), g_t := ∫₀ᵗ a_τ dτ, (32)

where a_t ∈ R is the scalar coefficient of (X_t − µ_t) defined as in (8) (cf. (31)), b_t := ∂_t µ_t − a_t µ_t, and ∫ · dW_τ is the Itô stochastic integral (Itô, 1951). The covariance function between two time steps s, t with 0 ≤ s < t ≤ 1 can then be computed by

Cov(s, t) = e^{g_s + g_t} E[ ⟨ ∫₀ˢ e^{−g_τ} σ dW_τ, ∫₀ᵗ e^{−g_τ} σ dW_τ ⟩ ]
 (i)= e^{g_s + g_t} E[ ‖ ∫₀ˢ e^{−g_τ} σ dW_τ ‖² ]
 (ii)= e^{g_s + g_t} ∫₀ˢ e^{−2g_τ} σ² dτ
 (iii)= e^{g_t − g_s} γ_s²,

where (i) is due to the independence of the Itô integrals over [0, s] and [s, t], (ii) is due to the Itô isometry, and, finally, (iii) is due to substituting γ_s² = Var(s) := e^{2g_s} ∫₀ˢ e^{−2g_τ} σ² dτ. Repeating the same derivation for 0 ≤ t < s ≤ 1, the covariance function of (6b) between any two timesteps s, t ∈ [0, 1] can be written cleanly as

Cov(s, t) = γ²_{min(s,t)} e^{g_{max(s,t)} − g_{min(s,t)}}, (33)

which, crucially, requires only solving a 1D ODE, dg_t = a_t dt, g_0 = 0. We summarize the procedure in Alg. 6; Fig. 10 shows how Alg. 6 faithfully recovers the covariance matrix of, in this case, the Brownian bridge. We note again that all computation is parallelizable over the batch.

Figure 10: How Alg. 6 faithfully recovers the covariance matrix of, in this case, the Brownian bridge.

Algorithm 6 CovSample (line 1 in Alg. 4)
Require: SDE (6b) with µ_t, γ_t
 Discretize t̄ := [t_1, ..., t_K], 0 < t_1 < ... < t_K < 1
 Solve g_t̄ according to the 1D ODE in (32)
 Compute the covariance matrix C_t̄ ∈ R^{K×K} by (33)
 Perform the Cholesky decomposition C_t̄ = L_t̄ L_t̄ᵀ
 Sample X_t̄ = µ_t̄ + L_t̄ Z_t̄, Z_t̄ ∈ R^{K×d}, [Z_t̄]_{ij} ∼ N(0, 1)
return X_t̄
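A minimal PyTorch sketch of Alg. 6 is given below, instantiated for the Brownian bridge, where a_t = −1/(1−t) so g_t = ∫₀ᵗ a_τ dτ = log(1 − t) is available in closed form; the function name, grid choice, and jitter are our own, and in general g_t would be obtained by integrating the 1D ODE numerically.

```python
import torch

def cov_sample(x0, x1, sigma, K=50, n_paths=8):
    """Joint path-space samples X_{t_1..t_K} via (33) (a sketch of Alg. 6).
    x0, x1: (d,) endpoints; returns (n_paths, K, d) samples."""
    ts = torch.linspace(0, 1, K + 2)[1:-1]             # interior grid 0 < t_1 < ... < t_K < 1
    gamma2 = sigma**2 * ts * (1 - ts)                  # γ_t² for the Brownian bridge
    g = torch.log(1 - ts)                              # g_t, closed form in this case
    idx = torch.arange(K)
    I = torch.minimum(idx[:, None], idx[None, :])      # index of min(s, t)
    J = torch.maximum(idx[:, None], idx[None, :])      # index of max(s, t)
    C = gamma2[I] * torch.exp(g[J] - g[I])             # covariance matrix by (33)
    L = torch.linalg.cholesky(C + 1e-9 * torch.eye(K))  # jitter for numerical stability
    mu = (1 - ts)[:, None] * x0 + ts[:, None] * x1     # µ_t on the grid, (K, d)
    Z = torch.randn(n_paths, K, x0.shape[-1])          # i.i.d. N(0, 1)
    return mu + torch.einsum("ij,njd->nid", L, Z)
```

For the Brownian bridge this reproduces Cov(s, t) = σ² min(s, t)(1 − max(s, t)), consistent with Fig. 10.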
C.3 ANALYTIC SOLUTION TO (6) FOR QUADRATIC V AND σ = 0

Recall the finite-horizon, continuous-time linear quadratic regulator (LQR), defined as

min_{u_t} x_1ᵀ F x_1 + ∫₀¹ ( x_tᵀ Q x_t + u_tᵀ R u_t ) dt s.t. ẋ_t = A x_t + B u_t, (34)

whose optimal control u★_t = −R⁻¹Bᵀ P_t x_t can be obtained by solving a Riccati differential equation,

−Ṗ_t = Aᵀ P_t + P_t A − P_t B R⁻¹ Bᵀ P_t + Q, P_1 = F,

or, equivalently, a Lyapunov differential equation (one can verify that H_t = P_t⁻¹ for all t):

Ḣ_t = A H_t + H_t Aᵀ − B R⁻¹ Bᵀ + H_t Q H_t, H_1 = F⁻¹. (35)

The end-point constraint can be accounted for by letting F → ∞, as suggested in Chen & Georgiou (2015), yielding H_1 = 0. With that, the solution to (6) for quadratic V(x) := α‖x‖² and σ := 0 can be obtained by solving the matrix ODE in (35) with A = 0, B = I_d, R = ½I_d, and Q = αI_d, which admits an analytic solution in the form of hyperbolic tangent functions, similar to (19).

D EXPERIMENT DETAILS

D.1 EXPERIMENT SETUP

Table 7: Hyperparameters of SplineOpt (Alg. 3) for each task.

                              Stunnel  Vneck  GMM   LiDAR  AFHQ  Opinion
Number of control pts. K      30       30     30    30     8     30
Number of gradient steps M    1000     3000   2000  200    100   700
Number of samples |i|         4        4      4     4      4     4
Optimizer                     SGD      SGD    SGD   mSGD   Adam  SGD

Baselines. All DeepGSB experiments are run with their official implementation⁵ and default hyperparameters. We adopt the actor-critic parameterization, as it generally yields better performance despite requiring additional value networks. On the other hand, we implement DSBM ourselves, as our GSBM can be made equivalent to DSBM by disabling the optimization of the CondSOC problem in (6) and returning the analytic Brownian-bridge solution instead. This allows us to ablate the algorithmic differences more effectively, ensuring that any performance gaps are attributed to the presence of V_t. All methods, including our GSBM, are implemented in PyTorch (Paszke et al., 2019).

Network architectures. For crowd navigation (Sec. 4.1) and opinion depolarization (Sec. 4.3), we adopt the same architectures as DeepGSB, which consist of 4 to 5 residual blocks with sinusoidal time embeddings. For the AFHQ task, we consider the U-Net (Ronneberger et al., 2015) architecture implemented by Dhariwal & Nichol (2021).⁶ All networks are trained from scratch, without utilizing any pretrained checkpoint, and optimized with AdamW (Loshchilov & Hutter, 2019).

Task-specific noise level (σ). For crowd navigation tasks with mean-field cost, we adopt σ ∈ {1.0, 2.0}, whereas the opinion depolarization task uses σ = 0.5; these values are inherited from DeepGSB. On the other hand, we use σ = 1 and 0.5 for the LiDAR and AFHQ tasks, respectively.

GSBM hyperparameters. Table 7 summarizes the hyperparameters used in the spline optimization. By default, the generation processes are discretized into 1000 steps, except for the opinion depolarization task, where we follow the DeepGSB setup and discretize into 300 steps.

GSBM implementation (forward & backward scheme). In practice, we employ the same forward-backward scheme proposed in DSBM (Shi et al., 2023), parameterizing two drifts, one for the forward SDE and another for the backward SDE. During odd epochs, we simulate the coupling (line 4 in Alg. 5) from the forward drift, solve the corresponding CondSOC problem (6), then match the resulting p_t with the backward drift. Conversely, during even epochs, we follow the reverse process, matching the forward drift with the p_t obtained from the backward drift. This forward-backward alternation generally improves performance, as the forward drift always matches the ground-truth terminal distribution p_1 = ν (and vice versa for the backward drift).

⁵ https://github.com/ghliu/DeepGSB, under Apache License.
⁶ https://github.com/openai/guided-diffusion, under MIT License.

Figure 11: Initial and terminal distributions for each crowd navigation task with mean-field state cost. Obstacles are marked gray.

Table 8: Multiplicative factors of the state cost for each crowd navigation task, V_t(x) = λ_obs L_obstacle(x) + λ_int L_interact(x; p_t).

        Stunnel  Vneck  GMM
λ_obs   1500     3000   1500
λ_int   50       8      5

Crowd navigation setup. All three mean-field tasks, Stunnel, Vneck, and GMM, are adopted from DeepGSB, as shown in Fig. 11. We slightly modify the initial distribution of GMM to test fully multi-modal distributions. As for the mean-field interaction cost (12), we consider the entropy cost for the Vneck task and the congestion cost for the Stunnel and GMM tasks. We adjust the multiplicative factors between L_obstacle and L_interact to ensure that noticeable changes occur when enabling mean-field interaction; see Table 8. These factors are much larger than the ones considered in DeepGSB (λ₂ < 3). In practice, we soften the obstacle cost for differentiability, similar to prior works (Ruthotto et al., 2020; Lin et al., 2021), and approximate p_t(x) ≈ Σ_{(x_0,x_1)} p_t(x|x_0, x_1) as a mixture of Gaussians.

AFHQ setup. As our state cost V_t is defined via a latent space, we pretrain a β-VAE (Higgins et al., 2016) with β = 0.1. The spherical linear interpolation (Slerp; Shoemake (1985)) refers to constant-speed rotational motion along a great-circle arc:

I_Slerp(t, z_0, z_1) := ( sin((1−t)Ω) / sin Ω ) z_0 + ( sin(tΩ) / sin Ω ) z_1, Ω = arccos(⟨z_0, z_1⟩). (36)

D.2 OPINION DEPOLARIZATION

Figure 12: Simulation of the polarize drift (37) in R^1000.

Polarize drift in (15). We use the same polarize drift as DeepGSB, based on the party model (Gaitonde et al., 2021). At each time step t, all agents receive the same random information ξ_t ∈ R^d sampled independently of p_t, then react to this information according to

f_polarize(x; p_t, ξ_t) := E_{y∼p_t}[ a(x, y, ξ_t) ȳ ], a(x, y, ξ_t) := { 1 if sign(⟨x, ξ_t⟩) = sign(⟨y, ξ_t⟩); −1 otherwise }, (37)

where ȳ := y/‖y‖^{1/2} and the agreement function a(x, y, ξ_t) indicates whether the two opinions x and y agree on the information ξ_t. Hence, (37) suggests that agents are inclined to be receptive to opinions they agree with while displaying antagonism towards opinions they disagree with. This is known to yield polarization, as shown in Figure 12.
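Under the definitions above, (37) admits a direct Monte Carlo implementation over an empirical population; the following PyTorch sketch uses our own naming, with `pop` standing in for samples y ∼ p_t.

```python
import torch

def f_polarize(x, pop, xi):
    """Monte Carlo estimate of the polarize drift (37) (a sketch).
    x: (B, d) current opinions; pop: (N, d) samples y ~ p_t approximating the
    mean-field; xi: (d,) shared random information ξ_t."""
    y_bar = pop / pop.norm(dim=-1, keepdim=True).sqrt()   # ȳ := y / ‖y‖^{1/2}
    agree = torch.sign(x @ xi)[:, None] * torch.sign(pop @ xi)[None, :]  # (B, N) in {±1}
    return (agree[..., None] * y_bar[None]).mean(dim=1)   # E_y[a(x, y, ξ_t) ȳ]
```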
Directional similarity in Figures 7 and 12. Directional similarity is a standard visualization for opinion modeling (Schweighofer et al., 2020) that histograms the cosine angle between pairwise opinions. Hence, a flatter directional similarity suggests a less polarized opinion distribution.

D.3 APPROXIMATING GEOMETRIC MANIFOLDS WITH LIDAR DATA

Since the LiDAR data is a collection of point clouds in [−5, 5]³ ⊂ R³, we use standard methods to treat it as a Riemannian manifold. For every point in the ambient space, we define the projection operator by first taking the k-nearest neighbors and fitting a 2D tangent plane. Let N_k(x) = {x_1, ..., x_k} be the set of k-nearest neighbors of a query point x ∈ R³ in the ambient space. We then fit a 2D plane through a moving least-squares approach (Levin, 1998; Wendland, 2004),

argmin_{a,b,c} Σ_{i=1}^{k} w(x, x_i) ( a x_i^{(x)} + b x_i^{(y)} + c − x_i^{(z)} )², (38)

where the superscripts denote the x, y, z coordinates and we use the weighting w(x, x_i) = exp{−‖x − x_i‖/τ} with τ = 0.001. We solve this through a pseudoinverse and obtain the approximate tangent plane ax + by + c = z. When k → ∞, this tangent plane is smooth; however, we find that k = 20 works sufficiently well in our experiments, and our GSBM algorithm is robust to the value of k. The projection operator π(x) is then defined using the plane,

π(x) = x − ( (⟨x, n⟩ + c) / ‖n‖² ) n, where n = [a b −1]ᵀ. (39)

This projection operator is all we need to treat the LiDAR dataset as a manifold. Differentiating through π automatically projects the gradient onto the tangent plane. This ensures that optimization through the state cost V_t is appropriately projected onto the tangent plane, allowing us to optimize quantities such as the height of the trajectory over the manifold. The exact state cost we use includes an additional boundary constraint:

V(x) = λ [ ‖π(x) − x‖² + exp(π(x)^{(z)}) + Σ_{p∈{x^{(x)}, x^{(y)}}} ( sigm((p − 5)/0.1) + 1 − sigm((p + 5)/0.1) ) ], (40)

where the three terms are denoted L_manifold, L_height, and L_boundary, respectively, and sigm is the sigmoid function, used to relax the boundary constraint. L_boundary simply ensures we do not leave the area where the LiDAR data exists. We set λ = 5000. The gradient of L_manifold(x) moves x in a direction orthogonal to the tangent plane, while the gradient of L_height(x) moves x along directions that lie on the tangent plane. If we only had L_manifold, then CondSOC would essentially solve for geodesic paths, and we would recover an approximation of Riemannian Flow Matching (Chen & Lipman, 2023), parameterized in the ambient space, when σ → 0. The state cost L_height depends on π(x), as this ensures we travel down the mountain slope when optimizing for height. Other state costs can, of course, be considered instead of L_height.
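A compact PyTorch sketch of the projection (38)-(39) follows; the function and variable names are ours, and the weighted least-squares step is solved with a pseudoinverse as described above (the k-nearest-neighbor selection itself is not differentiated through, while the plane fit and projection are).

```python
import torch

def project(x, points, k=20, tau=1e-3):
    """Moving least-squares projection onto the approximate tangent plane.
    x: (3,) query point; points: (N, 3) LiDAR point cloud."""
    d2 = (points - x).pow(2).sum(-1)
    nn = points[d2.topk(k, largest=False).indices]         # N_k(x): k-nearest neighbors
    w = torch.exp(-(nn - x).norm(dim=-1) / tau)            # weights w(x, x_i)
    A = torch.cat([nn[:, :2], torch.ones(k, 1)], dim=-1)   # rows [x_i^(x), x_i^(y), 1]
    z = nn[:, 2:]                                          # targets x_i^(z)
    W = w[:, None]
    abc = torch.linalg.pinv(A.T @ (W * A)) @ (A.T @ (W * z))  # weighted LS for (a, b, c), cf. (38)
    a, b, c = abc.flatten()
    n = torch.stack([a, b, a.new_tensor(-1.0)])            # normal of plane a x + b y - z + c = 0
    return x - ((x @ n + c) / n.pow(2).sum()) * n          # projection (39)
```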
D.4 ADDITIONAL EXPERIMENT RESULTS

Figure 13: Illustration of how different couplings, X_1|x_0, emerge with nontrivial state costs V(x).

Nontrivial V(x) induces different couplings. Figure 13 illustrates how different couplings, X_1|x_0, emerge with nontrivial state costs V(x). Specifically, we color different regions of the initial distribution (leftmost column) with different colors and track their pushforward maps. We consider the same V(x), i.e., obstacle and congestion costs, for Stunnel (top row) and adopt a quadratic cost for Spiral→Moon (bottom row). In these two particular cases, having a nontrivial V(x) encourages stronger mixing, thereby yielding a different coupling (rightmost column) compared to the one induced without V(x) (middle column).

Additional interpolation results on AFHQ. Figure 14 provides additional comparisons between DSBM (Shi et al., 2023) and our GSBM on p_t(X_t|x_0, x_1) with randomly sampled (x_0, x_1).

Figure 14: Additional comparison between DSBM (Shi et al., 2023) and our GSBM through the mean of p_t(X_t|x_0, x_1) with randomly sampled (x_0, x_1). While DSBM simply performs linear interpolation between x_0 and x_1, our GSBM optimizes w.r.t. (6), where V_t is defined via a latent space, thereby exhibiting more semantically meaningful interpolations.

Additional comparisons on crowd navigation with mean-field cost. Table 9 and Fig. 15 report the performance of NLSB⁷ (Koshizuka & Sato, 2023), an adjoint-based method for approximating solutions to the same GSB problem (3). Figure 16 provides additional comparisons between DeepGSB (Liu et al., 2022) and our GSBM. Our GSBM clearly outperforms both methods.

Table 9: Quantitative comparison between NLSB (Koshizuka & Sato, 2023) and our GSBM. Notice that our GSBM consistently ensures feasibility. In contrast, as NLSB approaches the GSB problem by transforming it into a stochastic optimal control (SOC) problem with a soft terminal cost (see Appendix A), it may overly emphasize optimality at the expense of feasibility.

                               Feasibility W(p^θ_1, ν)      Optimality (3)
                               Stunnel  Vneck  GMM          Stunnel  Vneck   GMM
NLSB (Koshizuka & Sato, 2023)  30.54    0.02   67.76        207.06   147.85  4202.71
GSBM (ours)                    0.03     0.01   4.13         460.88   155.53  229.12

Figure 15: Performance of NLSB (Koshizuka & Sato, 2023) on the same crowd navigation tasks. Notice how NLSB gets trapped in a local minimum on Stunnel and fails completely on GMM.

Figure 16: Additional comparison between DeepGSB (Liu et al., 2022) and our GSBM on (top to bottom) Stunnel with σ = 2.0 and Vneck with σ ∈ {1.0, 2.0}. Notice again how the training of DeepGSB exhibits relative instability and occasional divergence.

⁷ https://github.com/take-koshizuka/nlsb, under MIT License.