Published as a conference paper at ICLR 2023

BUILDING NORMALIZING FLOWS WITH STOCHASTIC INTERPOLANTS

Michael S. Albergo
Center for Cosmology and Particle Physics, New York University, New York, NY 10003, USA
albergo@nyu.edu

Eric Vanden-Eijnden
Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
eve2@cims.nyu.edu

ABSTRACT

A generative model based on a continuous-time normalizing flow between any pair of base and target probability densities is proposed. The velocity field of this flow is inferred from the probability current of a time-dependent density that interpolates between the base and the target in finite time. Unlike conventional normalizing flow inference methods based on the maximum likelihood principle, which require costly backpropagation through ODE solvers, our interpolant approach leads to a simple quadratic loss for the velocity itself, expressed in terms of expectations that are readily amenable to empirical estimation. The flow can be used to generate samples from either the base or target, and to estimate the likelihood at any time along the interpolant. In addition, the flow can be optimized to minimize the path length of the interpolant density, thereby paving the way for building optimal transport maps. In situations where the base is a Gaussian density, we also show that the velocity of our normalizing flow can be used to construct a diffusion model to sample the target as well as estimate its score. However, our approach shows that we can bypass this diffusion completely and work at the level of the probability flow with greater simplicity, opening an avenue for methods based solely on ordinary differential equations as an alternative to those based on stochastic differential equations. Benchmarking on density estimation tasks illustrates that the learned flow can match and surpass conventional continuous flows at a fraction of the cost, and compares well with diffusions on image generation on CIFAR-10 and ImageNet 32×32. The method scales ab-initio ODE flows to previously unreachable image resolutions, demonstrated up to 128×128.

1 INTRODUCTION

Contemporary generative models have primarily been designed around the construction of a map between two probability distributions that transforms samples from the first into samples from the second. While progress has been made from various angles with tools such as implicit maps (Goodfellow et al., 2014; Brock et al., 2019) and autoregressive maps (Menick & Kalchbrenner, 2019; Razavi et al., 2019; Lee et al., 2022), we focus on the case where the map has a clear associated probability flow. Advances in this domain, namely from flow and diffusion models, have arisen through the introduction of algorithms or inductive biases that make learning this map, and the Jacobian of the associated change of variables, more tractable. The challenge is to choose what structure to impose on the transport to best reach a complex target distribution from a simple one used as base, while maintaining computational efficiency. In the continuous-time perspective, this problem can be framed as the design of a time-dependent map, Xt(x) with t ∈ [0, 1], which functions as the push-forward of the base distribution at time t = 0 onto some time-dependent distribution that reaches the target at time t = 1.
Assuming that these distributions have densities supported on Ω ⊆ Rd, say ρ0 for the base and ρ1 for the target, this amounts to constructing Xt : Ω → Ω such that

if x ∼ ρ0 then Xt(x) ∼ ρt for some density ρt such that ρt=0 = ρ0 and ρt=1 = ρ1. (1)

Figure 1: The density ρt(x) produced by the stochastic interpolant based on (5) between a standard Gaussian density and a Gaussian mixture density with three modes. Also shown in white are the flow lines of the map Xt(x) our method produces.

One convenient way to represent this time-continuous map is to define it as the flow associated with the ordinary differential equation (ODE)

Ẋt(x) = vt(Xt(x)), Xt=0(x) = x, (2)

where the dot denotes the derivative with respect to t and vt(x) is the velocity field governing the transport. This is equivalent to saying that the probability density function ρt(x) defined as the pushforward of the base ρ0(x) by the map Xt satisfies the continuity equation (see e.g. (Villani, 2009; Santambrogio, 2015) and Appendix A)

∂tρt + ∇·(vtρt) = 0 with ρt=0 = ρ0 and ρt=1 = ρ1, (3)

and the inference problem becomes to estimate a velocity field such that (3) holds. Here we propose a solution to this problem based on introducing a time-differentiable interpolant

It : Ω × Ω → Ω such that It=0(x0, x1) = x0 and It=1(x0, x1) = x1. (4)

A useful instance of such an interpolant that we will employ is

It(x0, x1) = cos(½πt) x0 + sin(½πt) x1, (5)

though we stress that the framework we propose applies to any It(x0, x1) satisfying (4) under mild additional assumptions on ρ0, ρ1, and It specified below. Given this interpolant, we then construct the stochastic process xt by sampling independently x0 from ρ0 and x1 from ρ1, and passing them through It:

xt = It(x0, x1), x0 ∼ ρ0, x1 ∼ ρ1 independent. (6)

We refer to the process xt as a stochastic interpolant. Under this paradigm, we make the following key observations as our main contributions in this work:

- The probability density ρt(x) of xt connecting the two densities, henceforth referred to as the interpolant density, satisfies (3) with a velocity vt(x) which is the unique minimizer of a simple quadratic objective. This result is the content of Proposition 1 below, and it can be leveraged to estimate vt(x) in a parametric class (e.g. using deep neural networks) to construct a generative model through the solution of the probability flow equation (2), which we call InterFlow. By specifying an interpolant density, the method therefore separates the task of minimizing the objective from that of discovering a path between the base and target densities. This is in contrast with conventional maximum likelihood (MLE) training of flows, where one is forced to couple the choice of path in the space of measures to maximizing the objective.

- We show that the Wasserstein-2 (W2) distance between the target density ρ1 and the density ρ̂1 obtained by transporting ρ0 using an approximate velocity v̂t in (2) is controlled by our objective function. We also show that the value of the objective on v̂t during training can be used to check convergence of this learned velocity field towards the exact vt.

- We show that our approach can be generalized to shorten the path length of the interpolant density and optimize the transport by additionally maximizing our objective over the interpolant It(x0, x1) and/or adjustable parameters in the base density ρ0.
- By choosing ρ0 to be a Gaussian density and using (5) as interpolant, we show that the score of the interpolant density, ∇log ρt, can be explicitly related to the velocity field vt. This allows us to draw a connection between our approach and score-based diffusion models, providing theoretical groundwork for future exploration of this duality.

- We demonstrate the feasibility of the method on toy and high-dimensional tabular datasets, and show that the method matches or supersedes conventional ODE flows at lower cost, as it avoids the need to backpropagate through ODE solves. We demonstrate our approach on image generation for CIFAR-10 and ImageNet 32×32 and show that it scales well to larger sizes, e.g. on the 128×128 Oxford flowers dataset.

1.1 RELATED WORKS

| Method | Map Type | Finite Time Integration | Simulation-Free Training | Computable Likelihood | Optimizable Transport |
| FFJORD | D | ✓ | ✗ | ✓ | ✓ |
| Score Flows | S/D | ✗ | ✓ | ✓ | ✗ |
| Schrödinger Bridge | S | ✓ | ✗ | ✗ | ✓ |
| InterFlow (Ours) | D | ✓ | ✓ | ✓ | ✓ |

Table 1: Description of the qualities of continuous-time transport methods defined by a stochastic (S) or deterministic (D) process.

Early works on exploiting transport maps for generative modeling go back at least to Chen & Gopinath (2000), which focuses on normalizing a dataset to infer its likelihood. This idea was brought closer to contemporary use cases through the work of Tabak & Vanden-Eijnden (2010) and Tabak & Turner (2013), which devised ways to expressly map between densities using simple transport functions inferred through maximum likelihood estimation (MLE). These transformations were learned in sequence via a greedy procedure. We detail below how this paradigm has evolved in the case where the map is represented by a neural network and optimized accordingly.

Discrete and continuous time flows. The first successes of normalizing flows with neural network parametrizations follow the work of Tabak & Turner (2013) with a finite set of steps along the map. By imposing structure on the transformation so that it remains an efficiently invertible diffeomorphism, the models of Rezende & Mohamed (2015); Dinh et al. (2017); Huang et al. (2018); Durkan et al. (2019) can be optimized through maximum likelihood estimation at the cost of limiting the expressive power of the representation, as the Jacobian of the map must be kept simple to calculate the likelihood. Extending this to the continuous case allowed the Jacobian to be unstructured yet still estimable through trace estimation techniques (Chen et al., 2018; Grathwohl et al., 2019; Hutchinson, 1989). Yet, learning this map through MLE requires costly backpropagation through numerical integration. Regularizing the path can reduce the number of solver calls (Finlay et al., 2020; Onken et al., 2021), though this does not alleviate the main structural challenge of the optimization. Our work uses a continuous map Xt as well, but allows for direct estimation of the underlying velocity. While recent work has also considered simulation-free training by fitting a velocity field, these works present scalability issues (Rozen et al., 2021) and biased optimization (Ben-Hamu et al., 2022), and are limited to manifolds. Moreover, (Rozen et al., 2021) relies on interpolating the probability measures directly, which can lead to unstable velocities.

Score-based flows. Adjacent research has made use of diffusion processes, commonly the Ornstein-Uhlenbeck (OU) process, to connect the target ρ1 to the base ρ0.
In this case the transport is governed by a stochastic differential equation (SDE) that is evolved for infinite time, and the challenge of learning a generative model can be framed as fitting the reverse time evolution of the SDE from Gaussian noise back to ρ1 (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b). Doing so indirectly learns the velocity field by means of learning the score function ∇log ρt(x), using the Fisher divergence instead of the MLE objective. While this approach has shown great promise to model high-dimensional distributions (Rombach et al., 2022; Hoogeboom et al., 2022), particularly in the case of text-to-image generation (Ramesh et al., 2022; Saharia et al., 2022), there is an absence of theoretical motivation for the SDE-flow framework and the complexity it induces. Namely, the SDE must evolve for infinite time to connect the distributions, the parameterization of the time steps remains heuristic (Xiao et al., 2022), and the criticality of the noise, as well as of the score, is not entirely apparent (Bansal et al., 2022; Lu et al., 2022). In particular, while the objective used in score-based diffusion models was shown to bound the Kullback-Leibler divergence (Song et al., 2021a), actual calculation of the likelihood requires one to work with the ODE probability flow associated with the SDE. This motivates further research into effective, ODE-driven approaches to learning the map. Our approach can be viewed as an alternative to score-based diffusion models in which the ODE velocity is learned through the interpolant xt rather than an OU process, leading to greater simplicity and flexibility (as we can connect any two densities exactly over a finite time interval).

Bridge-based methods. Heng et al. (2021) propose to learn Schrödinger bridges, which are an entropy-regularized version of the optimal transportation plan connecting two densities in finite time, using the framework of score-based diffusion. Similarly, Peluchetti (2022) investigates the use of bridge processes, i.e. SDEs whose position is constrained both at the initial and final times, to perform exact density interpolation in finite time. A key difference between these approaches and ours is that they give diffusion-based models, whereas our method builds a probability flow ODE directly using a quadratic loss for its velocity, which is simpler and shown here to be scalable.

Interpolants. Coincident works by Liu et al. (2022) and Lipman et al. (2022) derive an optimization analogous to ours, with a focus on straight interpolants, also contrasting it with score-based methods. Liu et al. (2022) describe an iterative way of rectifying the interpolant path, which can be shown to arrive at an optimal transport map when the procedure is repeated ad infinitum (Liu, 2022). We also propose a solution to the problem of optimal transport that involves optimizing our objective over the stochastic interpolant.

1.2 NOTATIONS AND ASSUMPTIONS

We assume that the base and the target distribution are both absolutely continuous with respect to the Lebesgue measure on Rd, with densities ρ0 and ρ1, respectively. We do not require these densities to be positive everywhere on Rd, but we assume that ρ0(x) and ρ1(x) are continuously differentiable in x. Regarding the interpolant It : Rd × Rd → Rd, we assume that it is surjective for all t ∈ [0, 1] and satisfies (4).
We also assume that It(x0, x1) is continuously differentiable in (t, x0, x1), and that it is such that

E[ |∂tIt(x0, x1)|² ] < ∞. (7)

A few additional technical assumptions on ρ0, ρ1 and It are listed in Appendix B. Given any function ft(x0, x1) we denote

E[ft(x0, x1)] = ∫₀¹ ∫_{Rd×Rd} ft(x0, x1) ρ0(x0) ρ1(x1) dx0 dx1 dt (8)

its expectation over t, x0, and x1 drawn independently from the uniform density on [0, 1], ρ0, and ρ1, respectively. We use ∇ to denote the gradient operator.

2 STOCHASTIC INTERPOLANTS AND ASSOCIATED FLOWS

Our first main theoretical result can be phrased as follows:

Proposition 1. The stochastic interpolant xt defined in (6) with It(x0, x1) satisfying (4) has a probability density ρt(x) that satisfies the continuity equation (3) with a velocity vt(x) which is the unique minimizer over v̂t(x) of the objective

G(v̂) = E[ |v̂t(It(x0, x1))|² − 2 ∂tIt(x0, x1) · v̂t(It(x0, x1)) ]. (9)

In addition, the minimum value of this objective is given by

G(v) = −E[ |vt(It(x0, x1))|² ] = −∫₀¹ ∫_{Rd} |vt(x)|² ρt(x) dx dt > −∞. (10)

Proposition 1 is proven in Appendix B under Assumption B.1. As this proof shows, the first statement of the proposition remains true if the expectation over t is performed using any probability density ω(t) > 0, which may prove useful in practice. We now describe some primary facts resulting from this proposition, itemized for clarity:

- The objective G(v̂) is given in terms of an expectation that is amenable to empirical estimation given samples of x0, x1, and t drawn from ρ0, ρ1, and U([0, 1]), respectively. Below, we will exploit this property to propose a numerical scheme to perform the minimization of G(v̂).

- While the minimizer of the objective G(v̂) is not available analytically in general, a notable exception is when ρ0 and ρ1 are Gaussian mixture densities and we use the trigonometric interpolant (5) or generalizations thereof, as discussed in Appendix C.

- The minimal value in (10) achieved by the objective implies that a necessary (albeit not sufficient) condition for v̂ = v is

G(v̂) + E[ |v̂t(It(x0, x1))|² ] = 0. (11)

In our numerical experiments we will monitor this quantity.

- This minimal value also suggests maximizing G(v) = min_v̂ G(v̂) with respect to additional control parameters (e.g. the interpolant) to shorten the W2 length of the path {ρt(x) : t ∈ [0, 1]}. In Appendix D, we show that this procedure achieves optimal transport under minimal assumptions.

- The last bound in (10), which is proven in Lemma B.2, implies that the path length is always finite, even if it is not the shortest possible.

Let us now provide some intuitive derivation of the statements of Proposition 1:

Continuity equation. By definition of the stochastic interpolant xt we can express its density ρt(x) using the Dirac delta distribution as

ρt(x) = ∫_{Rd×Rd} δ(x − It(x0, x1)) ρ0(x0) ρ1(x1) dx0 dx1. (12)

Since It=0(x0, x1) = x0 and It=1(x0, x1) = x1 by definition, we have ρt=0 = ρ0 and ρt=1 = ρ1, which means that ρt(x) satisfies the boundary conditions at t = 0, 1 in (3). Differentiating (12) in time using the chain rule gives

∂tρt(x) = −∇· ∫_{Rd×Rd} ∂tIt(x0, x1) δ(x − It(x0, x1)) ρ0(x0) ρ1(x1) dx0 dx1 = −∇·jt(x), (13)

where we defined the probability current

jt(x) = ∫_{Rd×Rd} ∂tIt(x0, x1) δ(x − It(x0, x1)) ρ0(x0) ρ1(x1) dx0 dx1. (14)

Therefore if we introduce the velocity vt(x) via

vt(x) = jt(x)/ρt(x) if ρt(x) > 0, and vt(x) = 0 otherwise, (15)

we see that we can write (13) as the continuity equation in (3).

Variational formulation.
Using the expressions in (12) and (14) for ρt(x) and jt(x) shows that we can write the objective (9) as

G(v̂) = ∫₀¹ ∫_{Rd} [ |v̂t(x)|² ρt(x) − 2 v̂t(x) · jt(x) ] dx dt. (16)

Since ρt(x) and jt(x) have the same support, the minimizer of this quadratic objective is unique for all (t, x) where ρt(x) > 0 and given by (15).

Minimum value of the objective. Let vt(x) be given by (15) and consider the alternative objective

H(v̂) = ∫₀¹ ∫_{Rd} |v̂t(x) − vt(x)|² ρt(x) dx dt = ∫₀¹ ∫_{Rd} [ |v̂t(x)|² ρt(x) − 2 v̂t(x) · jt(x) + |vt(x)|² ρt(x) ] dx dt, (17)

where we expanded the square and used the identity vt(x)ρt(x) = jt(x) to get the second equality. The objective G(v̂) given in (16) can be written in terms of H(v̂) as

G(v̂) = H(v̂) − ∫₀¹ ∫_{Rd} |vt(x)|² ρt(x) dx dt = H(v̂) − E[ |vt(It(x0, x1))|² ]. (18)

The equality in (10) follows by evaluating (18) at v̂t(x) = vt(x) using H(v) = 0.

Optimality gap. The argument above shows that we can use E[ |v̂t(It(x0, x1)) − ∂tIt(x0, x1)|² ] as an alternative objective to G(v̂) since their first variations coincide. However, it should be stressed that this quadratic objective remains strictly positive at v̂ = v in general, so it offers no baseline measure of convergence. To see why, complete the square in G(v̂) to write (18) as

H(v̂) = E[ |v̂t(It) − ∂tIt|² ] − E[ |∂tIt|² ] + E[ |vt(It)|² ] ≥ −E[ |∂tIt|² ] + E[ |vt(It)|² ], (19)

where we used the shorthand notation It = It(x0, x1) and ∂tIt = ∂tIt(x0, x1). Evaluating this inequality at v̂ = v using H(v) = 0 we deduce

E[ |∂tIt|² ] ≥ E[ |vt(It)|² ]. (20)

However we stress that this inequality is not saturated, i.e. E[ |vt(It)|² ] ≠ E[ |∂tIt|² ] in general (see Remark B.3). Hence

E[ |vt(It) − ∂tIt|² ] = min_v̂ E[ |v̂t(It) − ∂tIt|² ] = min_v̂ G(v̂) + E[ |∂tIt|² ] = −E[ |vt(It)|² ] + E[ |∂tIt|² ] ≥ 0. (21)

Optimizing the transport. It is natural to ask whether our stochastic interpolant construction can be amended or generalized to derive optimal maps. Here, we state a positive answer to this question by showing that maximizing the objective G(v̂) in (9) with respect to the interpolant yields a solution to the optimal transport problem in the framework of Benamou & Brenier (2000). This is proven in Appendix D, where we also discuss how to shorten the path length in density space by optimizing adjustable parameters in the base density ρ0. Experiments are given in Appendix H. Since our primary aim here is to construct a map T = Xt=1 that pushes forward ρ0 onto ρ1, but not necessarily to identify the optimal one, we leave the full investigation of the consequences of these results for future work, but state the proposition explicitly here. In their seminal paper, Benamou & Brenier (2000) showed that finding the optimal map requires solving the minimization problem

min_{(v̂,ρ̂)} ∫₀¹ ∫_{Rd} |v̂t(x)|² ρ̂t(x) dx dt subject to: ∂tρ̂t + ∇·(v̂tρ̂t) = 0, ρ̂t=0 = ρ0, ρ̂t=1 = ρ1. (22)

The minimizing velocity is a gradient field, v*t(x) = ∇ϕ*t(x), and the minimizing coupling (ρ*t, ϕ*t) is unique and satisfies:

∂tρ*t + ∇·(∇ϕ*t ρ*t) = 0, ρ*t=0 = ρ0, ρ*t=1 = ρ1, ∂tϕ*t + ½ |∇ϕ*t|² = 0. (23)

In the interpolant flow picture, ρt(x) is fixed by the choice of interpolant It(x0, x1), and in general ρt(x) ≠ ρ*t(x). Because the value of the objective in (22) evaluated on the interpolant pair equals minus the minimum of G(v̂) given in (10), a natural suggestion to optimize the transport is to maximize this minimum over the interpolant. Under some assumptions on the Benamou-Brenier density ρ*t(x) solving (23), this procedure works. We show this through the use of interpolable densities, as discussed in Mikulincer & Shenfeld (2022) and defined in D.1.

Proposition 2.
Assume that (i) the optimal density function ρ*t(x) minimizing (22) is interpolable and (ii) (23) has a classical solution. Consider the max-min problem

max_Î min_v̂ G(v̂), (24)

where G(v̂) is the objective in (9) and the maximum is taken over interpolants satisfying (4). Then a maximizer of (24) exists, and any maximizer I*t(x0, x1) is such that the probability density function of x*t = I*t(x0, x1), with x0 ∼ ρ0 and x1 ∼ ρ1 independent, is the optimal ρ*t(x), the minimizing velocity is v*t(x) = ∇ϕ*t(x), and the pair (ρ*t(x), ϕ*t(x)) satisfies (23).

The proof of Proposition 2 is given in Appendix D, along with further discussion. Proposition 2 relies on Lemma D.3, which reformulates (22) in a way that shows this problem is equivalent to the max-min problem in (24) for interpolable densities.

2.1 WASSERSTEIN BOUNDS

The following result shows that the objective in (17) controls the Wasserstein distance between the target density ρ1 and the density ρ̂1 obtained as the pushforward of the base density ρ0 by the map X̂t=1 associated with the velocity v̂t:

Proposition 3. Let ρt(x) be the exact interpolant density defined in (12) and, given a velocity field v̂t(x), let us define ρ̂t(x) as the solution of the initial value problem

∂tρ̂t + ∇·(v̂tρ̂t) = 0, ρ̂t=0 = ρ0. (25)

Assume that v̂t(x) is continuously differentiable in (t, x) and Lipschitz in x uniformly in (t, x) ∈ [0, 1] × Rd with Lipschitz constant K̂. Then the square of the W2 distance between ρ1 and ρ̂1 is bounded by

W2²(ρ1, ρ̂1) ≤ e^(1+2K̂) H(v̂), (26)

where H(v̂) is the objective function defined in (17).

The proof of Proposition 3 is given in Appendix E: it leverages the following bound on the square of the W2 distance,

W2²(ρ1, ρ̂1) ≤ ∫_{Rd} |Xt=1(x) − X̂t=1(x)|² ρ0(x) dx, (27)

where Xt is the flow map solution of (2) with the exact vt(x) defined in (15) and X̂t is the flow map obtained by solving (2) with vt(x) replaced by v̂t(x).

2.2 LINK WITH SCORE-BASED GENERATIVE MODELS

The following result shows that if ρ0 is a Gaussian density, the velocity vt(x) can be related to the score of the density ρt(x):

Proposition 4. Assume that the base density ρ0(x) is the standard Gaussian density N(0, Id) and suppose that the interpolant It(x0, x1) is given by (5). Then the score ∇log ρt(x) is related to the velocity vt(x) as

∇log ρt(x) = 2π⁻¹ tan(½πt) vt(x) − x if t ∈ [0, 1), and ∇log ρt(x) = −4π⁻² ∂tvt(x)|t=1 − x if t = 1. (28)

The proof of this proposition is given in Appendix F. The first formula for t ∈ [0, 1) is based on a direct calculation using Gaussian integration by parts; the second formula at t = 1 is obtained by taking the limit of the first using vt=1(x) = 0 from (B.20) and l'Hôpital's rule. It shows that we can in principle resample ρt at any t ∈ [0, 1] using the stochastic differential equation in artificial time τ whose drift is the score ŝt(x) obtained by evaluating (28) on the estimated v̂t(x):

dxτ = ŝt(xτ) dτ + √2 dWτ. (29)

Similarly, the score ŝt(x) could in principle be used in score-based diffusion models, as explained in Appendix F. We stress however that while our velocity is well-behaved for all times in [0, 1], as shown in (B.20), the drift and diffusion coefficient in the associated SDE are singular at t = 0, 1.

3 PRACTICAL IMPLEMENTATION AND NUMERICAL EXPERIMENTS

The objective detailed in Section 2 is amenable to efficient empirical estimation, see (I.1), which we utilize to experimentally validate the method. Moreover, it is appealing to consider a neural network parameterization of the velocity field.
In this case, the parameters of the model v̂ can be optimized through stochastic gradient descent (SGD) or its variants, like Adam (Kingma & Ba, 2015). Following the recent literature regarding density estimation, we benchmark the method on visualizable yet complicated densities that display multimodality, as well as on the higher-dimensional tabular data initially provided in Papamakarios et al. (2017) and tested in other works such as Grathwohl et al. (2019). The 2D test cases demonstrate the ability to flow between empirical densities with no known analytic form. In all cases, numerical integration for sampling is done with the Dormand-Prince explicit Runge-Kutta method of order 4(5) (Dormand & Prince, 1980). In Sections 3.1-3.4 the interpolant chosen for experimentation is that of (5), as it is the one used to draw connections to the technique of score-based diffusions in Proposition 4. In Appendix H we use the interpolant (B.2) and optimize at and bt to investigate the possibility and impact of shortening the path length.

3.1 2D DENSITY ESTIMATION

An intuitive first test to benchmark the validity of the method is sampling a target density whose analytic form is known or whose density can be visualized for comparison. To this end, we follow the practice of choosing a few complicated 2-dimensional toy datasets, namely those from (Grathwohl et al., 2019), which were selected to differentiate the flexibility of continuous flows from discrete-time flows, which cannot fully separate the modes. We consider anisotropic curved densities, a mixture of 8 separated Gaussians, and a checkerboard density.

Figure 2: Left: 2-D density estimation (exact density vs. interpolant flow). Right: Learning a flow map between densities when neither is analytically known.

| | POWER | GAS | HEPMASS | MINIBOONE | BSDS300 |
| MADE | 3.08 | -3.56 | 20.98 | 15.59 | -148.85 |
| Real NVP | -0.17 | -8.33 | 18.71 | 13.55 | -153.28 |
| Glow | -0.17 | -8.15 | 18.92 | 11.35 | -155.07 |
| CPF | -0.52 | -10.36 | 16.93 | 10.58 | -154.99 |
| NSP | -0.64 | -13.09 | 14.75 | 9.67 | -157.54 |
| FFJORD | -0.46 | -8.59 | 14.92 | 10.43 | -157.40 |
| OT-Flow | -0.30 | -9.20 | 17.32 | 10.55 | -154.20 |
| Ours | -0.57 | -12.35 | 14.85 | 10.42 | -156.22 |

| Method | CIFAR-10 NLL | CIFAR-10 FID | ImageNet 32×32 NLL | ImageNet 32×32 FID |
| FFJORD | 3.40 | – | – | – |
| Glow | 3.35 | – | 4.09 | – |
| DDPM | 3.75 | 3.17 | – | – |
| DDPM++ | 3.37 | 2.90 | – | – |
| Score SDE | 2.99 | 2.92 | – | – |
| VDM | 2.65 | 7.41 | 3.72 | – |
| Soft Truncation | 2.88 | 3.45 | 3.85 | 8.42 |
| Score Flow | 2.81 | 5.40 | 3.76 | 10.18 |
| Ours | 2.99 | 10.27 | 3.48 | 8.49 |

Table 2: Left: Negative log-likelihoods (NLL) computed on test data unseen during training (lower is better). Values of MADE, Real NVP, and Glow quoted from the FFJORD paper. Values of OT-Flow, CPF, and NSP quoted from their respective publications. Right: NLL and FID scores on unconditional image generation tasks for recent advanced models that emit a likelihood.

The velocity field of the interpolant flow is parameterized by a simple feed-forward neural network with ReLU activation functions (Nair & Hinton, 2010). The network for each model has 3 layers, each of width 256 hidden units. Optimization of G(v̂) is performed for 10k epochs. We plot a kernel density estimate over 80k samples from both the flow and the true distribution in Figure 2. The interpolant flow captures all the modes of the target density without artificial stretching or smearing, evincing a smooth map.
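To make the training setup of Section 3.1 concrete, the following is a minimal PyTorch-style sketch of a Monte Carlo estimator of the objective G(v̂) in (9) with the trigonometric interpolant (5). It is an illustration under stated assumptions rather than the authors' code: the network size, optimizer settings, and names (VelocityMLP, interflow_loss) are placeholders, t is sampled uniformly rather than with the beta reweighting used in some experiments, and the target samples shown here are synthetic.

```python
# Minimal sketch of InterFlow training with the trigonometric interpolant (5).
# Hypothetical names and hyperparameters; not the authors' implementation.
import math
import torch
import torch.nn as nn

class VelocityMLP(nn.Module):
    """Simple feed-forward parameterization of the velocity field v_t(x)."""
    def __init__(self, dim, width=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, dim),
        )

    def forward(self, t, x):
        # time enters as an extra input feature
        return self.net(torch.cat([x, t], dim=-1))

def interflow_loss(v, x0, x1):
    """Monte Carlo estimate of G(v) = E[|v_t(I_t)|^2 - 2 dI_t/dt . v_t(I_t)], eq. (9)."""
    t = torch.rand(x0.shape[0], 1)                       # t ~ U([0, 1])
    a, b = torch.cos(0.5 * math.pi * t), torch.sin(0.5 * math.pi * t)
    xt = a * x0 + b * x1                                 # I_t(x0, x1), eq. (5)
    dxt = 0.5 * math.pi * (-torch.sin(0.5 * math.pi * t) * x0
                           + torch.cos(0.5 * math.pi * t) * x1)   # d/dt I_t(x0, x1)
    vt = v(t, xt)
    return (vt.pow(2).sum(-1) - 2.0 * (dxt * vt).sum(-1)).mean()

# usage: x0 ~ rho_0 (Gaussian base), x1 ~ rho_1 (data batch); placeholders below
dim = 2
v = VelocityMLP(dim)
opt = torch.optim.Adam(v.parameters(), lr=1e-3)
x0 = torch.randn(512, dim)                # base samples
x1 = torch.randn(512, dim) + 3.0          # placeholder target samples
loss = interflow_loss(v, x0, x1)
opt.zero_grad(); loss.backward(); opt.step()
```

The diagnostic in (11) can be monitored during training by additionally logging the empirical average of |v̂t(It)|² over the same batch and adding it to the loss value.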
3.2 DATASET INTERPOLATION

As described in Section 2, the velocity field associated to the flow can be inferred from arbitrary densities ρ0, ρ1; this deviates from the score-based diffusion perspective, in which one distribution must be taken to be Gaussian for the training paradigm to be tractable. In Figure 2, we illustrate this capacity by learning the velocity field connecting the anisotropic swirls distribution to that of the checkerboard. The interpolant formulation allows us to draw samples from ρt at any time t ∈ [0, 1], which we exploit to check that the velocity field is empirically correct at all times on the interval, rather than just at the end points. This aspect of interpolants is also noted in (Choi et al., 2022), but for the purpose of density ratio estimation. The above observation highlights an intrinsic difference of the proposed method compared to MLE training of flows, where the path of densities realized by the map being learned is not empirically known at intermediate times. We stress that query access to ρ0 or ρ1 is not needed to use our interpolation procedure, since it only uses samples from these densities.

3.3 TABULAR DATA FOR HIGHER DIMENSIONAL TESTING

A set of tabular datasets introduced by Papamakarios et al. (2017) has served as a consistent test bed for demonstrating flow-based sampling and its associated density estimation capabilities. We continue that practice here to provide a benchmark of the method against models which provide an exact likelihood, separating and comparing to exemplary discrete and continuous flows: MADE (Germain et al., 2015), Real NVP (Dinh et al., 2017), Convex Potential Flows (CPF) (Huang et al., 2021), Neural Spline Flows (NSP) (Durkan et al., 2019), Free-form continuous flows (FFJORD) (Grathwohl et al., 2019), and OT-Flow (Onken et al., 2021). Our primary point of comparison is to other continuous-time models, so we sequester them in the benchmarking. We train the interpolant flow model on each target dataset listed in Table 2, choosing the reference distribution of the interpolant ρ0 to be a Gaussian density with mean zero and covariance Id, where d is the data dimension. The architectures and hyperparameters are given in Appendix I. We highlight some of the main characteristics of the models here. In each case, sampling of the time t was reweighted according to a beta distribution, with parameters α, β provided in the same appendix. Results from the tabular experiments are displayed in Table 2, in which the negative log-likelihood averaged over a test set of held-out data is computed. We note that the interpolant flow achieves better or equivalent held-out likelihoods than all ODE-based models on every dataset except BSDS300, on which FFJORD outperforms the interpolant by 0.6%. We note upwards of 30% improvements compared to baselines. Note that these likelihoods are achieved without direct optimization of the likelihood.
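The held-out negative log-likelihoods reported above are obtained from the probability flow itself. As a hedged illustration of how such an evaluation can be carried out with a trained velocity model (written here as a callable v(t, x), such as the VelocityMLP sketched earlier), the snippet below integrates the flow backwards from t = 1 to t = 0 while accumulating the divergence of v̂t, following the change-of-variables formula recalled in Appendix A. The paper uses an adaptive Dormand-Prince solver; this sketch instead uses a fixed-step Euler scheme and a single-probe Hutchinson trace estimator purely to keep the example short, and the function names and step count are placeholders.

```python
# Sketch of likelihood evaluation via the probability flow:
# log rho_1(x1) = log rho_0(X_{1,0}(x1)) - int_0^1 div v_s(X_{1,s}(x1)) ds.
import math
import torch

def divergence(v, t, x, eps):
    """Hutchinson estimate of div v_t(x) using a single probe vector eps."""
    with torch.enable_grad():
        x = x.requires_grad_(True)
        vt = v(t, x)
        (grad,) = torch.autograd.grad((vt * eps).sum(), x)
    return (grad * eps).sum(-1)

def standard_normal_logprob(x):
    # log density of the standard Gaussian base rho_0 used in the tabular experiments
    return -0.5 * (x.pow(2) + math.log(2 * math.pi)).sum(-1)

@torch.no_grad()
def log_likelihood(v, x1, base_log_prob=standard_normal_logprob, n_steps=100):
    """Estimate log rho_1 at data points x1 by integrating the flow in reverse."""
    x = x1.clone()
    int_div = torch.zeros(x1.shape[0])
    eps = torch.randn_like(x1)            # fixed Gaussian probe along the trajectory
    dt = 1.0 / n_steps
    for i in range(n_steps):              # integrate backwards from t = 1 to t = 0
        t = torch.full((x1.shape[0], 1), 1.0 - i * dt)
        int_div = int_div + dt * divergence(v, t, x, eps)
        x = x - dt * v(t, x)              # Euler step of dX/dt = v_t(X), run in reverse
    return base_log_prob(x) - int_div

# example usage: bits per dim from a data batch
# nll_bpd = -log_likelihood(v, batch).mean() / (batch.shape[1] * math.log(2))
```

In low dimensions the exact divergence can be computed with d backward passes instead of the stochastic estimate; the probe-based version is the one that scales to the tabular and image settings.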
3.4 UNCONDITIONAL IMAGE GENERATION

To compare with recent advances in continuous-time generative models such as DDPM (Ho et al., 2020), Score SDE (Song et al., 2021b), and Score Flow (Song et al., 2021a), we provide a demonstration of the interpolant flow method on learning to unconditionally generate images trained on the CIFAR-10 (Krizhevsky et al., 2009) and ImageNet 32×32 datasets (Deng et al., 2009; Van Den Oord et al., 2016), which follows suit with Score Flow and Variational Diffusion Models (VDM) (Kingma et al., 2021). We train an interpolant flow built from the UNet architecture of DDPM (Ho et al., 2020) on a single NVIDIA A100 GPU, which was previously impossible under maximum likelihood training of continuous-time flows. Experimental details can be found in Appendix I. Note that we used a beta-distribution reweighting of the time sampling as in the tabular experiments. Table 2 provides a comparison of the negative log-likelihoods (NLL), measured in bits per dim (BPD), and Fréchet Inception Distance (FID) of our method compared to past flows and state-of-the-art diffusions. We focus our comparison against models which emit a likelihood, as this is necessary to compare NLL.

Figure 3: Left: InterFlow samples trained on the 128×128 flowers dataset. Right: Samples from the flow trained on ImageNet 32×32 (top) and CIFAR-10 (bottom).

We compare to other flows, FFJORD and Glow (Grathwohl et al., 2019; Kingma & Dhariwal, 2018), as well as to recent advances in score-based diffusion: DDPM, DDPM++, VDM, Score SDE, Score Flow, and Soft Truncation (Ho et al., 2020; Nichol & Dhariwal, 2021; Kingma et al., 2021; Song et al., 2021b;a; Kim et al., 2022). We present results without data augmentation. Our models emit likelihoods, measured in bits per dim, that are competitive with diffusions on both datasets, with an NLL of 2.99 and 3.45. Measures of FID are proximal to those from diffusions, though slightly behind the best results. We note however that this is a first example of this type of model, and it has not been optimized with the training tricks that appear in many of the recent works on diffusions, like exponential moving averages, truncations, learning rate warm-ups, and the like. To demonstrate efficiency on larger domains, we train on the Oxford flowers dataset (Nilsback & Zisserman, 2006), which consists of images of resolution 128×128. We show example generated images in Figure 3.

4 DISCUSSION, CHALLENGES, AND FUTURE WORK

We introduced a continuous-time flow method that can be efficiently trained. The approach has a number of intriguing and appealing characteristics. The training circumvents any backpropagation through ODE solves, and emits a stable and interpretable quadratic objective function. This objective has an easily accessible diagnostic which can verify whether a proposed minimizer of the loss is a valid minimizer, and it controls the Wasserstein-2 distance between the model and the target. One salient feature of the proposed method is that choosing an interpolant It(x0, x1) decouples the optimization problem from that of also choosing a transport path. This separation is also exploited by score-based diffusion models, but our approach offers better explicit control on both. In particular, we can interpolate between any two densities in finite time and directly obtain the probability flow needed to calculate the likelihood. Moreover, we showed in Section 2 and Appendices D and G that the interpolant can be optimized to achieve optimal transport, a feature which can reduce the cost of solving the ODE to draw samples. In future work, we will investigate realizations of this procedure more thoroughly by learning the interpolant It in a wider class of functions, in addition to minimizing G(v̂). The intrinsic connection to score-based diffusion presented in Proposition 4 may be fruitful ground for understanding the benefits and tradeoffs of SDE vs ODE approaches to generative modeling.
Exploring this relation is already underway (Lu et al., 2022; Boffi & Vanden-Eijnden, 2022), and can hopefully provide theoretical insight into designing more effective models. Published as a conference paper at ICLR 2023 ACKNOWLEDGMENTS We thank G erard Ben Arous, Nick Boffi, Kyle Cranmer, Michael Lindsey, Jonathan Niles-Weed, Esteban Tabak for helpful discussions about transport. MSA is supported by the National Science Foundation under the award PHY-2141336. MSA is grateful for the hospitality of the Center for Computational Quantum Physics at the Flatiron Institute. The Flatiron Institute is a division of the Simons Foundation. EVE is supported by the National Science Foundation under awards DMR1420073, DMS-2012510, and DMS-2134216, by the Simons Collaboration on Wave Turbulence, Grant No. 617006, and by a Vannevar Bush Faculty Fellowship. Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold diffusion: Inverting arbitrary image transforms without noise, 2022. URL https://arxiv.org/abs/2208.09392. Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Maximilian Nickel, Aditya Grover, Ricky T. Q. Chen, and Yaron Lipman. Matching normalizing flows and probability paths on manifolds. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesv ari, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 1749 1763. PMLR, 2022. URL https://proceedings.mlr.press/v162/ben-hamu22a. html. Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the mongekantorovich mass transfer problem. Numerische Mathematik, 84(3):375 393, 2000. Nicholas M. Boffi and Eric Vanden-Eijnden. Probability flow solution of the fokker-planck equation, 2022. URL https://arxiv.org/abs/2206.04642. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=B1xsqj09Fm. Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 69386f6bb1dfed68692a24c8686939b9-Paper.pdf. Scott Chen and Ramesh Gopinath. Gaussianization. In T. Leen, T. Dietterich, and V. Tresp (eds.), Advances in Neural Information Processing Systems, volume 13. MIT Press, 2000. URL https://proceedings.neurips.cc/paper/2000/file/ 3c947bc2f7ff007b86a9428b74654de5-Paper.pdf. Kristy Choi, Chenlin Meng, Yang Song, and Stefano Ermon. Density ratio estimation via infinitesimal classification. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera (eds.), International Conference on Artificial Intelligence and Statistics, AISTATS 2022, 28-30 March 2022, Virtual Event, volume 151 of Proceedings of Machine Learning Research, pp. 2552 2573. PMLR, 2022. URL https://proceedings.mlr.press/v151/choi22a.html. Djork-Arn e Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). 
In Yoshua Bengio and Yann Le Cun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1511.07289. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248 255, 2009. doi: 10.1109/CVPR.2009.5206848. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In International Conference on Learning Representations, 2017. URL https://openreview. net/forum?id=Hkpbn H9lx. Published as a conference paper at ICLR 2023 J.R. Dormand and P.J. Prince. A family of embedded runge-kutta formulae. Journal of Computational and Applied Mathematics, 6(1):19 26, 1980. ISSN 0377-0427. doi: https://doi.org/ 10.1016/0771-050X(80)90013-3. URL https://www.sciencedirect.com/science/ article/pii/0771050X80900133. Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch e-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 7ac71d433f282034e088473244df8c02-Paper.pdf. Chris Finlay, Joern-Henrik Jacobsen, Levon Nurbekyan, and Adam Oberman. How to train your neural ODE: the world of Jacobian and kinetic regularization. In Hal Daum e III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 3154 3164. PMLR, 13 18 Jul 2020. URL https://proceedings.mlr.press/v119/finlay20a.html. Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 881 889, Lille, France, 07 09 Jul 2015. PMLR. URL https://proceedings. mlr.press/v37/germain15.html. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672 2680, 2014. Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, and David Duvenaud. Scalable reversible generative models with free-form continuous dynamics. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r Jxgkn Cc K7. Jeremy Heng, Valentin De Bortoli, Arnaud Doucet, and James Thornton. Simulating diffusion bridges with score matching, 2021. URL https://arxiv.org/abs/2111.07243. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 6840 6851. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf. Emiel Hoogeboom, V ıctor Garcia Satorras, Cl ement Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 
8867 8887. PMLR, 17 23 Jul 2022. URL https://proceedings.mlr.press/v162/ hoogeboom22a.html. Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron Courville. Neural autoregressive flows. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2078 2087. PMLR, 10 15 Jul 2018. URL https://proceedings.mlr.press/v80/ huang18d.html. Chin-Wei Huang, Ricky T. Q. Chen, Christos Tsirigotis, and Aaron Courville. Convex potential flows: Universal probability distributions with optimal transport and convex optimization. In International Conference on Learning Representations, 2021. URL https://openreview. net/forum?id=te7PVH1s Px J. M.F. Hutchinson. A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 18(3):1059 1076, 1989. doi: 10.1080/03610918908812806. URL https://doi.org/10.1080/ 03610918908812806. Published as a conference paper at ICLR 2023 Dongjun Kim, Seungjae Shin, Kyungwoo Song, Wanmo Kang, and Il-Chul Moon. Soft truncation: A universal training technique of score-based diffusion model for high precision score estimation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 11201 11228. PMLR, 17 23 Jul 2022. URL https://proceedings.mlr.press/v162/kim22i.html. Diederick P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. On density estimation with diffusion models. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/ forum?id=2Ld Bqxc1Yv. Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ d139db6a236200b21cc7f752979132d0-Paper.pdf. Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). 2009. URL http://www.cs.toronto.edu/ kriz/cifar.html. Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Autoregressive image generation using residual quantization. Co RR, abs/2203.01941, 2022. doi: 10.48550/ar Xiv.2203. 01941. URL https://doi.org/10.48550/ar Xiv.2203.01941. Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling, 2022. URL https://arxiv.org/abs/2210.02747. Qiang Liu. Rectified flow: A marginal preserving approach to optimal transport, 2022. URL https://arxiv.org/abs/2209.14577. Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow, 2022. URL https://arxiv.org/abs/2209.03003. Cheng Lu, Kaiwen Zheng, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Maximum likelihood training for score-based diffusion ODEs by high order denoising score matching. 
In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 14429 14460. PMLR, 17 23 Jul 2022. URL https://proceedings.mlr.press/v162/lu22f.html. Jacob Menick and Nal Kalchbrenner. GENERATING HIGH FIDELITY IMAGES WITH SUBSCALE PIXEL NETWORKS AND MULTIDIMENSIONAL UPSCALING. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum? id=Hylz Ti C5Km. Dan Mikulincer and Yair Shenfeld. On the lipschitz properties of transportation along heat flows, 2022. URL https://arxiv.org/abs/2201.01382. Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML 10, pp. 807 814, Madison, WI, USA, 2010. Omnipress. ISBN 9781605589077. Kirill Neklyudov, Daniel Severo, and Alireza Makhzani. Action matching: A variational method for learning stochastic dynamics from samples, 2022. URL https://arxiv.org/abs/2210. 06662. Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 8162 8171. PMLR, 18 24 Jul 2021. URL https://proceedings.mlr.press/v139/ nichol21a.html. Published as a conference paper at ICLR 2023 Maria-Elena Nilsback and Andrew Zisserman. A visual vocabulary for flower classification. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pp. 1447 1454, 2006. Derek Onken, Samy Wu Fung, Xingjian Li, and Lars Ruthotto. Ot-flow: Fast and accurate continuous normalizing flows via optimal transport. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10):9223 9232, May 2021. doi: 10.1609/aaai.v35i10.17113. URL https://ojs.aaai.org/index.php/AAAI/article/view/17113. George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 17, pp. 2335 2344, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Stefano Peluchetti. Non-denoising forward-time diffusions, 2022. URL https:// openreview.net/forum?id=o Vf IKuhqf C. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents, 2022. URL https://arxiv.org/abs/ 2204.06125. Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch e-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/ 5f8e2fa1718d1bbcadf1cd9c7a54fb8c-Paper.pdf. Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1530 1538, Lille, France, 07 09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/rezende15. html. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj orn Ommer. Highresolution image synthesis with latent diffusion models. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10684 10695, June 2022. Noam Rozen, Aditya Grover, Maximilian Nickel, and Yaron Lipman. Moser flow: Divergencebased generative modeling on manifolds. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=q Gv Mv3und NJ. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022. URL https://arxiv.org/abs/ 2205.11487. Filippo Santambrogio. Optimal transport for applied mathematicians. Birk auser, NY, 55(58-63):94, 2015. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2256 2265, Lille, France, 07 09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/sohl-dickstein15.html. Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 1415 1428. Curran Associates, Inc., 2021a. URL https://proceedings.neurips. cc/paper/2021/file/0a9fdbb17feb6ccb7ec405cfb85222c4-Paper.pdf. Published as a conference paper at ICLR 2023 Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021b. URL https://openreview.net/ forum?id=Px TIG12RRHS. E. G. Tabak and Cristina V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145 164, 2013. doi: https://doi.org/10. 1002/cpa.21423. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/ cpa.21423. Esteban G. Tabak and Eric Vanden-Eijnden. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217 233, 2010. doi: cms/1266935020. URL https://doi.org/. A aron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML 16, pp. 1747 1756. JMLR.org, 2016. C edric Villani. Optimal transport: old and new, volume 338. Springer, 2009. Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the generative learning trilemma with denoising diffusion GANs. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=Jpr M0p-q0Co. A BACKGROUND ON TRANSPORT MAPS AND THE CONTINUITY EQUATION The following result is standard and can be found e.g. in (Villani, 2009; Santambrogio, 2015) Proposition A.1. Let ρt(x) satisfy the continuity equation tρt + (vtρt) = 0. (A.1) Assume that vt(x) is C1 in both t and x for t 0 and globally Lipschitz in x. 
Then, given any t, t′ ≥ 0, the solution of (A.1) satisfies

ρt(x) = ρt′(Xt,t′(x)) exp( −∫_{t′}^{t} ∇·vs(Xt,s(x)) ds ), (A.2)

where Xs,t is the probability flow solution to

d/dt Xs,t(x) = vt(Xs,t(x)), Xs,s(x) = x. (A.3)

In addition, given any test function ϕ : Ω → R, we have

∫_Ω ϕ(x) ρt(x) dx = ∫_Ω ϕ(Xt′,t(x)) ρt′(x) dx. (A.4)

In words, Proposition A.1 states that an evaluation of the PDF ρt at a given point x may be obtained by evolving the probability flow equation (2) backwards to some earlier time t′ to find the point x′ that evolves to x at time t, assuming that ρt′(x′) is available. In particular, for t′ = 0, we obtain

ρt(x) = ρ0(Xt,0(x)) exp( −∫₀ᵗ ∇·vs(Xt,s(x)) ds ), (A.5)

∫_Ω ϕ(x) ρt(x) dx = ∫_Ω ϕ(X0,t(x)) ρ0(x) dx. (A.6)

Proof. The assumed C1 and globally Lipschitz conditions on vt guarantee global existence (on t ≥ 0) and uniqueness of the solution to (2). Differentiating ρt(Xt′,t(x)) with respect to t and using (2) and (A.1) we deduce

d/dt ρt(Xt′,t(x)) = ∂tρt(Xt′,t(x)) + (d/dt Xt′,t(x)) · ∇ρt(Xt′,t(x)) = ∂tρt(Xt′,t(x)) + vt(Xt′,t(x)) · ∇ρt(Xt′,t(x)) = −∇·vt(Xt′,t(x)) ρt(Xt′,t(x)).

Integrating this equation in t from t = t′ to t = t gives

ρt(Xt′,t(x)) = ρt′(x) exp( −∫_{t′}^{t} ∇·vs(Xt′,s(x)) ds ). (A.8)

Evaluating this expression at x = Xt,t′(x) and using the group properties (i) Xt′,t(Xt,t′(x)) = x and (ii) Xt′,s(Xt,t′(x)) = Xt,s(x) gives (A.2). Equation (A.4) can be derived by using (A.2) to express ρt(x) in the integral at the left-hand side, changing the integration variable x → Xt′,t(x), and noting that the factor exp( ∫_{t′}^{t} ∇·vs(Xt′,s(x)) ds ), which is precisely the Jacobian of this change of variable, cancels the exponential factor. The result is the integral at the right-hand side of (A.4).

B PROOF OF PROPOSITION 1

We will work under the following assumption:

Assumption B.1. The densities ρ0(x) and ρ1(x) are continuously differentiable in x; It(x0, x1) is continuously differentiable in (t, x0, x1) and satisfies (4) and (7); and for all t ∈ [0, 1] we have

∫_{Rd} | ∫_{Rd×Rd} e^{ik·It(x0,x1)} ρ0(x0) ρ1(x1) dx0 dx1 | dk < ∞, ∫_{Rd} | ∫_{Rd×Rd} ∂tIt(x0, x1) e^{ik·It(x0,x1)} ρ0(x0) ρ1(x1) dx0 dx1 | dk < ∞. (B.1)

This structural assumption guarantees that the stochastic interpolant xt has a probability density and a probability current. As shown in Appendix C, it is satisfied e.g. if ρ0 and ρ1 are Gaussian mixture densities and

It(x0, x1) = at x0 + bt x1, (B.2)

where at and bt are C1 functions of t ∈ [0, 1] satisfying

at ≥ 0, bt ≥ 0, a0 = 1, a1 = 0, b0 = 0, b1 = 1, at > 0 on t ∈ [0, 1), bt > 0 on t ∈ (0, 1]. (B.3)

The interpolant (5) is in this class for the choice

at = cos(½πt), bt = sin(½πt). (B.4)

Our proof of Proposition 1 will rely on the following result that quantifies the probability density and the probability current of the stochastic interpolant xt defined in (6):

Lemma B.1. If Assumption B.1 holds, then the stochastic interpolant xt defined in (6) has a probability density function ρt(x) given by

ρt(x) = (2π)^{−d} ∫_{Rd} ∫_{Rd×Rd} e^{−ik·(x − It(x0,x1))} ρ0(x0) ρ1(x1) dx0 dx1 dk, (B.5)

and it satisfies the continuity equation

∂tρt(x) + ∇·jt(x) = 0, ρt=0(x) = ρ0(x), ρt=1(x) = ρ1(x), (B.6)

with the probability current jt(x) given by

jt(x) = (2π)^{−d} ∫_{Rd} ∫_{Rd×Rd} ∂tIt(x0, x1) e^{−ik·(x − It(x0,x1))} ρ0(x0) ρ1(x1) dx0 dx1 dk. (B.7)

In addition, the action of ρt(x) and jt(x) against any test function ϕ : Rd → R can be expressed as

∫_{Rd} ϕ(x) ρt(x) dx = ∫_{Rd×Rd} ϕ(It(x0, x1)) ρ0(x0) ρ1(x1) dx0 dx1, (B.8)

∫_{Rd} ϕ(x) jt(x) dx = ∫_{Rd×Rd} ∂tIt(x0, x1) ϕ(It(x0, x1)) ρ0(x0) ρ1(x1) dx0 dx1. (B.9)

Note that (B.8) and (B.9) can be formally rewritten as (12) and (14) using the Dirac delta distribution.

Proof.
By definition of xt in (6), the characteristic function of this random variable is

E[exp(ik·xt)] = ∫_{Rd×Rd} e^{ik·It(x0,x1)} ρ0(x0) ρ1(x1) dx0 dx1. (B.10)

Under Assumption B.1, the Fourier inversion theorem implies that xt has a density ρt(x) given by (B.5). Taking the time derivative of this density gives

∂tρt(x) = (2π)^{−d} ∫_{Rd} ∫_{Rd×Rd} i k·∂tIt(x0, x1) e^{−ik·(x − It(x0,x1))} ρ0(x0) ρ1(x1) dx0 dx1 dk = −∇·jt(x), (B.11)

with jt(x) given by (B.7).

Lemma B.1 shows that ρt(x) satisfies the continuity equation (3) with the velocity field vt(x) defined in (15). It also shows that the objective function G(v̂) in (9) is well-defined, which implies the first part of Proposition 1. For the second part, we will need:

Lemma B.2. If Assumption B.1 holds, then

∫₀¹ ∫_{Rd} |vt(x)|² ρt(x) dx dt = E[ |vt(It(x0, x1))|² ] ≤ E[ |∂tIt(x0, x1)|² ] < ∞. (B.12)

Proof. For K < ∞, define

ϕᴷt(x) = 1 if |vt(x)| ≤ K, and ϕᴷt(x) = 0 otherwise. (B.13)

Then, using the pointwise identity vt(x)ρt(x) = jt(x) as well as (B.8) and (B.9), we can write

0 = ∫₀¹ ∫_{Rd} ϕᴷt(x) [ 2|vt(x)|² ρt(x) − 2 vt(x)·jt(x) ] dx dt
= 2 E[ ϕᴷt(It) |vt(It)|² ] − 2 E[ ϕᴷt(It) ∂tIt · vt(It) ]
= E[ ϕᴷt(It) |vt(It)|² ] + E[ ϕᴷt(It) |vt(It) − ∂tIt|² ] − E[ ϕᴷt(It) |∂tIt|² ]
≥ E[ ϕᴷt(It) |vt(It)|² ] − E[ ϕᴷt(It) |∂tIt|² ],

where we use the shorthand It = It(x0, x1) and ∂tIt = ∂tIt(x0, x1). Therefore

0 ≥ E[ ϕᴷt(It) |vt(It)|² ] − E[ ϕᴷt(It) |∂tIt|² ]. (B.15)

Since lim_{K→∞} E[ ϕᴷt(It) |∂tIt|² ] = E[ |∂tIt|² ] and this quantity is finite by assumption, we deduce that lim_{K→∞} E[ ϕᴷt(It) |vt(It)|² ] exists and is bounded by E[ |∂tIt|² ].

Lemma B.2 implies that the objective H(v̂) in (17) is well-defined, and the second part of the statement of Proposition 1 follows from the argument given after (17). For the third part of the proposition we can then proceed as explained in the main text, starting from the Poisson equation (B.21).

Remark B.3. Let us show on a simple example that the inequality E[ |vt(It(x0, x1))|² ] ≤ E[ |∂tIt(x0, x1)|² ] is not saturated in general. Assume that ρ0(x) is a Gaussian density of mean zero and variance one, and ρ1(x) a Gaussian density of mean m ∈ R and variance one. In this case, if we use the trigonometric interpolant (5), (C.4) indicates that ρt(x) is a Gaussian density with mean sin(½πt)m and variance one, and (C.6) simplifies to vt(x) = ½π m cos(½πt), so that

∫_R |vt(x)|² ρt(x) dx = ¼π² m² cos²(½πt). (B.16)

At the same time

∫_R ∫_R |∂tIt(x0, x1)|² ρ0(x0) ρ1(x1) dx0 dx1 = ¼π² ∫_R ∫_R | −sin(½πt) x0 + cos(½πt) x1 |² ρ0(x0) ρ1(x1) dx0 dx1 = ¼π² [ sin²(½πt) + cos²(½πt)(1 + m²) ] = ¼π² [ 1 + m² cos²(½πt) ],

and so E[ |vt(It(x0, x1))|² ] = E[ |∂tIt(x0, x1)|² ] − ¼π² < E[ |∂tIt(x0, x1)|² ].

The interpolant density ρt(x) and the current jt(x) are given explicitly in Appendix C in the case where ρ0 and ρ1 are both Gaussian mixture densities and we use the linear interpolant (B.2). Notice that we can evaluate vt=0(x) and vt=1(x) more explicitly. For example, with the linear interpolant (B.2) we have

jt=0(x) = ȧ0 x ρ0(x) + ḃ0 ρ0(x) ∫_{Rd} x1 ρ1(x1) dx1, jt=1(x) = ḃ1 x ρ1(x) + ȧ1 ρ1(x) ∫_{Rd} x0 ρ0(x0) dx0. (B.18)

From (15), this implies

vt=0(x) = ȧ0 x + ḃ0 ∫_{Rd} x1 ρ1(x1) dx1, vt=1(x) = ḃ1 x + ȧ1 ∫_{Rd} x0 ρ0(x0) dx0. (B.19)

For the trigonometric interpolant that uses (B.4) these reduce to

vt=0(x) = ½π ∫_{Rd} x1 ρ1(x1) dx1, vt=1(x) = −½π ∫_{Rd} x0 ρ0(x0) dx0. (B.20)

Finally, we note that the result of Proposition 1 remains valid if we work with velocities that are gradient fields. We state this result as:

Proposition B.4.
The interpolant density ρ_t(x) and the current j_t(x) are given explicitly in Appendix C in the case where ρ_0 and ρ_1 are both Gaussian mixture densities and we use the linear interpolant (B.2).

Notice that we can evaluate v_{t=0}(x) and v_{t=1}(x) more explicitly. For example, with the linear interpolant (B.2) we have

j_{t=0}(x) = ȧ_0 x ρ_0(x) + ḃ_0 ρ_0(x) ∫_{R^d} x_1 ρ_1(x_1) dx_1,    j_{t=1}(x) = ḃ_1 x ρ_1(x) + ȧ_1 ρ_1(x) ∫_{R^d} x_0 ρ_0(x_0) dx_0.    (B.18)

From (12), this implies

v_0(x) = ȧ_0 x + ḃ_0 ∫_{R^d} x_1 ρ_1(x_1) dx_1,    v_1(x) = ḃ_1 x + ȧ_1 ∫_{R^d} x_0 ρ_0(x_0) dx_0.    (B.19)

For the trigonometric interpolant that uses (B.4) these reduce to

v_{t=0}(x) = ½π ∫_{R^d} x_1 ρ_1(x_1) dx_1,    v_{t=1}(x) = −½π ∫_{R^d} x_0 ρ_0(x_0) dx_0.    (B.20)

Finally, we note that the result of Proposition 1 remains valid if we work with velocities that are gradient fields. We state this result as:

Proposition B.4. The statements of Proposition 1 hold if G(v̂) is minimized over velocities that are gradient fields, in which case the minimizer is of the form v_t(x) = ∇ϕ_t(x) for some potential ϕ_t : R^d → R uniquely defined up to a constant.

Minimizing G(v̂) over gradient fields v̂_t(x) = ∇ϕ̂_t(x) guarantees that the minimizer does not contain any component v_t(x) such that ∇·(v_t(x) ρ_t(x)) = 0. Such a component of the velocity has no effect on the evolution of ρ_t(x) but affects the map X_t, the solution of (2). We stress however that, even if we minimize G(v̂) over velocities that are not gradient fields, the minimizer v_t(x) produces a map X_t via (2) that satisfies the pushforward condition in (1).

Proof. Define the potential ϕ_t : R^d → R as the solution to

∇·(ρ_t ∇ϕ_t) = ∇·j_t,    (B.21)

with ρ_t(x) and j_t(x) given by (12) and (14), respectively. This is a Poisson equation for ϕ_t(x) which has a unique (up to a constant) solution by the Fredholm alternative, since ρ_t(x) and j_t(x) have the same support and ∫_{R^d} ∇·j_t(x) dx = 0. In terms of ϕ_t(x), (13) can therefore be written as

∂_t ρ_t + ∇·(ρ_t ∇ϕ_t) = 0.    (B.22)

The velocity ∇ϕ_t(x) is also the unique minimizer of the objective (9) over gradient fields: if we set v̂_t(x) = ∇ϕ̂_t(x) and optimize G(∇ϕ̂) over ϕ̂, the Euler-Lagrange equation for the minimizer is precisely the Poisson equation (B.21). The lower bound on the objective evaluated at ∇ϕ_t(x) still holds, since the argument above involving (17) and (18) can be repeated with the identity v_t(x) ρ_t(x) = j_t(x) replaced by ∫_{R^d} ∇ϕ̂_t(x)·∇ϕ_t(x) ρ_t(x) dx = ∫_{R^d} ∇ϕ̂_t(x)·j_t(x) dx, which is (B.21) written in weak form.

Interestingly, with v_t(x) defined in (15) and ϕ_t(x) defined as the solution to (B.21), we have

∫_{R^d} |v_t(x)|² ρ_t(x) dx ≥ ∫_{R^d} |∇ϕ_t(x)|² ρ_t(x) dx,    (B.23)

consistent with the fact that the gradient field ∇ϕ_t(x) is the velocity that minimizes ∫_{R^d} |v̂_t(x)|² ρ_t(x) dx over all v̂_t(x) such that ∂_t ρ_t + ∇·(v̂_t ρ_t) = 0 with ρ_t(x) given by (12). We stress however that, even if we work with gradient fields, the inequality (21) is not saturated in general.

Remark B.5. If we assume that v̂_t(x) = ∇ϕ̂_t(x), we can write the objective in (16) as

G(∇ϕ̂) = ∫_0^1 ∫_{R^d} ( |∇ϕ̂_t(x)|² ρ_t(x) − 2 ∇ϕ̂_t(x)·j_t(x) ) dx dt
       = ∫_0^1 ∫_{R^d} ( |∇ϕ̂_t(x)|² ρ_t(x) + 2 ϕ̂_t(x) ∇·j_t(x) ) dx dt
       = ∫_0^1 ∫_{R^d} ( |∇ϕ̂_t(x)|² ρ_t(x) − 2 ϕ̂_t(x) ∂_t ρ_t(x) ) dx dt
       = ∫_0^1 ∫_{R^d} ( |∇ϕ̂_t(x)|² ρ_t(x) + 2 ∂_t ϕ̂_t(x) ρ_t(x) ) dx dt + 2 ∫_{R^d} ( ϕ̂_{t=0}(x) ρ_0(x) − ϕ̂_{t=1}(x) ρ_1(x) ) dx,    (B.24)

where we integrated by parts in x to get the second equality, used the continuity equation (13) to get the third, and integrated by parts in t to get the fourth, using ρ_{t=0} = ρ_0 and ρ_{t=1} = ρ_1. Since the objective in the last expression is an expectation with respect to ρ_t, we can evaluate it as

G(∇ϕ̂) = E[ |∇ϕ̂_t(I_t)|² + 2 ∂_t ϕ̂_t(I_t) ] + 2 E_0[ϕ̂_{t=0}] − 2 E_1[ϕ̂_{t=1}],    (B.25)

where E_1 and E_0 denote expectations with respect to ρ_1 and ρ_0, respectively. Therefore, we could use (B.25) as an alternative objective to obtain v_t(x) = ∇ϕ_t(x) by minimization. This objective is, up to a sign and a factor 2, the KILBO objective introduced in Neklyudov et al. (2022). Notice that using the original objective in (9) avoids the computation of derivatives in x and t, and allows one to work with v̂_t(x) directly.
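For completeness, here is a hedged sketch of how the alternative objective (B.25) could be estimated with automatic differentiation for the trigonometric interpolant (5); the potential network phi_net and its calling convention are assumptions, not the paper's implementation, and, as noted above, the original objective (9) avoids these derivative computations altogether.

```python
import torch

def potential_objective(phi_net, x0, x1, t):
    """Monte Carlo estimate of (B.25).  Assumptions: phi_net(x, t) returns one scalar
    per sample with shape [n, 1]; x0 ~ rho_0 and x1 ~ rho_1 have shape [n, d];
    t ~ U[0, 1] has shape [n, 1]."""
    # interpolant samples; only the arguments fed to phi_net need gradients
    xt = torch.cos(0.5 * torch.pi * t) * x0 + torch.sin(0.5 * torch.pi * t) * x1
    xt = xt.detach().requires_grad_(True)
    tt = t.detach().requires_grad_(True)
    phi = phi_net(xt, tt)
    # spatial gradient (grad phi) and explicit time derivative (d/dt phi) at (I_t, t)
    grad_x, grad_t = torch.autograd.grad(phi.sum(), (xt, tt), create_graph=True)
    quad = (grad_x ** 2).sum(dim=-1, keepdim=True) + 2.0 * grad_t
    # boundary terms 2 E_0[phi_0] - 2 E_1[phi_1]
    boundary = 2.0 * phi_net(x0, torch.zeros_like(t)) - 2.0 * phi_net(x1, torch.ones_like(t))
    return (quad + boundary).mean()
```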
C THE CASE OF GAUSSIAN MIXTURE DENSITIES

Here we consider the case where ρ_0 and ρ_1 are both Gaussian mixture densities. We denote by

N(x|m, C) = (2π)^{−d/2} [det C]^{−1/2} exp( −½ (x − m)^T C^{−1} (x − m) ) = (2π)^{−d} ∫_{R^d} e^{i k·(x − m) − ½ k^T C k} dk    (C.1)

the Gaussian probability density with mean vector m ∈ R^d and positive-definite symmetric covariance matrix C = C^T ∈ R^{d×d}. We assume that

ρ_0(x) = Σ_{i=1}^{N_0} p^0_i N(x|m^0_i, C^0_i),    ρ_1(x) = Σ_{i=1}^{N_1} p^1_i N(x|m^1_i, C^1_i),    (C.2)

where N_0, N_1 ∈ N, p^0_i > 0 with Σ_{i=1}^{N_0} p^0_i = 1, m^0_i ∈ R^d, C^0_i = (C^0_i)^T ∈ R^{d×d} positive-definite, and similarly for p^1_i, m^1_i, and C^1_i. We assume that the interpolant is of the form (B.2) and we denote

m^{ij}_t = a_t m^0_i + b_t m^1_j,    C^{ij}_t = a_t² C^0_i + b_t² C^1_j,    i = 1, ..., N_0,  j = 1, ..., N_1.    (C.3)

Note that if all the covariance matrices are the same, C^0_i = C^1_j = C, then with the trigonometric interpolant in (5) we have C^{ij}_t = C, which justifies this choice of interpolant.

Proposition C.1. The interpolant density ρ_t(x) obtained by connecting the probability densities in (C.2) using the linear interpolant (B.2) is given by

ρ_t(x) = Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j N(x|m^{ij}_t, C^{ij}_t),    (C.4)

and it satisfies the continuity equation ∂_t ρ_t(x) + ∇·j_t(x) = 0 with the current

j_t(x) = Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ( ∂_t m^{ij}_t + ½ ∂_t C^{ij}_t (C^{ij}_t)^{−1} (x − m^{ij}_t) ) N(x|m^{ij}_t, C^{ij}_t).    (C.5)

This proposition implies that

v_t(x) = [ Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ( ∂_t m^{ij}_t + ½ ∂_t C^{ij}_t (C^{ij}_t)^{−1} (x − m^{ij}_t) ) N(x|m^{ij}_t, C^{ij}_t) ] / [ Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j N(x|m^{ij}_t, C^{ij}_t) ].    (C.6)

This velocity field grows at most linearly in x, and when the modes of the Gaussian mixtures are well separated, in each mode it approximately reduces to

∂_t m^{ij}_t + ½ ∂_t C^{ij}_t (C^{ij}_t)^{−1} (x − m^{ij}_t).    (C.7)

Proof. Using the Fourier representation in (C.1) and proceeding as in the proof of Lemma B.1, we deduce that ρ_t(x) is given by

ρ_t(x) = (2π)^{−d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ∫_{R^d} e^{i k·(x − m^{ij}_t) − ½ k^T C^{ij}_t k} dk.    (C.8)

Performing the integral over k gives (C.4). Taking the time derivative of this density gives

∂_t ρ_t(x) = (2π)^{−d} Σ_{i,j} p^0_i p^1_j ∫_{R^d} ( −i k·∂_t m^{ij}_t − ½ k^T ∂_t C^{ij}_t k ) e^{i k·(x − m^{ij}_t) − ½ k^T C^{ij}_t k} dk = −∇·j_t(x),

with

j_t(x) = (2π)^{−d} Σ_{i,j} p^0_i p^1_j ∫_{R^d} ( ∂_t m^{ij}_t − ½ i ∂_t C^{ij}_t k ) e^{i k·(x − m^{ij}_t) − ½ k^T C^{ij}_t k} dk
       = (2π)^{−d} Σ_{i,j} p^0_i p^1_j ∂_t m^{ij}_t ∫_{R^d} e^{i k·(x − m^{ij}_t) − ½ k^T C^{ij}_t k} dk − ½ i (2π)^{−d} Σ_{i,j} p^0_i p^1_j ∂_t C^{ij}_t ∫_{R^d} k e^{i k·(x − m^{ij}_t) − ½ k^T C^{ij}_t k} dk
       = Σ_{i,j} p^0_i p^1_j ( ∂_t m^{ij}_t + ½ ∂_t C^{ij}_t (C^{ij}_t)^{−1} (x − m^{ij}_t) ) N(x|m^{ij}_t, C^{ij}_t),    (C.9)

which is (C.5).
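As an illustration of Proposition C.1 (a sketch we added, restricted to one-dimensional mixtures for brevity), the exact interpolant density (C.4) and velocity (C.6) can be evaluated directly:

```python
import numpy as np

# 1-D Gaussian mixtures rho_0, rho_1 and the trigonometric interpolant (B.4)
p0, m0, c0 = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([0.5, 0.5])   # rho_0
p1, m1, c1 = np.array([0.3, 0.7]), np.array([-1.0, 3.0]), np.array([1.0, 0.25])  # rho_1

def gauss(x, m, c):
    return np.exp(-0.5 * (x - m) ** 2 / c) / np.sqrt(2.0 * np.pi * c)

def velocity(x, t):
    a, b = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
    da, db = -0.5 * np.pi * b, 0.5 * np.pi * a
    num, den = 0.0, 0.0
    for i in range(len(p0)):
        for j in range(len(p1)):
            mt = a * m0[i] + b * m1[j]                   # m^{ij}_t in (C.3)
            ct = a**2 * c0[i] + b**2 * c1[j]             # C^{ij}_t in (C.3)
            dmt = da * m0[i] + db * m1[j]                # time derivative of m^{ij}_t
            dct = 2*a*da * c0[i] + 2*b*db * c1[j]        # time derivative of C^{ij}_t
            w = p0[i] * p1[j] * gauss(x, mt, ct)
            num += w * (dmt + 0.5 * dct * (x - mt) / ct) # numerator of (C.6)
            den += w                                     # interpolant density rho_t(x), eq. (C.4)
    return num / den

x = np.linspace(-5.0, 5.0, 11)
print(velocity(x, 0.5))
```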
D OPTIMIZING TRANSPORT THROUGH STOCHASTIC INTERPOLANTS

Using the velocity v_t(x) in (15) that minimizes the objective (9) in the ODE (2) gives an exact transport map T = X_{t=1} from ρ_0 to ρ_1. However, this map is not optimal in general, in the sense that it does not minimize

∫_{R^d} |T̂(x) − x|² ρ_0(x) dx    (D.1)

over all T̂ such that T̂♯ρ_0 = ρ_1. It is easy to understand why: in their seminal paper, Benamou & Brenier (2000) showed that finding the optimal map requires solving the minimization problem

min_{(v̂, ρ̂)} ∫_0^1 ∫_{R^d} |v̂_t(x)|² ρ̂_t(x) dx dt    subject to:  ∂_t ρ̂_t + ∇·(v̂_t ρ̂_t) = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1.    (D.2)

As also shown in (Benamou & Brenier, 2000), the velocity minimizing (D.2) is a gradient field, v*_t(x) = ∇ϕ*_t(x), and the minimizing couple (ρ*_t, ϕ*_t) is unique and satisfies

∂_t ρ*_t + ∇·(∇ϕ*_t ρ*_t) = 0,  ρ*_{t=0} = ρ_0,  ρ*_{t=1} = ρ_1,
∂_t ϕ*_t + ½|∇ϕ*_t|² = 0.    (D.3)

In contrast, in our construction the interpolant density ρ_t(x) is fixed by the choice of interpolant I_t(x_0,x_1), and ρ_t(x) ≠ ρ*_t(x) in general. As a result, the value of ∫_0^1 ∫_{R^d} |v_t(x)|² ρ_t(x) dx dt = E[|v_t(I_t)|²] for the velocity v_t(x) minimizing (9) is not the minimum in (D.2).

Minimizing (9) over gradient fields reduces the value of the objective in (D.2), but it does not yield an optimal map either: indeed, the gradient velocity ∇ϕ_t(x), with the potential ϕ_t(x) solution to (B.21), only minimizes the objective in (D.2) over all v̂_t(x) with ρ̂_t(x) = ρ_t(x) fixed, as explained after (B.23). It is natural to ask whether our stochastic interpolant construction can be amended or generalized to derive optimal maps. This question is discussed next from two complementary perspectives: via optimization of the interpolant I_t(x_0,x_1), and/or via optimization of the base density ρ_0, assuming that we have some leeway to choose this density.

D.1 OPTIMAL TRANSPORT WITH OPTIMAL INTERPOLANTS

Since (10) indicates that the minimum of G(v̂) is lower bounded by minus the value of the objective in (D.2), one way to optimize the transport is to maximize this minimum over the interpolant. Under some assumptions on the Benamou-Brenier density ρ*_t(x), the solution of (D.3), this procedure works, as we show now. Let us begin with a definition:

Definition D.1 (Interpolable density). We say that a one-parameter family of probability densities ρ_t(x) with t ∈ [0,1] is interpolable (in short: ρ_t(x) is interpolable) if there exists a one-parameter family of invertible maps T_t : R^d → R^d with t ∈ [0,1], continuously differentiable in time and space, such that ρ_t is the pushforward by T_t of the Gaussian density with mean zero and covariance identity, i.e. T_t♯N(0, Id) = ρ_t for all t ∈ [0,1].

Interpolable densities form a wide class, as discussed e.g. in Mikulincer & Shenfeld (2022). These densities are also the ones that can be learned by the score-based diffusion modeling discussed in Section 2.2. They are useful for our purpose because of the following result, showing that any interpolable density can be represented as an interpolant density:

Proposition D.1. Let ρ_t(x) be an interpolable density in the sense of Definition D.1. Then

I_t(x_0,x_1) = T_t( T_0^{−1}(x_0) cos(½πt) + T_1^{−1}(x_1) sin(½πt) )    (D.4)

satisfies (4) and is such that the stochastic interpolant defined in (6) satisfies x_t ∼ ρ_t.

We stress that the interpolant in (D.4) is in general not the only one giving the interpolable density ρ_t(x), and the actual value of the map T_t plays no role in the results below.

Proof. First notice that

I_{t=0}(x_0,x_1) = T_0(T_0^{−1}(x_0)) = x_0,    I_{t=1}(x_0,x_1) = T_1(T_1^{−1}(x_1)) = x_1,    (D.5)

so that the boundary conditions in (4) are satisfied. Then observe that, by definition of T_t,

x_0 ∼ ρ_0 ⇔ T_0^{−1}(x_0) ∼ N(0, Id),    x_1 ∼ ρ_1 ⇔ T_1^{−1}(x_1) ∼ N(0, Id).    (D.6)

This implies that, if x_0 ∼ ρ_0, x_1 ∼ ρ_1, and they are independent,

T_0^{−1}(x_0) cos(½πt) + T_1^{−1}(x_1) sin(½πt)    (D.7)

is a Gaussian random variable with mean zero and covariance

Id cos²(½πt) + Id sin²(½πt) = Id,    (D.8)

i.e. it is a sample from N(0, Id). By definition of T_t, this implies that

x_t = I_t(x_0,x_1) = T_t( T_0^{−1}(x_0) cos(½πt) + T_1^{−1}(x_1) sin(½πt) )    (D.9)

is a sample from ρ_t if x_0 ∼ ρ_0, x_1 ∼ ρ_1, and they are independent.
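To illustrate Proposition D.1 on a toy case (our own example, using a hypothetical affine family T_t, which is trivially invertible), the construction (D.4) can be checked by sampling: the interpolant samples x_t land on the prescribed family ρ_t = T_t♯N(0,1).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interpolable family: T_t(z) = mu_t + sigma_t * z pushes N(0, 1)
# onto N(mu_t, sigma_t^2), with smooth mu_t and sigma_t.
mu = lambda t: 3.0 * t
sigma = lambda t: 1.0 + t * (1.0 - t)
T = lambda z, t: mu(t) + sigma(t) * z
T_inv = lambda x, t: (x - mu(t)) / sigma(t)

def interpolant(x0, x1, t):
    """I_t from (D.4): map x0, x1 back to the Gaussian, combine on the circle, push forward."""
    z = T_inv(x0, 0.0) * np.cos(0.5 * np.pi * t) + T_inv(x1, 1.0) * np.sin(0.5 * np.pi * t)
    return T(z, t)

n = 200_000
x0 = rng.normal(mu(0.0), sigma(0.0), n)   # x0 ~ rho_0 = T_0 # N(0, 1)
x1 = rng.normal(mu(1.0), sigma(1.0), n)   # x1 ~ rho_1 = T_1 # N(0, 1)
t = 0.4
xt = interpolant(x0, x1, t)
# xt should be distributed as rho_t = N(mu_t, sigma_t^2)
print(xt.mean(), mu(t), xt.std(), sigma(t))
```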
Proposition D.1 implies that:

Proposition D.2. Assume that (i) the optimal density function ρ*_t(x) minimizing (D.2) is interpolable and (ii) (D.3) has a classical solution. Consider the max-min problem

max_{Î} min_{v̂} G(v̂),    (D.10)

where G(v̂) is the objective in (9) and the maximum is taken over interpolants satisfying (4). Then a maximizer of (D.10) exists, and any maximizer I*_t(x_0,x_1) is such that the probability density function of x*_t = I*_t(x_0,x_1), with x_0 ∼ ρ_0 and x_1 ∼ ρ_1 independent, is the optimal ρ*_t(x), the minimizing velocity is v*_t(x) = ∇ϕ*_t(x), and the pair (ρ*_t(x), ϕ*_t(x)) satisfies (D.3).

The proof of Proposition D.2 relies on the following simple reformulation of (D.2):

Lemma D.3. The Benamou-Brenier minimization problem in (D.2) is equivalent to the min-max problem

max_{ρ̂,ĵ} min_{v̂} ∫_0^1 ∫_{R^d} ( ½|v̂_t(x)|² ρ̂_t(x) − v̂_t(x)·ĵ_t(x) ) dx dt = min_{v̂} max_{ρ̂,ĵ} ∫_0^1 ∫_{R^d} ( ½|v̂_t(x)|² ρ̂_t(x) − v̂_t(x)·ĵ_t(x) ) dx dt
subject to:  ∂_t ρ̂_t + ∇·ĵ_t = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1.    (D.11)

In particular, under the conditions on ρ_0 and ρ_1 such that (D.2) has a minimizer, the optimizer (ρ*_t, v*_t, j*_t) is unique and satisfies v*_t(x) = ∇ϕ*_t(x), j*_t(x) = ∇ϕ*_t(x) ρ*_t(x), with (ρ*_t, ϕ*_t) the solution to (D.3).

Proof. Since (D.11) is convex in v̂ and concave in (ρ̂, ĵ), the min-max and the max-min are equivalent by von Neumann's minimax theorem. Considering the problem where we minimize over v̂_t(x) first, the minimizer must satisfy

v̂_t(x) ρ̂_t(x) = ĵ_t(x).    (D.12)

Since ρ̂_t(x) ≥ 0 and ĵ_t(x) have the same support by the constraint in (D.11), the solution to this equation is unique on this support. Using (D.12) in (D.11), we can therefore rewrite the max-min problem as

max_{ρ̂} −½ ∫_0^1 ∫_{R^d} |v̂_t(x)|² ρ̂_t(x) dx dt    subject to:  ∂_t ρ̂_t + ∇·(v̂_t ρ̂_t) = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1.    (D.13)

This problem is equivalent to the Benamou-Brenier minimization problem in (D.2).

To write the Euler-Lagrange equations for the optimizers of the min-max problem (D.11), let us introduce the extended objective

∫_0^1 ∫_{R^d} ( ½|v̂_t(x)|² ρ̂_t(x) − v̂_t(x)·ĵ_t(x) ) dx dt − ∫_0^1 ∫_{R^d} ϕ̂_t(x) ( ∂_t ρ̂_t + ∇·ĵ_t(x) ) dx dt + ∫_{R^d} ( ϕ̂_1(x) ρ_1(x) − ϕ̂_0(x) ρ_0(x) ) dx,    (D.14)

where ϕ̂_t : R^d → R is a Lagrange multiplier to be determined. The Euler-Lagrange equations can be obtained by taking the first variation of the objective (D.14) over ϕ̂, ρ̂, ĵ, and v̂, respectively. They read

0 = ∂_t ρ*_t + ∇·j*_t,  ρ*_{t=0} = ρ_0,  ρ*_{t=1} = ρ_1,
0 = ∂_t ϕ*_t + ½|v*_t|²,
0 = −v*_t + ∇ϕ*_t,
0 = v*_t ρ*_t − j*_t.    (D.15)

These equations imply that v*_t(x) = ∇ϕ*_t(x), j*_t(x) = v*_t(x) ρ*_t(x) = ∇ϕ*_t(x) ρ*_t(x), with (ρ*_t, ϕ*_t) solution to (D.3), as stated in the lemma. Since the optimization problem in (D.11) is convex in v̂ and concave in (ρ̂, ĵ), its optimizer is unique and solves these equations.

Proof of Proposition D.2. We can reformulate the max-min problem (D.10) as

max_{ρ̂,ĵ} min_{v̂} ∫_0^1 ∫_{R^d} ( ½|v̂_t(x)|² ρ̂_t(x) − v̂_t(x)·ĵ_t(x) ) dx dt,    (D.16)

where the maximization is taken over probability density functions ρ̂_t(x) and probability currents ĵ_t(x) as in (B.5) and (B.7) with I_t replaced by Î_t, i.e. formally given in terms of the Dirac delta distribution as

ρ̂_t(x) = ∫_{R^d×R^d} δ( x − Î_t(x_0,x_1) ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1,
ĵ_t(x) = ∫_{R^d×R^d} ∂_t Î_t(x_0,x_1) δ( x − Î_t(x_0,x_1) ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.    (D.17)

Since this pair automatically satisfies

∂_t ρ̂_t + ∇·ĵ_t = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1,    (D.18)

the max-min problem (D.16) is similar to the one considered in Lemma D.3, except that the maximization is taken over probability density functions and associated currents that can be written as in (D.17). By Proposition D.1, this class is large enough to represent ρ*_t(x), since we have assumed that ρ*_t(x) is an interpolable density, and the statement of the proposition follows.

Since our primary aim here is to construct a map T = X_{t=1} that pushes ρ_0 forward onto ρ_1, but not necessarily to identify the optimal one, we can perform the maximization over Î_t(x_0,x_1) in a restricted class (though of course the corresponding map is no longer optimal in that case).
We investigate this option in numerical examples in Appendix H, using interpolants of the type (B.2) and maximizing over the functions a_t and b_t, subject to a_0 = b_1 = 1 and a_1 = b_0 = 0. In Appendix G we also discuss generalizations of the interpolant that can render the optimization of the transport easier to perform. We leave the full investigation of the consequences of Proposition D.2 for future work.

Remark D.4 (Optimal interpolant for Gaussian densities). Note that if ρ_0 and ρ_1 are both Gaussian densities with respective means m_0 ∈ R^d and m_1 ∈ R^d and the same covariance C ∈ R^{d×d}, an interpolant of the type (D.4) is

I_t(x_0,x_1) = cos(½πt)(x_0 − m_0) + sin(½πt)(x_1 − m_1) + (1 − t) m_0 + t m_1,    (D.19)

and a calculation similar to the one presented in Appendix C shows that the associated velocity field is

v_t(x) = m_1 − m_0.    (D.20)

This is the velocity giving the optimal transport map X_t(x) = x + (m_1 − m_0) t.

Remark D.5 (Rectifying the map). In (Liu et al., 2022; Liu, 2022), where an approach similar to ours using the linear interpolant x_t = x_0(1 − t) + x_1 t is introduced, an alternative procedure is proposed to optimize the transport. Specifically, it is suggested to rectify the learned map T = X_{t=1} by repeating the procedure using the new interpolant x'_t = x_0(1 − t) + T(x_0) t with x_0 ∼ ρ_0. As shown in (Liu, 2022), iterating on this rectification step yields successive maps that get closer to optimality. The main drawback of this approach is that it requires each of these maps (including the first one) to be learned exactly, i.e. we must have T♯ρ_0 = ρ_1 to use the interpolant x'_t above. If the maps are not exact, which is unavoidable in practice, the procedure introduces a bias whose amplitude grows with the iterations, leading to instability (as noted in Liu et al. (2022); Liu (2022)).

D.2 OPTIMIZING THE BASE DENSITY

While our construction allows one to connect any pair of densities ρ_0 and ρ_1, the typical situation of interest is when ρ_1 is a complex target density and we wish to construct a generative model for this density by transporting samples from a simple base density ρ_0. In this case it is natural to adjust parameters in ρ_0 to optimize the transport via maximization of G(v) = min_{v̂} G(v̂) over these parameters: indeed, changing ρ_0 also affects the stochastic interpolant x_t defined in (6), and hence both the interpolant density ρ_t(x) and the velocity v_t(x). For example, we can take ρ_0 to be a Gaussian with mean m ∈ R^d and covariance C ∈ R^{d×d}, and maximize G(v) = min_{v̂} G(v̂) over m and C. This construction is tested in the numerical examples treated in Appendix H. We also discuss how to generalize it in Appendix G.

Optimizing ρ_0 only makes practical sense if we do so in a restricted class, like that of Gaussian densities we just discussed. Still, we may wonder whether optimizing ρ_0 over all densities would automatically give ρ_0 = ρ_1 and v_t = 0. If the interpolant is fixed, the answer is no in general: even if we set ρ_0 = ρ_1, the interpolant density will still evolve, i.e. ρ_t ≠ ρ_0 except at t = 0, 1 in general. This indicates that optimizing the interpolant in concert with ρ_0 is necessary if we want to optimize the transport.
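A minimal sketch of the base-density optimization just described, assuming a PyTorch velocity model v_net and simple alternating ascent/descent steps on the max-min problem; the network, learning rates, and update schedule are our assumptions, not the paper's exact training procedure. The Gaussian base is reparameterized through its mean and a Cholesky-style factor so that its parameters can be optimized by gradient ascent.

```python
import torch

# Hypothetical setup: v_net(x, t) is the velocity model; the base rho_0 = N(mu, L L^T)
# is reparameterized through its mean and factor L, as suggested in Section D.2.
d = 2
mu = torch.zeros(d, requires_grad=True)
L = torch.eye(d, requires_grad=True)

def loss_G(v_net, x1, n):
    """Minibatch estimate of the objective (9) with the trigonometric interpolant."""
    t = torch.rand(n, 1)
    x0 = mu + torch.randn(n, d) @ L.T                    # reparameterized base samples
    a, b = torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)
    da, db = -0.5 * torch.pi * b, 0.5 * torch.pi * a
    xt, dxt = a * x0 + b * x1, da * x0 + db * x1
    v = v_net(xt, t)
    return (v.pow(2).sum(-1) - 2.0 * (dxt * v).sum(-1)).mean()

# Alternating updates (a simplification of the max-min): descend in the velocity
# parameters, ascend in the base parameters.
# opt_v = torch.optim.Adam(v_net.parameters(), lr=1e-3)
# opt_b = torch.optim.Adam([mu, L], lr=1e-3)
# for step in range(num_steps):
#     opt_v.zero_grad(); loss_G(v_net, x1_batch, n).backward(); opt_v.step()
#     opt_b.zero_grad(); (-loss_G(v_net, x1_batch, n)).backward(); opt_b.step()
```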
E PROOF OF PROPOSITION 3

To use the bound in (27), let us consider the evolution of

Q_t = ∫_{R^d} |X_t(x) − X̂_t(x)|² ρ_0(x) dx.    (E.1)

Using d/dt X_t(x) = v_t(X_t(x)) and d/dt X̂_t(x) = v̂_t(X̂_t(x)), we deduce

d/dt Q_t = 2 ∫_{R^d} ( X_t(x) − X̂_t(x) ) · ( v_t(X_t(x)) − v̂_t(X̂_t(x)) ) ρ_0(x) dx
         = 2 ∫_{R^d} ( X_t(x) − X̂_t(x) ) · ( v_t(X_t(x)) − v̂_t(X_t(x)) ) ρ_0(x) dx + 2 ∫_{R^d} ( X_t(x) − X̂_t(x) ) · ( v̂_t(X_t(x)) − v̂_t(X̂_t(x)) ) ρ_0(x) dx.    (E.2)

Now use

2 ( X_t − X̂_t ) · ( v_t(X_t) − v̂_t(X_t) ) ≤ |X_t − X̂_t|² + |v_t(X_t) − v̂_t(X_t)|²    (E.3)

and

2 ( X_t − X̂_t ) · ( v̂_t(X_t) − v̂_t(X̂_t) ) ≤ 2 K̂ |X_t − X̂_t|²    (E.4)

to obtain

d/dt Q_t ≤ (1 + 2K̂) Q_t + ∫_{R^d} |v_t(X_t(x)) − v̂_t(X_t(x))|² ρ_0(x) dx.    (E.5)

Therefore, by Gronwall's inequality and since Q_0 = 0, we deduce

Q_1 ≤ e^{1+2K̂} ∫_0^1 ∫_{R^d} |v_t(X_t(x)) − v̂_t(X_t(x))|² ρ_0(x) dx dt = e^{1+2K̂} H(v̂).    (E.6)

Since W_2²(ρ_1, ρ̂_1) ≤ Q_1 by (27), we are done.

Note that the proposition suggests regularizing G(v̂) using e.g.

G_λ(v̂) = G(v̂) + λ ∫_0^1 ∫_{R^d} |∇v̂_t(x)|² ρ_t(x) dx dt = G(v̂) + λ E[ |∇v̂_t(I_t(x_0,x_1))|² ],    (E.7)

with some small λ > 0. In the numerical results presented in the paper no such regularization was included.

F PROOF OF PROPOSITION 4 AND LINK WITH SCORE-BASED DIFFUSION MODELS

Assume that the interpolant is of the type (B.2), so that ∂_t I_t(x_0,x_1) = ȧ_t x_0 + ḃ_t x_1. For t ∈ (0,1), let us write expression (14) for the probability current as

j_t(x) = ∫_{R^d×R^d} ( ȧ_t x_0 + ḃ_t x_1 ) δ( x − a_t x_0 − b_t x_1 ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1
       = ∫_{R^d×R^d} ( (ḃ_t/b_t)(a_t x_0 + b_t x_1) + (ȧ_t − (ḃ_t/b_t) a_t) x_0 ) δ( x − a_t x_0 − b_t x_1 ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1
       = (ḃ_t/b_t) x ρ_t(x) + (ȧ_t − (ḃ_t/b_t) a_t) ∫_{R^d×R^d} x_0 δ( x − a_t x_0 − b_t x_1 ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.    (F.1)

If ρ_0(x_0) = (2π)^{−d/2} e^{−|x_0|²/2}, we have the identity x_0 ρ_0(x_0) = −∇_{x_0} ρ_0(x_0). Inserting this equality in the last integral in (F.1) and integrating by parts using

∇_{x_0} δ( x − a_t x_0 − b_t x_1 ) = −a_t ∇_x δ( x − a_t x_0 − b_t x_1 )    (F.2)

gives

j_t(x) = (ḃ_t/b_t) x ρ_t(x) − a_t ( ȧ_t − (ḃ_t/b_t) a_t ) ∇ρ_t(x).    (F.3)

This means that

v_t(x) = (ḃ_t/b_t) x − a_t ( ȧ_t − (ḃ_t/b_t) a_t ) ∇ log ρ_t(x).    (F.4)

Solving this expression for ∇ log ρ_t(x) and specializing it to the trigonometric interpolant with a_t, b_t given in (B.4) gives the first equation in (28). The second one can be obtained by taking the limit of this first equation using v_{t=1}(x) = 0 from (B.20) and l'Hôpital's rule.

Note that (F.3) shows that, when the interpolant is of the type (B.2) and ρ_0(x_0) = (2π)^{−d/2} e^{−|x_0|²/2}, the continuity equation (3) can also be written as the diffusion equation

∂_t ρ_t(x) + (ḃ_t/b_t) ∇·( x ρ_t(x) ) = a_t ( ȧ_t − (ḃ_t/b_t) a_t ) Δρ_t(x).    (F.5)

Since we assume that ȧ_t ≤ 0 and ḃ_t ≥ 0 (see (B.3)), the diffusion coefficient in this equation is negative,

a_t ( ȧ_t − (ḃ_t/b_t) a_t ) ≤ 0.    (F.6)

This means that (F.5) is well-posed backward in time, i.e. it corresponds to backward diffusion from ρ_{t=1} = ρ_1 to ρ_{t=0} = ρ_0 = (2π)^{−d/2} e^{−|x|²/2}. Therefore, reversing this backward diffusion, similar to what is done in score-based diffusion models, gives an SDE that transforms samples from ρ_0 into samples from ρ_1. Interestingly, these forward and backward diffusion processes arise on the finite time interval t ∈ [0,1]; notice however that both the drift and the diffusion coefficient are singular at t = 1. This is unlike the velocity v_t(x), which is finite at t = 0, 1 and is given by (B.19).
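Assuming a learned velocity for the trigonometric interpolant with a standard Gaussian base, the relation (F.4) can be inverted to read off the score. The sketch below is our own addition: it uses the simplified form obtained by specializing (F.4) to (B.4), and is not a verbatim restatement of equation (28) in the main text.

```python
import numpy as np

def score_from_velocity(v, x, t):
    """Recover grad log rho_t(x) from the velocity via (F.4), assuming rho_0 = N(0, Id)
    and the trigonometric interpolant (B.4).  Specializing (F.4) gives
        grad log rho_t(x) = (2 / pi) * tan(pi t / 2) * v_t(x) - x,
    valid for t in [0, 1); at t = 1 the prefactor diverges and the limit must be
    taken with l'Hopital's rule, as discussed above."""
    return (2.0 / np.pi) * np.tan(0.5 * np.pi * t) * v - x
```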
G GENERALIZED INTERPOLANTS

Our construction can easily be generalized in various ways, e.g. by making the interpolant depend on additional latent variables to be averaged upon. This enlarges the class of interpolant densities we can construct, which may prove useful to obtain simpler (or more optimal) velocity fields in the continuity equation (3). Let us consider one specific generalization of this type.

Factorized interpolants. Suppose that we decompose both ρ_0 and ρ_1 as

ρ_0(x) = Σ_{k=1}^{K} p_k ρ^k_0(x),    ρ_1(x) = Σ_{k=1}^{K} p_k ρ^k_1(x),    (G.1)

where K ∈ N, ρ^k_0 and ρ^k_1 are normalized PDFs for each k = 1, ..., K, and p_k > 0 with Σ_{k=1}^{K} p_k = 1. We can then introduce K interpolants I^k_t(x_0,x_1) with k = 1, ..., K, each satisfying (4), and define the stochastic interpolant

x_t = I^k_t(x_0,x_1),    k ∼ p_k,  x_0 ∼ ρ^k_0,  x_1 ∼ ρ^k_1.    (G.2)

This corresponds to splitting the samples from ρ_0 and ρ_1 into K (soft) clusters, and only interpolating between samples in cluster k of ρ_0 and samples in cluster k of ρ_1. This clustering can either be done beforehand, based on some prior information we may have about ρ_0 and ρ_1, or be learned (more on this point below). It is easy to see that the PDF ρ_t of x_t is formally given by

ρ_t(x) = Σ_{k=1}^{K} p_k ∫_{R^d×R^d} δ( x − I^k_t(x_0,x_1) ) ρ^k_0(x_0) ρ^k_1(x_1) dx_0 dx_1,    (G.3)

and that this density satisfies the continuity equation (B.11) for the current

j_t(x) = Σ_{k=1}^{K} p_k ∫_{R^d×R^d} ∂_t I^k_t(x_0,x_1) δ( x − I^k_t(x_0,x_1) ) ρ^k_0(x_0) ρ^k_1(x_1) dx_0 dx_1.    (G.4)

Therefore this equation can be written as (3) with a velocity v_t(x) which is the unique minimizer of a generalization of the objective (9). We state this as:

Proposition G.1. The stochastic interpolant x_t defined in (G.2), with I^k_t(x_0,x_1) satisfying (4) for each k = 1, ..., K, has a probability density ρ_t(x) that satisfies the continuity equation (3) with a velocity v_t(x) which is the unique minimizer over v̂_t(x) of the objective

G_K(v̂) = Σ_{k=1}^{K} p_k ∫_0^1 ∫_{R^d×R^d} ( |v̂_t(I^k_t)|² − 2 ∂_t I^k_t · v̂_t(I^k_t) ) ρ^k_0(x_0) ρ^k_1(x_1) dx_0 dx_1 dt,    (G.5)

where we used the shorthand notations I^k_t = I^k_t(x_0,x_1) and ∂_t I^k_t = ∂_t I^k_t(x_0,x_1). In addition, the minimum value of this objective is given by

G_K(v) = −∫_0^1 ∫_{R^d} |v_t(x)|² ρ_t(x) dx dt > −∞,    (G.6)

and both these statements remain true if G_K(v̂) is minimized over velocities that are gradient fields, in which case the minimizer is of the form v_t(x) = ∇ϕ_t(x) for some potential ϕ_t : R^d → R uniquely defined up to a constant.

We omit the proof of this proposition since it is similar to that of Proposition 1. The advantage of this construction is that it gives us the option to make the transport more optimal by maximizing G_K(v) over I^k_t and/or the partitioning used to define ρ^k_0, ρ^k_1, and p_k. For example, if we know that the target density ρ_1 has K clusters with relative mass p_k, we can define ρ_0 as a Gaussian mixture with K modes, set the weight of mode k to p_k, and maximize G_K(v) = min_{v̂} G_K(v̂) over the mean m_k ∈ R^d and the covariance C_k ∈ R^{d×d} of each mode k = 1, ..., K in the mixture density ρ_0.

H EXPERIMENTS FOR OPTIMAL TRANSPORT, PARAMETERIZING I_t, PARAMETERIZING ρ_0

As discussed in Section 2, the minimum of the objective in equation (10) can be maximized with respect to the interpolant I_t as a route toward optimal transport. We motivate this by choosing a parametric class for the interpolant I_t and demonstrating that the max-min optimization in equation (D.10) can give rise to velocity fields which are easier to learn, resulting in better likelihood estimation. The 2D checkerboard example is an appealing test case because the transport is nontrivial. In this case, we train the same flow as in Section 3.1, with and without optimizing the interpolant. We choose a simple parametric class for the interpolant given by a Fourier series expansion,

Î_t(x_0,x_1) = â_t x_0 + b̂_t x_1,    (H.1)

where learnable parameters {α_m, β_m}_{m=1}^{M} define â_t, b̂_t via

â_t = cos( ½πt + Σ_{m=1}^{M} α_m sin(mπt) ),    b̂_t = sin( ½πt + Σ_{m=1}^{M} β_m sin(mπt) ).    (H.2)

Since sin(mπt) vanishes at t = 0 and t = 1, the endpoint conditions a_0 = b_1 = 1 and a_1 = b_0 = 0 hold for any choice of the coefficients. This is but one set of possible parameterizations; a minimal sketch of this one is given below.
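The following is a minimal PyTorch sketch of this Fourier parameterization (our own reading of (H.2); the module and variable names are placeholders). At initialization, with α = β = 0, it reduces to the trigonometric interpolant (B.4); the time derivative of Î_t needed for the objective can be obtained with automatic differentiation.

```python
import torch

class FourierInterpolant(torch.nn.Module):
    """Learnable a_t, b_t as in (H.2).  Because sin(m*pi*t) vanishes at t = 0 and t = 1,
    the endpoint constraints a_0 = b_1 = 1 and a_1 = b_0 = 0 hold for any coefficients."""
    def __init__(self, M=7):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.zeros(M))
        self.beta = torch.nn.Parameter(torch.zeros(M))
        self.register_buffer("m", torch.arange(1, M + 1).float())

    def coeffs(self, t):                          # t: shape [n, 1]
        s = torch.sin(self.m * torch.pi * t)      # [n, M]
        a = torch.cos(0.5 * torch.pi * t + s @ self.alpha.unsqueeze(-1))
        b = torch.sin(0.5 * torch.pi * t + s @ self.beta.unsqueeze(-1))
        return a, b

    def forward(self, x0, x1, t):
        a, b = self.coeffs(t)
        return a * x0 + b * x1                    # hat I_t(x0, x1) = hat a_t x0 + hat b_t x1
```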
Another parameterization could, for example, be a rational quadratic spline, so that the endpoints of â_t, b̂_t can be properly constrained, as they are above in the Fourier expansion. In Figure H.1, we plot the log-likelihood, the learned interpolant coefficients â_t, b̂_t compared to their initializations, and the path length as it evolves over training epochs. With the learned interpolants, the path length is reduced, and the resulting velocity field, under the same number of training epochs, endows the model with better likelihood estimation, as shown in the left plot of the figure. For the path-optimality experiments, M = 7 Fourier coefficients were used to parameterize the interpolant. This suggests that minimizing the transport cost can create models which are, practically, easier to learn.

Figure H.1: Comparison of characteristics and performance of the learned vs. nonparametric interpolant on the checkerboard density estimation. Left: evolution of the log-likelihood. Middle: the initial versus learned interpolant terms â_t, b̂_t. Right: the path length of the learned vs. nonparametric interpolant.

In addition to parameterizing the interpolant, one can also parameterize the base density ρ_0 in some simple parametric class. We show that including this in the max-min optimization given in equation (D.10) can further reduce the transport cost and improve the final log-likelihood. The results are given in Figure H.2. We train an interpolant just as described above, but also allow the base density ρ_0 to be parameterized as a Gaussian with mean μ̂ and covariance Σ̂. The inclusion of the learnable base density results in a significantly reduced path length, thereby bringing the model closer to optimal transport.

Figure H.2: Comparison of characteristics and performance of the learned vs. nonparametric interpolant on the checkerboard density estimation, while also optimizing the base density ρ̂_0. The parametric ρ̂_0 is given as a bivariate Gaussian N(μ̂, Σ̂). Left: evolution of the log-likelihood. Center left: the initial versus learned interpolant terms â_t, b̂_t. Center right: the learned covariance Σ̂ (in red) compared to the original identity covariance matrix. Right: the path length of the learned vs. nonparametric interpolant.

I IMPLEMENTATION DETAILS FOR NUMERICAL EXPERIMENTS

Let {x^i_0}_{i=1}^{N} be N samples from the base density ρ_0, {x^j_1}_{j=1}^{n} be n samples from the target density ρ_1, and {t_k}_{k=1}^{K} be K samples from the uniform density on [0,1]. Then an empirical estimate of the objective function in (9) is given by

G_{N,n,K}(v̂) = (1 / (K n N)) Σ_{k=1}^{K} Σ_{j=1}^{n} Σ_{i=1}^{N} [ |v̂_{t_k}(I_{t_k}(x^i_0, x^j_1))|² − 2 ∂_t I_{t_k}(x^i_0, x^j_1) · v̂_{t_k}(I_{t_k}(x^i_0, x^j_1)) ].    (I.1)

This calculation is parallelizable.
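Here is a hedged sketch of this estimator for the trigonometric interpolant, with v_net standing in for the learned velocity network (an assumption on the calling convention). Batching all (time, base, target) triples at once reflects the parallelizable structure noted above; in practice the times can instead be drawn from a Beta distribution, as described below.

```python
import torch

def empirical_objective(v_net, x0, x1, t):
    """Monte Carlo estimate (I.1) of the objective (9) over all (t_k, x0^i, x1^j) triples,
    using the trigonometric interpolant (5).
    x0: [N, d] base samples, x1: [n, d] target samples, t: [K] times in [0, 1]."""
    a = torch.cos(0.5 * torch.pi * t).view(-1, 1, 1, 1)   # [K, 1, 1, 1]
    b = torch.sin(0.5 * torch.pi * t).view(-1, 1, 1, 1)
    da, db = -0.5 * torch.pi * b, 0.5 * torch.pi * a      # time derivatives of a_t, b_t
    d = x0.shape[-1]
    xt = a * x0.view(1, -1, 1, d) + b * x1.view(1, 1, -1, d)    # [K, N, n, d]
    dxt = da * x0.view(1, -1, 1, d) + db * x1.view(1, 1, -1, d)
    K, N, n, _ = xt.shape
    tt = t.view(-1, 1, 1, 1).expand(K, N, n, 1)
    v = v_net(xt.reshape(-1, d), tt.reshape(-1, 1)).reshape(K, N, n, d)
    integrand = v.pow(2).sum(-1) - 2.0 * (dxt * v).sum(-1)      # per-triple integrand of (I.1)
    return integrand.mean()
```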
                              POWER       GAS         HEPMASS     MINIBOONE   BSDS300
  Dimension                   6           8           21          43          63
  # Training points           1,615,917   852,174     315,123     29,556      1,000,000
  Target batch size           800         800         800         800         300
  Base batch size             150         150         150         150         200
  Time batch size             10          10          20          10          20
  Training steps              10^5        10^5        10^5        10^5        10^5
  Learning rate (LR)          0.003       0.003       0.003       0.003       0.002
  LR decay (every 4k epochs)  0.8         0.8         0.8         0.8         0.8
  Hidden layer width          512         512         512         512         1024
  # Hidden layers             4           5           4           4           5
  Inner activation function   ReLU        ReLU        ReLU        ReLU        ELU
  Beta (α, β) time samples    (1.0, 0.5)  (1.0, 0.5)  (1.0, 0.5)  (1.0, 1.0)  (1.0, 0.7)

Table 3: Hyperparameters and architecture for tabular datasets.

The architectural information and hyperparameters of the models for the resulting likelihoods in Table 2 are presented in Table 3. ReLU (Nair & Hinton, 2010) activations were used throughout, barring the BSDS300 dataset, where ELU (Clevert et al., 2016) was used. Table formatting is based on (Durkan et al., 2019). In addition, reweighting of the uniform sampling of time values in the empirical calculation of (I.1) was done using a Beta distribution, under the heuristic that the flow should be well trained near the target. This is in line with the statement under Proposition 1 that any weight w(t) maintains the same minimizer.

The details for the image datasets are provided in Table 4. We built our models off of the U-Net implementation provided by lucidrains' public diffusion code, for which we are grateful: https://github.com/lucidrains/denoising-diffusion-pytorch. We use the sinusoidal time embedding, but otherwise use the default implementation other than changing the U-Net dimension multipliers, which are provided in the table. As in the tabular case, we reweight the time sampling to be from a Beta distribution. All models were implemented on a single A100 GPU.

                                  CIFAR-10      ImageNet 32x32   Flowers
  Dimension                       32 x 32       32 x 32          128 x 128
  # Training points               5 x 10^4      1,281,167        315,123
  Batch size                      400           512              50
  Training steps                  5 x 10^5      6 x 10^5         1.5 x 10^5
  Hidden dim                      256           256              256
  Learning rate (LR)              0.0001        0.0002           0.0002
  LR decay (every 1k epochs)      0.985         0.985            0.985
  U-Net dim mult                  [1,2,2,2,2]   [1,2,2,2]        [1,1,2,3,4]
  Beta (α, β) time samples        (1.0, 0.75)   (1.0, 0.75)      (1.0, 0.75)
  Learned t sinusoidal embedding  Yes           Yes              Yes
  # GPUs                          1             1                1

Table 4: Hyperparameters and architecture for image datasets.

I.1 DETAILS ON COMPUTATIONAL EFFICIENCY AND DEMONSTRATION OF CONVERGENCE GUARANTEE

Below, we show that the results achieved in Section 3 are driven by a model that can train significantly more efficiently than the maximum likelihood approach to ODEs. Following that, we provide an illustration of the convergence requirements on the objective defined in (11).

Figure I.1: Left: training speed for ours vs. MLE ODE flows, with a 400x speedup on MiniBooNE. Right: InterFlow shows more efficient likelihood ascent.

We briefly describe the experimental setup for testing the computational efficiency of our model as compared to the FFJORD maximum likelihood learning method. We use the same network architectures for the interpolant flow and the FFJORD implementation, taking the architectures used in that paper. For the Gaussian case, this is a 3-layer neural network with hidden widths of 64 units; for the 43-dimensional MiniBooNE target, this is a 3-layer neural network with hidden widths of 860 units.
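For concreteness, a minimal sketch of a time-conditioned MLP velocity field of this kind; the default width and depth match the 2-dimensional Gaussian comparison above, and concatenating t to the input is our assumption, since the exact conditioning scheme is not specified here.

```python
import torch

class VelocityMLP(torch.nn.Module):
    """A small time-conditioned MLP velocity field hat v_t(x) of the size used in the
    efficiency comparisons (widths per the setup above and Table 3)."""
    def __init__(self, dim, width=64, depth=3):
        super().__init__()
        layers, in_dim = [], dim + 1           # input is (x, t) concatenated
        for _ in range(depth):
            layers += [torch.nn.Linear(in_dim, width), torch.nn.ReLU()]
            in_dim = width
        layers += [torch.nn.Linear(in_dim, dim)]
        self.net = torch.nn.Sequential(*layers)

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))
```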
Figure I.1 shows a comparison of both the cost per training epoch and the convergence of the log-likelihood across epochs. We take the architecture of the vector field as defined in the FFJORD paper for the 2-dimensional Gaussian and for MiniBooNE, and use it to define the vector field for the interpolant flow. The left side of Figure I.1 shows that the cost per iteration is constant for the interpolant flow, while it grows for MLE-based approaches as the ODE becomes more complicated to solve. The speedup grows with dimension, reaching 400x on MiniBooNE.

The right side of Figure I.1 shows that, under equivalent conditions, the interpolant flow can converge faster in the number of training steps, in addition to being cheaper per step. The extent of this benefit depends on both the hyperparameters and the dataset, so a general statement about convergence speeds is difficult to make. For the sake of this comparison, we averaged over 5 trials for each model and dataset, with the variance shown shaded around the curves.

As described in Section 1, the minimum of G(v̂) is bounded by the square of the path length taken by the map X_t(x). The shifted value of the objective G(v̂) in (11) can be tracked to ensure that the model velocity v̂_t(x) meets the requirement of the objective. The shifted objective must equal 0 if v̂_t(x) is taken to be the minimizer of G(v̂), so we can look for this signature during the training of the interpolant flow. Figure I.2 displays this phenomenon for an interpolant flow trained on the POWER dataset. Here, the shifted loss converges to the target value of 0 and remains there throughout training. This suggests that the dynamics of the stochastic optimization of G(v̂) are dual to the squared path length of the map.

Figure I.2: Demonstration of the convergence diagnostic (the shifted loss, G(v̂) plus the path-length term ∫_0^1 ∫ |v_t(x)|² ρ_t(x) dx dt) on the POWER dataset. This is necessary but not sufficient for convergence. See Section 2 for the definition of G.

Figure I.3: Uncurated samples from the Oxford Flowers 128x128 model.

Figure I.4: Uncurated samples from the ImageNet 32x32 model.

Figure I.5: Uncurated samples from the CIFAR-10 model.