# Linear Optimal Partial Transport Embedding

Yikun Bai¹, Ivan Vladimir Medri¹, Rocio Diaz Martin², Rana Muhammad Shahroz Khan¹, Soheil Kolouri¹

¹Department of Computer Science, Vanderbilt University, 366 Featheringill, Nashville, TN 37240, USA. ²Department of Mathematics, Vanderbilt University, 1326 Stevenson Center, Nashville, TN 37240, USA. Correspondence to: Yikun Bai.

*Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).*

Optimal transport (OT) has gained popularity due to its various applications in fields such as machine learning, statistics, and signal processing. However, the balanced mass requirement limits its performance in practical problems. To address these limitations, variants of the OT problem, including unbalanced OT, optimal partial transport (OPT), and Hellinger-Kantorovich (HK), have been proposed. In this paper, we propose the Linear Optimal Partial Transport (LOPT) embedding, which extends the (local) linearization technique from OT and HK to the OPT problem. The proposed embedding allows for faster computation of the OPT distance between pairs of positive measures. Besides our theoretical contributions, we demonstrate the LOPT embedding technique in point-cloud interpolation and PCA analysis. Our code is available at https://github.com/Baio0/LinearOPT.

## 1. Introduction

The Optimal Transport (OT) problem has found numerous applications in machine learning (ML), computer vision, and graphics. The probability metrics and dissimilarity measures emerging from OT theory, e.g., Wasserstein distances and their variations, are used in diverse applications, including training generative models (Arjovsky et al., 2017; Genevay et al., 2017; Liu et al., 2019), domain adaptation (Courty et al., 2014; 2017), Bayesian inference (Kim et al., 2013), regression (Janati et al., 2019), clustering (Ye et al., 2017), and learning from graphs (Kolouri et al., 2020) and point sets (Naderializadeh et al., 2021; Nguyen et al., 2023), to name a few. These metrics define a powerful geometry for comparing probability measures, with numerous desirable properties, for instance, parameterized geodesics (Ambrosio et al., 2005), barycenters (Cuturi & Doucet, 2014), and a weak Riemannian structure (Villani, 2003).

In large-scale machine learning applications, optimal transport approaches face two main challenges. First, the OT problem is computationally expensive. This has motivated many approximations that lead to significant speedups (Cuturi, 2013; Chizat et al., 2020; Scetbon & Cuturi, 2022). Second, while OT is designed for comparing probability measures, many ML problems require comparing non-negative measures with varying total amounts of mass. This has led to recent advances in unbalanced optimal transport (Chizat et al., 2015; 2018b; Liero et al., 2018) and optimal partial transport (Caffarelli & McCann, 2010; Figalli, 2010; Figalli & Gigli, 2010). Such unbalanced/partial optimal transport formulations have recently been used to improve minibatch optimal transport (Nguyen et al., 2022) and for point-cloud registration (Bai et al., 2022).

Comparing $K$ (probability) measures requires the pairwise calculation of transport-based distances, which, despite the significant recent computational speed-ups, remains relatively expensive.
To address this problem, Wang et al. (2013) proposed the Linear Optimal Transport (LOT) framework, which linearizes the 2-Wasserstein distance utilizing its weak Riemannian structure. In short, the probability measures are embedded into the tangent space at a fixed reference measure (e.g., the measures' Wasserstein barycenter) through a logarithmic map. The Euclidean distances between the embedded measures then approximate the 2-Wasserstein distance between the probability measures. The LOT framework is computationally attractive as it only requires the computation of one optimal transport problem per input measure, reducing the otherwise quadratic cost to linear. Moreover, the framework provides theoretical guarantees on convexifying certain sets of probability measures (Moosmüller & Cloninger, 2023; Aldroubi et al., 2021), which is critical in supervised and unsupervised learning from sets of probability measures. The LOT embedding has recently found diverse applications, from comparing collider events in physics (Cai et al., 2020) and comparing medical images (Basu et al., 2014; Kundu et al., 2018) to permutation-invariant pooling for comparing graphs (Kolouri et al., 2020) and point sets (Naderializadeh et al., 2021).

Many applications in ML involve comparing non-negative measures (often empirical measures) with varying total amounts of mass, e.g., domain adaptation (Fatras et al., 2021). Moreover, OT distances (or dissimilarity measures) are often not robust against outliers and noise, resulting in potentially high transportation costs for outliers. Many recent publications have focused on variants of the OT problem that allow for comparing non-negative measures with unequal mass, for instance, the optimal partial transport (OPT) problem (Caffarelli & McCann, 2010; Figalli, 2010; Figalli & Gigli, 2010), the Kantorovich-Rubinstein norm (Guittet, 2002; Lellmann et al., 2014), and the Hellinger-Kantorovich (HK) distance (Chizat et al., 2018a; Liero et al., 2018). These methods fall under the broad category of unbalanced optimal transport (Chizat et al., 2018b; Liero et al., 2018). The existing solvers for unbalanced optimal transport problems are generally as expensive as or more expensive than the OT solvers. Hence, computation time remains a main bottleneck of these approaches. To reduce the computational burden of comparing unbalanced measures, Cai et al. (2022) proposed a clever extension of the LOT framework to unbalanced non-negative measures by linearizing the Hellinger-Kantorovich distance, denoted as the Linearized Hellinger-Kantorovich (LHK) distance, with many desirable theoretical properties. However, an unintuitive caveat of the HK and LHK formulations is that the geodesic for the transported portion of the mass does not resemble the OT geodesic. In particular, the transported mass does not maintain a constant mass as it is transported (please see Figure 1). In contrast, OPT behaves exactly like OT for the transported mass, with the trade-off of losing the Riemannian structure of HK.

Contributions: In this paper, inspired by OT geodesics, we provide an OPT interpolation technique using its dynamic formulation and explain how to compute it for empirical distributions using barycentric projections. We use this interpolation to embed the space of measures into a Euclidean space using optimal partial transport with respect to a reference measure. This allows us to extend the LOT framework to LOPT, a linearized version of OPT.
Thus, we reduce the computational burden of OPT while maintaining the decoupling properties between noise (created and destroyed mass) and signal (transported mass) of OPT. We propose a LOPT discrepancy measure and a LOPT interpolating curve and contrast them with their OPT counterparts. Finally, we demonstrate applications of the new framework in point cloud interpolation and PCA analysis, showing that the new technique is more robust to noise.

Organization: In Section 2, we review Optimal Transport theory and the Linear Optimal Transport framework to set the basis and intuitions on which we build our new techniques. In Section 3, we review Optimal Partial Transport theory and present an explicit solution to its dynamic formulation, which we use to introduce the Linear Optimal Partial Transport framework (LOPT). We define the LOPT embedding, LOPT discrepancy, and LOPT interpolation, and give explicit ways to work with empirical data. In Section 4, we show applications of the LOPT framework to approximate OPT distances, to interpolate between point cloud datasets, and to preprocess data for PCA analysis. In the appendix, we provide proofs for all the results, new or old, for which we could not find a proof in the literature.

[Figure 1: two rows of panels, "Hellinger Kantorovich" (top) and "Optimal Partial Transport" (bottom); axis-tick residue omitted.]

Figure 1. The depiction of the HK and OPT geodesics between two measures, at times $t \in \{0, 0.25, 0.5, 0.75, 1\}$. The top row (blue) represents two initial deltas of mass one located at positions $-1.2$ and $-1$. The bottom row (purple) shows two final deltas of mass one located at $1$ and $1.2$. At intermediate time steps $t = 0.25, 0.5, 0.75$, the transported part (middle delta moving from $-1$ to $1$) changes mass for HK while its mass remains constant for OPT. Outer masses (located at $-1.2$ for initial time $t = 0$, and at $1.2$ for final time $t = 1$) are being destroyed and created, so mass changes are expected. Notably, mass is created/destroyed at a linear rate for OPT and at a nonlinear rate for HK. See Appendix H.4 for further analysis.

## 2. Background: OT and LOT

### 2.1. Static Formulation of Optimal Transport

Let $P(\Omega)$ be the set of Borel probability measures defined on a convex compact subset $\Omega$ of $\mathbb{R}^d$, and consider $\mu_0, \mu_j \in P(\Omega)$. The Optimal Transport (OT) problem between $\mu_0$ and $\mu_j$ is that of finding the cheapest way to transport all the mass distributed according to the reference measure $\mu_0$ onto a new distribution of mass determined by the target measure $\mu_j$. Mathematically, it was stated by Kantorovich as the minimization problem

$$\mathrm{OT}(\mu_0, \mu_j) := \inf_{\gamma \in \Gamma(\mu_0, \mu_j)} C(\gamma; \mu_0, \mu_j) \tag{1}$$

for

$$C(\gamma; \mu_0, \mu_j) := \int_{\Omega^2} \|x_0 - x_j\|^2 \, d\gamma(x_0, x_j), \tag{2}$$

where $\Gamma(\mu_0, \mu_j)$ is the set of all joint probability measures on $\Omega^2$ with marginals $\mu_0$ and $\mu_j$. A measure $\gamma \in \Gamma(\mu_0, \mu_j)$ is called a transportation plan, and given measurable sets $A, B \subseteq \Omega$, the coupling $\gamma(A \times B)$ describes how much mass originally in the set $A$ is transported into the set $B$. The squared Euclidean distance[^1] $\|x_0 - x_j\|^2$ is interpreted as the cost of transporting a unit mass located at $x_0$ to $x_j$. Therefore, $C(\gamma; \mu_0, \mu_j)$ represents the total cost of moving $\mu_0$ to $\mu_j$ according to $\gamma$. Finally, we will denote the set of all plans that achieve the infimum in (1), which is non-empty (Villani, 2003), as $\Gamma^*(\mu_0, \mu_j)$.

[^1]: More general cost functions might be used, but they are beyond the scope of this article.
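For empirical measures, (1) is a finite linear program. The following is a minimal sketch using the POT library (Flamary et al., 2021), whose LP solver is also the one used in the experiments of Section 4; the toy data and variable names here are illustrative only:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (Flamary et al., 2021)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(5, 2))        # support of the reference mu_0
xj = rng.normal(size=(7, 2)) + 1.0  # support of the target mu_j
p0 = np.full(5, 1 / 5)              # weights of mu_0
pj = np.full(7, 1 / 7)              # weights of mu_j

M = ot.dist(x0, xj)                 # cost matrix ||x_0 - x_j||^2 (squared Euclidean)
gamma = ot.emd(p0, pj, M)           # an optimal plan in Gamma*(mu_0, mu_j)
ot_cost = np.sum(gamma * M)         # OT(mu_0, mu_j), i.e., the squared W_2 distance
```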
Under certain conditions (e.g., when $\mu_0$ has continuous density), an optimal plan $\gamma$ can be induced by a rule/map $T$ that takes all the mass at each position $x$ to a unique point $T(x)$. If that is the case, we say that $\gamma$ does not split mass and that it is induced by a map $T$. In fact, it is concentrated on the graph of $T$ in the sense that for all measurable sets $A, B \subseteq \Omega$, $\gamma(A \times B) = \mu_0(\{x \in A : T(x) \in B\})$, and we will write it as the pushforward $\gamma = (\mathrm{id} \times T)_\# \mu_0$. Hence, (1) reads as

$$\mathrm{OT}(\mu_0, \mu_j) = \int_\Omega \|x - T(x)\|^2 \, d\mu_0(x). \tag{3}$$

The function $T : \Omega \to \Omega$ is called a Monge map, and when $\mu_0$ is absolutely continuous it is unique (Brenier, 1991). Finally, the square root of the optimal value $\mathrm{OT}(\cdot, \cdot)$ is exactly the so-called Wasserstein distance, $W_2$, on $P(\Omega)$ (Villani, 2003, Th. 7.3), and we will also refer to $\mathrm{OT}(\cdot, \cdot)$ itself as the OT squared distance. In addition, with this distance, $P(\Omega)$ is not only a metric space but also a Riemannian manifold (Villani, 2003). In particular, the tangent space at any $\mu \in P(\Omega)$ is

$$T_\mu = L^2(\Omega; \mathbb{R}^d, \mu) = \{u : \Omega \to \mathbb{R}^d : \|u\|^2_\mu < \infty\}, \quad \text{where } \|u\|^2_\mu := \int_\Omega \|u(x)\|^2 \, d\mu(x). \tag{4}$$

### 2.2. Dynamic Formulation of Optimal Transport

To understand the framework of Linear Optimal Transport (LOT), we will use the dynamic formulation of the OT problem. Optimal plans and maps can be viewed as a static way of matching two distributions. They tell us where each mass in the initial distribution should end, but they do not tell the full story of how the system evolves from the initial to the final configuration. In the dynamic formulation, we consider $\rho \in P([0,1] \times \Omega)$, a curve of measures parametrized in time that describes the distribution of mass $\rho_t := \rho(t, \cdot) \in P(\Omega)$ at each instant $0 \le t \le 1$. We will require the curve to be sufficiently smooth, to have boundary conditions $\rho_0 = \mu_0$, $\rho_1 = \mu_j$, and to satisfy the conservation of mass law. Then, it is well known that there exists a velocity vector field $v_t := v(t, \cdot)$ such that $\rho_t$ satisfies the continuity equation[^2] with boundary conditions

$$\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad \rho_0 = \mu_0, \quad \rho_1 = \mu_j. \tag{5}$$

The length[^3] of the curve can be stated as $\int_{[0,1] \times \Omega} \|v\|^2 \, d\rho := \int_0^1 \|v_t\|^2_{\rho_t} \, dt$, with $\|\cdot\|_{\rho_t}$ as in (4), and $\mathrm{OT}(\mu_0, \mu_j)$ coincides with the length of the shortest curve between $\mu_0$ and $\mu_j$ (Benamou & Brenier, 2000). Hence, the dynamic formulation of the OT problem (1) reads as

$$\mathrm{OT}(\mu_0, \mu_j) = \inf_{(\rho, v) \in \mathcal{CE}(\mu_0, \mu_j)} \int_{[0,1] \times \Omega} \|v\|^2 \, d\rho, \tag{6}$$

where $\mathcal{CE}(\mu_0, \mu_j)$ is the set of pairs $(\rho, v)$, with $\rho \in P([0,1] \times \Omega)$ and $v : [0,1] \times \Omega \to \mathbb{R}^d$, satisfying (5). Under the assumption of existence of an optimal Monge map $T$, an optimal solution for (6) can be given explicitly and is quite intuitive. If a particle starts at position $x$ and finishes at position $T(x)$, then for $0 < t < 1$ it will be at the point

$$T_t(x) := (1-t)x + tT(x). \tag{7}$$

Then, varying both the time $t$ and $x \in \Omega$, the mapping (7) can be interpreted as a flow whose velocity[^4] is

$$v_t(x) = T(x_0) - x_0, \qquad \text{for } x = T_t(x_0). \tag{8}$$

To obtain the curve of probability measures $\rho_t$, one can evolve $\mu_0$ through the flow $T_t$ using the formula $\rho_t(A) = \mu_0(T_t^{-1}(A))$ for any measurable set $A$. That is, $\rho_t$ is the push-forward of $\mu_0$ by $T_t$,

$$\rho_t = (T_t)_\# \mu_0, \qquad 0 \le t \le 1. \tag{9}$$

The pair $(\rho, v)$ defined by (9) and (8) satisfies the continuity equation (5) and solves (6). Moreover, the curve $\rho_t$ is a constant-speed geodesic in $P(\Omega)$ between $\mu_0$ and $\mu_j$ (Figalli & Glaudo, 2021), i.e., it satisfies that for all $0 \le s \le t \le 1$,

$$\sqrt{\mathrm{OT}(\rho_s, \rho_t)} = (t-s)\sqrt{\mathrm{OT}(\rho_0, \rho_1)}. \tag{10}$$

[^2]: The continuity equation is satisfied weakly or in the sense of distributions. See (Villani, 2003; Santambrogio, 2015).
[^3]: Precisely, the length of the curve $\rho$, with respect to the Wasserstein distance, should be $\int_0^1 \|v_t\|_{\rho_t} \, dt$, but this makes no difference in the solutions of (6) since they are constant-speed geodesics.
[^4]: For each $(t, x) \in (0,1) \times \Omega$, the vector $v_t(x)$ is well defined as $T_t$ is invertible. See (Santambrogio, 2015, Lemma 5.29).
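In the discrete setting, (7)-(9) amount to linearly interpolating each particle toward its image under the Monge map. A hedged sketch, continuing the conventions of the previous snippet (the barycentric averaging used here to extract a map from a plan is formalized in (16) below):

```python
import numpy as np
import ot

rng = np.random.default_rng(1)
x0 = rng.normal(size=(6, 2))            # mu_0, uniform weights
xj = rng.normal(size=(6, 2)) + 2.0      # mu_j, uniform weights
p = np.full(6, 1 / 6)

gamma = ot.emd(p, p, ot.dist(x0, xj))   # optimal plan
T = gamma @ xj / p[:, None]             # barycentric map x0_n -> T(x0_n)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Support of rho_t = (T_t)# mu_0, cf. Eqs. (7) and (9); weights stay p.
    rho_t_support = (1 - t) * x0 + t * T
```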
A confirmation of this comes from comparing the OT cost (3) with (8), obtaining

$$\mathrm{OT}(\mu_0, \mu_j) = \int_\Omega \|v_0(x)\|^2 \, d\mu_0(x), \tag{11}$$

which tells us that we only need the speed at the initial time to compute the total length of the curve. Moreover, $\mathrm{OT}(\mu_0, \mu_j)$ coincides with the squared norm of the tangent vector $v_0$ in the tangent space $T_{\mu_0}$ of $P(\Omega)$ at $\mu_0$.

### 2.3. Linear Optimal Transport Embedding

Inspired by the induced Riemannian geometry of the OT squared distance, Wang et al. (2013) proposed the so-called Linear Optimal Transportation (LOT) framework. Given two target measures $\mu_i, \mu_j$, the main idea relies on considering a reference measure $\mu_0$ and embedding these target measures into the tangent space $T_{\mu_0}$. This is done by identifying each measure $\mu_j$ with the curve (9) minimizing $\mathrm{OT}(\mu_0, \mu_j)$ and computing its velocity (tangent vector) at $t = 0$ using (8). Formally, let us fix a continuous probability reference measure $\mu_0$. Then, the LOT embedding (Moosmüller & Cloninger, 2023) is defined as

$$\mu_j \mapsto u_j := T^j - \mathrm{id}, \qquad \mu_j \in P(\Omega), \tag{12}$$

where $T^j$ is the optimal Monge map between $\mu_0$ and $\mu_j$. Notice that by (3), (4) and (11) we have

$$\|u_j\|^2_{\mu_0} = \mathrm{OT}(\mu_0, \mu_j). \tag{13}$$

After this embedding, one can use the distance in $T_{\mu_0}$ between the projected measures to define a new distance in $P(\Omega)$ that can be used to approximate $\mathrm{OT}(\mu_i, \mu_j)$. The LOT squared distance is defined as

$$\mathrm{LOT}_{\mu_0}(\mu_i, \mu_j) := \|u_i - u_j\|^2_{\mu_0}. \tag{14}$$

### 2.4. LOT in the Discrete Setting

For discrete probability measures $\mu_0, \mu_j$ of the form

$$\mu_0 = \sum_{n=1}^{N_0} p^0_n \delta_{x^0_n}, \qquad \mu_j = \sum_{n=1}^{N_j} p^j_n \delta_{x^j_n}, \tag{15}$$

a Monge map $T^j$ for $\mathrm{OT}(\mu_0, \mu_j)$ may not exist. Following Wang et al. (2013), in this setting, the target measure $\mu_j$ can be replaced by a new measure $\hat\mu_j$ for which an optimal transport Monge map exists. For that, given an optimal plan $\gamma^j \in \Gamma^*(\mu_0, \mu_j)$, it can be viewed as an $N_0 \times N_j$ matrix whose value at position $(n, m)$ represents how much mass from $x^0_n$ should be taken to $x^j_m$. Then, we define the OT barycentric projection[^5] of $\mu_j$ with respect to $\mu_0$ as

$$\hat\mu_j := \sum_{n=1}^{N_0} p^0_n \delta_{\hat x^j_n}, \qquad \text{where } \hat x^j_n := \frac{1}{p^0_n} \sum_{m=1}^{N_j} \gamma^j_{n,m} x^j_m. \tag{16}$$

The new measure $\hat\mu_j$ is regarded as an $N_0$-point representation of the target measure $\mu_j$. The following lemma guarantees the existence of a Monge map between $\mu_0$ and $\hat\mu_j$.

Lemma 2.1. Let $\mu_0$ and $\mu_j$ be two discrete probability measures as in (15), and consider an OT barycentric projection $\hat\mu_j$ of $\mu_j$ with respect to $\mu_0$ as in (16). Then, the map $x^0_n \mapsto \hat x^j_n$ given by (16) solves the OT problem $\mathrm{OT}(\mu_0, \hat\mu_j)$.

It is easy to show that if the optimal transport plan $\gamma^j$ is induced by a Monge map, then $\hat\mu_j = \mu_j$. As a consequence, the OT barycentric projection is an actual projection in the sense that it is idempotent. Similar to the continuous case (12), given a discrete reference measure $\mu_0$, we can define the LOT embedding for a discrete measure $\mu_j$ as the rule[^6]

$$\mu_j \mapsto u_j := [(\hat x^j_1 - x^0_1), \ldots, (\hat x^j_{N_0} - x^0_{N_0})]. \tag{17}$$

The range $T_{\mu_0}$ of this map is identified with $\mathbb{R}^{d \times N_0}$ with the norm $\|u\|^2_{\mu_0} := \sum_{n=1}^{N_0} \|u(n)\|^2 p^0_n$, where $u(n) \in \mathbb{R}^d$ denotes the $n$th entry of $u$. We call $(\mathbb{R}^{d \times N_0}, \|\cdot\|_{\mu_0})$ the embedding space.

[^5]: We refer to (Ambrosio et al., 2005) for the rigorous definition.
[^6]: In (Wang et al., 2013), the embedded vector is defined with an element-wise multiplication by $\sqrt{p^0}$. In addition, $\hat\mu_j, u_j$ are determined by $\gamma^j$, but we omit the subscript $\gamma^j$ for convenience.
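The barycentric projection (16) and the embedding (17) are one matrix-vector product each. A minimal sketch (function names are ours; the discrepancy this feeds is (19) below):

```python
import numpy as np
import ot

def lot_embed(x0, p0, xj, pj):
    """LOT embedding (17) of (xj, pj) w.r.t. the reference (x0, p0)."""
    gamma = ot.emd(p0, pj, ot.dist(x0, xj))  # optimal plan gamma^j
    xhat = gamma @ xj / p0[:, None]          # barycentric projection (16)
    return xhat - x0                         # u_j, an element of R^{N_0 x d}

def lot_norm2(u, p0):
    """Squared embedding-space norm ||u||^2_{mu_0}."""
    return np.sum(p0 * np.sum(u ** 2, axis=1))
```

With $K$ input measures, $K$ calls to `lot_embed` replace the $K(K-1)/2$ pairwise OT solves; pairwise comparisons then reduce to weighted Euclidean norms of the differences $u_i - u_j$.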
By the discussion above, if the optimal plan $\gamma^j$ for the problem $\mathrm{OT}(\mu_0, \mu_j)$ is induced by a Monge map, then the discrete embedding is consistent with (13) in the sense that

$$\|u_j\|^2_{\mu_0} = \mathrm{OT}(\mu_0, \hat\mu_j) = \mathrm{OT}(\mu_0, \mu_j). \tag{18}$$

Hence, as in Section 2.3, we can use the distance between embedded measures in $(\mathbb{R}^{d \times N_0}, \|\cdot\|_{\mu_0})$ to define a discrepancy in the space of discrete probabilities that can be used to approximate $\mathrm{OT}(\mu_i, \mu_j)$. The LOT discrepancy[^7] is defined as

$$\mathrm{LOT}_{\mu_0}(\mu_i, \mu_j) := \|u_i - u_j\|^2_{\mu_0}. \tag{19}$$

We call it a discrepancy because it is not a squared metric between discrete measures: it can vanish for distinct $\mu_i \ne \mu_j$. Nevertheless, $\|u_i - u_j\|_{\mu_0}$ is a metric in the embedding space.

[^7]: In (Wang et al., 2013), LOT is defined by the infimum over all possible optimal pairs $(\gamma^i, \gamma^j)$. We do not distinguish these two formulations in this paper. Additionally, (19) is determined by the choice of $(\gamma^i, \gamma^j)$.

### 2.5. OT and LOT Geodesics in Discrete Settings

Let $\mu_i, \mu_j$ be discrete probability measures as in (15) (with $i$ in place of $0$). If an optimal Monge map $T$ for $\mathrm{OT}(\mu_i, \mu_j)$ exists, a constant-speed geodesic $\rho_t$ between $\mu_i$ and $\mu_j$, for the OT squared distance, can be found by mimicking (9). Explicitly, with $T_t$ as in (7),

$$\rho_t = (T_t)_\# \mu_i = \sum_{n=1}^{N_i} p^i_n \delta_{(1-t)x^i_n + tT(x^i_n)}. \tag{20}$$

In practice, one replaces $\mu_j$ by its OT barycentric projection with respect to $\mu_i$ (and so the existence of an optimal Monge map is guaranteed by Lemma 2.1).

Now, given a discrete reference $\mu_0$, the LOT discrepancy provides a new structure on the space of discrete probability densities. Therefore, we can provide a substitute for the OT geodesic (20) between $\mu_i$ and $\mu_j$. Assume we have the embeddings $\mu_i \mapsto u_i$, $\mu_j \mapsto u_j$ as in (17). The geodesic between $u_i$ and $u_j$ in the LOT embedding space $\mathbb{R}^{d \times N_0}$ has the simple form $u_t = (1-t)u_i + tu_j$. This corresponds to the curve $\hat\rho_t$ in $P(\Omega)$ induced by the map[^8] $\hat T : \hat x^i_n \mapsto \hat x^j_n$:

$$\hat\rho_t := (\hat T_t)_\# \hat\mu_i = \sum_{n=1}^{N_0} p^0_n \delta_{x^0_n + u_t(n)}. \tag{21}$$

By abuse of notation, we call this curve the LOT geodesic between $\mu_i$ and $\mu_j$. Nevertheless, it is a geodesic between their barycentric projections since it satisfies the following.

Proposition 2.2. Let $\hat\rho_t$ be defined as in (21), with $\hat\rho_0 = \hat\mu_i$, $\hat\rho_1 = \hat\mu_j$. Then for all $0 \le s \le t \le 1$,

$$\sqrt{\mathrm{LOT}_{\mu_0}(\hat\rho_s, \hat\rho_t)} = (t-s)\sqrt{\mathrm{LOT}_{\mu_0}(\hat\rho_0, \hat\rho_1)}. \tag{22}$$

[^8]: This map can be understood as the one that transports $\hat\mu_i$ onto $\hat\mu_j$ pivoting on the reference: $\hat\mu_i \mapsto \mu_0 \mapsto \hat\mu_j$.

## 3. Linear Optimal Partial Transport Embedding

### 3.1. Static Formulation of Optimal Partial Transport

In addition to mass transportation, the OPT problem allows mass destruction at the source and mass creation at the target. Let $M_+(\Omega)$ denote the set of all positive finite Borel measures defined on $\Omega$. For $\lambda \ge 0$, the OPT problem between $\mu_0, \mu_j \in M_+(\Omega)$ can be formulated as

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) := \inf_{\gamma \in \Gamma_\le(\mu_0, \mu_j)} C(\gamma; \mu_0, \mu_j, \lambda) \tag{23}$$

for

$$C(\gamma; \mu_0, \mu_j, \lambda) := \int_{\Omega^2} \|x_0 - x_j\|^2 \, d\gamma(x_0, x_j) + \lambda(|\mu_0 - \gamma_0| + |\mu_j - \gamma_1|), \tag{24}$$

where $|\mu_0 - \gamma_0|$ is the total mass of $\mu_0 - \gamma_0$ (resp. $|\mu_j - \gamma_1|$), and $\Gamma_\le(\mu_0, \mu_j)$ denotes the set of all measures on $\Omega^2$ whose marginals $\gamma_0$ and $\gamma_1$ satisfy $\gamma_0 \le \mu_0$ (i.e., $\gamma_0(E) \le \mu_0(E)$ for every measurable set $E$) and $\gamma_1 \le \mu_j$. Here, the mass destruction and creation penalty is linear, parametrized by $\lambda$. The set of minimizers $\Gamma^*_\le(\mu_0, \mu_j)$ of (23) is non-empty (Figalli, 2010). One can further restrict $\Gamma^*_\le(\mu_0, \mu_j)$ to the set of partial transport plans $\gamma$ such that $\|x_0 - x_j\|^2 < 2\lambda$ for all $(x_0, x_j) \in \mathrm{supp}(\gamma)$ (Bai et al., 2022, Lemma 3.2). This means that if the usual transportation cost is greater than $2\lambda$, it is better to create/destroy mass.
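For discrete measures, (23) can be reduced to a balanced OT problem by augmenting each side with a dummy point that absorbs destroyed mass and supplies created mass at cost $\lambda$; this is one way to realize the classical conversion of OPT into OT mentioned in Section 4 (cf. Caffarelli & McCann, 2010). A hedged sketch, not the paper's exact implementation:

```python
import numpy as np
import ot

def opt_lambda(x0, p0, xj, pj, lam):
    """lambda-penalized OPT (23)-(24) via one balanced OT solve.

    Routing a point to/from the dummy costs lam each way, so any pairing
    more expensive than 2*lam is implicitly replaced by destroy + create,
    matching the truncation property of (Bai et al., 2022, Lemma 3.2).
    """
    M = ot.dist(x0, xj)                 # squared Euclidean costs
    n0, nj = len(p0), len(pj)
    C = np.zeros((n0 + 1, nj + 1))
    C[:n0, :nj] = M
    C[:n0, nj] = lam                    # destruction cost (real -> dummy)
    C[n0, :nj] = lam                    # creation cost (dummy -> real); dummy -> dummy is free
    a = np.append(p0, pj.sum())         # dummy source supplies the created mass
    b = np.append(pj, p0.sum())         # dummy sink absorbs the destroyed mass
    plan = ot.emd(a, b, C)
    return np.sum(plan * C), plan[:n0, :nj]   # OPT_lambda value, partial plan gamma
```

POT also ships a partial-transport module (`ot.partial`) for the mass-constrained variant of the problem; the reduction above targets the $\lambda$-penalized formulation (23) directly.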
### 3.2. Dynamic Formulation of Optimal Partial Transport

Adding a forcing term $\zeta$ to the continuity equation (5), one can take into account curves that allow creation and destruction of mass, that is, those which break the conservation of mass law. Thus, it is natural that the minimization problem (23) can be rewritten (Chizat et al., 2018b, Th. 5.2) in a dynamic formulation as

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) = \inf_{(\rho, v, \zeta) \in \mathcal{FCE}(\mu_0, \mu_j)} \int_{[0,1] \times \Omega} \|v\|^2 \, d\rho + \lambda|\zeta|, \tag{25}$$

where $\mathcal{FCE}(\mu_0, \mu_j)$ is the set of tuples $(\rho, v, \zeta)$ such that $\rho \in M_+([0,1] \times \Omega)$, $\zeta \in M([0,1] \times \Omega)$ (where $M$ stands for signed measures), and $v : [0,1] \times \Omega \to \mathbb{R}^d$, satisfying

$$\partial_t \rho + \nabla \cdot (\rho v) = \zeta, \qquad \rho_0 = \mu_0, \quad \rho_1 = \mu_j. \tag{26}$$

As in the case of OT, under certain conditions on the minimizers $\gamma^*$ of (23), one curve $\rho_t$ that minimizes the dynamic formulation (25) is quite intuitive. We show in the next proposition that it consists of three parts, $\gamma_t$, $(1-t)\nu_0$ and $t\nu_j$ (see (27), (28), and (29) below). The first is a curve that only transports mass, and the second and third destroy and create mass at constant rates $|\nu_0|, |\nu_j|$, respectively.

Proposition 3.1. Let $\gamma^* \in \Gamma^*_\le(\mu_0, \mu_j)$ be of the form $\gamma^* = (\mathrm{id} \times T)_\# \gamma^*_0$ for a (measurable) map $T : \Omega \to \Omega$. Let

$$\nu_0 := \mu_0 - \gamma^*_0, \qquad \nu_j := \mu_j - \gamma^*_1, \tag{27}$$
$$T_t(x) := (1-t)x + tT(x), \qquad \gamma_t := (T_t)_\# \gamma^*_0. \tag{28}$$

Then, an optimal solution $(\rho, v, \zeta)$ for (25) is given by

$$\rho_t := \gamma_t + (1-t)\nu_0 + t\nu_j, \tag{29}$$
$$v_t(x) := T(x_0) - x_0, \quad \text{if } x = T_t(x_0), \tag{30}$$
$$\zeta_t := \nu_j - \nu_0. \tag{31}$$

Moreover, plugging $(\rho, v, \zeta)$ into (25), it holds that

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) = \|v_0\|^2_{\gamma^*_0, 2\lambda} + \lambda(|\nu_0| + |\nu_j|), \tag{32}$$

where $v_0(x) = T(x) - x$ (i.e., $v_t$ at time $t = 0$), and

$$\|v\|^2_{\mu, 2\lambda} := \int_\Omega \min(\|v\|^2, 2\lambda) \, d\mu, \qquad \text{for } v : \Omega \to \mathbb{R}^d.$$

In analogy to the OT squared distance, we also call the optimal partial cost (32) the OPT squared distance.

### 3.3. Linear Optimal Partial Transport Embedding

Definition 3.2. Let $\mu_0, \mu_j \in M_+(\Omega)$ be such that $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$ is solved by a plan induced by a map. The LOPT embedding of $\mu_j$ with respect to $\mu_0$ is defined as

$$\mu_j \mapsto (u_j, \bar\mu_j, \nu_j) := (v_0, \gamma^*_0, \nu_j), \tag{33}$$

where $v_0, \gamma^*_0, \nu_j$ are defined as in Proposition 3.1.

Let us compare the LOPT (33) and LOT (12) embeddings. The first component $v_0$ represents the tangent of the curve that transports mass from the reference to the target. This is exactly the same as the LOT embedding. In contrast to LOT, the second component $\bar\mu_j = \gamma^*_0$ is necessary since we need to specify what part of the reference is being transported. The third component $\nu_j$ can be thought of as the tangent vector of the part that creates mass. There is no need to store the destroyed mass because it can be inferred from the other quantities.

Now, let $\bar\mu_0 \wedge \bar\mu_j$ be the minimum measure[^9] between $\bar\mu_0$ and $\bar\mu_j$. By the above definition, $\mu_0 \mapsto (u_0, \bar\mu_0, \nu_0) = (0, \mu_0, 0)$. Therefore, (32) can be rewritten as

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) = \|u_0 - u_j\|^2_{\bar\mu_0 \wedge \bar\mu_j, 2\lambda} + \lambda(|\bar\mu_0 - \bar\mu_j| + |\nu_0 - \nu_j|). \tag{34}$$

This motivates the definition of the LOPT discrepancy.[^10]

Definition 3.3. Consider a reference $\mu_0 \in M_+(\Omega)$ and target measures $\mu_i, \mu_j \in M_+(\Omega)$ such that $\mathrm{OPT}_\lambda(\mu_0, \mu_i)$ and $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$ can be solved by plans induced by maps as in the hypothesis of Proposition 3.1. Let $(u_i, \bar\mu_i, \nu_i)$ and $(u_j, \bar\mu_j, \nu_j)$ be the LOPT embeddings of $\mu_i$ and $\mu_j$ with respect to $\mu_0$. The LOPT discrepancy between $\mu_i$ and $\mu_j$ with respect to $\mu_0$ is defined as

$$\mathrm{LOPT}_{\mu_0, \lambda}(\mu_i, \mu_j) := \|u_i - u_j\|^2_{\bar\mu_i \wedge \bar\mu_j, 2\lambda} + \lambda(|\bar\mu_i - \bar\mu_j| + |\nu_i - \nu_j|). \tag{35}$$

Similar to the LOT framework, by equation (34), LOPT can recover OPT when $\mu_i = \mu_0$; that is, $\mathrm{LOPT}_{\mu_0, \lambda}(\mu_0, \mu_j) = \mathrm{OPT}_\lambda(\mu_0, \mu_j)$.

[^9]: Formally, $(\mu \wedge \nu)(B) := \inf\{\mu(B_1) + \nu(B_2)\}$ for every Borel set $B$, where the infimum is taken over all partitions of $B$, i.e., $B = B_1 \cup B_2$, $B_1 \cap B_2 = \emptyset$, given by Borel sets $B_1, B_2$.
[^10]: $\mathrm{LOPT}_\lambda$ is not a rigorous metric.
### 3.4. LOPT in the Discrete Setting

If $\mu_0, \mu_j$ are discrete non-negative measures of sizes $N_0, N_j$ as in (15) (but not necessarily with total mass 1), the OPT problem (23) can be written as

$$\min_{\gamma \in \Gamma_\le(\mu_0, \mu_j)} \sum_{n,m} \|x^0_n - x^j_m\|^2 \gamma_{n,m} + \lambda(|p^0| + |p^j| - 2|\gamma|),$$

where the set $\Gamma_\le(\mu_0, \mu_j)$ can be viewed as the subset of $N_0 \times N_j$ matrices with non-negative entries

$$\Gamma_\le(\mu_0, \mu_j) := \{\gamma \in \mathbb{R}^{N_0 \times N_j}_+ : \gamma 1_{N_j} \le p^0, \ \gamma^T 1_{N_0} \le p^j\},$$

where $1_{N_0}$ denotes the $N_0 \times 1$ vector whose entries are all 1 (resp. $1_{N_j}$), $p^0 = [p^0_1, \ldots, p^0_{N_0}]$ is the vector of weights of $\mu_0$ (resp. $p^j$), $\gamma 1_{N_j} \le p^0$ means that the inequality holds component-wise (resp. $\gamma^T 1_{N_0} \le p^j$, where $\gamma^T$ is the transpose of $\gamma$), and $|p^0| = \sum_{n=1}^{N_0} |p^0_n|$ is the total mass of $\mu_0$ (resp. $|p^j|, |\gamma|$). The marginals are $\gamma_0 := \gamma 1_{N_j}$ and $\gamma_1 := \gamma^T 1_{N_0}$.

Similar to OT, when an optimal plan $\gamma^j$ for $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$ is not induced by a map, we can replace the target measure $\mu_j$ by an OPT barycentric projection $\hat\mu_j$ for which a map exists, therefore allowing us to apply the LOPT embedding (see (33) and (40) below).

Definition 3.4. Let $\mu_0$ and $\mu_j$ be positive discrete measures, and $\gamma^j \in \Gamma^*_\le(\mu_0, \mu_j)$. The OPT barycentric projection[^11] of $\mu_j$ with respect to $\mu_0$ is defined as

$$\hat\mu_j := \sum_{n=1}^{N_0} \hat p^j_n \delta_{\hat x^j_n}, \quad \text{where} \tag{36}$$

$$\hat p^j_n := \sum_{m=1}^{N_j} \gamma^j_{n,m}, \qquad 1 \le n \le N_0, \tag{37}$$

$$\hat x^j_n := \begin{cases} \dfrac{1}{\hat p^j_n} \sum_{m=1}^{N_j} \gamma^j_{n,m} x^j_m & \text{if } \hat p^j_n > 0, \\ x^0_n & \text{if } \hat p^j_n = 0. \end{cases} \tag{38}$$

Theorem 3.5. In the same setting as Definition 3.4, the map $x^0_n \mapsto \hat x^j_n$ given by (38) solves the problem $\mathrm{OPT}_\lambda(\mu_0, \hat\mu_j)$, in the sense that it induces the optimal partial plan $\hat\gamma^j = \mathrm{diag}(\hat p^j_1, \ldots, \hat p^j_{N_0})$.

It is worth noting that when we take a barycentric projection of a measure, some information is lost; specifically, the information about the part of $\mu_j$ that is not transported from the reference $\mu_0$. This has some minor consequences. First, unlike (18), the optimal partial transport cost $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$ changes when we replace $\mu_j$ by $\hat\mu_j$. Nevertheless, the following relation holds.

Theorem 3.6. In the same setting as Definition 3.4, if $\gamma^j$ is induced by a map, then

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) = \mathrm{OPT}_\lambda(\mu_0, \hat\mu_j) + \lambda(|\mu_j| - |\hat\mu_j|). \tag{39}$$

The second consequence[^12] is that the LOPT embedding of $\hat\mu_j$ will always have a null third component. That is,

$$\hat\mu_j \mapsto \Big([\hat x^j_1 - x^0_1, \ldots, \hat x^j_{N_0} - x^0_{N_0}], \ \sum_{n=1}^{N_0} \hat p^j_n \delta_{x^0_n}, \ 0\Big). \tag{40}$$

Therefore, we represent this embedding as $\hat\mu_j \mapsto (u_j, \hat p^j)$, for $u_j = [\hat x^j_1 - x^0_1, \ldots, \hat x^j_{N_0} - x^0_{N_0}]$ and $\hat p^j = [\hat p^j_1, \ldots, \hat p^j_{N_0}]$. The last consequence is given in the next result.

Proposition 3.7. If $\mu_0, \mu_i, \mu_j$ are discrete and satisfy the conditions of Definition 3.3, then

$$\mathrm{LOPT}_{\mu_0, \lambda}(\mu_i, \mu_j) = \mathrm{LOPT}_{\mu_0, \lambda}(\hat\mu_i, \hat\mu_j) + \lambda C_{i,j}, \tag{41}$$

where $C_{i,j} = |\mu_i| - |\hat\mu_i| + |\mu_j| - |\hat\mu_j|$.

As a byproduct, we can define the LOPT discrepancy for any pair of discrete measures $\mu_i, \mu_j$ as the right-hand side of (41). In practice, unless the goal is to approximate $\mathrm{OPT}_\lambda(\mu_i, \mu_j)$, we set $C_{i,j} = 0$ in (41). That is,

$$\mathrm{LOPT}_{\mu_0, \lambda}(\mu_i, \mu_j) := \mathrm{LOPT}_{\mu_0, \lambda}(\hat\mu_i, \hat\mu_j). \tag{42}$$

[^11]: Notice that in (16) we had $p^0_n = \sum_{m=1}^{N_j} \gamma^j_{n,m}$. This leads to introducing $\hat p^j_n$ as in (37). That is, $\hat p^j_n$ plays the role of $p^0_n$ in the OPT framework. However, here $\hat p^j_n$ depends on $\gamma^j$ (on its first marginal $\gamma^j_0$) and not only on $\mu_0$, and so we add a superscript $j$.
[^12]: This is indeed an advantage since it allows the range of the embedding to always have the same dimension $N_0(d+1)$.
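In code, the OPT barycentric projection (36)-(38), the embedding (40), and the discrepancy (42) take only a few lines. A sketch reusing `opt_lambda` from the snippet in Section 3.1 (function names are ours; the discrepancy follows Definition 3.3 restricted to barycentric projections, whose third components are null):

```python
import numpy as np

def lopt_embed(x0, p0, xj, pj, lam):
    """LOPT embedding (40) of (xj, pj) w.r.t. the reference (x0, p0)."""
    _, gamma = opt_lambda(x0, p0, xj, pj, lam)   # optimal partial plan gamma^j
    phat = gamma.sum(axis=1)                     # hat p^j, Eq. (37)
    safe = np.maximum(phat, 1e-16)[:, None]      # avoid 0/0 on untouched rows
    xhat = np.where(phat[:, None] > 0,
                    gamma @ xj / safe,           # transported rows, Eq. (38)
                    x0)                          # rows with no mass stay at x^0_n
    return xhat - x0, phat                       # (u_j, hat p^j)

def lopt_discrepancy(ui, pi, uj, pj, lam):
    """LOPT discrepancy (42) between two embedded measures."""
    pmin = np.minimum(pi, pj)                    # weights of hat mu_i ^ hat mu_j
    moved = np.minimum(np.sum((ui - uj) ** 2, axis=1), 2 * lam)
    return np.sum(pmin * moved) + lam * np.sum(np.abs(pi - pj))
```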
### 3.5. OPT and LOPT Interpolation

Inspired by the OT and LOT geodesics defined in Section 2.5, but lacking the Riemannian structure provided by the OT squared distance, we propose an OPT interpolation curve and its LOPT approximation. For the OPT interpolation between two measures $\mu_i, \mu_j$ for which there exists $\gamma^* \in \Gamma^*_\le(\mu_i, \mu_j)$ of the form $\gamma^* = (\mathrm{id} \times T)_\# \gamma^*_0$, a natural candidate is the solution $\rho_t$ of the dynamic formulation of $\mathrm{OPT}_\lambda(\mu_i, \mu_j)$, whose exact expression is given by Proposition 3.1. When working with general discrete measures $\mu_i, \mu_j$ (as in (15), with $i$ in place of $0$), such a $\gamma^*$ is not guaranteed to exist. Then, we replace $\mu_j$ with its OPT barycentric projection with respect to $\mu_i$. By Theorem 3.5, the map $T : x^i_n \mapsto \hat x^j_n$ solves $\mathrm{OPT}_\lambda(\mu_i, \hat\mu_j)$, and the OPT interpolating curve is[^13]

$$\rho_t = \sum_{n=1}^{N_i} \hat p^j_n \delta_{(1-t)x^i_n + tT(x^i_n)} + (1-t) \sum_{n=1}^{N_i} (p^i_n - \hat p^j_n) \delta_{x^i_n}.$$

When working with a multitude of measures, it is convenient to consider a reference $\mu_0$ and embed the measures in $\mathbb{R}^{(d+1) \times N_0}$ using LOPT, hence doing computations in a simpler space. Below we provide the LOPT interpolation.

Definition 3.8. Given discrete measures $\mu_0, \mu_i, \mu_j$, with $\mu_0$ as the reference, let $(u_i, \hat p^i), (u_j, \hat p^j)$ be the LOPT embeddings of $\mu_i, \mu_j$. Let $\hat p^{ij} := \hat p^i \wedge \hat p^j$ and $u_t := (1-t)u_i + tu_j$. We define the LOPT interpolating curve between $\mu_i$ and $\mu_j$ as

$$\hat\rho_t := \sum_{k \in D_T} \hat p^{ij}_k \delta_{x^0_k + u_t(k)} + (1-t) \sum_{k \in D_D} (\hat p^i_k - \hat p^{ij}_k) \delta_{x^0_k + u_i(k)} + t \sum_{k \in D_C} (\hat p^j_k - \hat p^{ij}_k) \delta_{x^0_k + u_j(k)},$$

where $D_T = \{k : \hat p^{ij}_k > 0\}$, $D_D = \{k : \hat p^i_k > \hat p^{ij}_k\}$ and $D_C = \{k : \hat p^j_k > \hat p^{ij}_k\}$ are, respectively, the sets where we transport, destroy, and create mass.

[^13]: $\hat p^j_n$ are the coefficients of $\hat\mu_j$ with respect to $\mu_i$, analogous to (36).
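Definition 3.8 can be evaluated directly from the embeddings: all three sums live on the (shifted) support of the reference. A sketch under the same conventions and hypothetical names as the previous snippets:

```python
import numpy as np

def lopt_interpolate(x0, ui, pi, uj, pj, t):
    """Support and weights of the LOPT interpolating curve (Def. 3.8) at time t."""
    pij = np.minimum(pi, pj)                 # hat p^{ij}: transported mass
    ut = (1 - t) * ui + t * uj               # u_t = (1-t) u_i + t u_j
    xs = np.concatenate([x0 + ut,            # transported atoms (D_T)
                         x0 + ui,            # destroyed atoms   (D_D)
                         x0 + uj])           # created atoms     (D_C)
    ws = np.concatenate([pij,
                         (1 - t) * (pi - pij),
                         t * (pj - pij)])
    keep = ws > 0                            # drop zero-mass atoms
    return xs[keep], ws[keep]
```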
[Figure 2: panels (a) mean of the error and (b) median of the error.]

Figure 2. Graphs of the mean and median relative errors between $\mathrm{OPT}_\lambda$ and $\mathrm{LOPT}_{\mu_0, \lambda}$ as a function of the parameter $\lambda$.

## 4. Applications

Approximation of OPT Distance: Similar to LOT (Wang et al., 2013) and the Linearized Hellinger-Kantorovich distance (LHK) (Cai et al., 2022), we test how well LOPT approximates OPT. Given $K$ empirical measures $\{\mu_i\}_{i=1}^K$, for each pair $(\mu_i, \mu_j)$ we compute $\mathrm{OPT}_\lambda(\mu_i, \mu_j)$ and $\mathrm{LOPT}_{\mu_0, \lambda}(\mu_i, \mu_j)$, and the mean or median over all pairs $(\mu_i, \mu_j)$ of the relative error, defined as

$$\frac{|\mathrm{OPT}_\lambda(\mu_i, \mu_j) - \mathrm{LOPT}_{\mu_0, \lambda}(\mu_i, \mu_j)|}{\mathrm{OPT}_\lambda(\mu_i, \mu_j)}.$$

Similar to LOT and LHK, the choice of $\mu_0$ is critical for an accurate approximation of OPT. If $\mu_0$ is far away from $\{\mu_i\}_{i=1}^K$, the linearization is a poor approximation because the mass in $\mu_i$ and $\mu_0$ would only be destroyed or created. In practice, one candidate for $\mu_0$ is the barycenter of the set of measures $\{\mu_i\}$. The OPT problem can be converted into an OT problem (Caffarelli & McCann, 2010), and one can use the OT barycenter (Cuturi & Doucet, 2014) to find $\mu_0$.

For our experiments, we created $K$ point sets of size $N = 500$ from $K$ different Gaussian distributions in $\mathbb{R}^2$. In particular, $\mu_i \sim \mathcal{N}(m_i, I)$, where $m_i$ is randomly selected with $\|m_i\| = 3$ for $i = 1, \ldots, K$. For the reference, we picked an $N$-point representation of $\mu_0 \sim \mathcal{N}(m, I)$ with $m = \sum_i m_i / K$. We repeated each experiment 10 times. To exhibit the effect of the parameter $\lambda$ on the approximation, the relative errors are shown in Figure 2. For the histograms of the relative errors for each value of $\lambda$ and each number of measures $K$, we refer to Figure 6 in Appendix H. For large $\lambda$, most mass is transported and $\mathrm{OT}(\mu_i, \mu_j) \approx \mathrm{OPT}_\lambda(\mu_i, \mu_j)$; the performance of LOPT is close to that of LOT, and the relative error is small.

In Figure 3 we report wall-clock times of OPT vs. LOPT for $\lambda = 5$. We use linear programming (Karmarkar, 1984) to solve each OPT problem at a cost of $O(N^3 \log N)$ each. Thus, computing the OPT distance pairwise for $\{\mu_i\}_{i=1}^K$ requires $O(K^2 N^3 \log N)$. In contrast, to compute LOPT, we only need to solve $K$ optimal partial transport problems for the embeddings (see (33) or (40)). Computing LOPT discrepancies after the embeddings is linear. Thus, the total computational cost is $O(K N^3 \log N + K^2 N)$. The experiment was conducted on a Linux computer with an AMD EPYC 7702P CPU with 64 cores and 256 GB of DDR4 RAM.

Figure 3. Wall-clock time of OPT vs. LOPT. The LP solver in Python OT (Flamary et al., 2021) is applied to each individual OPT problem, with a maximum of $100N$ iterations.

Point Cloud Interpolation: We test the OT geodesic, LOT geodesic, HK geodesic, LHK geodesic, OPT interpolation, and LOPT interpolation on the point cloud MNIST dataset. We compute different transport curves between point sets of the digits 0 and 9. Each digit is a weighted point set $\{x^j_n, p^j_n\}_{n=1}^{N_j}$, $j = 1, 2$, that we consider as a discrete measure of the form

$$\mu_j = \sum_{n=1}^{N_j} p^j_n \delta_{x^j_n} + \frac{1}{N_j} \sum_{m=1}^{\eta N_j} \delta_{y^j_m},$$

where the first sum corresponds to the clean data normalized to have total mass 1, and the second sum is constructed with samples from a uniform distribution acting as noise with total mass $\eta$. Each $x^j_n$ is a point in the square region $[0, 20]^2$ and each $y^j_m$ is a point in the square region $[-10, 30]^2$. For HK, OPT, LHK and LOPT, we use the distributions $\mu_j$ without re-normalization, while for OT and LOT we re-normalize them. The reference in LOT, LHK and LOPT is taken as the OT barycenter of a sample of the digits 0, 1, and 9, not including the ones used for interpolation, normalized to have unit total mass. We test for $\eta = 0, 0.5, 0.75$ (see Figure 8 in Appendix H). The results for $\eta = 0.5$ are shown in Figure 4.

We can see that OT and LOT do not eliminate noise points. HK and OPT still retain much of the noise because the interpolation is essentially between $\mu_1$ and $\hat\mu_2$ (with respect to $\mu_1$), so $\mu_1$ acts as a reference that still has a lot of noise. In LHK and LOPT, by selecting the same reference as LOT, we see that the noise decreases significantly. In the HK and LHK cases, we notice not only how the masses vary, but also how their relative positions change, obtaining a very different configuration at the end of the interpolation. OPT and LOPT instead return a more natural interpolation because of the mass preservation of the transported portion and the decoupling between transport and destruction/creation of mass.

Figure 4. We demonstrate the OT geodesic, HK geodesic, OPT interpolation, LOT geodesic, LHK geodesic and LOPT interpolation on the MNIST dataset. In the LOT geodesic and LOPT interpolation, we use the same reference measure. The percentage of noise $\eta$ is set to 0.5. In the OPT and LOPT interpolations, we set $\lambda = 20$; in HK and LHK, we set the scaling to 2.5.

PCA analysis: We compare the results of performing PCA on the embedding spaces of LOT, LHK and LOPT for point cloud MNIST. We take 900 digits from the dataset corresponding to digits 0, 1 and 3 in equal proportions. Each element is a weighted point set $\{x^j_n, p^j_n\}_{n=1}^{N_j}$ that we consider as a discrete measure with added noise as in the previous experiment. The reference $\mu_0$ is set to the OT barycenter of 30 samples from the clean data. For LOT we re-normalize each $\mu_j$ to have a total mass of 1, while we do not re-normalize for LOPT. Let $S_\eta := \{\mu_j : \text{noise level} = \eta\}_{j=1}^{900}$. We embed $S_\eta$ using LOT, LHK and LOPT and apply PCA to the embedded vectors $\{u_j\}$.
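Once the measures are embedded, the PCA step is ordinary linear PCA on fixed-length vectors. A minimal sketch with scikit-learn (the paper does not specify its PCA implementation; `embeddings` below is a hypothetical list of $(u_j, \hat p^j)$ pairs produced as in Section 3.4):

```python
import numpy as np
from sklearn.decomposition import PCA

# Flatten each LOPT embedding (u_j, hat p^j) into a vector of length N0*(d+1),
# matching the fixed embedding dimension noted in footnote 12.
# 'embeddings' is assumed given: a list of (u, p) pairs, one per measure.
X = np.stack([np.concatenate([u.ravel(), p]) for (u, p) in embeddings])
Z = PCA(n_components=2).fit_transform(X)   # first two principal components
```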
In Figure 5 we show the first two principal components of the set of embedded vectors based on LOT, LHK and LOPT for noise levels $\eta = 0, 0.75$. It can be seen that when there is no noise, the PCA dimension reduction technique works well for all three embedding methods. When $\eta = 0.75$, the method fails for the LOT embedding, but the dimension-reduced data is still separable for LOPT and LHK. Regarding running time, LOT and LOPT require 60-80 seconds, while LHK requires about 300-350 seconds. The experiments are conducted on a Linux computer with an AMD EPYC 7702P CPU with 64 cores and 256 GB of DDR4 RAM.

[Figure 5: panels "LOT", "LOPT", "LHK"; axis-tick residue omitted.]

Figure 5. We plot the first two principal components of each $u_j$ based on LOT, LOPT and LHK. For LOPT, we set $\lambda = 20.0$, and for LHK, we set the scaling to 2.5.

We refer the reader to Appendix H for further details and analysis.

We proposed a Linear Optimal Partial Transport (LOPT) technique that allows us to embed distributions with different masses into a fixed-dimensional space in which several calculations are significantly simplified. We show how to implement this for real data distributions, allowing us to reduce the computational cost in applications that would benefit from the use of optimal (partial) transport. We finally provide comparisons with previous techniques and show some concrete applications. In particular, we show that LOPT is more robust and computationally efficient in the presence of noise than previous methods. For future work, we will continue to investigate the comparison of LHK and LOPT, and the potential applications of LOPT in other machine learning and data science tasks, such as barycenter problems, graph embedding, task similarity measurement in transfer learning, and so on.

Acknowledgements

This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) under Contracts No. HR00112190132 and No. HR00112190135. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force, DARPA, or other funding agencies.

References

Aldroubi, A., Li, S., and Rohde, G. K. Partitioning signal classes using transport transforms for data analysis and machine learning. Sampling Theory, Signal Processing, and Data Analysis, 19(1):1-25, 2021.

Ambrosio, L., Gigli, N., and Savaré, G. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2005.

Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214-223. PMLR, 2017.

Bai, Y., Schmitzer, B., Thorpe, M., and Kolouri, S. Sliced optimal partial transport. arXiv preprint arXiv:2212.08049, 2022.

Basu, S., Kolouri, S., and Rohde, G. K. Detecting and visualizing cell phenotype differences from microscopy images using transport-based morphometry. Proceedings of the National Academy of Sciences, 111(9):3448-3453, 2014.

Benamou, J.-D. and Brenier, Y. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375-393, 2000.

Brenier, Y. Polar factorization and monotone rearrangement of vector-valued functions. Communications on Pure and Applied Mathematics, 44(4):375-417, 1991.

Caffarelli, L. A. and McCann, R. J. Free boundaries in optimal transport and Monge-Ampère obstacle problems. Annals of Mathematics, pp. 673-730, 2010.
Cai, T., Cheng, J., Craig, N., and Craig, K. Linearized optimal transport for collider events. Physical Review D, 102(11):116019, 2020.

Cai, T., Cheng, J., Schmitzer, B., and Thorpe, M. The linearized Hellinger-Kantorovich distance. SIAM Journal on Imaging Sciences, 15(1):45-83, 2022.

Chizat, L., Schmitzer, B., Peyré, G., and Vialard, F.-X. An interpolating distance between optimal transport and Fisher-Rao. arXiv:1506.06430 [math], 2015. URL http://arxiv.org/abs/1506.06430.

Chizat, L., Peyré, G., Schmitzer, B., and Vialard, F.-X. An interpolating distance between optimal transport and Fisher-Rao metrics. Foundations of Computational Mathematics, 18(1):1-44, 2018a.

Chizat, L., Peyré, G., Schmitzer, B., and Vialard, F.-X. Unbalanced optimal transport: Dynamic and Kantorovich formulations. Journal of Functional Analysis, 274(11):3090-3123, 2018b.

Chizat, L., Roussillon, P., Léger, F., Vialard, F.-X., and Peyré, G. Faster Wasserstein distance estimation with the Sinkhorn divergence. Advances in Neural Information Processing Systems, 33:2257-2269, 2020.

Courty, N., Flamary, R., and Tuia, D. Domain adaptation with regularized optimal transport. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 274-289. Springer, 2014.

Courty, N., Flamary, R., Habrard, A., and Rakotomamonjy, A. Joint distribution optimal transportation for domain adaptation. Advances in Neural Information Processing Systems, 30, 2017.

Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.

Cuturi, M. and Doucet, A. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, pp. 685-693. PMLR, 2014.

Fatras, K., Séjourné, T., Flamary, R., and Courty, N. Unbalanced minibatch optimal transport; applications to domain adaptation. In International Conference on Machine Learning, pp. 3186-3197. PMLR, 2021.

Figalli, A. The optimal partial transport problem. Archive for Rational Mechanics and Analysis, 195(2):533-560, 2010.

Figalli, A. and Gigli, N. A new transportation distance between non-negative measures, with applications to gradients flows with Dirichlet boundary conditions. Journal de mathématiques pures et appliquées, 94(2):107-130, 2010.

Figalli, A. and Glaudo, F. An Invitation to Optimal Transport, Wasserstein Distances, and Gradient Flows. European Mathematical Society, 2021.

Flamary, R., Courty, N., Gramfort, A., Alaya, M. Z., Boisbunon, A., Chambon, S., Chapel, L., Corenflos, A., Fatras, K., Fournier, N., Gautheron, L., Gayraud, N. T., Janati, H., Rakotomamonjy, A., Redko, I., Rolet, A., Schutz, A., Seguy, V., Sutherland, D. J., Tavenard, R., Tong, A., and Vayer, T. POT: Python Optimal Transport. Journal of Machine Learning Research, 22(78):1-8, 2021. URL http://jmlr.org/papers/v22/20-451.html.

Genevay, A., Peyré, G., and Cuturi, M. GAN and VAE from an optimal transport point of view. arXiv preprint arXiv:1706.01807, 2017.

Guittet, K. Extended Kantorovich norms: a tool for optimization. PhD thesis, INRIA, 2002.

Janati, H., Cuturi, M., and Gramfort, A. Wasserstein regularization for sparse multi-task regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1407-1416. PMLR, 2019.

Karmarkar, N. A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, pp. 302-311, 1984.
Kim, S., Ma, R., Mesa, D., and Coleman, T. P. Efficient Bayesian inference methods via convex optimization and optimal transport. In 2013 IEEE International Symposium on Information Theory, pp. 2259-2263. IEEE, 2013.

Kolouri, S., Naderializadeh, N., Rohde, G. K., and Hoffmann, H. Wasserstein embedding for graph learning. In International Conference on Learning Representations, 2020.

Kundu, S., Kolouri, S., Erickson, K. I., Kramer, A. F., McAuley, E., and Rohde, G. K. Discovery and visualization of structural biomarkers from MRI using transport-based morphometry. NeuroImage, 167:256-275, 2018.

Lellmann, J., Lorenz, D. A., Schönlieb, C., and Valkonen, T. Imaging with Kantorovich-Rubinstein discrepancy. SIAM Journal on Imaging Sciences, 7(4):2833-2859, 2014.

Liero, M., Mielke, A., and Savaré, G. Optimal entropy-transport problems and a new Hellinger-Kantorovich distance between positive measures. Inventiones mathematicae, 211(3):969-1117, 2018. URL http://link.springer.com/10.1007/s00222-017-0759-8.

Liu, H., Gu, X., and Samaras, D. Wasserstein GAN with quadratic transport cost. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4832-4841, 2019.

Moosmüller, C. and Cloninger, A. Linear optimal transport embedding: Provable Wasserstein classification for certain rigid transformations and perturbations. Information and Inference: A Journal of the IMA, 12(1):363-389, 2023.

Naderializadeh, N., Comer, J. F., Andrews, R., Hoffmann, H., and Kolouri, S. Pooling by sliced-Wasserstein embedding. Advances in Neural Information Processing Systems, 34:3389-3400, 2021.

Nguyen, K., Nguyen, D., Pham, T., Ho, N., et al. Improving mini-batch optimal transport via partial transportation. In International Conference on Machine Learning, pp. 16656-16690. PMLR, 2022.

Nguyen, K., Nguyen, D., and Ho, N. Self-attention amortized distributional projection optimization for sliced Wasserstein point-cloud reconstruction. arXiv preprint arXiv:2301.04791, 2023.

Santambrogio, F. Optimal Transport for Applied Mathematicians. Calculus of Variations, PDEs and Modeling. Birkhäuser, 2015.

Scetbon, M. and Cuturi, M. Low-rank optimal transport: Approximation, statistics and debiasing. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=4btNeXKFAQ.

Villani, C. Topics in Optimal Transportation, volume 58. American Mathematical Society, 2003. URL http://www.ams.org/gsm/058.

Wang, W., Slepčev, D., Basu, S., Ozolek, J. A., and Rohde, G. K. A linear optimal transportation framework for quantifying and visualizing variations in sets of images. International Journal of Computer Vision, 101(2):254-269, 2013.

Ye, J., Wu, P., Wang, J. Z., and Li, J. Fast discrete distribution clustering using Wasserstein barycenter with sparse support. IEEE Transactions on Signal Processing, 65(9):2317-2332, 2017.

We refer to the main text for references.

## A. Notation

- $(\mathbb{R}^d, \|\cdot\|)$: $d$-dimensional Euclidean space endowed with the standard Euclidean norm $\|x\| = \sqrt{\sum_{k=1}^d |x(k)|^2}$. $d(\cdot, \cdot)$ is the associated distance, i.e., $d(x, y) = \|x - y\|$.
- $x \cdot y$: canonical inner product in $\mathbb{R}^d$.
- $1_N$: column vector of size $N \times 1$ with all entries equal to 1.
- $\mathrm{diag}(p_1, \ldots, p_N)$: diagonal matrix.
- $w^T$: transpose of a matrix $w$.
- $\mathbb{R}_+$: non-negative real numbers.
- $\Omega$: convex and compact subset of $\mathbb{R}^d$. $\Omega^2 = \Omega \times \Omega$.
- $P(\Omega)$: set of Borel probability measures defined on $\Omega$.
- $M_+(\Omega)$: set of positive finite Borel measures defined on $\Omega$. $M$: set of signed measures.
- $\pi_0 : \Omega^2 \to \Omega$, $\pi_0(x_0, x_1) := x_0$; $\pi_1 : \Omega^2 \to \Omega$, $\pi_1(x_0, x_1) := x_1$: standard projections.
- $F_\#\mu$: push-forward of the measure $\mu$ by the function $F$. $F_\#\mu(B) = \mu(F^{-1}(B))$ for every measurable set $B$, where $F^{-1}(B) = \{x : F(x) \in B\}$.
- $\delta_x$: Dirac measure concentrated at $x$.
- $\mu = \sum_{n=1}^N p_n \delta_{x_n}$: discrete measure ($x_n \in \Omega$, $p_n \in \mathbb{R}_+$). The coefficients $p_n$ are called the weights of $\mu$.
- $\mathrm{supp}(\mu)$: support of the measure $\mu$.
- $\mu_i \wedge \mu_j$: minimum measure between $\mu_i$ and $\mu_j$; $p^i \wedge p^j$: vector having at each entry $n$ the minimum of $p^i_n$ and $p^j_n$.
- $\mu_0$: reference measure. $\mu_i, \mu_j$: target measures. In the OT framework they are in $P(\Omega)$; in the OPT framework they are in $M_+(\Omega)$.
- $\Gamma(\mu_0, \mu_j) = \{\gamma \in P(\Omega^2) : \pi_{0\#}\gamma = \mu_0, \ \pi_{1\#}\gamma = \mu_j\}$: set of Kantorovich transport plans.
- $C(\gamma; \mu_0, \mu_j) = \int_{\Omega^2} \|x_0 - x_j\|^2 \, d\gamma(x_0, x_j)$: Kantorovich cost of the transportation plan $\gamma$ between $\mu_0$ and $\mu_j$.
- $\Gamma^*(\mu_0, \mu_j)$: set of optimal Kantorovich transport plans.
- $T$: optimal transport Monge map. $\mathrm{id}$: identity map, $\mathrm{id}(x) = x$.
- $T_t(x) = (1-t)x + tT(x)$ for $T : \Omega \to \Omega$. Viewed as a function of $(t, x)$, it is a flow.
- In Section 2: $\rho \in P([0,1] \times \Omega)$, curve of measures; at each $0 \le t \le 1$, $\rho_t \in P(\Omega)$. In Section 3: analogous, replacing $P$ by $M_+$.
- $v : [0,1] \times \Omega \to \mathbb{R}^d$; at each time, $v_t : \Omega \to \mathbb{R}^d$ is a vector field. $v_0$: initial velocity.
- $\nabla \cdot V$: divergence of the vector field $V$ with respect to the spatial variable $x$.
- $\partial_t \rho + \nabla \cdot (\rho v) = 0$: continuity equation.
- $\mathcal{CE}(\mu_0, \mu_j)$: set of solutions of the continuity equation with boundary conditions $\mu_0$ and $\mu_j$.
- $\partial_t \rho + \nabla \cdot (\rho v) = \zeta$: continuity equation with forcing term $\zeta$.
- $\mathcal{FCE}(\mu_0, \mu_j)$: set of solutions $(\rho, v, \zeta)$ of the continuity equation with forcing term, with boundary conditions $\mu_0$ and $\mu_j$.
- $\mathrm{OT}(\mu_0, \mu_j)$: optimal transport minimization problem. See (1) for the Kantorovich formulation and (6) for the dynamic formulation.
- $W_2(\mu_0, \mu_j) = \sqrt{\mathrm{OT}(\mu_0, \mu_j)}$: 2-Wasserstein distance.
- $T_\mu = L^2(\Omega; \mathbb{R}^d, \mu)$, where $\|u\|^2_\mu = \int_\Omega \|u(x)\|^2 \, d\mu(x)$ (if $\mu$ is discrete, $T_\mu$ is identified with $\mathbb{R}^{d \times N}$ and $\|u\|^2_\mu = \sum_{n=1}^N \|u(n)\|^2 p_n$).
- $\mathrm{LOT}_{\mu_0}(\mu_i, \mu_j)$: see (14) for continuous densities and (19) for discrete measures.
- $\lambda > 0$: penalization parameter in OPT.
- $\mu \le \nu$ if $\mu(E) \le \nu(E)$ for every measurable set $E$; we say that $\mu$ is dominated by $\nu$.
- $\Gamma_\le(\mu_0, \mu_j) = \{\gamma \in M_+(\Omega^2) : \pi_{0\#}\gamma \le \mu_0, \ \pi_{1\#}\gamma \le \mu_j\}$: set of partial transport plans.
- $\gamma_0 := \pi_{0\#}\gamma$, $\gamma_1 := \pi_{1\#}\gamma$: marginals of $\gamma \in M_+(\Omega^2)$.
- $\nu_0 = \mu_0 - \gamma_0$ (for the reference $\mu_0$), $\nu_j = \mu_j - \gamma_1$ (for the target $\mu_j$).
- $C(\gamma; \mu_0, \mu_j, \lambda) = \int \|x_0 - x_j\|^2 \, d\gamma(x_0, x_j) + \lambda(|\mu_0 - \gamma_0| + |\mu_j - \gamma_1|)$: partial transport cost of the partial transportation plan $\gamma$ between $\mu_0$ and $\mu_j$ with penalization $\lambda$.
- $\Gamma^*_\le(\mu_0, \mu_j)$: set of optimal partial transport plans.
- $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$: optimal partial transport minimization problem between $\mu_0$ and $\mu_j$. See (23) for the static formulation and (25) for the dynamic formulation.
- $\|v\|^2_{\mu, 2\lambda} := \int_\Omega \min(\|v\|^2, 2\lambda) \, d\mu$.
- $\mathrm{LOPT}_\lambda(\mu_i, \mu_j)$: see (35), and (42) with the discussion above it.
- $\mu_j \mapsto u_j$: LOT embedding (fixing first a reference $\mu_0 \in P(\Omega)$). If $\mu_0$ has continuous density, $u_j := v^j_0 = T^j - \mathrm{id}$, so it is a map from measures $\mu_j \in P(\Omega)$ to vector fields $u_j$ defined on $\Omega$; see (12). For discrete measures ($\mu_0, \mu_j$ discrete), a barycentric projection is needed first, and in this case the embedding is a map from discrete probability measures to $\mathbb{R}^{d \times N_0}$; see (17).
- $\hat\mu$: OT and OPT barycentric projection of $\mu$ with respect to a reference $\mu_0$ (in $P$ or $M_+$, resp.); see (16) and (36), respectively. $\hat x_n$ denotes the new locations where $\hat\mu$ is concentrated; $p^0_n$ and $\hat p_n$ denote the weights of $\hat\mu$ for OT and OPT, respectively.
- $u_t = (1-t)u_i + tu_j$: geodesic in the LOT embedding space. $\hat\rho_t$: LOT geodesic.
- LOPT embedding: see (33) and (40).
- The OPT interpolating curve and the LOPT interpolating curve are defined in Section 3.5.

## B. Proof of Lemma 2.1

Proof. We fix $\gamma^* \in \Gamma^*(\mu_0, \mu_j)$,[^14] and then we compute the map $x^0_n \mapsto \hat x^j_n$ according to (16). This map induces the transportation plan $\hat\gamma := \mathrm{diag}(p^0_1, \ldots, p^0_{N_0}) \in \mathbb{R}^{N_0 \times N_0}_+$. We will prove that $\hat\gamma \in \Gamma^*(\mu_0, \hat\mu_j)$. By definition, it is easy to see that its marginals are $\mu_0$ and $\hat\mu_j$. The complicated part is to prove optimality. Let $\hat\gamma' \in \Gamma(\mu_0, \hat\mu_j)$ be an arbitrary transportation plan between $\mu_0$ and $\hat\mu_j$ (i.e., $\hat\gamma' 1_{N_0} = (\hat\gamma')^T 1_{N_0} = p^0$). We need to show

$$C(\hat\gamma'; \mu_0, \hat\mu_j) \ge C(\hat\gamma; \mu_0, \hat\mu_j). \tag{43}$$

Since $\hat\gamma$ is a diagonal matrix with positive diagonal, its inverse matrix is $\hat\gamma^{-1} = \mathrm{diag}(1/p^0_1, \ldots, 1/p^0_{N_0})$. We define $\gamma' := \hat\gamma' \hat\gamma^{-1} \gamma^* \in \mathbb{R}^{N_0 \times N_j}_+$. We claim $\gamma' \in \Gamma(\mu_0, \mu_j)$. Indeed,

$$\gamma' 1_{N_j} = \hat\gamma' \hat\gamma^{-1} \gamma^* 1_{N_j} = \hat\gamma' \hat\gamma^{-1} p^0 = \hat\gamma' 1_{N_0} = p^0, \tag{44}$$
$$(\gamma')^T 1_{N_0} = (\gamma^*)^T \hat\gamma^{-1} (\hat\gamma')^T 1_{N_0} = (\gamma^*)^T \hat\gamma^{-1} p^0 = (\gamma^*)^T 1_{N_0} = p^j, \tag{45}$$

where the second equality in (44) and the third equality in (45) follow from the fact that $\gamma^* \in \Gamma(\mu_0, \mu_j)$, and the fourth equality in (44) and the second equality in (45) hold since $\hat\gamma' \in \Gamma(\mu_0, \hat\mu_j)$. Since $\gamma^*$ is optimal for $\mathrm{OT}(\mu_0, \mu_j)$, we have $C(\gamma^*; \mu_0, \mu_j) \le C(\gamma'; \mu_0, \mu_j)$. We write

$$C(\gamma^*; \mu_0, \mu_j) = \sum_{n,m} \|x^0_n - x^j_m\|^2 \gamma^*_{n,m} = \sum_n \|x^0_n\|^2 p^0_n + \sum_m \|x^j_m\|^2 p^j_m - 2\sum_{n,m} x^0_n \cdot x^j_m \, \gamma^*_{n,m} = K_1 - 2\sum_{n,m} x^0_n \cdot x^j_m \, \gamma^*_{n,m},$$

where $K_1$ is a constant (which only depends on $\mu_0, \mu_j$). Similarly,

$$C(\gamma'; \mu_0, \mu_j) = K_1 - 2\sum_{n,m} x^0_n \cdot x^j_m \, \gamma'_{n,m} = K_1 - 2\sum_{n,\ell,m} \frac{\hat\gamma'_{n,\ell}\gamma^*_{\ell,m}}{p^0_\ell} \, x^0_n \cdot x^j_m.$$

Analogously, using that $\sum_m \gamma^*_{n,m} x^j_m = p^0_n \hat x^j_n$ by (16), we have

$$C(\hat\gamma; \mu_0, \hat\mu_j) = \sum_n \|x^0_n - \hat x^j_n\|^2 p^0_n = \sum_n \|x^0_n\|^2 p^0_n + \sum_n \|\hat x^j_n\|^2 p^0_n - 2\sum_n x^0_n \cdot \hat x^j_n \, p^0_n = K_2 - K_1 + C(\gamma^*; \mu_0, \mu_j), \tag{46}$$

where $K_2$ is a constant (which only depends on $\mu_0$ and $\hat\mu_j$), and similarly

$$C(\hat\gamma'; \mu_0, \hat\mu_j) = K_2 - 2\sum_{n,\ell,m} \frac{\hat\gamma'_{n,\ell}\gamma^*_{\ell,m}}{p^0_\ell} \, x^0_n \cdot x^j_m = K_2 - K_1 + C(\gamma'; \mu_0, \mu_j). \tag{47}$$

Therefore, by (46) and (47), we have $C(\gamma^*; \mu_0, \mu_j) \le C(\gamma'; \mu_0, \mu_j)$ if and only if $C(\hat\gamma; \mu_0, \hat\mu_j) \le C(\hat\gamma'; \mu_0, \hat\mu_j)$. So we conclude the proof by the fact that $\gamma^*$ is optimal for $\mathrm{OT}(\mu_0, \mu_j)$.

[^14]: Although in Section 2.4 we use the notation $\gamma^j \in \Gamma^*(\mu_0, \mu_j)$, here we use the notation $\gamma^*$ to emphasize that we are fixing an optimal plan.

## C. Proof of Proposition 2.2

Lemma C.1. Given discrete measures $\mu_0 = \sum_{k=1}^{N_0} p^0_k \delta_{x^0_k}$, $\mu_1 = \sum_{k=1}^{N_0} p^0_k \delta_{x^1_k}$, $\mu_2 = \sum_{k=1}^{N_0} p^0_k \delta_{x^2_k}$, suppose that the maps $x^0_k \mapsto x^1_k$ and $x^0_k \mapsto x^2_k$ solve $\mathrm{OT}(\mu_0, \mu_1)$ and $\mathrm{OT}(\mu_0, \mu_2)$, respectively. For $t \in [0,1]$, define $\mu_t := \sum_{k=1}^{N_0} p^0_k \delta_{(1-t)x^1_k + tx^2_k}$. Then, the mapping $T_t : x^0_k \mapsto (1-t)x^1_k + tx^2_k$ solves the problem $\mathrm{OT}(\mu_0, \mu_t)$.

Proof. Let $\gamma = \mathrm{diag}(p^0_1, \ldots, p^0_{N_0})$ be the transportation plan induced by $T_t$. Consider an arbitrary $\gamma' \in \mathbb{R}^{N_0 \times N_0}_+$ such that $\gamma' \in \Gamma(\mu_0, \mu_t)$. We need to show $C(\gamma'; \mu_0, \mu_t) \ge C(\gamma; \mu_0, \mu_t)$. We have

$$C(\gamma'; \mu_0, \mu_t) = \sum_{k,k'} \|x^0_k - ((1-t)x^1_{k'} + tx^2_{k'})\|^2 \gamma'_{k,k'} = K - 2(1-t)\sum_{k,k'} x^0_k \cdot x^1_{k'} \gamma'_{k,k'} - 2t\sum_{k,k'} x^0_k \cdot x^2_{k'} \gamma'_{k,k'},$$

where $K$ is a constant that only depends on the marginals $\mu_0, \mu_t$. Similarly,

$$C(\gamma; \mu_0, \mu_t) = K - 2(1-t)\sum_{k,k'} x^0_k \cdot x^1_{k'} \gamma_{k,k'} - 2t\sum_{k,k'} x^0_k \cdot x^2_{k'} \gamma_{k,k'}.$$

By the fact that $\gamma, \gamma' \in \Gamma(\mu_0, \mu_t) = \Gamma(\mu_0, \mu_1) = \Gamma(\mu_0, \mu_2)$ (as sets of matrices, since all targets share the weights $p^0$), and that $\gamma$ is optimal for $\mathrm{OT}(\mu_0, \mu_1)$ and $\mathrm{OT}(\mu_0, \mu_2)$, we have

$$\sum_{k,k'} x^0_k \cdot x^1_{k'} \gamma'_{k,k'} \le \sum_{k,k'} x^0_k \cdot x^1_{k'} \gamma_{k,k'} \tag{48}$$

and

$$\sum_{k,k'} x^0_k \cdot x^2_{k'} \gamma'_{k,k'} \le \sum_{k,k'} x^0_k \cdot x^2_{k'} \gamma_{k,k'}. \tag{49}$$

Thus, by (48) and (49), we have $C(\gamma'; \mu_0, \mu_t) \ge C(\gamma; \mu_0, \mu_t)$, and this completes the proof.

Proof of Proposition 2.2. Consider $0 \le s \le t \le 1$.
By Lemma 2.1, the maps $T^i : x^0_k \mapsto \hat x^i_k$ and $T^j : x^0_k \mapsto \hat x^j_k$ solve $\mathrm{OT}(\mu_0, \hat\mu_i)$ and $\mathrm{OT}(\mu_0, \hat\mu_j)$, respectively. Moreover, by Lemma C.1 (under the appropriate renaming), the mapping

$$x^0_k \mapsto (1-s)\hat x^i_k + s\hat x^j_k = x^0_k + (1-s)u_i(k) + su_j(k), \qquad 1 \le k \le N_0,$$

solves $\mathrm{OT}(\mu_0, \hat\rho_s)$, and similarly

$$x^0_k \mapsto (1-t)\hat x^i_k + t\hat x^j_k = x^0_k + (1-t)u_i(k) + tu_j(k), \qquad 1 \le k \le N_0,$$

solves $\mathrm{OT}(\mu_0, \hat\rho_t)$. Hence,

$$\mathrm{LOT}_{\mu_0}(\hat\rho_s, \hat\rho_t) = \sum_{k=1}^{N_0} \|((1-s)\hat x^i_k + s\hat x^j_k) - ((1-t)\hat x^i_k + t\hat x^j_k)\|^2 \, p^0_k = \sum_{k=1}^{N_0} (t-s)^2 \|\hat x^i_k - \hat x^j_k\|^2 \, p^0_k = (t-s)^2 \, \mathrm{LOT}_{\mu_0}(\mu_i, \mu_j),$$

and this finishes the proof.

## D. Proof of Proposition 3.1

This result relies on the definition (29) of the curve $\rho_t$, which is inspired by the fact that solutions of non-homogeneous equations are given by solutions of the associated homogeneous equation plus a particular solution of the non-homogeneous one. We choose the first term of $\rho_t$ as a solution of a (homogeneous) continuity equation ($\gamma_t$ defined in (28)), and the second term ($\bar\gamma_t := (1-t)\nu_0 + t\nu_j$) as an appropriate particular solution of the equation (26) with forcing term $\zeta$ defined in (31). Moreover, the curve $\rho_t$ becomes optimal for (25) since both $\gamma_t$ and $\bar\gamma_t$ are optimal in different senses. On the one hand, $\gamma_t$ is optimal for a classical optimal transport problem.[^15] On the other hand, $\bar\gamma_t$ is defined as a line interpolating the new part introduced by the framework of partial transportation: destruction and creation of mass. Although this is the core idea behind the proposition, to prove it we need several lemmas to deal with the details and subtleties.

First, we mention that the push-forward measure $F_\#\mu$ satisfies the change-of-variables formula

$$\int_A g(x) \, dF_\#\mu(x) = \int_{F^{-1}(A)} g(F(x)) \, d\mu(x)$$

for every measurable set $A$ and every measurable function $g$, where $F^{-1}(A) = \{x : F(x) \in A\}$. This is a general fact, and as an immediate consequence we have conservation of mass, $|F_\#\mu| = |\mu|$. Then, in our case, the second term in the cost function (24) can be written as

$$\lambda(|\mu_0 - \gamma_0| + |\mu_j - \gamma_1|) = \lambda(|\mu_0| + |\mu_j| - 2|\gamma|),$$

since $\gamma_0$ and $\gamma_1$ are dominated by $\mu_0$ and $\mu_j$, respectively, in the sense that $\gamma_0 \le \mu_0$ and $\gamma_1 \le \mu_j$.

Finally, we would like to point out that when we say that $(\rho, v, \zeta)$ is a solution of equation (26), we mean that it is a weak solution or, equivalently, a solution in the distributional sense (Santambrogio, 2015, Section 4). That is, for any continuously differentiable test function $\psi : \Omega \to \mathbb{R}$ with compact support, $(\rho, v, \zeta)$ satisfies

$$\frac{d}{dt} \int_\Omega \psi \, d\rho_t = \int_\Omega \nabla\psi \cdot v_t \, d\rho_t + \int_\Omega \psi \, d\zeta_t,$$

or, equivalently, for any continuously differentiable test function $\varphi : [0,1] \times \Omega \to \mathbb{R}$ with compact support, $(\rho, v, \zeta)$ satisfies

$$\int_{[0,1] \times \Omega} \partial_t \varphi \, d\rho + \int_{[0,1] \times \Omega} \nabla\varphi \cdot v \, d\rho + \int_{[0,1] \times \Omega} \varphi \, d\zeta = \int_\Omega \varphi(1, \cdot) \, d\mu_j - \int_\Omega \varphi(0, \cdot) \, d\mu_0.$$

Lemma D.1. If $\gamma^*$ is optimal for $\mathrm{OPT}_\lambda(\mu_0, \mu_1)$ and has marginals $\gamma^*_0$ and $\gamma^*_1$, then $\gamma^*$ is optimal for $\mathrm{OPT}_\lambda(\mu_0, \gamma^*_1)$, $\mathrm{OPT}_\lambda(\gamma^*_0, \mu_1)$ and $\mathrm{OT}(\gamma^*_0, \gamma^*_1)$.

[^15]: Here we strongly use that the fixed partial optimal transport plan $\gamma^* \in \Gamma^*_\le(\mu_0, \mu_j)$ in the hypothesis of the proposition has the form $\gamma^* = (\mathrm{id} \times T)_\# \gamma^*_0$ for a map $T : \Omega \to \Omega$.

Proof. Let $C(\gamma; \mu_0, \mu_1, \lambda)$ denote the objective function of the minimization problem $\mathrm{OPT}_\lambda(\mu_0, \mu_1)$, and similarly consider $C(\gamma; \gamma^*_0, \mu_1, \lambda)$ and $C(\gamma; \mu_0, \gamma^*_1, \lambda)$. Also, let $C(\gamma; \gamma^*_0, \gamma^*_1)$ be the objective function of $\mathrm{OT}(\gamma^*_0, \gamma^*_1)$. First, given an arbitrary plan $\gamma \in \Gamma(\gamma^*_0, \gamma^*_1) \subseteq \Gamma_\le(\mu_0, \mu_1)$, we have

$$C(\gamma; \mu_0, \mu_1, \lambda) = \int_{\Omega^2} \|x_0 - x_1\|^2 \, d\gamma(x_0, x_1) + \lambda(|\mu_0 - \gamma_0| + |\mu_1 - \gamma_1|) = C(\gamma; \gamma^*_0, \gamma^*_1) + \lambda(|\mu_0 - \gamma^*_0| + |\mu_1 - \gamma^*_1|).$$

Since $C(\gamma^*; \mu_0, \mu_1, \lambda) \le C(\gamma; \mu_0, \mu_1, \lambda)$, we have $C(\gamma^*; \gamma^*_0, \gamma^*_1) \le C(\gamma; \gamma^*_0, \gamma^*_1)$. Thus, $\gamma^*$ is optimal for $\mathrm{OT}(\gamma^*_0, \gamma^*_1)$.
Similarly, for every $\gamma \in \Gamma_\le(\gamma^*_0, \mu_1)$, we have $C(\gamma; \mu_0, \mu_1, \lambda) = C(\gamma; \gamma^*_0, \mu_1, \lambda) + \lambda|\mu_0 - \gamma^*_0|$, and thus $\gamma^*$ is optimal for $\mathrm{OPT}_\lambda(\gamma^*_0, \mu_1)$. Analogously, $\gamma^*$ is optimal for $\mathrm{OPT}_\lambda(\mu_0, \gamma^*_1)$.

Lemma D.2. If $\gamma^* \in \Gamma^*_\le(\mu_0, \mu_1)$, then $d(\mathrm{supp}(\nu_0), \mathrm{supp}(\nu_1))^2 \ge 2\lambda$, where $\nu_0 = \mu_0 - \gamma^*_0$, $\nu_1 = \mu_1 - \gamma^*_1$, and $d$ is the Euclidean distance in $\mathbb{R}^d$.

Proof. We proceed by contradiction. Assume that $d(\mathrm{supp}(\nu_0), \mathrm{supp}(\nu_1))^2 < 2\lambda$. Then, there exist $\tilde x_0 \in \mathrm{supp}(\nu_0)$ and $\tilde x_1 \in \mathrm{supp}(\nu_1)$ such that $\|\tilde x_0 - \tilde x_1\|^2 < 2\lambda$. Moreover, we can choose $\varepsilon > 0$ such that $\nu_0(B(\tilde x_0, \varepsilon)) > 0$, $\nu_1(B(\tilde x_1, \varepsilon)) > 0$, and $\|x_0 - x_1\|^2 < 2\lambda$ for all $x_0 \in B(\tilde x_0, \varepsilon)$, $x_1 \in B(\tilde x_1, \varepsilon)$, where $B(\tilde x_i, \varepsilon)$ denotes the ball in $\mathbb{R}^d$ of radius $\varepsilon$ centered at $\tilde x_i$. We will construct $\tilde\gamma \in \Gamma_\le(\mu_0, \mu_1)$ with transportation cost strictly less than $\mathrm{OPT}_\lambda(\mu_0, \mu_1)$, leading to a contradiction. For $i = 0, 1$, we denote by $\tilde\nu_i$ the probability measure given by $\frac{1}{\nu_i(B(\tilde x_i, \varepsilon))}\nu_i$ restricted to $B(\tilde x_i, \varepsilon)$. Let $\delta = \min\{\nu_0(B(\tilde x_0, \varepsilon)), \nu_1(B(\tilde x_1, \varepsilon))\}$. Now, consider $\tilde\gamma := \gamma^* + \delta(\tilde\nu_0 \otimes \tilde\nu_1)$. Then $\tilde\gamma \in \Gamma_\le(\mu_0, \mu_1)$ because $\pi_{i\#}\tilde\gamma = \gamma^*_i + \delta\tilde\nu_i \le \gamma^*_i + \nu_i = \mu_i$ for $i = 0, 1$. The partial transport cost of $\tilde\gamma$ is

$$C(\tilde\gamma; \mu_0, \mu_1, \lambda) = \int_{\Omega^2} \|x_0 - x_1\|^2 \, d\tilde\gamma + \lambda(|\mu_0| + |\mu_1| - 2|\tilde\gamma|) = \mathrm{OPT}_\lambda(\mu_0, \mu_1) + \delta \int_{B(\tilde x_0, \varepsilon) \times B(\tilde x_1, \varepsilon)} \|x_0 - x_1\|^2 \, d(\tilde\nu_0 \otimes \tilde\nu_1) - 2\lambda\delta < \mathrm{OPT}_\lambda(\mu_0, \mu_1) + 2\lambda\delta - 2\lambda\delta = \mathrm{OPT}_\lambda(\mu_0, \mu_1),$$

which is a contradiction.

Lemma D.3. Using the notation and hypotheses of Proposition 3.1, for $t \in (0,1)$ let $D_t := \{x : v_t(x) \ne 0\}$. Then $\gamma_t \wedge \nu_0 \equiv \gamma_t \wedge \nu_j \equiv 0$ over $D_t$.

Proof. We prove this for $\gamma_t \wedge \nu_0$; the other case is analogous. Suppose by contradiction that $\gamma_t \wedge \nu_0$ is not the null measure on $D_t$. By Lemma D.1, we know that $\gamma^*$ is optimal for $\mathrm{OT}(\gamma^*_0, \gamma^*_1)$. Since we are also assuming that $\gamma^*$ is induced by a map $T$, i.e., $\gamma^* = (\mathrm{id} \times T)_\# \gamma^*_0$, then $T$ is an optimal Monge map between $\gamma^*_0$ and $\gamma^*_1$. Therefore, the map $T_t : \Omega \to \Omega$ is invertible (see, for example, (Santambrogio, 2015, Lemma 5.29)). For ease of notation, we denote $\tilde\nu_0 := \gamma_t \wedge \nu_0$, viewed as a measure on $D_t$. The idea of the proof is to exhibit a plan $\tilde\gamma$ with smaller $\mathrm{OPT}_\lambda$ cost than $\gamma^*$, which will be a contradiction since $\gamma^* \in \Gamma^*_\le(\mu_0, \mu_j)$. Let us define

$$\tilde\gamma := \gamma^* + (\mathrm{id} \times (T \circ T_t^{-1}))_\# \tilde\nu_0 - (\mathrm{id} \times T)_\# (T_t^{-1})_\# \tilde\nu_0.$$

Then $\tilde\gamma \in \Gamma_\le(\mu_0, \mu_j)$, since

$$\pi_{0\#}\tilde\gamma = \pi_{0\#}\gamma^* + \tilde\nu_0 - (T_t^{-1})_\# \tilde\nu_0 \le \gamma^*_0 + \tilde\nu_0 \le \mu_0, \qquad \pi_{1\#}\tilde\gamma = \pi_{1\#}\gamma^* + (T \circ T_t^{-1})_\# \tilde\nu_0 - (T \circ T_t^{-1})_\# \tilde\nu_0 = \gamma^*_1$$

(and we know $\gamma^*_1 \le \mu_j$). From the last equation we can also conclude that $|\tilde\gamma| = |\gamma^*_1| = |\gamma^*|$. Therefore, the partial transportation cost of $\tilde\gamma$ is

$$\int_{\Omega^2} \|x - y\|^2 \, d\tilde\gamma(x, y) + \lambda(|\mu_0| + |\mu_j| - 2|\tilde\gamma|) = \underbrace{\int_{\Omega^2} \|x - y\|^2 \, d\gamma^*(x, y) + \lambda(|\mu_0| + |\mu_j| - 2|\gamma^*|)}_{\mathrm{OPT}_\lambda(\mu_0, \mu_j)} + \int \Big( \|x - T(T_t^{-1}(x))\|^2 - \|T_t^{-1}(x) - T(T_t^{-1}(x))\|^2 \Big) \, d\tilde\nu_0(x) < \mathrm{OPT}_\lambda(\mu_0, \mu_j),$$

where in the last inequality we used that if $y = T_t^{-1}(x)$, then

$$\|T_t(y) - T(y)\| = \|(1-t)y + tT(y) - T(y)\| = (1-t)\|y - T(y)\| < \|y - T(y)\|$$

for $t \in (0,1)$, since $\|y - T(y)\| = \|v_t(x)\| \ne 0$ on $D_t$.

Lemma D.4. Using the notation and hypotheses of Proposition 3.1, for $t \in (0,1)$ the vector-valued measures $v_t \nu_0$ and $v_t \nu_j$ are exactly zero.

Proof. We prove this for $v_t \nu_0$; the other case is analogous. We recall that the measure $v_t \nu_0$ is determined by the integrals

$$\int_\Omega \Phi(x) \cdot v_t(x) \, d\nu_0(x)$$

for all measurable functions $\Phi : \Omega \to \mathbb{R}^d$, where $\Phi(x) \cdot v_t(x)$ denotes the usual dot product in $\mathbb{R}^d$ between the vectors $\Phi(x)$ and $v_t(x)$. From Lemma D.3, we know that $\gamma_t \wedge \nu_0 \equiv 0$ over $D_t$. Therefore, $\gamma_t$ and $\nu_0$ are mutually singular on that set. This implies that we can decompose $D_t$ into two disjoint sets $D_1, D_2$ such that $\gamma_t(D_1) = 0 = \nu_0(D_2)$. Since $v_t$ is a function defined $\gamma_t$-almost everywhere (up to sets of null measure with respect to $\gamma_t$), we can assume without loss of generality that $v_t \equiv 0$ on $D_1$. Let $\Phi : \Omega \to \mathbb{R}^d$ be a measurable vector-valued function on $\Omega$.
Lemma D.3. Using the notation and hypothesis of Proposition 3.1, for $t \in (0,1)$ let $D_t := \{x : v_t(x) \neq 0\}$. Then, $\gamma_t \wedge \nu_0 \equiv \gamma_t \wedge \nu_j \equiv 0$ over $D_t$.

Proof. We prove this for $\gamma_t \wedge \nu_0$; the other case is analogous. Suppose, by contradiction, that $\gamma_t \wedge \nu_0$ is not the null measure on $D_t$. By Lemma D.1, we know that $\gamma^*$ is optimal for $\mathrm{OT}(\gamma^*_0, \gamma^*_1)$. Since we are also assuming that $\gamma^*$ is induced by a map $T$, i.e., $\gamma^* = (I \times T)_\#\gamma^*_0$, then $T$ is an optimal Monge map between $\gamma^*_0$ and $\gamma^*_1$. Therefore, the map $T_t: \Omega \to \Omega$ is invertible (see, for example, (Santambrogio, 2015, Lemma 5.29)). For ease of notation, we denote $\tilde\nu_0 := (\gamma_t \wedge \nu_0)|_{D_t}$, viewed as a measure on $D_t$. The idea of the proof is to exhibit a plan $\tilde\gamma \in \Gamma_{\le}(\mu_0, \mu_j)$ with smaller $\mathrm{OPT}_\lambda$ cost than $\gamma^*$, which will be a contradiction since $\gamma^*$ is optimal. Let us define

$$\tilde\gamma := \gamma^* + (I, T \circ T_t^{-1})_\#\tilde\nu_0 - (I, T)_\#(T_t^{-1})_\#\tilde\nu_0.$$

Then $\tilde\gamma \in \Gamma_{\le}(\mu_0, \mu_j)$, since

$$\pi_{0\#}\tilde\gamma = \pi_{0\#}\gamma^* + \tilde\nu_0 - (T_t^{-1})_\#\tilde\nu_0 \le \gamma^*_0 + \tilde\nu_0 \le \mu_0,$$
$$\pi_{1\#}\tilde\gamma = \pi_{1\#}\gamma^* + (T \circ T_t^{-1})_\#\tilde\nu_0 - (T \circ T_t^{-1})_\#\tilde\nu_0 = \gamma^*_1$$

(and we know $\gamma^*_1 \le \mu_j$). From the last equation, we can also conclude that $|\tilde\gamma| = |\gamma^*_1| = |\gamma^*|$. Therefore, the partial transportation cost of $\tilde\gamma$ is

$$\int_{\Omega^2} \|x - y\|^2\, d\tilde\gamma(x, y) + \lambda\big(|\mu_0| + |\mu_j| - 2|\tilde\gamma|\big)$$
$$= \underbrace{\int_{\Omega^2} \|x - y\|^2\, d\gamma^*(x, y) + \lambda\big(|\mu_0| + |\mu_j| - 2|\gamma^*|\big)}_{\mathrm{OPT}_\lambda(\mu_0, \mu_j)} + \int \Big[\,\|x - T(T_t^{-1}(x))\|^2 - \|T_t^{-1}(x) - T(T_t^{-1}(x))\|^2\Big]\, d\tilde\nu_0(x) < \mathrm{OPT}_\lambda(\mu_0, \mu_j),$$

where in the last inequality we used that, for $y = T_t^{-1}(x)$,

$$\|T_t(y) - T(y)\| = \|(1-t)y + tT(y) - T(y)\| = (1-t)\|y - T(y)\| < \|y - T(y)\| \quad \text{for } t \in (0,1).$$

Lemma D.4. Using the notation and hypothesis of Proposition 3.1, for $t \in (0,1)$ the vector-valued measures $v_t\,\nu_0$ and $v_t\,\nu_j$ are identically zero.

Proof. We prove this for $v_t\,\nu_0$; the other case is analogous. We recall that the measure $v_t\,\nu_0$ is determined by the integrals

$$\int_\Omega \Phi(x) \cdot v_t(x)\, d\nu_0(x)$$

over all measurable functions $\Phi: \Omega \to \mathbb{R}^d$, where $\Phi(x) \cdot v_t(x)$ denotes the usual dot product in $\mathbb{R}^d$ between the vectors $\Phi(x)$ and $v_t(x)$. From Lemma D.3, we know that $\gamma_t \wedge \nu_0 \equiv 0$ over $D_t$; therefore, $\gamma_t$ and $\nu_0$ are mutually singular on that set. This implies that we can decompose $D_t$ into two disjoint sets $D_1, D_2$ such that $\gamma_t(D_1) = 0 = \nu_0(D_2)$. Since $v_t$ is a function defined $\gamma_t$-almost everywhere (up to sets of null measure with respect to $\gamma_t$), we can assume, without loss of generality, that $v_t \equiv 0$ on $D_1$. Let $\Phi: \Omega \to \mathbb{R}^d$ be a measurable vector-valued function on $\Omega$. Using that $v_t \equiv 0$ on $\Omega\setminus D_t$ and on $D_1$, and that $\nu_0(D_2) = 0$, we obtain

$$\int_\Omega \Phi(x)\cdot v_t(x)\, d\nu_0(x) = \int_{\Omega\setminus D_t} \Phi(x)\cdot v_t(x)\, d\nu_0(x) + \int_{D_1} \Phi(x)\cdot v_t(x)\, d\nu_0(x) + \int_{D_2} \Phi(x)\cdot v_t(x)\, d\nu_0(x) = 0.$$

Since $\Phi$ was arbitrary, we conclude that $v_t\,\nu_0 \equiv 0$.

Lemma D.5. Using the notation and hypothesis of Proposition 3.1, the measure $\bar\gamma_t = (1-t)\nu_0 + t\nu_j$ satisfies the equation

$$\begin{cases} \partial_t\bar\gamma + \nabla\cdot(\bar\gamma v) = \zeta, \\ \bar\gamma_0 = \nu_0, \quad \bar\gamma_1 = \nu_j. \end{cases} \tag{50}$$

Proof. From Lemma D.4, we have $v_t\,\bar\gamma_t = (1-t)\,v_t\,\nu_0 + t\,v_t\,\nu_j = 0$, so $\nabla\cdot(\bar\gamma v) = 0$. It is easy to see that $\bar\gamma$ satisfies the boundary conditions by substituting $t = 0$ and $t = 1$ in its definition. Also, $\partial_t\bar\gamma = \nu_j - \nu_0$. Then, since $\zeta_t = \nu_j - \nu_0$ for every $t$, we get $\partial_t\bar\gamma + \nabla\cdot(\bar\gamma v) = \nu_j - \nu_0 + 0 = \zeta_t$.

Proof of Proposition 3.1. First, we address the fact that $v: [0,1]\times\Omega \to \mathbb{R}^d$ is well defined. Indeed, by Lemma D.1, $\gamma^* = (I\times T)_\#\gamma^*_0$ is optimal for $\mathrm{OT}(\gamma^*_0, \gamma^*_1)$. Thus, for each $(t, x) \in [0,1]\times\Omega$, by the so-called cyclical monotonicity property of the support of classical optimal transport plans, there exists at most one $x_0 \in \mathrm{supp}(\gamma^*_0)$ such that $T_t(x_0) = x$; see (Santambrogio, 2015, Lemma 5.29). So, $v_t(x)$ in (30) is well defined by understanding its definition as

$$v_t(x) = \begin{cases} T(x_0) - x_0 & \text{if } x = T_t(x_0) \text{ for some } x_0 \in \mathrm{supp}(\gamma^*_0), \\ 0 & \text{elsewhere.} \end{cases}$$

Now, we check that $(\rho, v, \zeta)$ defined in (29), (30), (31) is a solution of (26). In fact, $\rho_t = \gamma_t + \bar\gamma_t$, where $\gamma_t$ is given by (28) and $\bar\gamma_t$ is as in Lemma D.5. Then, from Section 2.2, $(\gamma, v)$ solves the continuity equation (5), since $\gamma^* = (I\times T)_\#\gamma^*_0 \in \Gamma(\gamma^*_0, \gamma^*_1)$ (Lemma D.1), $\gamma_t = (T_t)_\#\gamma^*_0$ coincides with (9) in this context, and $v_t$ coincides with (8) (the support of $v_t$ lies in the support of $(T_t)_\#\gamma^*_0$). On the other hand, from Lemma D.5, $\bar\gamma_t$ solves (50). Therefore, by linearity,

$$\partial_t\rho_t + \nabla\cdot(\rho_t v_t) = \underbrace{\partial_t\gamma_t + \nabla\cdot(\gamma_t v_t)}_{0} + \underbrace{\partial_t\bar\gamma_t + \nabla\cdot(\bar\gamma_t v_t)}_{\zeta_t} = \zeta_t.$$

Finally, plugging $(\rho, v, \zeta)$ into the objective function in (25), we have

$$\int_{[0,1]\times\Omega} \|v\|^2\, d\rho + \lambda|\zeta| = \int_0^1\!\!\int_\Omega \|v_t\|^2\, d\gamma_t\, dt + \lambda|\zeta| = \int_\Omega \|T(x_0) - x_0\|^2\, d\gamma^*_0(x_0) + \lambda\big(|\nu_0| + |\nu_j|\big)$$
$$= \int_{\Omega^2} \|x_0 - x_j\|^2\, d\gamma^*(x_0, x_j) + \lambda\big(|\mu_0 - \gamma^*_0| + |\mu_j - \gamma^*_1|\big) = \mathrm{OPT}_\lambda(\mu_0, \mu_j),$$

where $|\zeta| = |\nu_0| + |\nu_j|$ because, by Lemma D.2, $\nu_0$ and $\nu_j$ are mutually singular, and the last equality holds since $\gamma^*$ is optimal in $\Gamma_{\le}(\mu_0, \mu_j)$. So, this shows that $(\rho, v, \zeta)$ is a minimizer of (25). The "moreover" part follows from the above identities, using that

$$\int_\Omega \|v_0\|^2\, d\gamma^*_0(x_0) + \lambda\big(|\nu_0| + |\nu_j|\big) = \int_\Omega \min\big(\|v_0\|^2, 2\lambda\big)\, d\gamma^*_0(x_0) + \lambda\big(|\nu_0| + |\nu_j|\big)$$

for $v_0(x_0) = T(x_0) - x_0$, since $\gamma^*$ is such that $\|x_0 - x_j\|^2 < 2\lambda$ for all $(x_0, x_j) \in \mathrm{supp}(\gamma^*)$ (Bai et al., 2022, Lemma 3.2).
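For empirical measures, the curve $\rho_t = (T_t)_\#\gamma^*_0 + (1-t)\nu_0 + t\nu_j$ constructed in the proof can be computed directly from a discrete optimal partial plan. The sketch below (ours; it reuses `partial_ot_plan` from the previous sketch) implements the three terms: transported atoms move along straight lines with constant mass, destroyed mass fades out at its source, and created mass fades in at its destination.

```python
import numpy as np

def opt_interpolation(x0, p0, xj, pj, gamma, t):
    """Discrete version of rho_t from Proposition 3.1, where `gamma` is the
    transported part of an optimal partial plan (e.g. from partial_ot_plan)."""
    ii, jj = np.nonzero(gamma)
    # (T_t)_# gamma*_0: transported atoms at (1 - t) x0_i + t xj_j, fixed mass
    pos = (1 - t) * x0[ii] + t * xj[jj]
    mass = gamma[ii, jj]
    nu0 = p0 - gamma.sum(axis=1)   # destroyed mass, carried with weight (1 - t)
    nuj = pj - gamma.sum(axis=0)   # created mass, carried with weight t
    positions = np.vstack([pos, x0, xj])
    weights = np.concatenate([mass, (1 - t) * nu0, t * nuj])
    keep = weights > 1e-12         # drop numerically empty atoms
    return positions[keep], weights[keep]
```

At $t = 0$ this returns a representation of $\mu_0$ and at $t = 1$ a representation of $\mu_j$, while for intermediate $t$ the transported portion behaves exactly like an OT displacement interpolation, in contrast with the HK geodesic discussed in Section H.4.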
E. Proof of Theorem 3.5

The proof will be similar to that of Lemma 2.1.

Proof. We fix $\gamma^j \in \Gamma_{\le}(\mu_0, \mu_j)$ optimal for $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$. We will understand the second marginal $\pi_{1\#}\gamma^j$ induced by $\gamma^j$ either as the vector $\gamma^j_1 := (\gamma^j)^T \mathbb{1}_{N_0}$ or as the measure $\sum_{m=1}^{N_j} (\gamma^j_1)_m \delta_{x^j_m}$, and, by abuse of notation, we will write $\pi_{1\#}\gamma^j = \gamma^j_1$ (analogously for $\pi_{0\#}\gamma^j$). Let $\hat\gamma^j := \mathrm{diag}(\hat{p}^j_1, \ldots, \hat{p}^j_{N_0})$ denote the transportation plan induced by the mapping $x^0_k \mapsto \hat{x}^j_k$, and let $D := \{k \in \{1, \ldots, N_0\} : \hat{p}^j_k > 0\}$. Given any $\hat\gamma \in \Gamma_{\le}(\mu_0, \hat\mu_j)$, we need to show

$$C(\hat\gamma^j; \mu_0, \hat\mu_j, \lambda) \le C(\hat\gamma; \mu_0, \hat\mu_j, \lambda). \tag{51}$$

We recall that we denote by $p^0$ the vector of weights of the discrete measure $\mu_0$ (analogously, $p^j$ and $\hat{p}^j$ are the vectors of weights of $\mu_j$ and $\hat\mu_j$, respectively).

In general, if we consider any transportation plan $\gamma' \in \Gamma_{\le}(\mu_0, \gamma^j_1)$, we have that

$$C(\gamma'; \mu_0, \gamma^j_1, \lambda) = \sum_{k=1}^{N_0}\sum_{i=1}^{N_j} \|x^0_k - x^j_i\|^2 \gamma'_{k,i} + \lambda\big(|p^0| + |\gamma^j_1| - 2|\gamma'|\big)$$
$$= \sum_{k=1}^{N_0}\sum_{i=1}^{N_j} \|x^0_k - x^j_i\|^2 \gamma'_{k,i} + \lambda\big(|p^0| + |p^j| - 2|\gamma'|\big) + \lambda\big(|\gamma^j_1| - |p^j|\big) = C(\gamma'; \mu_0, \mu_j, \lambda) + \lambda\big(|\gamma^j_1| - |p^j|\big).$$

In particular,

$$C(\gamma^j; \mu_0, \gamma^j_1, \lambda) = C(\gamma^j; \mu_0, \mu_j, \lambda) + \lambda\big(|\gamma^j_1| - |p^j|\big).$$

Since $\gamma^j$ is optimal for $\mathrm{OPT}_\lambda(\mu_0, \mu_j)$, we have $C(\gamma^j; \mu_0, \mu_j, \lambda) \le C(\gamma'; \mu_0, \mu_j, \lambda)$, and thus $C(\gamma^j; \mu_0, \gamma^j_1, \lambda) \le C(\gamma'; \mu_0, \gamma^j_1, \lambda)$.

However, we need more control over the bounds in order to compare the different costs. Therefore, as in Lemma 2.1, we will define a particular plan $\gamma$ to use as a pivot and obtain (51). With a slight abuse of notation, we define

$$\gamma := \hat\gamma\, (\hat\gamma^j)^{-1} \gamma^j, \qquad \text{where } \big((\hat\gamma^j)^{-1}\big)_{k,k} := \begin{cases} 1/\hat\gamma^j_{k,k} & \text{if } k \in D, \\ 0 & \text{elsewhere,} \end{cases}$$

so that, entrywise, $\gamma_{k,i} = \sum_{k' \in D} \hat\gamma_{k,k'}\, \gamma^j_{k',i} / \hat\gamma^j_{k',k'}$.

First, we claim $\gamma \in \Gamma_{\le}(\mu_0, \gamma^j_1)$. Indeed, we have

$$\gamma \mathbb{1}_{N_j} = \hat\gamma (\hat\gamma^j)^{-1} \gamma^j \mathbb{1}_{N_j} = \hat\gamma (\hat\gamma^j)^{-1} \hat p^j = \hat\gamma \beta \le \hat\gamma \mathbb{1}_{N_0} \le p^0, \tag{52}$$
$$\gamma^T \mathbb{1}_{N_0} = (\gamma^j)^T (\hat\gamma^j)^{-1} \hat\gamma^T \mathbb{1}_{N_0} \le (\gamma^j)^T (\hat\gamma^j)^{-1} \hat p^j = (\gamma^j)^T \beta = \gamma^j_1, \tag{53}$$

where $\beta \in \mathbb{R}^{N_0\times 1}$ is defined by $\beta_k = 1$ if $k \in D$ and $\beta_k = 0$ elsewhere. In fact, in (52), the first inequality follows from the fact that each entry of the $N_0\times 1$ vector $(\hat\gamma^j)^{-1}\hat p^j = \beta$ is either 0 or 1, and the second inequality holds since $\hat\gamma \in \Gamma_{\le}(\mu_0, \hat\mu_j)$. In (53), the inequality follows from the fact that $\hat\gamma \in \Gamma_{\le}(\mu_0, \hat\mu_j)$, so that $\hat\gamma^T \mathbb{1}_{N_0} \le \hat p^j$.

In addition, since $\hat\gamma^T \mathbb{1}_{N_0} \le \hat p^j$ and $\hat p^j_k = 0$ for $k \notin D$, the $k$-th column of $\hat\gamma$ vanishes for every $k \notin D$. Thus, $\gamma \mathbb{1}_{N_j} = \hat\gamma\beta = \hat\gamma \mathbb{1}_{N_0}$; that is, the first marginals of $\gamma$ and $\hat\gamma$ are the same, and hence $|\gamma| = |\hat\gamma|$.

We now compute the transportation costs induced by $\gamma^j$, $\gamma$, $\hat\gamma^j$, $\hat\gamma$. Using $(\gamma^j_0)_k = \hat p^j_k$ for $k \in D$,

$$C(\gamma^j; \mu_0, \gamma^j_1, \lambda) = \sum_{k\in D}\sum_{i=1}^{N_j} \|x^0_k - x^j_i\|^2 \gamma^j_{k,i} + \lambda\big(|p^0| + |\gamma^j_1| - 2|\gamma^j|\big)$$
$$= \sum_{k\in D} \|x^0_k\|^2 \hat p^j_k + \sum_{i=1}^{N_j} \|x^j_i\|^2 (\gamma^j_1)_i - 2\sum_{k\in D}\sum_{i=1}^{N_j} x^0_k\cdot x^j_i\, \gamma^j_{k,i} + \lambda\big(|p^0| + |\gamma^j_1| - 2|\gamma^j|\big).$$

Similarly, since $\hat\gamma^j_{k,k} = \hat p^j_k$ and $\hat p^j_k\, \hat x^j_k = \sum_i \gamma^j_{k,i}\, x^j_i$, and using $|\hat p^j| = |\hat\gamma^j| = |\gamma^j| = |\gamma^j_1|$,

$$C(\hat\gamma^j; \mu_0, \hat\mu_j, \lambda) = \sum_{k\in D} \|x^0_k\|^2 \hat p^j_k + \sum_{k\in D} \|\hat x^j_k\|^2 \hat p^j_k - 2\sum_{k\in D}\sum_{i=1}^{N_j} x^0_k\cdot x^j_i\, \gamma^j_{k,i} + \lambda\big(|p^0| + |\hat p^j| - 2|\hat\gamma^j|\big)$$
$$= C(\gamma^j; \mu_0, \gamma^j_1, \lambda) - \sum_{i=1}^{N_j} \|x^j_i\|^2 (\gamma^j_1)_i + \sum_{k\in D} \|\hat x^j_k\|^2 \hat p^j_k. \tag{54}$$

Analogously,

$$C(\gamma; \mu_0, \gamma^j_1, \lambda) = \sum_{k=1}^{N_0} \|x^0_k\|^2 (\gamma_0)_k + \sum_{i=1}^{N_j} \|x^j_i\|^2 (\gamma_1)_i - 2\sum_{k=1}^{N_0}\sum_{k'\in D}\sum_{i=1}^{N_j} x^0_k\cdot x^j_i\, \hat\gamma_{k,k'}\frac{\gamma^j_{k',i}}{\hat\gamma^j_{k',k'}} + \lambda\big(|p^0| + |\gamma^j_1| - 2|\gamma|\big),$$

$$C(\hat\gamma; \mu_0, \hat\mu_j, \lambda) = \sum_{k=1}^{N_0} \|x^0_k\|^2 (\hat\gamma_0)_k + \sum_{k'\in D} \|\hat x^j_{k'}\|^2 (\hat\gamma_1)_{k'} - 2\sum_{k=1}^{N_0}\sum_{k'\in D}\sum_{i=1}^{N_j} x^0_k\cdot x^j_i\, \hat\gamma_{k,k'}\frac{\gamma^j_{k',i}}{\hat\gamma^j_{k',k'}} + \lambda\big(|p^0| + |\hat p^j| - 2|\hat\gamma|\big)$$
$$= C(\gamma; \mu_0, \gamma^j_1, \lambda) - \sum_{i=1}^{N_j} \|x^j_i\|^2 (\gamma_1)_i + \sum_{k'\in D} \|\hat x^j_{k'}\|^2 (\hat\gamma_1)_{k'}, \tag{55}$$

since the first marginals of $\gamma$ and $\hat\gamma$ coincide and $|\gamma| = |\hat\gamma|$.

Let $\bar p := \pi_{1\#}\hat\gamma$ (as a vector of weights); note that $\bar p \le \hat p^j$ and that $(\gamma_1)_i = \sum_{k\in D} \gamma^j_{k,i}\,\bar p_k/\hat p^j_k$. Combining (54) and (55), we have

$$C(\hat\gamma^j; \mu_0, \hat\mu_j, \lambda) - C(\hat\gamma; \mu_0, \hat\mu_j, \lambda) = C(\gamma^j; \mu_0, \gamma^j_1, \lambda) - C(\gamma; \mu_0, \gamma^j_1, \lambda) + \sum_{i=1}^{N_j} \|x^j_i\|^2 \sum_{k\in D} \gamma^j_{k,i}\,\frac{\bar p_k - \hat p^j_k}{\hat p^j_k} + \sum_{k\in D} \|\hat x^j_k\|^2 \big(\hat p^j_k - \bar p_k\big)$$
$$\le \sum_{i=1}^{N_j} \|x^j_i\|^2 \sum_{k\in D} \gamma^j_{k,i}\,\frac{\bar p_k - \hat p^j_k}{\hat p^j_k} + \sum_{k\in D} \|\hat x^j_k\|^2 \big(\hat p^j_k - \bar p_k\big) =: F(\bar p), \tag{56}$$

where the inequality holds since $\gamma^j$ is optimal for $\mathrm{OPT}_\lambda(\mu_0, \gamma^j_1)$ and $\gamma \in \Gamma_{\le}(\mu_0, \gamma^j_1)$.

Note that if $k \notin D$, then $\bar p_k = \hat p^j_k = 0$, while for $k \in D$ we can rewrite

$$F(\bar p) = \sum_{k\in D} \big(\hat p^j_k - \bar p_k\big)\left(\|\hat x^j_k\|^2 - \sum_{i:\,\gamma^j_{k,i}>0} \|x^j_i\|^2\, \frac{\gamma^j_{k,i}}{\hat p^j_k}\right), \qquad \text{with} \qquad \sum_{i:\,\gamma^j_{k,i}>0} \|x^j_i\|^2\, \frac{\gamma^j_{k,i}}{\hat p^j_k} - \|\hat x^j_k\|^2 \ge 0,$$

where the inequality holds by the fact that $\sum_{i:\,\gamma^j_{k,i}>0} \gamma^j_{k,i}/\hat p^j_k = 1$ and Jensen's inequality (in particular, $\mathbb{E}(\|X\|^2) \ge \|\mathbb{E}[X]\|^2$ for a random vector $X$). Combining this with the fact that $F(\bar p) = 0$ when each $\bar p_k$ attains its maximum $\hat p^j_k$, we have $F(\bar p) \le 0$ for all $0 \le \bar p \le \hat p^j$. Thus $C(\hat\gamma^j; \mu_0, \hat\mu_j, \lambda) \le C(\hat\gamma; \mu_0, \hat\mu_j, \lambda)$ by (56), and we conclude the proof.
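In practice, the projected measure $\hat\mu_j$ that Theorem 3.5 is stated for, together with the embedding $u^j = \hat x^j - x^0$, can be computed directly from a discrete optimal partial plan. The following is our sketch, following the formulas above (the helper name `lopt_embedding` and the convention $\hat x^j_k = x^0_k$ off the set $D$ are ours):

```python
import numpy as np

def lopt_embedding(x0, p0, xj, pj, gamma):
    """Barycentric projection of mu_j with respect to mu_0 and the resulting
    LOPT embedding. `gamma` is the transported part of an optimal partial
    plan between mu_0 and mu_j (e.g. from partial_ot_plan)."""
    hat_p = gamma.sum(axis=1)              # hat p^j_k: mass through x^0_k
    hat_x = x0.copy()                      # convention: keep x^0_k off D
    D = hat_p > 0                          # the set D in the proof
    hat_x[D] = (gamma[D] @ xj) / hat_p[D, None]   # hat x^j_k (barycenters)
    u = hat_x - x0                         # u^j_k = hat x^j_k - x^0_k
    return hat_p, hat_x, u
```

By Theorem 3.5, the diagonal plan $\mathrm{diag}(\hat p^j)$ is then already optimal for $\mathrm{OPT}_\lambda(\mu_0, \hat\mu_j)$, so no further transport problem needs to be solved in the embedded space.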
F. Proof of Theorem 3.6

Proof. Without loss of generality, we assume that the optimal plan $\gamma^j \in \Gamma_{\le}(\mu_0, \mu_j)$ is induced by a one-to-one map. We can suppose this because we will end up with an $N_0$-point representation of $\mu_j$ when we form $\hat\mu_j$: if two points $x^0_n$ and $x^0_m$ are mapped to the same point $T(x^0_n) = T(x^0_m)$, when performing the barycentric projection we can split that point into two copies with the corresponding labels. (The moral is that the labels matter, rather than where the mass is located.) Therefore, $\gamma^j \in \mathbb{R}^{N_0\times N_j}$ is such that each row and each column contains at most one positive entry, all others being zero. Let $\hat\gamma := \mathrm{diag}(\hat p^j_1, \ldots, \hat p^j_{N_0})$.

Then, by the definition of $\hat\mu_j$, we have $C(\gamma^j; \mu_0, \gamma^j_1, \lambda) = C(\hat\gamma; \mu_0, \hat\mu_j, \lambda)$. Thus,

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) = C(\gamma^j; \mu_0, \mu_j, \lambda) = C(\gamma^j; \mu_0, \gamma^j_1, \lambda) + \lambda\big(|\mu_j| - |\gamma^j_1|\big) = C(\hat\gamma; \mu_0, \hat\mu_j, \lambda) + \lambda\big(|\mu_j| - |\hat\mu_j|\big) = \mathrm{OPT}_\lambda(\mu_0, \hat\mu_j) + \lambda\big(|\mu_j| - |\hat\mu_j|\big),$$

where the last equality holds by Theorem 3.5. This concludes the proof. Moreover, from the above identities (for discrete measures) we can express

$$\mathrm{OPT}_\lambda(\mu_0, \mu_j) = \sum_{k=1}^{N_0} \hat p^j_k\, \|x^0_k - \hat x^j_k\|^2 + \lambda\,|p^0 - \hat p^j| + \lambda\big(|p^j| - |\hat p^j|\big).$$

G. Proof of Proposition 3.7

Proof. The result follows from (40) and (35). Indeed, we have

$$\mathrm{LOPT}_{\lambda,\mu_0}(\hat\mu_i, \hat\mu_j) = \|u^i - u^j\|^2_{\hat p^i \wedge \hat p^j,\, 2\lambda} + \lambda\,|\hat p^i - \hat p^j|$$

and

$$\mathrm{LOPT}_{\lambda,\mu_0}(\mu_i, \mu_j) = \|u^i - u^j\|^2_{\hat p^i \wedge \hat p^j,\, 2\lambda} + \lambda\,|\hat p^i - \hat p^j| + \lambda\big(|\nu_i| + |\nu_j|\big),$$

since the LOPT barycentric projection of $\mu_i$ is basically $\mu_i$ without the mass that needs to be created from the reference (and analogously for $\mu_j$).

H. Applications

For completeness, we expand on the experiments and discussion presented in Section 4, as well as on Figure 1, which contrasts the HK technique with OPT from the point of view of interpolation of measures.

First, we recall that, similar to LOT (Wang et al., 2013; Moosmüller & Cloninger, 2023), the goal of LOPT is not exclusively to approximate OPT, but to propose new transport-based metrics (or discrepancy measures) that are computationally less expensive and easier to work with than OT or OPT, specifically when many measures must be compared. Also, one of the main advantages of the linearization step is that it allows us to embed sets of probability (resp. positive finite) measures into a linear space (the $T_{\mu_0}$ space). Moreover, it does so in a way that allows us to use the $T_{\mu_0}$-metric in that space as a proxy (or replacement) for more complicated transport metrics while preserving the natural properties of transport theory. As a consequence, data analysis can be performed using the Euclidean metric in a simple vector space.

H.1. Approximation of OPT Distance

For a better understanding of the errors plotted in Figure 2, Figure 6 shows the histograms of the relative errors for different values of $\lambda$ and each number of measures $K = 5, 10, 15$.

Figure 6. Histograms of the relative errors (depicted in Figure 2) between $\mathrm{OPT}_\lambda$ and $\mathrm{LOPT}_{\lambda,\mu_0}$. (Number of samples $N = 500$; number of repetitions $= 10$; measures on $\mathbb{R}^2$.)

As said in Section 4, we recall that for these experiments we created $K$ point sets of size $N$ from $K$ different Gaussian distributions in $\mathbb{R}^2$. In particular, $\mu_i \sim \mathcal{N}(m_i, I)$, where $m_i$ is randomly selected such that $\|m_i\| = 3$ for $i = 1, \ldots, K$. For the reference, we picked an $N$-point representation of $\mu_0 \sim \mathcal{N}(m, I)$ with $m = \sum_i m_i / K$. For Figures 2 and 6, the sample size $N$ was set to 500. In what follows, we include tests for $N = 200, 250, 300, \ldots, 950, 1000$ and $K = 2, 4$. For each pair $(N, K)$, we repeated the experiment 10 times. The relative errors are shown in Figure 7. For large $\lambda$, most mass is transported, $\mathrm{OT}(\mu_i, \mu_j) \approx \mathrm{OPT}_\lambda(\mu_i, \mu_j)$, the performance of LOPT is close to that of LOT, and the relative error is small. For small $\lambda$, almost no mass is transported,

$$\mathrm{OPT}_\lambda(\mu_i, \mu_j) \approx \lambda\big(|\mu_i| + |\mu_j|\big) \approx \lambda\big(|\mu_i| - |\hat\mu_i| + |\mu_j| - |\hat\mu_j|\big) \approx \mathrm{LOPT}_{\lambda,\mu_0}(\mu_i, \mu_j),$$

and we still have a small error. In between, e.g., $\lambda = 5$, we have the largest relative error. Similar results were obtained by setting the reference as the OT barycenter.

Figure 7. The average relative error between $\mathrm{OPT}_\lambda$ and $\mathrm{LOPT}_{\lambda,\mu_0}$ over all pairs of discrete measures $\mu_i$ and $\mu_j$, as a function of the size $N$ of each $\mu_i$ ($N = 200, \ldots, 1000$), for $K \in \{2, 4\}$ and $\lambda \in \{1, 5, 10\}$.
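To make the comparison in H.1 concrete, the sketch below computes both quantities: the direct $\mathrm{OPT}_\lambda$ value and its LOPT surrogate built from the embeddings against a common reference. It reuses `partial_ot_plan` and `lopt_embedding` from the earlier sketches, and it uses our reading of (35)/(40) for the LOPT formula, namely a $2\lambda$-truncated weighted norm on the common mass $\hat p^i \wedge \hat p^j$ plus a $\lambda$ penalty on the mass mismatch; this reading, as well as all helper names and the illustrative parameter values, should be checked against the definitions in the main text.

```python
import numpy as np
import ot

def opt_cost(xi, pi, xj, pj, lam):
    """Direct OPT_lambda value from the transported part of an optimal plan."""
    g = partial_ot_plan(xi, pi, xj, pj, lam)
    return (ot.dist(xi, xj) * g).sum() + lam * (pi.sum() + pj.sum() - 2 * g.sum())

def lopt_distance(ui, pi, uj, pj, lam):
    """LOPT_{lam, mu0} between embedded measures (our reading of (35)/(40))."""
    common = np.minimum(pi, pj)                       # hat p^i wedge hat p^j
    sq = ((ui - uj) ** 2).sum(axis=1)
    return (np.minimum(sq, 2 * lam) * common).sum() + lam * np.abs(pi - pj).sum()

# Experiment in the spirit of H.1: K Gaussian clouds, reference at the mean.
rng = np.random.default_rng(0)
lam, N, K = 5.0, 200, 4
ms = [3 * v / np.linalg.norm(v) for v in rng.normal(size=(K, 2))]  # ||m_i|| = 3
clouds = [(m + rng.normal(size=(N, 2)), np.ones(N) / N) for m in ms]
x0 = np.mean(ms, axis=0) + rng.normal(size=(N, 2))                 # mu_0 sample
p0 = np.ones(N) / N
embs = [lopt_embedding(x0, p0, x, p, partial_ot_plan(x0, p0, x, p, lam))
        for x, p in clouds]
for i in range(K):
    for j in range(i + 1, K):
        exact = opt_cost(*clouds[i], *clouds[j], lam)
        approx = lopt_distance(embs[i][2], embs[i][0], embs[j][2], embs[j][0], lam)
        print(i, j, abs(exact - approx) / exact)      # relative error
```

Note the cost asymmetry that motivates the framework: the loop above solves only $K$ partial transport problems for the embeddings, while the exact pairwise comparison needs one per pair.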
H.2. PCA analysis

For problems where pairwise comparisons between $K$ distributions are needed, in the classical optimal (partial) transport setting we have to solve $\binom{K}{2}$ OT (resp. OPT) problems. In the LOT (resp. LOPT) framework, however, one only needs to solve $K$ OT (resp. OPT) problems (matching each distribution with a reference measure).

A ubiquitous example is clustering or classification of measures, where the number of target measures $K$, each representing a different sample, is usually very large. For the particular case of data analysis on Point Cloud MNIST, after using PCA in the embedding space (see Figure 5), it can be observed that the LOT framework is a natural option for separating different digits, but its equal-mass requirement is too restrictive in the presence of noise. LOPT performs better since it does not have this restriction. This is one example where the number of measures, $K = 900$, is much larger than 2.
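As a sketch of this pipeline (our reconstruction, with an arbitrary featurization, namely the flattened displacements $u^i$ concatenated with the projected weights $\hat p^i$; the actual features behind Figure 5 may differ):

```python
import numpy as np
from sklearn.decomposition import PCA  # assumed available

# Embed the K measures once against the common reference (K OPT problems),
# then run ordinary PCA in the linear T_{mu_0} space. `embs` is a list of
# (hat_p, hat_x, u) triples as produced by lopt_embedding above.
feats = np.stack([np.concatenate([u.ravel(), hat_p]) for hat_p, _, u in embs])
coords = PCA(n_components=2).fit_transform(feats)  # 2-D scatter of K measures
```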
H.3. Point Cloud Interpolation

Here we add the results of the experiments mentioned in Section 4, which complete Figure 4; in the new Figure 8, Figure 4 corresponds to subfigure 8b. We conducted experiments using three digits (0, 1, and 9) from the Point Cloud MNIST dataset, with 300 point sets per digit. We utilized the LOPT embedding to calculate and visualize the interpolation between pairs of digits, choosing the barycenter of 0, 1, and 9 as the reference. To avoid redundant results, in the main paper we only demonstrated the interpolation between 0 and 9; the remaining plots for the other digit pairs, using different levels of noise $\eta$, are included here for completeness. Later, in Section H.4, Figure 11 will add the plots for the HK technique and its linearized version.

Figure 8. Interpolation between two point clouds at different times $t \in \{0, 0.25, 0.5, 0.75, 1\}$ and noise levels $\eta \in \{0, 0.5, 0.75\}$, for the different interpolation approaches (OT, LOT, OPT, LOPT). For LOT and LOPT, the reference measure is the barycenter of the point clouds 0, 1, and 9 with no noise. Top row: interpolation between digits 0 and 9. Bottom row: interpolation between digits 0 and 1.

H.4. Preliminary comparisons between LHK and LOPT

The contrasts between the Linearized Hellinger-Kantorovich (LHK) (Cai et al., 2022) and LOPT approaches come from the differences between the HK (Hellinger-Kantorovich) and OPT distances. The main issue is that, for HK transportation between two measures, the transported portion of the mass does not resemble the OT geodesic, where mass is preserved. In other words, HK changes the mass as it transports it, while OPT preserves it. This issue is depicted in Figure 1. In that figure, both the initial (blue) and final (purple) distributions are unit-mass delta measures at different locations: for HK, the mass decreases and then increases while being transported, while for OPT it remains constant. To illustrate this point better, we include here Figure 9, which is the two-dimensional analog of Figure 1.

Figure 9. HK vs. OPT interpolation between two delta measures of unit mass located at different positions. Two scenarios are shown for each transport framework, varying the respective parameters ($s$ for HK, $\lambda$ for OPT). Blue circles stand for the cases when the geodesic transports mass. Triangular shapes represent destruction and creation of mass: triangles pointing down (orange) indicate the locations where mass will be destroyed, and triangles pointing up (green) indicate the locations where mass will be created. On top of that, shadows are added to emphasize the change of mass from the initial and final configurations. The top two plots exhibit the two extreme cases of the HK geodesic: when $s$ is small, everything is created/destroyed; when $s$ is large, everything is transported, but without mass preservation. The bottom two plots show the two analogous cases for OPT geodesics: when $\lambda$ is small, we observe creation/destruction; when $\lambda$ is large, we have mass-preserving transportation. The intermediate cases are treated in the following.

In addition, in Figure 10 we compare not only HK and OPT, but also LHK and LOPT. The top subfigure 10a shows the measures $\mu_1$ (blue dots) and $\mu_2$ (orange crosses) to be interpolated with the different techniques in subfigures 10b and 10c. In 10a we also plot the reference $\mu_0$ (green triangles), which is used only in experiment 10c. The measures $\mu_1$ and $\mu_2$ are interpreted as follows: the three masses in the interior, forming a triangle, can be considered the signal, and the two masses on the corners can be considered noise. That is, we can assume they are just two noisy point-cloud representations of the same shifted distribution. We want to transport the three masses in the middle without affecting their mass and relative positions too much.

Subfigure 10b shows the HK and OPT interpolations between the two measures $\mu_1$ and $\mu_2$. In the HK case, we notice not only how the masses vary, but also how their relative positions change, resulting in a very different configuration at the end of the interpolation. OPT instead returns a more natural interpolation, because of the mass preservation of the transported portion and the decoupling between transport and destruction/creation of mass.

Finally, subfigure 10c shows the interpolation of $\mu_1$ and $\mu_2$ for LHK and LOPT. The reference measure $\mu_0$ is the same for both, and we can see the denoising effect due to the fact that, for the three measures, the mass is concentrated in the triangular region in the center. However, while mass- and structure-preserving transportation can be seen for LOPT, for LHK the shape of the configuration changes.

On top of that, as is often the case in quantization, point-cloud representations of measures are given as samples with uniform mass. OPT/LOPT interpolation will not change the mass of each transported point. Therefore, the intermediate steps of an algorithm using OPT/LOPT transport benefit from conserving the same type of representation, that is, a uniform set of points.

(a) Plot of the two measures ($\mu_1$ as blue circles and $\mu_2$ as orange crosses, where each point has mass one) and of a reference measure $\mu_0$ (concentrated on the three green triangular locations, unit mass each). (b) HK and OPT interpolation between the two measures $\mu_1$ and $\mu_2$ at different times; the color code is the same as in Figure 9. (c) LHK and LOPT interpolation between the two measures $\mu_1$ and $\mu_2$ at different times, using the same reference $\mu_0$ for both cases.

Figure 10. HK vs. OPT and LHK vs. LOPT.
For a comparison on point-cloud interpolation using MNIST, we include Figure 11, which illustrates OPT, LOPT, HK, and LHK interpolation for the digit pairs (0, 1) and (0, 9). In the visualizations, the size of each circle is proportional to the amount of mass at the corresponding location.

Figure 11. Point-cloud interpolation using all the unbalanced transportation techniques: HK, OPT, LHK, and LOPT. (a) Samples of digits 0, 1, and 9 from the MNIST dataset; top row: clean point clouds, bottom row: 50% noise. (b)–(e) Interpolations at noise levels $\eta = 0$ and $\eta = 0.5$.

However, we do not claim the presented LOPT tool to be a one-size-fits-all kind of tool. We are working on the subtle differences between LHK and LOPT and expect to have a complete and clear picture in the future. The aim of this article was to present a new tool with an intuitive introduction and motivation so that the OT community can benefit from it.

H.5. Barycenter computation

Can the linear embedding technique be used to compute the barycenter of a set of measures, e.g., by computing the barycenter of the embeddings and performing an inversion to the original space? One can first calculate the mean of the embedded measures and then recover a measure from this mean. The recovered measure is not necessarily the OPT barycenter; however, one can repeat this process and obtain better approximations of the barycenter. Similar to LOT, we have numerically observed that such a process converges to a measure that is close to the barycenter. However, there are several technical considerations that one needs to pay close attention to: for instance, the choice of $\lambda$ and the choice of the initial reference measure are critical in this process. A sketch of this iterative process is given below.

Figure 12. Depiction of barycenters between digits 0 and 1, and between 0 and 9, using the LOPT technique. Left panel: original measures (point clouds) $\mu_1$ (digit 0), $\mu_2$ (digit 1), and $\mu_3$ (digit 9), with no noise on the top left ($\eta = 0$) and corrupted by noise on the bottom left ($\eta = 0.5$). Right panel: the first row shows the result when both the initial measure and the data $\mu_1, \mu_2, \mu_3$ are clean; the second row, the result for a clean initial measure and noise-corrupted data; the third row, the result for a noise-corrupted initial measure and data.
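A minimal sketch of the fixed-point iteration just described (ours; it reuses `partial_ot_plan` and `lopt_embedding` from the earlier sketches). Each round embeds all measures against the current reference, averages the embeddings, and "inverts" by pushing the reference along the mean displacement; as noted above, the output is only a numerical approximation of the OPT barycenter and is sensitive to $\lambda$ and to the initial reference.

```python
import numpy as np

def lopt_barycenter(clouds, x0, p0, lam, iters=5):
    """Approximate OPT barycenter by iterating: embed, average, invert."""
    for _ in range(iters):
        embs = [lopt_embedding(x0, p0, x, p, partial_ot_plan(x0, p0, x, p, lam))
                for x, p in clouds]
        u_mean = np.mean([u for _, _, u in embs], axis=0)
        p_mean = np.mean([hat_p for hat_p, _, _ in embs], axis=0)
        x0 = x0 + u_mean              # push reference along the mean tangent
        p0 = p_mean                   # new reference weights
        keep = p0 > 1e-12             # drop reference atoms carrying no mass
        x0, p0 = x0[keep], p0[keep]
    return x0, p0
```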