# Fast Computation of Wasserstein Barycenters

Marco Cuturi (MCUTURI@I.KYOTO-U.AC.JP), Graduate School of Informatics, Kyoto University
Arnaud Doucet (DOUCET@STAT.OXFORD.AC.UK), Department of Statistics, University of Oxford

Abstract. We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric. This mean, known as the Wasserstein barycenter, is the measure that minimizes the sum of its Wasserstein distances to each element in that set. We propose two original algorithms to compute Wasserstein barycenters that build upon the subgradient method. A direct implementation of these algorithms is, however, too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients. Extending the work of Cuturi (2013), we propose to smooth the Wasserstein distance used in the definition of Wasserstein barycenters with an entropic regularizer, and recover in doing so a strictly convex objective whose gradients can be computed at a considerably cheaper computational cost using matrix scaling algorithms. We use these algorithms to visualize a large family of images and to solve a constrained clustering problem.

1. Introduction

Comparing, summarizing and reducing the dimensionality of empirical probability measures defined on a space Ω are fundamental tasks in statistics and machine learning. Such tasks are usually carried out using pairwise comparisons of measures. Classic information divergences (Amari and Nagaoka, 2001) are widely used to carry out such comparisons. Unless Ω is finite, these divergences cannot be directly applied to empirical measures, because they are ill-defined for measures that do not have continuous densities. They also fail to incorporate prior knowledge on the geometry of Ω, which might be available if, for instance, Ω is also a Hilbert space. Both of these issues are usually solved using Parzen's approach (1962) to smooth empirical measures with smoothing kernels before computing divergences: the Euclidean (Gretton et al., 2007) and χ² distances (Harchaoui et al., 2008), the Kullback-Leibler and Pearson divergences (Kanamori et al., 2012a;b) can all be computed fairly efficiently by considering matrices of kernel evaluations.

Figure 1. (Top) 30 artificial images of two nested random ellipses. Mean measures using the (a) Euclidean distance (b) Euclidean distance after re-centering images (c) Jeffrey centroid (Nielsen, 2013) (d) RKHS distance (Gaussian kernel, σ = 0.002) (e) 2-Wasserstein distance.

The choice of a divergence defines implicitly the mean element, or barycenter, of a set of measures, as the particular measure that minimizes the sum of all its divergences to that set of target measures (Veldhuis, 2002; Banerjee et al., 2005; Teboulle, 2007; Nielsen, 2013). The goal of this paper is to compute efficiently barycenters (possibly in a constrained subset of all probability measures on Ω) defined by the optimal transport distance between measures (Villani, 2009, §6). We propose to minimize directly the sum of optimal transport distances from one measure (the variable) to a set of fixed measures by gradient descent.
These gradients can be computed for a moderate cost by solving smoothed optimal transport problems as proposed by Cuturi (2013). Wasserstein distances have many favorable properties, documented both in theory (Villani, 2009) and practice (Rubner et al., 1997; Pele and Werman, 2009). We argue that their versatility extends to the barycenters they define. We illustrate this intuition in Figure 1, where we consider 30 images of nested ellipses on a 100×100 grid. Each image is a discrete measure on [0, 1]² with normalized intensities. Computing the Euclidean, Gaussian RKHS mean-maps or Jeffrey centroid of these images results in mean measures that hardly make any sense, whereas the 2-Wasserstein mean on that grid (defined in §3.1) produced by Algorithm 1 captures perfectly the structure of these images. Note that these results were recovered without any prior knowledge on these images other than that of defining a distance in [0, 1]², here the Euclidean distance. Note also that the Gaussian kernel smoothing approach uses the same distance, in addition to a bandwidth parameter σ which needs to be tuned in practice.

This paper is organized as follows: we provide background on optimal transport in §2, followed by the definition of Wasserstein barycenters with motivating examples in §3. Novel contributions are presented from §4 onwards: we present two subgradient methods to compute Wasserstein barycenters, one which applies when the support of the mean measure is known in advance and another when that support can be freely chosen in Ω. These algorithms are very costly even for measures of small support or histograms of small size. We show in §5 that the key ingredients of these approaches, namely the computation of primal and dual optimal transport solutions, can be bypassed by solving smoothed optimal transport problems. We conclude with two applications of our algorithms in §6.

2. Background on Optimal Transport

Let Ω be an arbitrary space, D a metric on that space and P(Ω) the set of Borel probability measures on Ω. For any point x ∈ Ω, δ_x is the Dirac unit mass at x.

Definition 1 (Wasserstein Distances). For p ∈ [1, ∞) and probability measures μ, ν in P(Ω), their p-Wasserstein distance (Villani, 2009, §6) is

$$W_p(\mu, \nu) = \Big( \inf_{\pi \in \Pi(\mu,\nu)} \int_{\Omega^2} D(x, y)^p \, d\pi(x, y) \Big)^{1/p},$$

where Π(μ, ν) is the set of all probability measures on Ω² that have marginals μ and ν.

2.1. Restriction to Empirical Measures

We will only consider empirical measures throughout this paper, that is, measures of the form μ = Σ_{i=1}^n a_i δ_{x_i}, where n is an integer, X = (x_1, ..., x_n) ∈ Ω^n and (a_1, ..., a_n) lives in the probability simplex Σ_n,

$$\Sigma_n \overset{\text{def}}{=} \{ u \in \mathbb{R}^n \mid \forall i \le n,\; u_i \ge 0,\; \textstyle\sum_{i=1}^n u_i = 1 \}.$$

Let us introduce additional notations:

Measures on a set X with constrained weights. Let Θ be a non-empty closed subset of Σ_n. We write

$$P(X, \Theta) \overset{\text{def}}{=} \{ \mu = \textstyle\sum_{i=1}^n a_i \delta_{x_i},\; a \in \Theta \}.$$

Measures supported on up to k points. Given an integer k and a subset Θ of Σ_k, we consider the set P_k(Ω, Θ) of measures of Ω that have discrete support of size up to k and weights in Θ. When no constraints on the weights are considered, namely when the weights are free to be chosen anywhere on the probability simplex, we use the shorter notations P(X) := P(X, Σ_n) and P_k(Ω) := P_k(Ω, Σ_k).

2.2. Wasserstein & Discrete Optimal Transport

Consider two families X = (x_1, ..., x_n) and Y = (y_1, ..., y_m) of points in Ω. When μ = Σ_{i=1}^n a_i δ_{x_i} and ν = Σ_{j=1}^m b_j δ_{y_j}, the Wasserstein distance W_p(μ, ν) between μ and ν is the p-th root of the optimum of a network flow problem known as the transportation problem (Bertsimas and Tsitsiklis, 1997, §7.2).
This problem builds upon two elements: the matrix M_XY of pairwise distances between elements of X and Y raised to the power p, which acts as a cost parameter,

$$M_{XY} \overset{\text{def}}{=} [D(x_i, y_j)^p]_{ij} \in \mathbb{R}^{n \times m}, \quad (1)$$

and the transportation polytope U(a, b) of a ∈ Σ_n and b ∈ Σ_m, which acts as a feasible set, defined as the set of n×m nonnegative matrices such that their row and column marginals are equal to a and b respectively. Writing 1_n for the n-dimensional vector of ones,

$$U(a, b) \overset{\text{def}}{=} \{ T \in \mathbb{R}_{+}^{n \times m} \mid T 1_m = a,\; T^\top 1_n = b \}. \quad (2)$$

Let ⟨A, B⟩ := tr(AᵀB) be the Frobenius dot-product of matrices. Combining Eq. (1) & (2), we have that W_p^p(μ, ν), the distance W_p(μ, ν) raised to the power p, can be written as the optimum of a parametric linear program p on n×m variables, parameterized by the marginals a, b and a (cost) matrix M_XY:

$$W_p^p(\mu, \nu) = p(a, b, M_{XY}) \overset{\text{def}}{=} \min_{T \in U(a,b)} \langle T, M_{XY} \rangle. \quad (3)$$
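To make Eq. (3) concrete, here is a minimal NumPy/SciPy sketch (our own illustration, not code from the paper) that solves the transportation problem for two small empirical measures on the real line with scipy.optimize.linprog; the helper name transport_lp is ours, and a, b, M follow the notation above.

```python
import numpy as np
from scipy.optimize import linprog

def transport_lp(a, b, M):
    """Solve p(a, b, M) = min_{T in U(a,b)} <T, M> as a linear program.

    a : (n,) source weights, b : (m,) target weights, M : (n, m) cost matrix.
    Returns the optimal transport plan T and the optimal cost.
    """
    n, m = M.shape
    # Equality constraints encoding the transportation polytope U(a, b):
    # row sums of T equal a, column sums of T equal b.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # sum_j T_ij = a_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # sum_i T_ij = b_j
    b_eq = np.concatenate([a, b])
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun

# Two empirical measures on the real line, squared-Euclidean cost (p = 2).
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 2.5])
a = np.array([0.2, 0.5, 0.3])
b = np.array([0.6, 0.4])
M = (x[:, None] - y[None, :]) ** 2            # Eq. (1) with p = 2
T, cost = transport_lp(a, b, M)
print("W_2^2 =", cost)                        # optimum of Eq. (3)
```

For the problem sizes considered later in the paper, such a generic LP solver is exactly the bottleneck that the smoothing of §5 is designed to remove.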
3. Wasserstein Barycenters

We present in this section the Wasserstein barycenter problem, a variational problem involving all Wasserstein distances from one measure to a set of measures, and show how it encompasses known problems in clustering and approximation.

3.1. Definition and Special Cases

Definition 2 (Agueh and Carlier, 2011). A Wasserstein barycenter of N measures {ν_1, ..., ν_N} in P ⊂ P(Ω) is a minimizer of f over P, where

$$f(\mu) \overset{\text{def}}{=} \frac{1}{N} \sum_{i=1}^{N} W_p^p(\mu, \nu_i). \quad (4)$$

Agueh and Carlier consider more generally a non-negative weight λ_i in front of each distance W_p^p(μ, ν_i). The algorithms we propose extend trivially to that case, but we use uniform weights in this work to keep notations simpler.

We highlight a few special cases where minimizing f over a set P is either trivial, relevant to data analysis and/or has been considered in the literature with different tools or under a different name. In what follows, X ∈ Ω^n and Y ∈ Ω^m are arbitrary finite subsets of Ω.

N = 1, P = P(X). When only one measure ν, supported on Y ∈ Ω^m, is considered, its closest element μ in P(X) (if no constraints on the weights a are given) can be computed by defining a weight vector a on the elements of X that results from assigning all of the mass b_i to the closest neighbor, in metric D, of y_i in X.

Centroids of Histograms: N > 1, Ω finite, P = P(Ω). When Ω is a set of size d and a matrix M ∈ R_+^{d×d} describes the pairwise distances between these d points (usually called in that case bins or features), the 1-Wasserstein distance is known as the Earth Mover's Distance (EMD) (Rubner et al., 1997). In that context, Wasserstein barycenters have also been called EMD prototypes by Zen and Ricci (2011).

Euclidean Ω: N = 1, D(x, y) = ‖x − y‖_2, p = 2, P = P_k(Ω). Minimizing f on P_k(Ω) when (Ω, D) is a Euclidean metric space and p = 2 is equivalent to the k-means problem (Pollard, 1982; Canas and Rosasco, 2012).

Constrained k-Means: N = 1, P = P_k(Ω, {1_k/k}). Consider a measure ν with support Y ∈ Ω^m and weights b ∈ Σ_m. The problem of approximating this measure by a uniform measure with k atoms, that is a measure in P_k(Ω, {1_k/k}), in the 2-Wasserstein sense was to our knowledge first considered by Ng (2000), who proposed a variant of Lloyd's algorithm (1982) for that purpose. More recently, Reich (2013) remarked that such an approximation can be used in the resampling step of particle filters and proposed in that context two ensemble methods inspired by optimal transport, one of which reduces to a single iteration of Ng's algorithm. Such approximations can also be obtained with kernel-based approaches, by minimizing an information divergence between the (smoothed) target measure ν and its (smoothed) uniform approximation, as proposed recently by Chen et al. (2010) and Sugiyama et al. (2011).

3.2. Recent Work

Agueh and Carlier (2011) consider conditions on the ν_i's for a Wasserstein barycenter in P(Ω) to be unique, using the multi-marginal transportation problem. They provide solutions in the cases where either (i) Ω = R; (ii) N = 2, using McCann's interpolant (1997); (iii) all the measures ν_i are Gaussians in Ω = R^d, in which case the barycenter is a Gaussian with the mean of all means and a variance matrix which is the unique positive definite root of a matrix equation (Agueh and Carlier, 2011, Eq. 6.2).

Rabin et al. (2012) were to our knowledge the first to consider practical approaches to compute Wasserstein barycenters between point clouds in R^d. To do so, Rabin et al. (2012) propose to approximate the Wasserstein distance between two point clouds by their sliced Wasserstein distance, the expectation of the Wasserstein distance between the projections of these point clouds on lines sampled randomly. Because the optimal transport between two point clouds on the real line can be solved with a simple sort, the sliced Wasserstein barycenter can be computed very efficiently, using gradient descent. Although their approach seems very effective in lower dimensions, it may not work for d ≥ 4 and does not generalize to non-Euclidean metric spaces.

4. New Computational Approaches

We propose in this section new approaches to compute Wasserstein barycenters when (i) each of the N measures ν_i is an empirical measure, described by a list of atoms Y_i ∈ Ω^{m_i} of size m_i ≥ 1 and a probability vector b_i in the simplex Σ_{m_i}; (ii) the search for a barycenter is not considered on the whole of P(Ω) but restricted to either P(X, Θ) (the set of measures supported on a predefined finite set X of size n with weights in a subset Θ of Σ_n) or P_k(Ω, Θ) (the set of measures supported on up to k atoms with weights in a subset Θ of Σ_k). Looking for a barycenter μ with atoms X and weights a is equivalent to minimizing f (see Eq. 3 for a definition of p),

$$f(a, X) \overset{\text{def}}{=} \frac{1}{N} \sum_{i=1}^{N} p(a, b_i, M_{X Y_i}), \quad (5)$$

over relevant feasible sets for a and X. When X is fixed, we show in §4.1 that f is convex w.r.t. a regardless of the properties of Ω. A subgradient for f w.r.t. a can be recovered through the dual optimal solutions of all problems p(a, b_i, M_{X Y_i}), and f can be minimized using a projected subgradient method outlined in §4.2. If X is free, constrained to be of cardinality k, and Ω and its metric D are both Euclidean, we show in §4.3 that f is not convex w.r.t. X, but we can provide subgradients for f using the primal optimal solutions of all problems p(a, b_i, M_{X Y_i}). This in turn suggests an algorithm to reach a local minimum for f w.r.t. a and X in P_k(Ω, Θ) by combining both approaches.

4.1. Differentiability of p(a, b, M_XY) w.r.t. a

Dual transportation problem. Given a matrix M ∈ R^{n×m}, the optimum p(a, b, M) admits the following dual Linear Program (LP) form (Bertsimas and Tsitsiklis, 1997, §7.6, §7.8), known as the dual optimal transport problem:

$$d(a, b, M) = \max_{(\alpha, \beta) \in C_M} \alpha^\top a + \beta^\top b, \quad (6)$$

where the polyhedron C_M of dual variables is

$$C_M = \{ (\alpha, \beta) \in \mathbb{R}^{n+m} \mid \alpha_i + \beta_j \le m_{ij} \}.$$

By LP duality, d(a, b, M) = p(a, b, M).
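As a sanity check of Eq. (6) and of the duality relation d(a, b, M) = p(a, b, M), the dual LP can be solved directly with the same generic solver; the sketch below is again our own illustration on the small example used earlier, with one inequality row per pair (i, j).

```python
import numpy as np
from scipy.optimize import linprog

def dual_transport_lp(a, b, M):
    """Solve d(a, b, M) = max_{(alpha, beta) in C_M} alpha^T a + beta^T b (Eq. 6).

    The polyhedron C_M imposes alpha_i + beta_j <= m_ij for all (i, j).
    Returns an optimal pair (alpha, beta) and the optimal value.
    """
    n, m = M.shape
    c = -np.concatenate([a, b])               # linprog minimizes, so negate
    # One inequality row per (i, j): alpha_i + beta_j <= m_ij.
    A_ub = np.zeros((n * m, n + m))
    for i in range(n):
        for j in range(m):
            A_ub[i * m + j, i] = 1.0
            A_ub[i * m + j, n + j] = 1.0
    b_ub = M.ravel()
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=(None, None), method="highs")
    return res.x[:n], res.x[n:], -res.fun

x = np.array([0.0, 1.0, 2.0]); y = np.array([0.5, 2.5])
a = np.array([0.2, 0.5, 0.3]); b = np.array([0.6, 0.4])
M = (x[:, None] - y[None, :]) ** 2
alpha, beta, d_val = dual_transport_lp(a, b, M)
print("d(a,b,M) =", d_val)                    # equals p(a,b,M) by LP duality
```

The recovered vector alpha is the quantity used as a subgradient in what follows, up to the additive normalization discussed below.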
The dual optimal solutions, which can be easily recovered from the primal optimal solution (Bertsimas and Tsitsiklis, 1997, Eq. 7.10), define a subgradient for p as a function of a:

Proposition 1. Given b ∈ Σ_m and M ∈ R^{n×m}, the map a ↦ p(a, b, M) is a polyhedral convex function. Any optimal dual vector α* of d(a, b, M) is a subgradient of p(a, b, M) with respect to a.

Proof. These results follow from sensitivity analysis in LPs (Bertsimas and Tsitsiklis, 1997, §5.2). d is bounded and is also the maximum of a finite set of linear functions, each indexed by the set of extreme points of C_M, evaluated at a, and is therefore polyhedral convex. When the dual optimal vector is unique, α* is a gradient of p at a, and a subgradient otherwise.

Because for any real value t the pair (α + t 1_n, β − t 1_m) is feasible whenever the pair (α, β) is feasible, and because their objectives are identical, any dual optimum (α, β) is determined up to an additive constant. To remove this degree of freedom, which arises from the fact that one among all n+m row/column sum constraints of U(a, b) is redundant, we can either remove a dual variable or normalize any dual optimum α so that it sums to zero, to enforce that it belongs to the tangent space of Σ_n. We follow the latter strategy in the rest of the paper.

4.2. Fixed Support: Minimizing f over P(X)

Let X ∈ Ω^n be fixed and let Θ be a closed convex subset of Σ_n. The aim of this section is to compute weights a ∈ Θ such that f(a, X) is minimal. Let α*_i be the optimal dual variable of d(a, b_i, M_{X Y_i}), normalized to sum to 0. f being a sum of terms p(a, b_i, M_{X Y_i}), we have that:

Corollary 1. The function a ↦ f(a, X) is polyhedral convex, with subgradient α := (1/N) Σ_{i=1}^N α*_i.

Assuming Θ is closed and convex, we can consider a naive projected subgradient minimization of f. Alternatively, if there exists a Bregman divergence B(a, b) = ω(b) − ω(a) − ⟨∇ω(a), b − a⟩ for a, b ∈ Θ, defined by a prox-function ω, we can define the proximal mapping P_a(b) = argmin_{c ∈ Θ} (⟨b, c − a⟩ + B(a, c)) and consider accelerated gradient approaches (Nesterov, 2005). We summarize this idea in Algorithm 1.

Algorithm 1 Wasserstein Barycenter in P(X, Θ)
  Inputs: X ∈ Ω^n, Θ ⊂ Σ_n. For i ≤ N: Y_i ∈ Ω^{m_i}, b_i ∈ Σ_{m_i}, p ∈ [1, ∞), t_0 > 0.
  Form all n × m_i matrices M_i = M_{X Y_i}, see Eq. (1).
  Set â = ã = argmin_Θ ω, t = 0.
  while not converged do
    β = (t + 1)/2,  a ← (1 − β⁻¹) â + β⁻¹ ã.
    Form the subgradient α ← (1/N) Σ_{i=1}^N α*_i using all dual optima α*_i of d(a, b_i, M_i).
    ã ← P_ã(t_0 β α).
    â ← (1 − β⁻¹) â + β⁻¹ ã,  t ← t + 1.
  end while

Notice that when Θ = Σ_n and B is the Kullback-Leibler divergence (Beck and Teboulle, 2003), we can initialize ã with 1_n/n and use the multiplicative update to realize the proximal update: ã ← ã ∘ e^{−t_0 β α}; ã ← ã / ãᵀ1_n, where ∘ is Schur's (elementwise) product. Alternative sets Θ for which this projection can be easily carried out include, for instance, all (convex) level sets of the entropy function H, namely Θ = {a ∈ Σ_n | H(a) ≥ τ} where 0 ≤ τ ≤ log n.

4.3. Differentiability of p(a, b, M_XY) w.r.t. X

We consider now the case where Ω = R^d with d ≥ 1, D is the Euclidean distance and p = 2. When Ω = R^d, a family of n points X and a family of m points Y can be represented respectively as a matrix in R^{d×n} and another in R^{d×m}. The pairwise squared-Euclidean distances between points in these sets can be recovered by writing x := diag(XᵀX) and y := diag(YᵀY), and observing that

$$M_{XY} = x 1_m^\top + 1_n y^\top - 2 X^\top Y \in \mathbb{R}^{n \times m}.$$
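A small NumPy sketch of this identity (a standard computation, not code from the paper), with point clouds stored column-wise as in the text:

```python
import numpy as np

def squared_dist_matrix(X, Y):
    """M_XY = x 1_m^T + 1_n y^T - 2 X^T Y, with x = diag(X^T X), y = diag(Y^T Y).

    X : (d, n) and Y : (d, m) hold one point per column, as in Section 4.3.
    Returns the (n, m) matrix of pairwise squared Euclidean distances.
    """
    x = np.sum(X * X, axis=0)        # x_i = ||x_i||^2, shape (n,)
    y = np.sum(Y * Y, axis=0)        # y_j = ||y_j||^2, shape (m,)
    return x[:, None] + y[None, :] - 2.0 * X.T @ Y

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 4)); Y = rng.normal(size=(2, 3))
M = squared_dist_matrix(X, Y)
# Agrees with the naive entrywise computation D(x_i, y_j)^2.
assert np.allclose(M[1, 2], np.sum((X[:, 1] - Y[:, 2]) ** 2))
```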
Transport Cost as a Function of X. Due to the margin constraints that apply if a matrix T is in the polytope U(a, b), we have:

$$\langle T, M_{XY} \rangle = \langle T, x 1_m^\top + 1_n y^\top - 2 X^\top Y \rangle = \operatorname{tr}(T^\top x 1_m^\top) + \operatorname{tr}(T^\top 1_n y^\top) - 2 \langle T, X^\top Y \rangle = x^\top a + y^\top b - 2 \langle T, X^\top Y \rangle.$$

Discarding constant terms in y and b, we have that minimizing p(a, b, M_XY) with respect to locations X is equivalent to solving

$$\min_{X \in \mathbb{R}^{d \times k}} \; x^\top a + p(a, b, -2 X^\top Y). \quad (7)$$

As a function of X, that objective is the sum of a convex quadratic function of X with a piecewise linear concave function, since p(a, b, −2XᵀY) = min_{T ∈ U(a,b)} −2⟨X, Y Tᵀ⟩ is the minimum of linear functions indexed by the vertices of the polytope U(a, b). As a consequence, p(a, b, M_XY) is not convex with respect to X.

Quadratic Approximation. Suppose that T* is optimal for problem p(a, b, M_XY). Updating Eq. (7) with T* fixed,

$$x^\top a - 2 \langle T^\star, X^\top Y \rangle = \| X \operatorname{diag}(a^{1/2}) - Y T^{\star\top} \operatorname{diag}(a^{-1/2}) \|^2 - \| Y T^{\star\top} \operatorname{diag}(a^{-1/2}) \|^2.$$

Minimizing a local quadratic approximation of p at X thus yields the Newton update

$$X \leftarrow Y T^{\star\top} \operatorname{diag}(a^{-1}). \quad (8)$$

A simple interpretation of this update is as follows: the matrix T*ᵀ diag(a⁻¹) has n column-vectors in the simplex Σ_m. The suggested update for X is to replace it by n barycenters of points enumerated in Y, with weights defined by the optimal transport T*. Note that, because the minimization problem we consider in X is not convex to start with, one could be fairly creative when it comes to choosing D and p among other distances and exponents. This substitution would only involve more complicated gradients of M_XY w.r.t. X that would appear in Eq. (7).

4.4. Free Support: Minimizing f over P_k(R^d, Θ)

We now consider, as a natural extension of §4.2 when Ω = R^d, the problem of minimizing f over a probability measure μ that is (i) supported by at most k atoms described in X, a matrix of size d × k, (ii) with weights in a subset Θ of Σ_k.

Alternating Optimization. To obtain an approximate minimizer of f(a, X) we propose in Algorithm 2 to update alternately the locations X (with the Newton step defined in Eq. 8) and the weights a (with Algorithm 1).

Algorithm 2 2-Wasserstein Barycenter in P_k(R^d, Θ)
  Input: Y_i ∈ R^{d×m_i}, b_i ∈ Σ_{m_i} for i ≤ N.
  Initialize X ∈ R^{d×k} and a ∈ Θ.
  while X and a have not converged do
    a ← optimal weights for the current X, using Algorithm 1.
    for i ∈ (1, ..., N) do
      T*_i ← optimal solution of p(a, b_i, M_{X Y_i})
    end for
    X ← (1 − θ) X + θ (1/N) Σ_{i=1}^N Y_i T*_iᵀ diag(a⁻¹), setting θ ∈ [0, 1] with line-search or a preset value.
  end while

Algorithm 2 and Lloyd/Ng Algorithms. As mentioned in §3.1, minimizing f defined in Eq. (5) over P_k(R^d), with N = 1, p = 2 and no constraints on the weights (Θ = Σ_k), is equivalent to solving the k-means problem applied to the set of points enumerated in ν_1. In that particular case, Algorithm 2 is also equivalent to Lloyd's algorithm. Indeed, the assignment of the weight of each point to its closest centroid in Lloyd's algorithm (the maximization step) is equivalent to the computation of a in ours, whereas the re-centering step (the expectation step) is equivalent to our update for X using the optimal transport, which is in that case the trivial transport that assigns the weight (divided by N) of each atom in Y_i to its closest neighbor in X. When the weight vector a is constrained to be uniform (Θ = {1_k/k}), Ng (2000) proposed a heuristic to obtain uniform k-means that is also equivalent to Algorithm 2, and which also relies on the repeated computation of optimal transports. For more general sets Θ, Algorithm 1 ensures that the weights a remain in Θ at each iteration of Algorithm 2, which cannot be guaranteed by either Lloyd's or Ng's approach.
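The location update of Eq. (8), as averaged over the N measures inside Algorithm 2, can be sketched in a few lines of NumPy. This is our own illustration, not the authors' code: it assumes the transport plans T*_i have already been computed by some oracle (an exact LP solver, or the smoothed solver of §5), and the helper name newton_location_update is hypothetical.

```python
import numpy as np

def newton_location_update(X, a, Ys, Ts, theta=1.0):
    """One X-update of Algorithm 2 (free-support barycenter), as a sketch.

    X  : (d, k) current atoms of the barycenter.
    a  : (k,)  current weights of the barycenter.
    Ys : list of (d, m_i) support matrices of the fixed measures nu_i.
    Ts : list of (k, m_i) transport plans, T_i feasible/optimal for p(a, b_i, M_{X Y_i}).
    theta : step size in [0, 1], chosen by line-search or preset as in Algorithm 2.
    """
    N = len(Ys)
    # Average of the Newton targets Y_i T_i^T diag(a^{-1}) of Eq. (8).
    target = sum(Y @ T.T for Y, T in zip(Ys, Ts)) / N
    target = target / a[None, :]          # right-multiplication by diag(a^{-1})
    return (1.0 - theta) * X + theta * target

# Shape check with a feasible (not optimal) plan: the independent coupling a b^T.
rng = np.random.default_rng(0)
d, k, m = 2, 3, 5
X = rng.normal(size=(d, k)); Y = rng.normal(size=(d, m))
a = np.full(k, 1.0 / k); b = np.full(m, 1.0 / m)
T = np.outer(a, b)                        # lies in U(a, b)
X_new = newton_location_update(X, a, [Y], [T], theta=1.0)
```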
Algorithm 2 and Reich's (2013) Transform. Reich (2013) has recently suggested to approximate a weighted measure ν by a uniform measure supported on as many atoms. This approximation is motivated by optimal transport theory, notably asymptotic results by McCann (1995), but does not attempt to minimize, as we do in Algorithm 2, any Wasserstein distance between that approximation and the original measure. This approach results in one application of the Newton update defined in Eq. (8), when X is first initialized to Y and a = 1_m/m, to compute the optimal transport T*.

Summary. We have proposed two original algorithms to compute Wasserstein barycenters of probability measures: one which applies when the support of the barycenter is fixed and its weights are constrained to lie in a convex subset Θ of the simplex, another which can be used when the support can be chosen freely. These algorithms are relatively simple, yet to the best of our knowledge novel. We suspect these approaches were not considered before because of their prohibitive computational cost: Algorithm 1 computes at each iteration the dual optima of N transportation problems to form a subgradient, each with n + m_i variables and n·m_i inequality constraints. Algorithm 2 incurs an even higher cost, since it involves running Algorithm 1 at each iteration, in addition to solving N primal optimal transport problems to form a subgradient to update X. Since both objectives rely on subgradient descent schemes, they are also likely to suffer from very slow convergence. We propose to solve these issues by following Cuturi's approach (2013) to smooth the objective f and obtain strictly convex objectives whose gradients can be computed more efficiently.

5. Smoothed Dual and Primal Problems

To circumvent the major computational roadblock posed by the repeated computation of primal and dual optimal transports, we extend Cuturi's approach (2013) to obtain smooth and strictly convex approximations of both primal and dual problems p and d. The matrix scaling approach advocated by Cuturi was motivated by the fact that it provided a fast approximation p_λ to p. We show here that the same approach can be used to smooth the objective f and recover, at a cheap computational price, its gradients w.r.t. a and X.

5.1. Regularized Primal and Smoothed Dual

An n × m transport T, which is by definition in the nm-simplex, has entropy h(T) := −Σ_{i,j} t_{ij} log(t_{ij}). Cuturi (2013) has recently proposed to consider, for λ > 0, a regularized primal transport problem p_λ as

$$p_\lambda(a, b; M) = \min_{T \in U(a,b)} \langle T, M \rangle - \frac{1}{\lambda} h(T).$$

We introduce in this work its dual problem, which is a smoothed version of the original dual transportation problem, where the positivity constraints on each term m_{ij} − α_i − β_j have been replaced by penalties (1/λ) e^{−λ(m_{ij} − α_i − β_j)}:

$$d_\lambda(a, b; M) = \max_{(\alpha, \beta) \in \mathbb{R}^{n+m}} \alpha^\top a + \beta^\top b - \sum_{i,j} \frac{1}{\lambda} e^{-\lambda (m_{ij} - \alpha_i - \beta_j)}.$$

These two problems are related below in the sense that their respective optimal solutions are linked by a pair of positive vectors (u, v):

Proposition 2. Let K be the elementwise exponential of −λM_XY, K := e^{−λ M_XY}. Then there exists a pair of positive vectors (u, v) ∈ R^n_+ × R^m_+ such that the optimal solutions of p_λ and d_λ are respectively given by

$$T^\star_\lambda = \operatorname{diag}(u)\, K \operatorname{diag}(v), \qquad \alpha^\star_\lambda = -\frac{\log(u)}{\lambda} + \frac{\log(u)^\top 1_n}{\lambda n} 1_n.$$

Proof. The result follows from the Lagrange method of multipliers for the primal, as shown by Cuturi (2013, Lemma 2), and a direct application of first-order conditions for the dual, which is an unconstrained convex problem. The term (log(u)ᵀ1_n)/(λn) 1_n in the definition of α*_λ is used to normalize α*_λ so that it sums to zero, as discussed at the end of §4.1.
5.2. Matrix Scaling Computation of (u, v)

The positive vectors (u, v) mentioned in Proposition 2 can be computed through Sinkhorn's matrix scaling algorithm applied to K, as outlined in Algorithm 3:

Lemma 1 (Sinkhorn, 1967). For any positive matrix A in R^{n×m}_+ and positive probability vectors a ∈ Σ_n and b ∈ Σ_m, there exist positive vectors u ∈ R^n_+ and v ∈ R^m_+, unique up to scalar multiplication, such that diag(u) A diag(v) ∈ U(a, b). Such a pair (u, v) can be recovered as a fixed point of the Sinkhorn map (u, v) ↦ (a ./ (Av), b ./ (Aᵀu)).

The convergence of the algorithm is linear when using Hilbert's projective metric between the scaling factors (Franklin and Lorenz, 1989, §3). Although we use this algorithm in our experiments because of its simplicity, other algorithms exist (Knight and Ruiz, 2012) which are known to be more reliable numerically when λ is large.

Algorithm 3 Smoothed Primal T*_λ and Dual α*_λ
  Input: M, λ, a, b
  K = exp(−λM)
  K̃ = diag(a⁻¹) K   % use bsxfun(@rdivide, K, a)
  Set u = ones(n,1)/n
  while u changes do
    u = 1 ./ (K̃ (b ./ (Kᵀu)))
  end while
  v = b ./ (Kᵀu)
  α*_λ = −(1/λ) log(u) + (log(u)ᵀ1_n)/(λn) 1_n
  T*_λ = diag(u) K diag(v)   % use bsxfun(@times, v', bsxfun(@times, K, u))

Summary: given a smoothing parameter λ > 0, using Sinkhorn's algorithm on the matrix K, defined as the elementwise exponential of −λM (the pairwise Gaussian kernel matrix between the supports X and Y when p = 2, using bandwidth σ = 1/√(2λ)), we can recover smoothed optima T*_λ and α*_λ for the smoothed primal p_λ and dual d_λ transport problems, respectively. To take advantage of this, we simply propose to substitute the smoothed optima T*_λ and α*_λ for the original optima T* and α* that appear in Algorithms 1 and 2.

6. Applications

We present two applications, one of Algorithm 1 and one of Algorithm 2, that both rely on the smooth approximations presented in §5. The settings we consider involve computing respectively tens of thousands or tens of high-dimensional optimal transport problems (of size 2,500 × 2,500 for the first application and 57,647 × 48 for the second), which cannot be realistically carried out using network flow solvers: with such solvers, the resolution of a single transport problem of these dimensions could take between several minutes and several hours. We also take advantage in the first application of the fact that Algorithm 3 can be run efficiently on GPGPUs using vectorized code (Cuturi, 2013, Alg. 1).

6.1. Visualization of Perturbed Images

We use 50,000 images of the MNIST database, with approximately 5,000 images for each digit from 0 to 9. Each image (originally 20×20 pixels) is scaled randomly, uniformly between half-size and double-size, and translated randomly within a 50×50 grid, with a bias towards corners. We display intermediate barycenter solutions for each of these 10 datasets of images for t = 1, 10, 60 gradient iterations. λ is set to 60/median(M), where M is the squared-Euclidean distance matrix between all 2,500 pixels in the grid.

Figure 2. (Top) For each digit, 15 out of the 5,000 scaled and translated images considered for each barycenter. (Bottom) Barycenters after t = 1, 10, 60 gradient steps. For t = 60, images are cropped to show the 30×30 central pixels.

Using a Quadro K5000 GPU with close to 1,500 cores, the computation of a single barycenter takes about 2 hours to reach 100 iterations. Because we use warm starts to initialize u in Algorithm 3 at each iteration of Algorithm 1, the first iterations are typically more computationally intensive than those carried out near the end.
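The warm-started calls to Algorithm 3 mentioned above are straightforward to vectorize. The following NumPy sketch (our own illustration, not the GPU code used for these experiments) returns the smoothed plan T*_λ and the normalized dual α*_λ of Proposition 2, with an optional u_init argument for warm starts; the tolerance and iteration cap are arbitrary choices.

```python
import numpy as np

def smoothed_transport(M, lam, a, b, u_init=None, tol=1e-9, max_iter=10_000):
    """Algorithm 3 (sketch): Sinkhorn scaling for the smoothed primal and dual.

    M : (n, m) cost matrix, lam : regularization strength lambda > 0,
    a : (n,), b : (m,) marginals. u_init allows warm starts across calls.
    Returns (T_lam, alpha_lam), with alpha_lam normalized to sum to zero.
    """
    n, m = M.shape
    K = np.exp(-lam * M)
    K_tilde = K / a[:, None]                  # diag(a^{-1}) K
    u = np.full(n, 1.0 / n) if u_init is None else u_init.copy()
    for _ in range(max_iter):
        u_new = 1.0 / (K_tilde @ (b / (K.T @ u)))
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    v = b / (K.T @ u)
    T_lam = u[:, None] * K * v[None, :]       # diag(u) K diag(v)
    log_u = np.log(u)
    alpha_lam = -log_u / lam + (log_u.sum() / (lam * n)) * np.ones(n)
    return T_lam, alpha_lam

# Tiny usage example on random data.
rng = np.random.default_rng(0)
M = rng.random((5, 4)); a = np.full(5, 0.2); b = np.full(4, 0.25)
T, alpha = smoothed_transport(M, lam=50.0, a=a, b=b)
print(T.sum(axis=1), alpha.sum())             # row sums approx a, and approx 0
```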
6.2. Clustering with Uniform Centroids

In practice, the k-means cost function applied to a given empirical measure could be minimized with a set of centroids X and a weight vector a such that the entropy of a is very small. This can occur when most of the original points in the dataset are attributed to a very small subset of the k centroids, and could be undesirable in applications of k-means where a more regular attribution is sought. For instance, in sensor deployment, when each centroid (sensor) is limited in the number of data points (users) it can serve, we would like to ensure that the attributions agree with those limits. Whereas the original k-means cannot take into account such limits, we can enforce them using Algorithm 2.

We illustrate the difference between looking for optimal centroids with free assignments (Θ = Σ_k) and looking for optimal uniform centroids with constrained assignments (Θ = {1_k/k}) using US census data for income and population repartitions across 57,647 spatial locations in the 48 contiguous states. These weighted points can be interpreted as two empirical measures on R² with weights directly proportional to these respective quantities. We initialize both free and uniform clustering with the actual 48 state capitals. Results displayed in Figure 3 show that by forcing our approximation to be uniform, we recover centroids that induce a more balanced clustering. Indeed, each cell of the Voronoi diagram built with these centroids is now constrained to hold the same aggregate wealth or population. These centroids could form the new state capitals of equally rich or equally populated states.

Figure 3. Comparison of two Wasserstein barycenters on spatial repartitions of income and population in the 48 contiguous states, using as many (k = 48) centroids. The size of each of the 57,647 blue crosses is proportional to the local average of the relevant variable (income above and population below) at that location, normalized to sum to 1. Each downward triangle is a centroid of the k-means clustering (equivalent to a Wasserstein barycenter with Θ = Σ_k), whose size is proportional to the portion of mass captured by that centroid. Red dots indicate centroids obtained with a uniform constraint on the weights, Θ = {1_k/k}. Since such centroids are constrained to carry a fixed portion of the total weight, one can observe that they provide a more balanced clustering than the k-means solution.

On an algorithmic note, we notice in Figure 4 that Algorithm 2 converges to its (local) optimum at a speed which is directly comparable to that of k-means in terms of iterations, with a relatively modest computational overhead. Unsurprisingly, the Wasserstein distance between the clusters and the original measure is higher when adding uniform constraints on the weights.

Figure 4. Wasserstein distance of the uniform Wasserstein barycenter (weights constrained to be in Θ = {1_k/k}) and its unconstrained equivalent (k-means) to the income empirical measure. Note that, because of the constraints on weights, the Wasserstein distance of the uniform Wasserstein barycenter is necessarily larger.

On a single CPU core, these computations require 12.5 seconds for the constrained case, using Sinkhorn's approximation, and 1.55 seconds for the regular k-means algorithm. Using a regular transportation solver, computing the optimal transport from the 57,647 points to the 48 centroids would require about 1 hour for a single iteration.
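For completeness, the sketch below strings the earlier pieces together for the uniform-centroid setting of this section: with Θ = {1_k/k} the weights are fixed, so each iteration of Algorithm 2 reduces to computing the smoothed transport from the k uniform centroids to the weighted data and applying the location update of Eq. (8). The data and parameters are placeholders standing in for the census measures, and smoothed_transport / newton_location_update refer to the hypothetical sketches given earlier.

```python
import numpy as np

# Placeholder 2-D data standing in for the census locations and weights;
# smoothed_transport and newton_location_update are the sketches given above.
rng = np.random.default_rng(0)
Y = rng.random((2, 500))                       # data points, one per column
b = rng.random(500); b /= b.sum()              # e.g. income shares
k = 48
X = Y[:, rng.choice(500, size=k, replace=False)].copy()   # initial centroids
a = np.full(k, 1.0 / k)                        # Theta = {1_k/k}: weights fixed

for _ in range(20):                            # a few alternating iterations
    M = np.sum(X * X, axis=0)[:, None] + np.sum(Y * Y, axis=0)[None, :] - 2 * X.T @ Y
    lam = 60.0 / np.median(M)                  # mirrors the 60/median(M) choice of 6.1
    T, _ = smoothed_transport(M, lam=lam, a=a, b=b)
    X = newton_location_update(X, a, [Y], [T], theta=1.0)
```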
7. Conclusion

We have proposed in this paper two original algorithms to compute Wasserstein barycenters of empirical measures. Using these algorithms in practice for measures of large support is a daunting task for two reasons: they are inherently slow because they rely on the subgradient method, and the computation of these subgradients involves solving primal and dual optimal transport problems. Both issues can be substantially alleviated by smoothing the primal optimal transport problem with an entropic penalty and considering its dual. Both smoothed problems admit gradients which can be computed efficiently using only matrix-vector products. Our aim in proposing such algorithms is to demonstrate that Wasserstein barycenters can be used for visualization, constrained clustering, and hopefully as a core component within more complex data analysis techniques in future applications. We also believe that our smoothing approach can be directly applied to more complex variational problems that involve multiple Wasserstein distances, such as Wasserstein propagation (Solomon et al., 2014).

Acknowledgements

We thank the reviewers for their comments and Gabriel Peyré for fruitful discussions. MC was supported by grant 26700002 from JSPS. AD was partially supported by EPSRC.

References

M. Agueh and G. Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904-924, 2011.

S.-I. Amari and H. Nagaoka. Methods of Information Geometry. AMS vol. 191, 2001.

A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. The Journal of Machine Learning Research, 6:1705-1749, 2005.

A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.

D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.

G. Canas and L. Rosasco. Learning probability measures with respect to optimal transport metrics. In Advances in Neural Information Processing Systems 25, pages 2501-2509, 2012.

Y. Chen, M. Welling, and A. J. Smola. Supersamples from kernel-herding. In Uncertainty in Artificial Intelligence (UAI), 2010.

M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26, pages 2292-2300, 2013.

J. Franklin and J. Lorenz. On the scaling of multidimensional matrices. Linear Algebra and its Applications, 114:717-735, 1989.

A. Gretton, K. M. Borgwardt, M. Rasch, B. Schoelkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems 19, pages 513-520. MIT Press, 2007.

Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In Advances in Neural Information Processing Systems 20, pages 609-616. MIT Press, 2008.

T. Kanamori, T. Suzuki, and M. Sugiyama. f-divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. IEEE Transactions on Information Theory, 58(2):708-720, 2012a.

T. Kanamori, T. Suzuki, and M. Sugiyama. Statistical analysis of kernel-based least-squares density-ratio estimation. Machine Learning, 86(3):335-367, 2012b.

P. A. Knight and D. Ruiz. A fast algorithm for matrix balancing. IMA Journal of Numerical Analysis, 2012.

S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
R. J. McCann. Existence and uniqueness of monotone measure-preserving maps. Duke Mathematical Journal, 80(2):309-324, 1995.

R. J. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153-179, 1997.

Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, May 2005.

M. K. Ng. A note on constrained k-means algorithms. Pattern Recognition, 33(3):515-519, 2000.

F. Nielsen. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Processing Letters (SPL), 2013.

E. Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065-1076, 1962.

O. Pele and M. Werman. Fast and robust earth mover's distances. In ICCV'09, 2009.

D. Pollard. Quantization and the method of k-means. IEEE Transactions on Information Theory, 28(2):199-205, 1982.

J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision, volume 6667 of Lecture Notes in Computer Science, pages 435-446. Springer, 2012.

S. Reich. A non-parametric ensemble transform method for Bayesian inference. SIAM Journal on Scientific Computing, 35(4):A2013-A2024, 2013.

Y. Rubner, L. Guibas, and C. Tomasi. The earth mover's distance, multi-dimensional scaling, and color-based image retrieval. In Proceedings of the ARPA Image Understanding Workshop, pages 661-668, 1997.

R. Sinkhorn. Diagonal equivalence to matrices with prescribed row and column sums. The American Mathematical Monthly, 74(4):402-405, 1967.

J. Solomon, R. Rustamov, L. Guibas, and A. Butscher. Wasserstein propagation for semi-supervised learning. In Proceedings of the 31st International Conference on Machine Learning, pages 306-314, 2014.

M. Sugiyama, M. Yamada, M. Kimura, and H. Hachiya. On information-maximization clustering: Tuning parameter selection and analytic solution. In Proceedings of the 28th International Conference on Machine Learning, pages 65-72, 2011.

M. Teboulle. A unified continuous optimization framework for center-based clustering methods. The Journal of Machine Learning Research, 8:65-102, 2007.

R. Veldhuis. The centroid of the symmetrical Kullback-Leibler distance. IEEE Signal Processing Letters, 9(3):96-99, 2002.

C. Villani. Optimal Transport: Old and New, volume 338. Springer Verlag, 2009.

G. Zen and E. Ricci. Earth mover's prototypes: A convex learning approach for discovering activity patterns in dynamic scenes. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 3225-3232. IEEE, 2011.