# Nearly Isometric Embedding by Relaxation

James McQueen
Department of Statistics, University of Washington, Seattle, WA 98195
jmcq@u.washington.edu

Marina Meilă
Department of Statistics, University of Washington, Seattle, WA 98195
mmp@stat.washington.edu

Dominique Perrault-Joncas
Google, Seattle, WA 98103
dcpjoncas@gmail.com

Abstract

Many manifold learning algorithms aim to create embeddings with low or no distortion (isometric embeddings). If the data has intrinsic dimension d, it is often impossible to obtain an isometric embedding in d dimensions, but possible in s > d dimensions. Yet most geometry-preserving algorithms cannot do the latter. This paper proposes an embedding algorithm to overcome this limitation. The algorithm accepts as input, besides the dimension d, an embedding dimension s ≥ d. For any data embedding Y, we compute a Loss(Y), based on the push-forward Riemannian metric associated with Y, which measures the deviation of Y from isometry. Riemannian Relaxation iteratively updates Y in order to decrease Loss(Y). The experiments confirm the superiority of our algorithm in obtaining low-distortion embeddings.

1 Introduction, background and problem formulation

Suppose we observe data points sampled from a smooth manifold M with intrinsic dimension d which is itself a submanifold of D-dimensional Euclidean space, M ⊂ R^D. The task of manifold learning is to provide a mapping φ : M → N (where N ⊂ R^s) of the manifold into a lower dimensional space, s ≪ D. According to the Whitney Embedding Theorem [11], M can be embedded smoothly into R^{2d} by a homeomorphism φ. Hence we seek a single smooth map φ : M → R^s with d ≤ s ≤ 2d ≪ D.

Smooth embeddings preserve the topology of the original M. Nevertheless, in general they distort its geometry. Theoretically speaking (for a more complete presentation the reader is referred to [8], [15] or [10]), preserving the geometry of an embedding is embodied in the concepts of Riemannian metric and isometric embedding. A Riemannian metric g is a symmetric positive definite tensor field on M which defines an inner product ⟨·,·⟩_g on the tangent space T_pM for every point p ∈ M. A Riemannian manifold is a smooth manifold with a Riemannian metric at every point. A diffeomorphism φ : M → N is called an isometry iff for all p ∈ M and u, v ∈ T_pM we have ⟨u, v⟩_{g_p} = ⟨dφ_p u, dφ_p v⟩_{h_{φ(p)}}. By Nash's Embedding Theorem [13], any smooth manifold of class C^k, k ≥ 3, and intrinsic dimension d can be embedded isometrically in Euclidean space R^s with s polynomial in d.

In unsupervised learning, it is standard to assume that (M, g_0) is a submanifold of R^D and that it inherits the Euclidean metric from it (sometimes the Riemannian metric on M is not inherited, but user-defined via a kernel or distance function). An embedding φ : M → φ(M) = N defines a metric g on N given by ⟨u, v⟩_{g(φ(p))} = ⟨dφ^{-1}u, dφ^{-1}v⟩_{g_0(p)}, called the pushforward Riemannian metric; (M, g_0) and (N, g) are isometric.

Much previous work in non-linear dimension reduction [16, 20, 19] has been driven by the desire to find smooth embeddings of low dimension that are isometric in the limit of large n. This work has met with mixed success. There exists a constructive implementation [19] of Nash's proof technique, which guarantees consistency and isometry.
However, the algorithm presented falls short of being practical, as the embedding dimension s it requires is significantly higher than the minimum necessary, a major drawback in practice. Overall, the algorithm leads to mappings φ that, albeit having the desired properties, are visually unintuitive, even for intrinsic dimensions as low as d = 1.

There are many algorithms, too many for an exhaustive list, which map the data using a cleverly chosen reconstruction criterion. The criterion is chosen so that the mapping φ can be obtained as the unique solution of a classic optimization problem, e.g. eigendecomposition for Laplacian Eigenmaps [2], Diffusion Maps [12] and LTSA [21], semidefinite programming for Maximum Variance Unfolding [20], or multidimensional scaling for Isomap [3]. These embedding algorithms sometimes come with guarantees of consistency [2] and, only in restricted cases, isometry [3].

In this paper we propose an approach which departs from both of these existing directions. The main difference, from the algorithmic point of view, is that the loss function we propose does not have a form amenable to a standard solver (and is not even guaranteed to be convex or unimodal). Thus, we do not obtain a mapping φ in "one shot", as the previous algorithms do, but by gradual improvement of an initial guess, i.e. by gradient descent. Nevertheless, the loss we define directly measures the deviation from isometry; therefore, when this loss is (near) 0, (near) isometry is achieved.

The algorithm is initialized with a smooth embedding Y = φ(M) ⊂ R^s, s ≥ d; we define the objective function Loss(Y) as the averaged deviation of the pushforward metric from isometry. Then Y is iteratively changed in a direction that decreases Loss. To construct this loss function, we exploit the results of [15], who showed how a pushforward metric can be estimated, for finite samples and in any given coordinates, using a discrete estimator of the Laplace-Beltrami operator Δ_M. The optimization algorithm is outlined in Algorithm 1.

Input: data X ∈ R^{n×D}, kernel function K_h(·), weights w_{1:n}, intrinsic dimension d, embedding dimension s, initial coordinates Y ∈ R^{n×s}, with Y_{k,:} representing the coordinates of point k
Init: compute the Laplacian matrix L ∈ R^{n×n} using X and K_h(·)
while not converged do
    Compute H = [H_k]_{k=1:n} ∈ R^{n×s×s}, the (dual) pushforward metric at the data points, from Y and L
    Compute Loss(H_{1:n}) and ∇_Y Loss(H)
    Take a gradient step Y ← Y − η ∇_Y Loss(H)
end
Output: Y
Algorithm 1: Outline of the Riemannian Relaxation Algorithm.

A remark on notation is necessary. Throughout the paper, we denote by M, p ∈ M, T_pM, Δ_M a manifold, a point on it, the tangent subspace at p, and the Laplace-Beltrami operator, in the abstract, coordinate-free form. When we describe algorithms acting on data, we use coordinate and finite-sample representations. The data is X ∈ R^{n×D}, and an embedding thereof is denoted Y ∈ R^{n×s}; row k of X or Y, denoted X_{k,:}, Y_{k,:}, contains the coordinates of data point k, while a column, e.g. Y^j, represents a function of the points, i.e. the restriction to the data of a function on M. The construction of L (see below) requires a kernel, which can be the (truncated) Gaussian kernel K_h(z) = exp(−z²/h), |z| < rh, for some fixed r > 0 [9, 17]. Besides these, the algorithm is given a set of weights w_{1:n} with Σ_k w_k = 1.

The construction of the loss is based on two main sets of results that we briefly review here: first, an estimator L of the Laplace-Beltrami operator Δ_M of M, and second, an estimator of the pushforward metric g in the current coordinates Y.
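Before turning to these two estimators, the truncated Gaussian kernel stated above can be written as a small NumPy helper. This is only an illustrative sketch under the K_h(z) = exp(−z²/h), |z| < rh convention; the function name, the default truncation radius r, and the example values are ours, not the paper's code:

```python
import numpy as np

def truncated_gaussian_kernel(z, h, r=3.0):
    """Truncated Gaussian kernel: K_h(z) = exp(-z^2 / h) for |z| < r*h, else 0."""
    z = np.abs(np.asarray(z, dtype=float))
    return np.where(z < r * h, np.exp(-z ** 2 / h), 0.0)

# Example: kernel values for a few distances with bandwidth h = 0.5 (so r*h = 1.5)
print(truncated_gaussian_kernel([0.0, 0.3, 0.9, 2.0], h=0.5))
```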
To construct L we use the method of [4], which guarantees that, if the data are sampled from a manifold M, L converges to Δ_M [9, 17]. Given a set of points in high-dimensional Euclidean space R^D, represented by the n × D matrix X, construct a weighted neighborhood graph G = ({1 : n}, W) over them, with W = [W_{kl}]_{k,l=1:n}. The weight W_{kl} between X_{k,:} and X_{l,:} is given by the heat kernel [2], W_{kl} = K_h(‖X_{k,:} − X_{l,:}‖), with h a bandwidth parameter fixed by the user and ‖·‖ the Euclidean norm. Next, construct the Laplacian L = [L_{kl}]_{k,l=1:n} of G by

    D = diag(W1),   W̃ = D^{-1} W D^{-1},   D̃ = diag(W̃1),   and   L = D̃^{-1} W̃ − I_n.    (1)

Equation (1) represents the discrete version of the renormalized Laplacian construction from [4] (subtracting I_n makes L annihilate constant functions, as Δ_M does). Note that W, D, D̃, W̃, L all depend on the bandwidth h via the heat kernel. The consistency of L has been proved in e.g. [9, 17].

The second fact we use is the relationship between the Laplace-Beltrami operator and the Riemannian metric on a manifold [11]. Based on this, [15] gives a construction method for a discrete estimator of the Riemannian metric g, in any given coordinate system, from an estimate L of Δ_M. In a given coordinate representation Y, a Riemannian metric g at each point is an s × s positive semidefinite matrix of rank d. The method of [15] obtains the Moore-Penrose pseudoinverse of this metric (which must therefore be inverted to obtain the pushforward metric). We denote this inverse at point k by H_k; let H = [H_k, k = 1, . . . , n] be the three-dimensional array containing the inverse for each data point. Note that H is itself (the discrete estimate of) a Riemannian metric, called the dual (pushforward) metric. With these preliminaries, the method of [15] computes H by

    H^{ij} = ½ [ L(Y^i ∘ Y^j) − Y^i ∘ (L Y^j) − Y^j ∘ (L Y^i) ],    (2)

where H^{ij} is the vector whose k-th entry is the (i, j)-th element of the dual pushforward metric H_k at point k, Y^i denotes column i of Y, and ∘ denotes element-by-element multiplication.

2 The objective function Loss

The case s = d (embedding dimension equals intrinsic dimension). Under this condition, it can be shown [10] that φ : M → R^d is an isometry iff g_p, for all p ∈ M, expressed in a normal coordinate system, equals the unit matrix I_d. Based on this observation, it is natural to measure the quality of the data embedding Y by the departure of the Riemannian metric obtained via (2) from the unit matrix. This is the starting idea for the distortion measure we propose to optimize. We develop it further as follows.

First, we choose to use the dual of g, estimated by H, instead of the pushforward metric itself. Naturally H_k = I_d iff H_k^{-1} = I_d, so the dual metric identifies isometry as well. When no isometric transformation exists, it is likely that optimizing w.r.t. g and optimizing w.r.t. h will arrive at different embeddings. There is no mathematically compelling reason, however, to prefer optimizing one over the other. We choose to optimize w.r.t. h for three reasons: (1) it is computationally faster, (2) it is numerically more stable, and (3) in our experience users find H more interpretable (H_k represents the direction and degree of distortion, as opposed to the scaling required to "correct" the space).

Second, we choose to measure the distortion of H_k by ‖H_k − I_d‖, where ‖·‖ denotes the matrix spectral norm. This choice will be motivated shortly.

Third, we choose the weights w_{1:n} to be proportional to D̃ from (1). As [4] show, these values converge to the sampling density π on M. Putting these together, we obtain the loss function

    Loss(Y; L, w) = Σ_{k=1}^n w_k ‖H_k − I_d‖².    (3)

To motivate the choice of a squared loss instead of simply using ‖H_k − I_d‖, notice (the proofs are straightforward) that ‖·‖ is not differentiable at 0, but ‖·‖² is.
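To make (1)–(3) concrete, here is a minimal NumPy sketch that builds L and the weights, estimates the dual metric H, and evaluates the s = d loss. The function names are ours, the dense pairwise-distance computation is for readability only, and Eq. (1) is implemented under the L = D̃^{-1}W̃ − I_n reading reconstructed above:

```python
import numpy as np

def renormalized_laplacian(X, h, r=3.0):
    """Eq. (1): heat-kernel graph, renormalization, Laplacian L; also returns
    weights w proportional to D~ (normalized to sum to 1)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.where(dists < r * h, np.exp(-dists ** 2 / h), 0.0)   # truncated heat kernel
    D_inv = 1.0 / W.sum(axis=1)
    W_t = D_inv[:, None] * W * D_inv[None, :]                   # W~ = D^-1 W D^-1
    D_t = W_t.sum(axis=1)                                       # D~ = diag(W~ 1)
    L = W_t / D_t[:, None] - np.eye(len(X))                     # L = D~^-1 W~ - I_n
    return L, D_t / D_t.sum()

def dual_metric(Y, L):
    """Eq. (2): H_k^{ij} = 0.5 * [L(Y^i*Y^j) - Y^i*(L Y^j) - Y^j*(L Y^i)]_k."""
    n, s = Y.shape
    LY = L @ Y
    H = np.empty((n, s, s))
    for i in range(s):
        for j in range(s):
            H[:, i, j] = 0.5 * (L @ (Y[:, i] * Y[:, j])
                                - Y[:, i] * LY[:, j] - Y[:, j] * LY[:, i])
    return H

def loss_sd(H, w):
    """Eq. (3): weighted squared spectral norms of H_k - I_d (case s = d)."""
    d = H.shape[1]
    return sum(wk * np.linalg.norm(Hk - np.eye(d), 2) ** 2 for wk, Hk in zip(w, H))
```

On data sampled densely from a manifold, `loss_sd` evaluated at a (near) isometric embedding should be close to zero; in practice the bandwidth h and truncation radius r must be tuned to the sampling density.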
A natural question to ask about Loss is whether it is convex. The following proposition, proved in the Supplement, summarizes a set of relevant convexity facts.

Proposition 1 Denote by λ_{1:d}(H_k) ≥ 0 the eigenvalues of H_k, in decreasing order, and assume Y is in a compact, convex set. Then
1. λ_1(H_k), λ_1(H_k) − λ_d(H_k) and λ_1(H_k) − Σ_{d'=1}^{d} λ_{d'}(H_k) are convex in Y.
2. ‖H_k − I_d‖ is convex in Y for (λ_1(H_k) + λ_d(H_k))/2 ≥ 1 and concave otherwise.
3. ‖H_k − I_d‖² is convex in Y whenever ‖H_k − I_d‖ is convex and differentiable in Y.

This proposition shows that Loss may not be convex near its minimum, and moreover that squaring the loss only improves convexity.

Choosing the right measure of distortion. The norm of a Hermitian bilinear functional (i.e. a symmetric tensor of order 2) g : R^s × R^s → R is defined as sup_{u≠0} |g(u, u)|/‖u‖². In a fixed orthonormal basis of R^s, g(u, v) = u^⊤ G v and ‖g‖ = sup_{u≠0} |u^⊤ G u|/‖u‖². One can define norms with respect to any metric g_0 on R^s (where g_0 is represented in coordinates by G_0, a symmetric, positive definite matrix) by ‖u‖²_{G_0} = u^⊤ G_0 u and, respectively,

    ‖g‖_{G_0} = sup_{u≠0} |u^⊤ G u| / ‖u‖²_{G_0} = sup_{ũ≠0} |ũ^⊤ G_0^{-1/2} G G_0^{-1/2} ũ| / ‖ũ‖² = λ_max(G_0^{-1/2} G G_0^{-1/2}),

where λ_max denotes the largest eigenvalue in magnitude. In particular, since any Riemannian metric at a point k is a g as above, setting g and g_0 respectively to H_k and I_d, we measure the operator norm of the distortion by ‖H_k − I_d‖. In other words, the appropriate operator norm we seek can be expressed as a matrix spectral norm. The expected loss over the data set, given a distribution represented by the weights w_{1:n}, is then identical to the expression of Loss in (3). If the weights are computed as in (1), it is easy to see that the loss function in (3) is the finite-sample version of the squared L² distance between h and g_0 on the space of Riemannian metrics on M, w.r.t. the base measure π dV_{g_0}:

    ‖h − g_0‖²_{g_0} = ∫_M ‖h − g_0‖²_{g_0} π dV_{g_0},   with dV_{g_0} the volume element on M.    (4)

Defining Loss for embeddings with s > d dimensions. Consider G, G_0 ∈ R^{s×s}, two symmetric matrices, with G_0 positive semidefinite of rank d < s. We would like to extend the G_0-norm of G to this case. We start with the family of norms ‖·‖_{G_0 + εI_s} for ε > 0 and define

    ‖G‖_{G_0} = lim_{ε→0} ‖G‖_{G_0 + εI_s}.    (5)

Proposition 2 Let G, G_0 ∈ R^{s×s} be symmetric matrices, with G_0 positive semidefinite of rank d < s, let ε > 0, and let γ(u, ε) = u^⊤ G u / (u^⊤ G_0 u + ε‖u‖²). Then,
1. ‖G‖_{G_0+εI_s} = ‖G̃‖ with G̃ = (G_0 + εI_s)^{-1/2} G (G_0 + εI_s)^{-1/2};
2. If ‖G‖_{G_0+εI_s} < r, then λ^⊥(G) < εr, with λ^⊥(G) = sup_{v ∈ Null(G_0)} γ(v, ε);
3. ‖·‖_{G_0} is a matrix norm that takes infinite values when Null(G_0) ⊄ Null(G).

Hence, ‖·‖_{G_0+εI_s} can be computed as the spectral norm of a matrix. The computation of ‖·‖_{G_0} is similar, with the additional step of first checking whether Null(G_0) ⊄ Null(G), in which case we output the value ∞. Let B_ε(0, r) (respectively B(0, r)) denote the radius-r ball centered at 0 in the norm ‖·‖_{G_0+εI_s} (respectively ‖·‖_{G_0}). From Proposition 2 it follows that if G ∈ B_ε(0, r) then λ^⊥(G) < εr, and if G ∈ B(0, r) then Null(G_0) ⊆ Null(G). In particular, if rank G = rank G_0 then Null(G) = Null(G_0).
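Proposition 2(1) is what makes the extended norm computable in practice: ‖G‖_{G_0+εI_s} is just the spectral norm of a rescaled matrix. A minimal NumPy sketch, with a toy example illustrating the barrier behaviour as ε → 0 (all names and the example are ours):

```python
import numpy as np

def eps_norm(G, G0, eps):
    """||G||_{G0 + eps*I} via Proposition 2(1): the spectral norm of
    (G0 + eps*I)^{-1/2} G (G0 + eps*I)^{-1/2}, for symmetric G and PSD G0."""
    s = G.shape[0]
    vals, vecs = np.linalg.eigh(G0 + eps * np.eye(s))        # symmetric positive definite
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T         # (G0 + eps*I)^{-1/2}
    return np.linalg.norm(inv_sqrt @ G @ inv_sqrt, 2)

# Toy example: G0 is a rank-d projector (d < s); any component of G outside the
# span of G0 makes the norm blow up as eps -> 0, acting as a barrier.
s, d = 3, 2
U = np.linalg.qr(np.random.randn(s, d))[0]                   # orthonormal basis, s x d
G0 = U @ U.T
G = np.eye(s)                                                # has mass in Null(G0)
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, eps_norm(G - G0, G0, eps))                    # grows like 1/eps
```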
To define the loss for s > d we set G = H_k and G_0 = U_k U_k^⊤, with U_k an orthonormal basis of T_kM, the tangent subspace at point k. The norms ‖·‖_{G_0+εI_s} and ‖·‖_{G_0} act as soft and hard barrier functions, constraining the span of H_k to align with the tangent subspace of the data manifold:

    Loss(Y; L, w, d, ε_orth) = Σ_{k=1}^n w_k ‖G̃_k‖²,  where  G̃_k = (U_kU_k^⊤ + ε_orth² I_s)^{-1/2} (H_k − U_kU_k^⊤) (U_kU_k^⊤ + ε_orth² I_s)^{-1/2}.    (6)

3 Optimizing the objective

Let L_{k,:} denote the k-th row of L. Then H_k can be rewritten in the convenient form

    H_k = ½ Y^⊤ [ diag(L_{k,:}) − (e_k e_k^⊤ L) − (e_k e_k^⊤ L)^⊤ ] Y = ½ Y^⊤ L̃_k Y,    (7)

where e_k refers to the k-th standard basis vector of R^n and L̃_k is a symmetric positive semi-definite matrix precomputed from entries of L; L̃_k has non-zero rows only for the neighbors of k.

Proposition 3 Let Loss_k denote term k of Loss. If s = d, the gradient of Loss_k as given by (3) is

    ∂Loss_k/∂Y = 2 w_k λ*_k L̃_k Y u_k u_k^⊤,    (8)

with λ*_k the largest eigenvalue of H_k − I_d and u_k the corresponding eigenvector. If s > d, the gradient of Loss_k of (6) is

    ∂Loss_k/∂Y = 2 w_k λ*_k L̃_k Y Π_k u_k u_k^⊤ Π_k^⊤,    (9)

where Π_k = (U_kU_k^⊤ + ε_orth² I_s)^{-1/2}, λ*_k is the largest eigenvalue of G̃_k of (6), and u_k is the corresponding eigenvector.

When embedding in s > d dimensions, the loss function depends at each point k on finding the d-dimensional subspace U_k. Mathematically, this subspace coincides with the span of the Jacobian DY_k, which can be identified with the d-principal subspace of H_k. When computing the gradient of Loss we assume that U_{1:n} are fixed. Since the derivatives w.r.t. Y are taken only of H and not of the tangent subspaces U_k, the algorithm below is actually an alternating minimization algorithm, which reduces the cost w.r.t. Y in one step and w.r.t. U_{1:n} in the alternate step.

3.1 Algorithm

We optimize the loss (3) or (6) by projected gradient descent with line search (subject to the observation above). The projection consists of imposing Σ_k Y_k = 0, which we enforce by centering Y before taking a step. This eliminates the degeneracy of the Loss in (3) and (6) w.r.t. constant shifts of Y. To further improve the trade-off between time per iteration and number of iterations, we found that a heavy-ball method with parameter α is effective. At each iteration, computing the gradient is O((S + s³)n), where S is the number of non-zero entries of L.

Input: data X, kernel function K_h(·), initial coordinates Y_0, weights w_{1:n}, intrinsic dimension d, orthonormal tolerance ε_orth, heavy-ball parameter α ∈ [0, 1)
Init: compute the graph Laplacian L by (1) and the matrices L̃_{1:n} as in (7); set S = 0
while not converged do
    Compute Loss and its gradient: for all k do
        1. Calculate H_k via (2);
        2. If s > d:
            (a) compute U_k by SVD from H_k;
            (b) compute the gradient of Loss_k(Y) using (9);
        3. Else (s = d): calculate the gradient of Loss_k(Y) using (8);
        4. Add the gradient of Loss_k(Y) to the total gradient;
    end
    Take a step in Y:
        1. Compute the projected direction S ← (I_n − (1/n) 1_n 1_n^⊤)(∇_Y Loss + αS);
        2. Find the step size η by line search and update Y ← Y − ηS;
end
Output: Y
Algorithm 2: RIEMANNIANRELAXATION (RR)

3.2 For large or noisy data

Here we describe an extension of the RR algorithm which can naturally adapt to large or noisy data, where the manifold assumption holds only approximately. The idea is to subsample the data, but in a highly non-uniform way that improves the estimation of the geometry. A simple preliminary observation is that, when an embedding is smooth, optimizing the loss on a subset of the data will be sufficient. Let I ⊂ {1, . . . , n} be a set of size n′ < n. The subsampled loss Loss_I is computed only over the points k ∈ I. If every point k has O(d) neighbors in I, this assures that the gradient of Loss_I will be a good approximation of the gradient of Loss at point k, even if k ∉ I and Loss_I does not contain a term in H_k. To optimize Loss_I by RR, it is sufficient to run the for loop over k ∈ I.
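As a reading aid, the sketch below implements one heavy-ball step of Algorithm 2 for the s = d case, with the gradient of Eq. (8) assembled from the L̃_k of Eq. (7); the `subset` argument runs the loop over a subsample I as just described. All names are ours, and the fixed step size stands in for the line search:

```python
import numpy as np

def local_quadratic_form(L, k):
    """L~_k from Eq. (7), so that H_k = 0.5 * Y^T L~_k Y."""
    n = L.shape[0]
    Lk = L[k]                                        # k-th row of the Laplacian
    ek = np.zeros(n); ek[k] = 1.0
    return np.diag(Lk) - np.outer(ek, Lk) - np.outer(Lk, ek)

def rr_step(Y, L, w, eta=0.05, alpha=0.5, S=None, subset=None):
    """One projected heavy-ball step on the s = d loss (3); gradient from Eq. (8).
    `subset` restricts the sum to k in I, as in Section 3.2 (None means all points)."""
    n, d = Y.shape
    grad = np.zeros_like(Y)
    for k in (range(n) if subset is None else subset):
        Lk_t = local_quadratic_form(L, k)
        Hk = 0.5 * Y.T @ Lk_t @ Y                    # Eq. (7)
        evals, evecs = np.linalg.eigh(Hk - np.eye(d))
        i = np.argmax(np.abs(evals))                 # eigenvalue defining the spectral norm
        grad += 2 * w[k] * evals[i] * Lk_t @ Y @ np.outer(evecs[:, i], evecs[:, i])
    S = grad if S is None else grad + alpha * S      # heavy-ball direction
    S = S - S.mean(axis=0)                           # projection: keep sum_k Y_k = 0
    return Y - eta * S, S                            # fixed eta stands in for line search
```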
Algorithm PCS-RR below describes how we choose a "good" subsample I, with the help of the PRINCIPALCURVES algorithm of [14].

Input: data X, kernel function K_h(·), initial coordinates Y_0, intrinsic dimension d, subsample size n′, other parameters for RR
Compute X̂ = PRINCIPALCURVES(X, K_h, d)
Take a uniform sample I_0 of size n′ from {1, . . . , n} (without replacement)
for k in I_0 do
    Find X_l, the nearest neighbor in X of X̂_k, and add l to I (removing duplicates)
end
Output: Y = RR(Y_0, K_h, d, I, . . .)
Algorithm 3: PRINCIPALCURVES-RIEMANNIANRELAXATION (PCS-RR)

Figure 1: Hourglass to sphere. From left to right: target Y (noisy sphere), initialization Y_0 of RR (noisy hourglass), output of RR, and mean-squared error and Loss vs. noise level σ (on a log10 scale). Convergence of RR was achieved after 400 iterations.

Informally speaking, PRINCIPALCURVES uses a form of Mean-Shift to obtain points on the d-dimensional manifold of highest density in the data. The result is generally biased; however, [7] have shown that this algorithm offers a very advantageous bias-variance trade-off in the case of manifolds with noise. We use the output X̂ of PRINCIPALCURVES to find a subset of points that (1) lie in a high-density region relative to most directions in R^D and (2) are in the middle of their neighbors, or more formally, have neighborhoods of dimension at least d. In other words, this is a good heuristic for avoiding "border effects" or other regions where the d-manifold assumption is violated.

4 Experimental evaluation

Hourglass to sphere illustrates how the algorithm works for s = 3, d = 2. The data X is sampled uniformly from a sphere of radius 1 with intrinsic dimension d = 2. We sample n = 10000 points from the sphere and add i.i.d. Gaussian noise with Σ = (σ²/s) I_s (for this artificial noise, adding dimensions beyond s has no effect except to increase σ), estimating the Laplacian L on the noisy data X. We initialize with a noisy hourglass shape in s = 3 dimensions, with the same noise distribution as the sphere. If the algorithm works correctly, using solely the Laplacian and weights from X, it should morph the hourglass Y_0 back into a sphere. The results after convergence at 400 iterations are shown in Fig. 1 (an animation of this convergence is in the Supplement). We see that RR not only recovers the sphere, but also suppresses the noise.

The next two experiments compare RR to several embedding algorithms w.r.t. geometric recovery. The algorithms are Isomap, Laplacian Eigenmaps, HLLE [6] and MVU (embeddings were computed using drtoolbox: https://lvdmaaten.github.io/drtoolbox/). The embeddings Y_{LE, MVU, HLLE} need to be rescaled before being evaluated, and we use a Procrustes transformation to the original data. The algorithms are compared w.r.t. the dual metric distortion Loss, and w.r.t. the mean squared error in pairwise distances (the loss optimized by Isomap, which estimates the true distances using graph shortest paths). This is

    dis(Y, Y^true) = 2/(n(n−1)) Σ_{k<k′} ( ‖Y_k − Y_{k′}‖ − ‖Y^true_k − Y^true_{k′}‖ )²,    (10)

where Y is the embedding resulting from the chosen method and Y^true are the true noiseless coordinates. Note that none of Isomap, MVU, HLLE could have been tested on the hourglass-to-sphere data of the previous example, because they work only for s = d. The sample size is n = 3000 in both experiments, and noise is added as described above.

Flat swiss roll manifold, s = d = 2. The results are displayed in Fig. 2.
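For reference, the distortion score dis of Eq. (10) used in both comparisons can be computed directly; a minimal NumPy sketch (the function name is ours; the Procrustes alignment of the competing embeddings is a separate step, not shown):

```python
import numpy as np

def dis(Y, Y_true):
    """Eq. (10): mean squared difference of pairwise distances between the
    embedding Y and the true noiseless coordinates Y_true (each pair counted once)."""
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    D_true = np.linalg.norm(Y_true[:, None, :] - Y_true[None, :, :], axis=2)
    iu = np.triu_indices(len(Y), k=1)           # the n(n-1)/2 distinct pairs
    return np.mean((D[iu] - D_true[iu]) ** 2)
```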
Curved half-sphere manifold, s = d = 2. An isometric embedding into 2D is not possible; we examine which of the algorithms achieves the smallest distortion in this scenario. The true distances were computed as arc-lengths on the half-sphere. The results are displayed in Fig. 2. RR was initialized at each method. For almost every initialization and noise level, RR achieves a decrease in dis, in some cases a significant one. Isomap also performs well, and even though RR optimizes a different loss function, it never increases dis and often improves on it. This demonstrates the ability of the Riemannian metric to encode simultaneously all aspects of manifold geometry. Convergence of RR varies with the initialization but was in all cases faster than Isomap. The extension of RR to PCS-RR allows scaling to much larger data sets.

Figure 2: Swiss hole (left) and half sphere (right). The top plots display example initial embeddings and their Riemannian Relaxed versions. The middle row displays the dis value vs. noise level σ; the bottom row displays the Loss value vs. noise level σ, for Laplacian Eigenmaps (Leigs), Isomap, HLLE and MVU. As RR was initialized at each method, dashed lines indicate the relaxed embeddings.

4.1 Visualizing the main SDSS galaxy sample in spectra space

The data consists of spectra of galaxies from the Sloan Digital Sky Survey (www.sdss.org) [1]. We extracted a subset of spectra whose SNR was sufficiently high, known as the main sample. This set contains 675,000 galaxies observed in D = 3750 spectral bins, preprocessed by first moving them to a common rest-frame wavelength and filling in missing data following [18], but using the more sophisticated weighted PCA algorithm of [5], before computing a sparse neighborhood graph and pairwise distances between neighbors in this graph. A log-log plot of the average number of neighbors m(r) vs. the neighborhood radius r (shown in the Supplement) indicates that the intrinsic dimension of these data varies with the scale r. In particular, in order to support m = O(d) neighbors, the radius must be above 60, in which case d ≈ 3. We embedded the whole data set by Laplacian Eigenmaps, obtaining the embedding in Fig. 3a. This figure strongly suggests that d is not constant for this data cloud, and that the embedding is not isometric (Fig. 3b).
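The scale-dependent dimension diagnostic used above, the log-log plot of the average neighbor count m(r) against the radius r, can be reproduced with a few lines: if the data locally fills a d-dimensional region then m(r) grows roughly like r^d, so the slope of log m(r) versus log r estimates d at that scale. This reading of the plot is a standard heuristic rather than a procedure spelled out here, and the sketch (names ours) uses dense distances, so it is meant for subsamples:

```python
import numpy as np

def neighbor_counts(X, radii):
    """Average number of neighbors m(r) within radius r, excluding the point itself."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.array([np.mean(np.sum(D < r, axis=1) - 1) for r in radii])

def dimension_estimate(X, radii):
    """Slope of log10 m(r) vs. log10 r over the given radii (all m(r) must be > 0)."""
    m = neighbor_counts(X, radii)
    slope, _intercept = np.polyfit(np.log10(radii), np.log10(m), deg=1)
    return slope
```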
Figure 3: a: The initial LE embedding from D = 3750 to s = 3 dimensions, with the principal curves Ŷ superimposed. For clarity, we only show a small subsample of Y_0; a larger one is in the Supplement. b: The same embedding, showing only points on the principal curves, colored by log10 ‖H_k‖ (hence, 0 represents isometry). c: The same points as in (b), after RR (color on the same scale as in (b)). d: 40,000 galaxies in the coordinates from (c), colored by the strength of the Hydrogen α emission, a very non-linear feature which requires dozens of dimensions to be captured by a linear embedding. Convergence of PCS-RR was achieved after 1000 iterations and took 2.5 hours, optimizing a Loss with n′ = 2000 terms over the n × s = 10^5 × 3 coordinates corresponding to the highest-density points. (Please zoom for better viewing.)

We rescaled the data along the three evident principal curves shown in Figure 3a by running PCS-RR(Y, n = 10^5, n′ = 2000, s = 3, d = 1). In the new coordinates (Fig. 3c), Y is now close to isometric along the selected curves, whereas in Fig. 3b, ‖H_k‖ was in the thousands on the uppermost "arm". This means that, at the largest scale, the units of distance in the space of galaxy spectra are being preserved (almost) uniformly along the sequences, and that they correspond to the distances in the original D = 3750 data. Moreover, we expect the distances along the final embedding to be closer on average to the true distances, because of the denoising effect of the embedding. Interpreting the coordinates along these arms is in progress. As a next step of the analysis, RR with s = d = 3 will be used to rescale the high-density region at the confluence of the three principal curves.

5 Discussion

Contributions: we propose a new, natural way to measure the distortion from isometry of any embedding Y ∈ R^{n×s} of a data set X ∈ R^{n×D}, and we study its properties. The distortion loss is based on an estimate of the push-forward Riemannian metric into Euclidean space R^s. The RR algorithm we propose departs from existing non-linear embedding algorithms in several ways. First, instead of a heuristically chosen loss, like pairwise distances or local linear reconstruction error, it directly optimizes the (dual) Riemannian metric of the embedding Y. When this is successful and the loss is 0, all geometric properties (lengths, angles, volumes) are preserved simultaneously. Second, from the computational point of view, the non-convex loss is optimized iteratively by projected gradient descent. Third, our algorithm explicitly requires both an embedding dimension s and an intrinsic dimension d as inputs. Estimating the intrinsic dimension of a data set is not a solved problem and is beyond the scope of this work. However, as a rule of thumb, we propose choosing the smallest d for which Loss is not too large, for s fixed, or, if d is known (something that all existing algorithms assume), increasing s until the loss becomes almost 0. Most existing embedding algorithms, such as Isomap, LLE, HLLE, MVU, LTSA, only work in the case s = d, while Laplacian Eigenmaps/Diffusion Maps requires only s but does not attempt to preserve geometric relations. Finally, RR is computationally competitive with existing algorithms and can be seamlessly adapted to a variety of situations arising in the analysis of real data sets.

References

[1] K. N. Abazajian et al. The Seventh Data Release of the Sloan Digital Sky Survey. Astrophysical Journal Supplement Series, 182:543–558, June 2009.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2002.
[3] M. Bernstein, V. de Silva, J. C. Langford, and J. Tenenbaum. Graph approximations to geodesics on embedded manifolds. Science, 290, 2000.
[4] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):6–30, 2006.
[5] L. Delchambre. Weighted principal component analysis: a weighted covariance eigendecomposition approach. Monthly Notices of the Royal Astronomical Society, 446(4):3545–3555, 2015.
[6] David L. Donoho and Carrie Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci., 100(10):5591–5596, May 2003.
[7] Christopher Genovese, Marco Perone-Pacifico, Isabella Verdinelli, and Larry Wasserman. Minimax manifold estimation.
Journal of Machine Learning Research, 13:1263–1291, May 2012.
[8] M. Hein and J.-Y. Audibert. Intrinsic dimensionality estimation of submanifolds in R^d. In Proceedings of the 22nd International Conference on Machine Learning, ICML, pages 289–296, 2005.
[9] M. Hein, J.-Y. Audibert, and U. von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8:1325–1368, 2007.
[10] J. M. Lee. Riemannian Manifolds: An Introduction to Curvature. Springer, New York, 1997.
[11] J. M. Lee. Introduction to Smooth Manifolds. Springer, New York, 2003.
[12] B. Nadler, S. Lafon, R. R. Coifman, and I. G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Applied and Computational Harmonic Analysis, 21:113–127, 2006.
[13] J. Nash. The imbedding problem for Riemannian manifolds. Annals of Mathematics, 63:20–63, 1956.
[14] Umut Ozertem and Deniz Erdogmus. Locally defined principal curves and surfaces. Journal of Machine Learning Research, 12:1249–1286, 2011.
[15] Dominique Perrault-Joncas and Marina Meila. Non-linear dimension reduction: Riemannian metric estimation and the problem of geometric recovery. arXiv:1305.7255v1, 2013.
[16] J. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[17] D. Ting, L. Huang, and M. I. Jordan. An analysis of the convergence of graph Laplacians. In ICML, pages 1079–1086, 2010.
[18] Jake Vanderplas and Andrew Connolly. Reducing the dimensionality of data: Locally linear embedding of Sloan galaxy spectra. The Astronomical Journal, 138(5):1365, 2009.
[19] Nakul Verma. Distance preserving embeddings for general n-dimensional manifolds. Journal of Machine Learning Research, 14:2415–2448, 2013.
[20] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70:77–90, 2006. doi:10.1007/s11263-005-4939-z.
[21] Z. Zhang and H. Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Scientific Computing, 26(1):313–338, 2004.