Journal of Machine Learning Research 26 (2025) 1-52. Submitted 4/25; Revised 10/25; Published 11/25.

Differentially Private Multivariate Medians

Kelly Ramsay (kramsay2@yorku.ca), Department of Mathematics and Statistics, York University, North York, ON M3J 1P3, Canada

Aukosh Jagannath (a.jagannath@uwaterloo.ca), Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada

Shoja'eddin Chenouri (schenouri@uwaterloo.ca), Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada

Editor: Raef Bassily

Abstract

Statistical tools which satisfy rigorous privacy guarantees are necessary for modern data analysis. It is well known that robustness against contamination is linked to differential privacy. Despite this fact, using multivariate medians for differentially private and robust multivariate location estimation has not been systematically studied. We develop novel finite-sample performance guarantees for differentially private multivariate depth-based medians, which are essentially sharp. Our results cover commonly used depth functions, such as the halfspace (or Tukey) depth, spatial depth, and the integrated dual depth. We show that under Cauchy marginals, the cost of heavy-tailed location estimation outweighs the cost of privacy. We demonstrate our results numerically using a Gaussian contamination model in dimensions up to d = 100, and compare them to a state-of-the-art private mean estimation algorithm. As a by-product of our investigation, we prove concentration inequalities for the output of the exponential mechanism about the maximizer of the population objective function. This bound applies to objective functions that satisfy a mild regularity condition.

Keywords: differential privacy, location estimation, robust statistics, depth function, sample complexity

1. Introduction

Protecting user privacy is necessary for a safe and fair society.
In order to protect user privacy, many large institutions, including the United States Census Bureau (Abowd et al., 2022), Apple (Differential Privacy Team, 2017), and Google (Guevara, 2021), utilize a strong notion of privacy known as differential privacy. Differential privacy is favored because it protects against many adversarial attacks while requiring minimal assumptions on both the data and the adversary (Dwork et al., 2017). It is now well known that differential privacy is linked to robustness against contamination; see (Dwork and Lei, 2009; Avella-Medina, 2020; Liu et al., 2021; Georgiev and Hopkins, 2022; Hopkins et al., 2023; Asi et al., 2023) and the references therein. Surprisingly, however, multivariate medians have not been systematically studied in the differential privacy literature. We study this problem here. Note that there are several variants of differential privacy. Here, we focus on the setting of pure differential privacy, which is the strictest variant. To date, the literature on differentially private location estimation has largely focused on means. Foygel Barber and Duchi (2014) proved a lower bound on the minimax risk of differentially private mean estimation in terms of the number of moments possessed by the population measure. Other early works focused on the univariate setting (Karwa and Vadhan, 2017; Bun and Steinke, 2019). Kamath et al. (2019) and Bun et al. (2019) studied differentially private mean estimation for several parametric models, including the multivariate Gaussian model. Later, several authors studied differentially private mean estimation for sub-Gaussian measures (Cai et al., 2021; Biswas et al., 2020; Brown et al., 2021; Liu et al., 2022).

© 2025 Kelly Ramsay, Aukosh Jagannath and Shoja'eddin Chenouri. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v26/25-0763.html.
Subsequently, Kamath et al. (2020), Liu et al. (2021), and Hopkins et al. (2022) studied differentially private mean estimation for measures with a finite second moment. Within this framework, Kamath et al. (2020) considered heavy tails, while Liu et al. (2021) and Hopkins et al. (2022) considered robustness in terms of contamination models. Kamath et al. (2020) quantified the sample complexity of their (purely) differentially private mean estimator in terms of the number of moments possessed by the population measure. Liu et al. (2021) quantified the relationship between the accuracy of an (approximately) differentially private mean estimator and the level of contamination in the observed sample. On a similar note, Hopkins et al. (2022) showed that there exists a level of contamination under which the sample complexity of their (purely) differentially private mean estimator remains unchanged. On the other hand, if one wishes to estimate location robustly, the canonical estimators are medians. The differential privacy literature on medians has focused on the univariate setting: Dwork and Lei (2009) introduced an approximately differentially private median estimator with asymptotic consistency guarantees; Avella-Medina and Brunel (2019) and Brunel and Avella-Medina (2020) then introduced several median estimators which achieve sub-Gaussian error rates; and Tzamos et al. (2020) introduced median estimators with minimax optimal sample complexity. Private estimation of multivariate medians, however, has not been carefully studied. In this paper, we study differentially private, multivariate median estimation through the framework of depth functions. Depth-based median estimation is the standard approach to multivariate median estimation. Here, one has a function, called the depth function, which provides a measure of centrality. Its maximizer is then the median, which is generally robust.
Popular depth-based medians include the halfspace (or Tukey) median (Tukey, 1974), the spatial median (Vardi and Zhang, 2000), and the simplicial median (Liu, 1990). For more on the motivation of the depth-based medians approach, see Section 2 below and Small (1990); Vardi and Zhang (2000); Serfling (2006). Our results apply to a broad class of depth functions, including those listed above, and we make minimal distributional assumptions. In particular, we do not require concavity of the depth function or that the population measure has moments of any order. Before turning to our main results, it is important to note that several authors have recently used depth functions in the context of differential privacy, likely due to their robustness properties, with a heavy focus on the halfspace depth. Depth functions have been used in settings such as finding a point in a convex hull (Beimel et al., 2019; Gao and Sheffet, 2020), comparing k-norm mechanisms (Awan and Slavković, 2021), and improving robustness for mean estimation for Gaussian models (Brown et al., 2021; Liu et al., 2021). Another related work is that of Ben-Eliezer et al. (2022), who focus on various problems concerning high-dimensional quantile estimation. In particular, one problem Ben-Eliezer et al. (2022) consider is that of privately obtaining one or more points which have halfspace depth above a given threshold. Though this problem is closely related, it is distinct. For instance, their algorithm can be used to estimate a point with high halfspace depth. However, it could not be used to directly estimate the halfspace median, since that would require setting the threshold to be close to the depth of the empirical halfspace median, which depends on the sample. To our knowledge, none of the aforementioned works have directly addressed depth-based median estimation.
After completion of this work, Ramsay and Spicker (2025) consider approximately differentially private projection-depth-based medians. Though many popular depth-based medians are covered by our framework, projection-depth-based medians are not. As a result, their work can be thought of as complementary to this one.

1.1 Our contributions

Our contributions are as follows:

Our main result is a general finite-sample deviations bound for private multivariate medians based on the exponential mechanism (Theorem 5). This deviations bound applies to private versions of many popular medians arising from depth functions, such as the halfspace median (Tukey, 1974), the simplicial median (Liu, 1988) and the spatial median (Vardi and Zhang, 2000; Serfling, 2002). We use Theorem 5 to give upper bounds on the deviations (and sample complexities, see Corollary 10) of several purely differentially private multivariate medians arising from depth functions. These bounds are essentially sharp given recent results of Kamath et al. (2019); Cai et al. (2021) and do not require any moment assumptions on the population measure.

We provide a fast implementation of a smoothed version of the integrated dual depth-based median (Cuevas and Fraiman, 2009); we can compute the (non-private) median of ten thousand 100-dimensional samples in less than one second on a personal computer. We show that this smoothed version of the integrated dual depth satisfies desirable properties for a depth function and can be approximated in polynomial time with finite-sample performance guarantees, see Proposition 33.

As a by-product of our analysis, we uncover a general but elementary concentration bound (Theorem 12) for the exponential mechanism. We give a novel regularity condition, (K, F)-regularity (Condition 3), on the objective function.
This regularity condition implies both an upper bound on the sample complexity of a draw from the exponential mechanism and an upper bound on the global sensitivity of the objective function. This bound is applicable to a broader class of problems in statistical learning beyond differential privacy, see the follow-up paper (Ramsay et al., 2025).

As a second by-product of our analysis, we uncover uniform non-asymptotic error bounds for several depth functions and a differentially private estimator of data depth values.

2. Main results

In this section, we provide a general finite-sample deviation bound for differentially private multivariate median estimation. As the tools we use here combine ideas from two distinct literatures, namely the differential privacy literature and the robust statistics literature, we briefly recall some essential notions from each of these fields before stating our main results. For a more in-depth introduction to differential privacy see Dwork and Roth (2014), and for a more in-depth introduction to robust multivariate median estimation and the depth function framework see Mosler (2002); Small (1990); Liu et al. (2006). We define all mathematical objects throughout the paper as they are used, but it is also helpful to have them collected in one place. Therefore, we also provide a summary of the notation we will use throughout the paper in Appendix A.

2.1 Differential privacy

Let us start first by recalling some essential notions from differential privacy. A data set of size $n$ in dimension $d$ is a collection of $n$ points in $\mathbb{R}^d$, $X_n = (X_\ell)_{\ell=1}^n$, with repetitions allowed. Let $\mathcal{D}_{n,d}$ be the set of all such data sets. For a data set $Y_n$, let
$$D(Y_n, m) = \{ Z_n \in \mathcal{D}_{n,d} \colon |Z_n \,\triangle\, Y_n| = 2m \},$$
that is, $D(Y_n, m)$ is the collection of data sets of size $n$ in dimension $d$ which differ from $Y_n$ in exactly $m$ points. Two data sets $Y_n$ and $Z_n$ are said to be adjacent if $Z_n \in D(Y_n, 1)$.
Finally, for a data set $Y_n$ with empirical measure $\hat\mu_{Y_n}$, define
$$\widetilde{\mathcal{M}}(Y_n) = \{ \mu \in \mathcal{M}_1(\mathbb{R}^d) \colon \mu = \hat\mu_{Z_n} \text{ for some } Z_n \in D(Y_n, 1) \},$$
where, for a space $S$, $\mathcal{M}_1(S)$ denotes the space of probability measures on $S$. Let us now recall the notion of differential privacy. Suppose that we observe a data set $X_n$ comprised of $n$ i.i.d. samples from some unknown measure $\mu \in \mathcal{M}_1(\mathbb{R}^d)$ and produce a statistic, $\theta_n$, whose law conditionally on the samples depends on their empirical measure, $\hat\mu_n$. Let us denote this law by $P_{\hat\mu_n}$. The goal of differential privacy is to produce a statistic such that $P_{\hat\mu_n}$ is non-degenerate and, more precisely, obeys a certain privacy guarantee which is defined as follows. Following the privacy literature, we call the algorithm that takes $\hat\mu_n$ and returns the (random) statistic, $\theta_n$, a mechanism. We also follow the standard abuse of terminology and call $\theta_n$ the mechanism when it is clear from context.

Definition 1 A mechanism $\theta_n$ is $\epsilon$-differentially private if for any data set $Y_n$, any $\mu \in \widetilde{\mathcal{M}}(Y_n)$, and any measurable set $B$, it holds that
$$P_{\hat\mu_{Y_n}}(B) \le e^{\epsilon} P_{\mu}(B). \quad (2.1)$$

Here, $\epsilon > 0$ is the privacy parameter, for which smaller values enforce stricter levels of privacy. Many general purpose differentially private mechanisms rely on the concept of global sensitivity. The global sensitivity $GS_n$ of a function $\varphi \colon \mathbb{R}^d \times \mathcal{M}_1(\mathbb{R}^d) \to \mathbb{R}$ is given by
$$GS_n(\varphi) = \sup_{X_n \in \mathcal{D}_{n,d},\; \mu'_n \in \widetilde{\mathcal{M}}(X_n)} \; \sup_{x \in \mathbb{R}^d} |\varphi(x, \hat\mu_n) - \varphi(x, \mu'_n)|.$$
The simplest differentially private mechanisms are the additive noise mechanisms, e.g., the Laplace and Gaussian mechanisms. These mechanisms require the non-private estimator to have finite global sensitivity. Unfortunately, virtually all standard location estimators, including both univariate and multivariate medians, have infinite global sensitivity, see e.g., (Avella-Medina and Brunel, 2019). However, depth functions, the objective functions for multivariate medians which we will discuss shortly, all have finite global sensitivity.
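This sensitivity contrast is easy to see numerically. Below is a minimal sketch of our own (not taken from the paper), using the univariate halfspace depth, min of the two empirical halfline masses, as a stand-in for the depth functions discussed later; the grid and sample are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def halfspace_depth_1d(x, data):
    """Univariate halfspace depth: the smaller of the two halfline masses."""
    data = np.asarray(data)
    return min(np.mean(data <= x), np.mean(data >= x))

n = 100
data = rng.normal(size=n)
neighbor = data.copy()
neighbor[0] = 1e6  # adjacent data set: one record swapped for a gross outlier

# The sample median can be pushed arbitrarily far by changing enough points
# (its global sensitivity is infinite), but the depth *values* move by at
# most 1/n at every x when a single record changes:
grid = np.linspace(-3, 3, 201)
max_change = max(
    abs(halfspace_depth_1d(x, data) - halfspace_depth_1d(x, neighbor))
    for x in grid
)
print(max_change, "<=", 1.0 / n)
```

Replacing one of the $n$ records changes each halfline mass by at most $1/n$, which is exactly the $K/n$-type sensitivity bound that makes depth objectives compatible with the exponential mechanism.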
This fact makes them a good candidate for use with the exponential mechanism, which is a general mechanism used to produce differentially private estimators from non-private estimators that are maximizers of a given objective function. For a given $\beta > 0$, base measure (i.e., prior) $\pi \in \mathcal{M}_1(\mathbb{R}^d)$ and objective function $\varphi \colon \mathbb{R}^d \times \mathcal{M}_1(\mathbb{R}^d) \to \mathbb{R}$, suppose that $\int \exp(\beta \varphi(\theta, \nu))\, d\pi(\theta) < \infty$ for all $\nu \in \mathcal{M}_1(\mathbb{R}^d)$; then the exponential mechanism with base measure $\pi$ is given by
$$dQ_{\hat\mu_n, \beta} \propto \exp(\beta \varphi(\theta, \hat\mu_n))\, d\pi. \quad (2.2)$$
A sample from the exponential mechanism $\theta_n \sim Q_{\hat\mu_n, \beta}$ then provides an estimate of the population value $\theta_0 = \operatorname{argmax}_{\theta \in \mathbb{R}^d} \varphi(\theta, \mu)$. In regard to the choice of $\beta$, it was shown by McSherry and Talwar (2007) that if $\theta_n$ is produced by the exponential mechanism with $\beta \le \epsilon / (2\, GS_n(\varphi))$, then $\theta_n$ is $\epsilon$-differentially private.

2.2 The depth function framework

When first studying multivariate median estimation, one might try to naively extend the notion of a univariate median to the multivariate setting, i.e., the coordinate-wise median. This, however, is well known to be unsatisfactory as a measure of center for many reasons (Small, 1990; Serfling, 2006; Liu et al., 2006). For example, the coordinate-wise median can reside outside the convex hull of the data (Serfling, 2006). The general framework of depth functions, or statistical depth functions, was introduced to resolve these issues and is now the standard approach to defining and studying multivariate medians. See (Small, 1990; Zuo and Serfling, 2000; Mosler, 2002; Serfling, 2006; Liu et al., 2006) for a necessarily small introduction to this broad research field. Let us now recall the basic notions of (statistical) depth functions. Roughly speaking, depth functions are functions of the form $D \colon \mathbb{R}^d \times \mathcal{M}_1(\mathbb{R}^d) \to \mathbb{R}_+$ which, given a probability measure and a point in the domain, assign a number. This number represents how central that point is with respect to the measure; it is called the depth of that point.
Given a depth function, the corresponding median is then defined to be a maximizer of this depth, that is, the deepest point:
$$\mathrm{MED}(\mu; D) = \operatorname{argmax}_{x \in \mathbb{R}^d} D(x, \mu).$$
Depth-based medians are generally robust, in the sense that they are not affected by outliers. For instance, depth-based medians have a high breakdown point and favorable properties related to the influence function (Chen and Tyler, 2002; Zuo, 2004). Let us now be more precise. For two random vectors $X, Y$, we write $X \stackrel{d}{=} Y$ when $X$ is equal in distribution to $Y$. Recall that a measure $\mu \in \mathcal{M}_1(\mathbb{R}^d)$ is centrally symmetric about a point $x \in \mathbb{R}^d$ if $X - x \stackrel{d}{=} x - X$ for $X \sim \mu$. In the following, let $\mathcal{C}_x \subseteq \mathcal{M}_1(\mathbb{R}^d)$ denote the set of centrally symmetric measures about $x$ and, for $A \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^d$, let $A\mu + b$ be the law of $AX + b$ if $X \sim \mu$.

Definition 2 A function $D \colon \mathbb{R}^d \times \mathcal{M}_1(\mathbb{R}^d) \to \mathbb{R}_+$ is a depth function with admissible set $\mathcal{A} \subseteq \mathcal{M}_1(\mathbb{R}^d)$ if, for all $\mu \in \mathcal{A}$, the following four properties hold:

1. Similarity invariance: For all orthogonal matrices $A \in \mathbb{R}^{d \times d}$ and all $b \in \mathbb{R}^d$ it holds that $D(x, \mu) = D(Ax + b, A\mu + b)$.
2. Maximality at center: If $\mu \in \mathcal{C}_{\theta_0}$ then $D(\theta_0, \mu) = \sup_x D(x, \mu)$.
3. Decreasing along rays: Suppose $D$ is maximized at $\theta_0$. For all $p \in (0, 1)$, it holds that $D(x, \mu) \le D(p\theta_0 + (1 - p)x, \mu) \le D(\theta_0, \mu)$.
4. Vanishing at infinity: $\lim_{c \to \infty} D(cu, \mu) = 0$ for any unit vector $u$.

The first property says that the depth function has limited dependence on the chosen coordinate system. The remaining properties ensure that $D$ measures depth. The admissible set $\mathcal{A}$ is the set of measures or distributions for which the function measures depth.

Remark 3 There are a few distinct definitions of the notion of a depth function in the statistics literature. See, e.g., (Zuo and Serfling, 2000; Liu et al., 2006; Ramsay et al., 2019) for some alternatives. We choose Definition 2 as it includes as many existing depth functions as possible while retaining the spirit of a depth function.
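As a concrete instance of a depth function, the sketch below implements the sample spatial depth (using one common convention, one minus the norm of the average unit vector from the data to the query point; the paper's exact version, following Vardi and Zhang (2000) and Serfling (2002), may differ in details) and checks that depth is high at the center and low far away:

```python
import numpy as np

rng = np.random.default_rng(2)

def spatial_depth(x, data):
    """Sample spatial depth: 1 - || average unit vector from the data to x ||.

    One common convention for the spatial depth; not necessarily the exact
    form used in the paper."""
    diffs = x[None, :] - data
    norms = np.linalg.norm(diffs, axis=1)
    keep = norms > 0                     # ignore data points coinciding with x
    units = diffs[keep] / norms[keep, None]
    return 1.0 - np.linalg.norm(units.mean(axis=0))

data = rng.normal(size=(1000, 2))
center = spatial_depth(np.zeros(2), data)
far = spatial_depth(np.array([3.0, 3.0]), data)
print(center, far)   # depth is high near the center, low far away
```

Near the center, the unit vectors from the data points to $x$ roughly cancel, so the depth is close to one; far from the data, they align and the depth tends to zero, matching the "vanishing at infinity" property.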
We use here a weaker definition of a depth function than that of the influential work of Zuo and Serfling (2000): we replace their affine invariance requirement with the weaker similarity invariance. For a discussion of desirable properties for a depth function to possess, see (Zuo and Serfling, 2000; Liu et al., 2006; Serfling, 2019). Examples of commonly used depth functions include the halfspace depth (or Tukey depth) (Tukey, 1974), the simplicial depth (Liu, 1988), the spatial depth (Vardi and Zhang, 2000; Serfling, 2002), the integrated dual depth (IDD) (Cuevas and Fraiman, 2009), and the integrated-rank-weighted (IRW) depth (Ramsay et al., 2019). For the reader's convenience, we provide a brief review of these depth functions and their basic properties in Section 6. We emphasize that this is a small selection of the large number of depth functions used in the literature, and reviewing them all is outside the scope of this work. There are a number of desirable properties of a depth function, relating to transformation invariance, robustness, computability, the ability to measure depth for a broad range of distributions, and more, and no single depth function arises as the gold standard with respect to all of them (Zuo and Serfling, 2000; Liu et al., 2006). Each depth function yields a distinct notion of a median, each with its own desirable properties. As our goal in this paper is to unify these two frameworks, we will not investigate the question of which depth is optimal, statistically or computationally, for a given problem. Instead, our main contribution is to give general finite-sample deviations bounds which apply to many of the existing depth functions, under mild conditions on the population measure.

2.3 A finite-sample deviations bound

Our main result relies on a regularity condition, for which we need to introduce the following notation. Let $\mathcal{B}$ denote the space of Borel functions from $\mathbb{R}^d$ to $[0, 1]$.
For a family of functions $F \subseteq \mathcal{B}$, define a pseudometric on $\mathcal{M}_1(\mathbb{R}^d)$,
$$d_F(\mu, \nu) = \sup_{g \in F} \Big| \int g \, d(\mu - \nu) \Big|,$$
where $\mu, \nu \in \mathcal{M}_1(\mathbb{R}^d)$. Recall that many standard metrics on probability measures can be written in this fashion. For example, taking $F$ to be the set of indicator functions of Borel sets yields the total variation distance, and taking $F$ to be the set of indicator functions of semi-infinite rectangles yields the Kolmogorov-Smirnov distance. We then define the following important regularity condition.

Definition 4 We say that $D$ is $(K, F)$-regular if there exists a class of functions $F \subseteq \mathcal{B}$ such that $D(x, \cdot)$ is $K$-Lipschitz with respect to the $F$-pseudometric uniformly in $x$, i.e., for all $\mu, \nu \in \mathcal{M}_1(\mathbb{R}^d)$,
$$\sup_x |D(x, \mu) - D(x, \nu)| \le K d_F(\mu, \nu).$$

Intuitively, $(K, F)$-regularity can be thought of as a robustness condition, as it says that $(K, F)$-regular functions are affected by extreme observations in a very limited manner. For example, it immediately implies the bound $GS_n(D) \le K/n$ on the global sensitivity through the boundedness of $g \in F$, see Lemma 37. This condition is very convenient for two reasons: Firstly, it is elementary to check this condition for many popular depth functions (see Table 2.4). Secondly, it immediately yields an $\epsilon$-differentially private estimator by choosing $\beta = n\epsilon/2K$, see Lemma 36. While this assumption is convenient for differential privacy, we show in a companion paper (Ramsay et al., 2025) that our main deviations bound holds under a weaker condition and can apply to other statistical learning settings. We now state the necessary regularity conditions on the pair $(\mu, D)$ that are required for our main result. In the following, let $\mathrm{VC}(F)$ denote the Vapnik-Chervonenkis dimension of $F$.

Condition 1 The function $D(\cdot, \mu)$ has a maximizer.

Condition 2 The map $x \mapsto D(x, \mu)$ is $L$-Lipschitz a.e. for some $L > 0$.

Condition 3 There is some $K > 0$ and some family of functions $F \subseteq \mathcal{B}$ with $\mathrm{VC}(F) < \infty$ such that $D$ is $(K, F)$-regular.
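In $d = 1$ these objects can be computed exactly. The sketch below (our own illustration, not the paper's proof device) takes $F$ to be indicators of half-lines, so that $d_F$ is the Kolmogorov-Smirnov distance between empirical measures, and numerically checks the $(1, F)$-regularity inequality for the univariate halfspace depth:

```python
import numpy as np

rng = np.random.default_rng(3)

def ks_distance(a, b):
    """d_F for F = indicators of half-lines: the Kolmogorov-Smirnov distance
    between the empirical measures of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

def depth_1d(x, data):
    """Univariate halfspace depth of x with respect to the sample."""
    return min(np.mean(data <= x), np.mean(data >= x))

a = rng.normal(size=400)
b = rng.normal(loc=0.3, size=400)

# (1, F)-regularity in d = 1: the depth is 1-Lipschitz in the KS pseudometric.
lhs = max(abs(depth_1d(x, a) - depth_1d(x, b)) for x in np.linspace(-4, 4, 161))
rhs = ks_distance(a, b)
print(lhs, "<=", rhs)
```

Since both halfline masses change by at most the KS distance when the measure changes, the pointwise depth difference is dominated by $d_F$, which is the inequality in Definition 4 with $K = 1$.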
To state our main result, we must also introduce the discrepancy function. Let
$$E_{D,\mu} = \{ \theta \colon D(\theta, \mu) = \max_{x \in \mathbb{R}^d} D(x, \mu) \}$$
denote the set of maximizers of $D(\cdot, \mu)$ for fixed $D$ and $\mu$. For a set $A$ and $r > 0$, let $B_r(A) = \{ x \colon \min_{y \in A} \|x - y\| \le r \}$. The discrepancy function of the pair $(D, \mu)$ is
$$\alpha(t) = D(\theta_0, \mu) - \sup_{x \in B_t(E_{D,\mu})^c} D(x, \mu), \quad (2.3)$$
where $\theta_0 \in E_{D,\mu}$. Given $t > 0$, the discrepancy function is the gap between the maximal population depth value and the largest population depth value attained at least $t$-far from the maximizing set $E_{D,\mu}$. For example, often there is one maximizer, the median, and the discrepancy function then measures the minimum difference between the depth of the median and the depth of any point at least $t$-far from the median. The faster the discrepancy function grows in $t$, the faster the depth function decreases as we move away from the median. Note that $\alpha$ is a non-decreasing function because the optimization problem involved is over decreasing sets. In Table 2.3 below, we explicitly calculate $\alpha$ for several depth functions. We are now ready to state our main result, which provides bounds on the finite-sample deviations of the proposed differentially private multivariate medians via depths. For concreteness, we present our results for two of the most popular choices of priors, namely Gaussian measures and the uniform measure on $A_R(y)$, the $d$-dimensional cube of side length $R$ centered at $y$. In the following, let $d_{R,y}(x)$ denote the minimum distance from a point $x = (x_1, \ldots, x_d)$ to a face of the cube $A_R(y)$:
$$d_{R,y}(x) = \min_{1 \le i \le d} \big|\, |x_i - y_i| - R/2 \,\big|.$$
Similarly, denote the minimum distance from a set $B \subseteq \mathbb{R}^d$ to a face of the cube $A_R(y)$ by $d_{R,y}(B) = \inf_{x \in B} d_{R,y}(x)$, and let $d(x, B) = \inf_{y \in B} \|x - y\|$ denote the usual point-to-set distance, where $\|\cdot\|$ denotes the Euclidean norm. Define $\alpha^{-1}(t) = \sup\{ r \ge 0 \colon \alpha(r) = t \}$, where one notes that $\alpha^{-1}$ exists because $\alpha$ is a monotone function. Lastly, we write $a \lesssim b$ if $a \le Cb$ for some universal constant $C > 0$.
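For intuition, the discrepancy function (2.3) can be computed directly in a simple case. The sketch below (our own illustration) evaluates $\alpha$ for the halfspace depth of a standard Gaussian in $d = 1$, where $E_{D,\mu} = \{0\}$ and, in closed form, $\alpha(t) = \Phi(t) - 1/2$, so $\alpha$ is roughly linear near zero; the grid is a crude discretization:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def population_depth(x):
    """Halfspace depth of x under N(0, 1): min(Phi(x), 1 - Phi(x))."""
    return min(Phi(x), 1 - Phi(x))

def alpha(t, grid):
    """Discrepancy (2.3): depth at the median minus the best depth attained
    at least t away from the maximizing set {0}."""
    far = [x for x in grid if abs(x) >= t]
    return population_depth(0.0) - max(population_depth(x) for x in far)

grid = np.linspace(-6, 6, 4001)
# alpha(t) = Phi(t) - 1/2: slope ~ 0.4 near zero, so alpha^{-1} is at most
# linear for small t, as used in the discussion after Theorem 5.
for t in [0.1, 0.5, 1.0]:
    print(t, alpha(t, grid), Phi(t) - 0.5)
```

The grid values agree with the closed form up to the discretization error, illustrating why, for such $(\mu, D)$ pairs, $\alpha^{-1}$ can be dropped from the right-hand side of the deviations bound at the cost of a constant.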
Theorem 5 If Conditions 1-3 hold, then the following holds:

(i) Suppose that $L \ge 1$. If $\pi = N(\theta_\pi, \sigma_\pi^2 I)$, then there exists a universal constant $c > 0$ such that for all $n \ge 8K/\epsilon$, all $d > 2$ and all $0 < \gamma < 1$, with probability at least $1 - \gamma$, we have that
$$d(E_{D,\mu}, \theta_n) \le \alpha^{-1}\left( cK \left[ \sqrt{\frac{\mathrm{VC}(F)\log n + \log(1/\gamma)}{n}} \ \vee\ \frac{\log(1/\gamma) + \frac{d(E_{D,\mu}, \theta_\pi)^2}{\sigma_\pi^2} + d \log(\sigma_\pi L n \epsilon)}{n\epsilon} \right] \right). \quad (2.4)$$

(ii) If instead $\pi \propto \mathbf{1}\{x \in A_R(\theta_\pi)\}$ where $E_{D,\mu} \subseteq A_R(\theta_\pi)$, then there exists a universal constant $c > 0$ such that for all $n$, all $d \ge 1$ and all $0 < \gamma < 1$, with probability at least $1 - \gamma$, we have that
$$d(E_{D,\mu}, \theta_n) \le \alpha^{-1}\left( cK \left[ \sqrt{\frac{\mathrm{VC}(F)\log n + \log(1/\gamma)}{n}} \ \vee\ \frac{\log(1/\gamma) + d \log\left( \frac{Rn}{d_{R,\theta_\pi}(E_{D,\mu}) \wedge K/L} \right)}{n\epsilon} \right] \right). \quad (2.5)$$

The proof of Theorem 5 can be found in Section 4, and relies on an elementary concentration bound (Theorem 12), which we introduce in Section 3. We also provide a version of Theorem 5 in terms of sample complexity, see Corollary 10. Theorem 5 is a deviations bound for private multivariate medians. Put simply, it says that $\alpha(d(E_{D,\mu}, \theta_n))$ is bounded above, with high probability, by two major terms: the sampling error term, which appears first in the upper bound, and the cost of privacy term, which appears second in the upper bound. To further help interpret Theorem 5, note that we are mainly concerned about small deviations. For many choices of $\mu$ and $D$, one can show that $\alpha^{-1}$ is at most linear for small $t > 0$ under various regularity conditions that capture many popular depths, see Section 5 where we present such conditions. In this case one can remove the $\alpha^{-1}$ from the right-hand sides of (2.4)-(2.5). In addition, this fact, coupled with the fact that many of the depth functions satisfy Condition 3 with $K = O(1)$ and $\mathrm{VC}(F) = O(d)$, and that, in many contexts, the population median is unique, recovers a bound similar to those obtained in the case of private mean estimation (Kamath et al., 2019, 2020; Cai et al., 2021).
That is, the bound given in Theorem 5 then reduces to the simpler bound
$$\| \mathrm{MED}(\mu; D) - \theta_n \| \lesssim \sqrt{\frac{d \log n + \log(1/\gamma)}{n}} \ \vee\ \frac{\log(1/\gamma) + d \log\left( \frac{Rn}{d_{R,\theta_\pi}(E_{D,\mu}) \wedge 1/L} \right)}{n\epsilon}.$$
Under such $\mu$ and $D$, assuming a reasonable prior and ignoring logarithmic terms, the cost of privacy term is greater than the sampling error term only when $\sqrt{d/n}/\epsilon > 1$. For concrete examples of Theorem 5 applied to specific depth functions and population measures $\nu$, see Examples 1 and 2. While we state our result for general depth functions satisfying the above assumptions, our results apply to many of the most popular depths, such as halfspace depth and, more broadly, Type D depths in the sense of Zuo and Serfling (2000), spatial depth, simplicial depth and both the integrated rank-weighted and integrated dual depths. Depth functions to which these results apply and the conditions under which they apply are discussed at length in Section 6. The conditions for specific popular depths are summarized in Table 2.4. We note that our bound is essentially sharp, see Lemma 18. Given that the population mean equals the population median when $\mu$ is Gaussian, the bound on the sample complexity implied by Theorem 5 (see Corollary 10) matches the lower bound on the sample complexity for private Gaussian mean estimation given by Kamath et al. (2019), Theorem 6.5 and Hopkins et al. (2022), Theorem 7.2, up to logarithmic factors. Alternatively, Cai et al. (2021) give a minimax lower bound for approximately differentially private sub-Gaussian mean estimation. Unsurprisingly, given that it is designed for the approximately differentially private setting, this lower bound becomes infinite if we apply it to the setting of pure differential privacy. However, it is still interesting to compare Theorem 5 to the bound of Cai et al. (2021) in some way. A trivial manipulation of the conclusion of Lemma 18 shows that the bound given in Theorem 5 matches the lower bound of Cai et al.
(2021), up to logarithmic terms, when the approximate differential privacy parameter, i.e., $\delta$, is taken to be at least $n^{-k}$ for some $k > 0$. Therefore, as stated above, the bound in Theorem 5 is essentially sharp. Furthermore, at least for the setting of Gaussian mean estimation, relaxing to the setting of approximate differential privacy will not yield faster rates than those given by Theorem 5 (it should not be difficult to extend this conclusion to the more general case of symmetric sub-Gaussian median estimation).

Recall that estimating a parameter from an unbounded parameter space, i.e., unbounded estimation, is non-trivial in the setting of differential privacy, e.g., see Avella-Medina and Brunel (2019). Theorem 5 shows that choosing $\sigma_\pi$ relatively large allows one to perform unbounded estimation at the cost of an extra $\log d$ in the second term of (2.4), or, equivalently, an extra $\log d$ in the sample complexity. To elaborate, using a depth function in the exponential mechanism with a Gaussian prior constitutes unbounded estimation of the population median. Theorem 5 then shows that $d(E_{D,\mu}, \theta_\pi)/\sigma_\pi = O(\sqrt{d \log d})$ is sufficient for the deviations under a shifted location Gaussian prior to match the deviations under a correctly specified uniform prior, up to a logarithmic factor in $d$. Thus, we only require $d(E_{D,\mu}, \theta_\pi)$ to be polynomial in $d$; i.e., choosing $\sigma_\pi$ to be some large polynomial in $d$ will ensure that $\sigma_\pi \ge d(E_{D,\mu}, \theta_\pi)/\sqrt{d \log d}$.

Remark 6 (Projection depth) One popular notion of depth not covered by our analysis is the projection depth (Zuo, 2003), see also (Stahel, 1981; Donoho, 1982). This is because a private median drawn from the exponential mechanism based on the empirical version of this depth function is inconsistent $\mu$-a.s., see Lemma 38. For a private version of the projection-depth-based median, see (Ramsay and Spicker, 2025).
Remark 7 (Comparison to non-private bounds) The first term in the maximum in Theorem 5 is a deviations bound for non-private depth-based medians. If we assume that $\theta_0 \in A_R(0)$ and choose the depth function to be the halfspace depth (Definition 11), then this upper bound recovers the upper bound on the non-private estimation error of the halfspace median for the Gaussian measure, see Theorem 2.1 of Chen et al. (2018); see also Theorem 3 of Zhu et al. (2020).

Remark 8 (Relaxing the Lipschitz requirement) The Lipschitz requirement for $D$ can be relaxed to obtain more general deviation bounds, at the cost of complicating the presentation. Furthermore, all the depth functions we consider are Lipschitz functions a.e. under mild assumptions, see Theorem 30. It is also not necessary to require that $E_{D,\mu} \subseteq A_R(\theta_\pi)$. It is sufficient to assume that $E_{D,\mu} \cap A_R(\theta_\pi) \ne \emptyset$. In this case, one can replace $d_{R,\theta_\pi}(E_{D,\mu})$ with $d_{R,\theta_\pi}(E_{D,\mu} \cap A_R(\theta_\pi))$.

Remark 9 (The univariate setting) Theorem 12 can be applied to obtain an optimal univariate private median estimator. Tzamos et al. (2020) gave a lower bound on the minimax sample complexity for estimating the one-dimensional median of a population measure whose density is bounded below by a positive constant. Applying Theorem 5 with $d = 1$ and $D$ the halfspace depth to such measures yields an upper bound which matches that lower bound. This upper bound recovers the one given by Karwa and Vadhan (2017) for one-dimensional Gaussian mean estimation. Note that Theorem 5 has a straightforward restatement in terms of sample complexity.

Corollary 10 Suppose that Conditions 1-3 hold. Then the following holds:

(i) Suppose that $D(x, \mu) \le 1$ and $L \ge 1$.
If $\pi = N(\theta_\pi, \sigma_\pi^2 I)$, then there exist universal constants $C, c > 0$ such that for all $t \ge 0$, all $d > 2$ and all $0 < \gamma < 1$, we have that $d(E_{D,\mu}, \theta_n) < t$ with probability at least $1 - \gamma$ provided
$$n > C \left( \frac{K^2 \left[ \mathrm{VC}(F) \log\left( \frac{K}{c\,\alpha(t)} \right) \vee 1 \right] + K^2 \log(1/\gamma)}{\alpha(t)^2} \ \vee\ \frac{K \left[ \log(1/\gamma) + \frac{d(E_{D,\mu}, \theta_\pi)^2}{\sigma_\pi^2} + d \log(\sigma_\pi L n \epsilon) \right]}{\epsilon\, \alpha(t)} \right).$$

(ii) If instead $\pi \propto \mathbf{1}\{x \in A_R(\theta_\pi)\}$ where $E_{D,\mu} \subseteq A_R(\theta_\pi)$, then there exist universal constants $C, c > 0$ such that for all $t \ge 0$, all $d \ge 1$ and all $0 < \gamma < 1$, we have that $d(E_{D,\mu}, \theta_n) \le t$ with probability at least $1 - \gamma$ provided
$$n > C \left( \frac{K^2 \left[ \mathrm{VC}(F) \log\left( \frac{K}{c\,\alpha(t)} \right) \vee 1 \right] + K^2 \log(1/\gamma)}{\alpha(t)^2} \ \vee\ \frac{K \left[ \log(1/\gamma) + d \log\left( \frac{R}{d_{R,\theta_\pi}(E_{D,\mu}) \wedge \alpha(t)/L} \right) \right]}{\epsilon\, \alpha(t)} \right).$$

The proof is deferred to Appendix B.

Table 1: The discrepancy function $\alpha(t)$ for different depth functions (halfspace depth and the IDD) and underlying population measures ($N_d(\theta_0, \Sigma)$ and $d$-version symmetric $\mu$); the table entries did not survive extraction. Recall that $\mu$ is $d$-version symmetric about zero if for any unit vector $u$, $X^\top u \stackrel{d}{=} a(u) Z$ where $X \sim \mu$ and $Z$ is a random variable such that $Z \stackrel{d}{=} -Z$ and $a(u) = a(-u)$. Here, $v_1$ is the eigenvector associated with the largest eigenvalue $\lambda_1$ of $\Sigma$, $F_0(x) = F(x, u, \mu)$, see (2.9), and $\nu$ is the uniform measure on the $(d-1)$-dimensional unit sphere.

2.4 Examples

Let us now turn to some concrete examples that illustrate our result. We first introduce the halfspace depth, which will be used in our examples, where we apply our main results to the halfspace median. For $X \sim \mu$, define
$$F(x, u, \mu) = \mu\left( X^\top u \le x^\top u \right). \quad (2.8)$$
Halfspace depth, otherwise known as Tukey depth (Tukey, 1974), is defined as follows:

Definition 11 (Halfspace depth) The halfspace depth $\mathrm{HD}$ of a point $x \in \mathbb{R}^d$ with respect to $\mu \in \mathcal{M}_1(\mathbb{R}^d)$ is
$$\mathrm{HD}(x, \mu) = \inf_{\|u\| = 1} F(x, u, \mu). \quad (2.9)$$
The halfspace depth of a point $x \in \mathbb{R}^d$ is the minimum probability mass contained in a closed halfspace containing $x$. The halfspace median is the maximizer of the halfspace depth, when it is unique.
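Definition 11 can be approximated directly. The sketch below (our own Monte-Carlo approximation over random unit directions, which can only overestimate the exact combinatorial infimum; the direction count is an arbitrary choice) computes the empirical halfspace depth in $d = 2$:

```python
import numpy as np

rng = np.random.default_rng(5)

def halfspace_depth(x, data, n_dirs=500):
    """Monte-Carlo approximation of HD(x, mu_hat) in (2.9): minimize the
    empirical halfline mass F(x, u, mu_hat) over random unit directions u."""
    d = data.shape[1]
    U = rng.normal(size=(n_dirs, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj_data = data @ U.T               # shape (n, n_dirs)
    proj_x = x @ U.T                     # shape (n_dirs,)
    return (proj_data <= proj_x).mean(axis=0).min()

data = rng.normal(size=(2000, 2))
center_depth = halfspace_depth(np.zeros(2), data)
far_depth = halfspace_depth(np.array([2.0, 2.0]), data)
print(center_depth, far_depth)   # near 1/2 at the center, small far out
```

Exact computation of the halfspace depth is combinatorial in $n$ and $d$; the random-direction approximation above is a common practical surrogate and suffices to see the depth-surface shape that the discrepancy function $\alpha$ quantifies.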
We first consider the canonical problem of multivariate Gaussian location estimation under a Gaussian prior with shifted location, where the center of the prior θπ is far from the population halfspace median. We quantify the relationship between finite-sample deviations, prior location misspecification, privacy budget, dimension and statistical accuracy. In this case, omitting logarithmic terms, for a location misspecification level of ∥θ0 − θπ∥/σπ ≲ √d, the finite-sample deviations of the private halfspace median reduce to O( √(λ1 d/n) ∨ √λ1 d/(nε) ). Here, λ1 is the largest eigenvalue of the covariance matrix of µ.

Next, we consider estimating the location parameter of a population measure µ made up of Cauchy marginals, which is known as the poly-Cauchy distribution. This setup is more difficult in the sense that the mean of the probability measure µ does not exist. For instance, theory concerning differentially private mean estimators cannot be applied in this setting.

Example 1 (Multivariate Gaussians with Gaussian prior with shifted location) Consider the problem of estimating the location parameter θ0 of a multivariate Gaussian measure µ = N(θ0, Σ). For this example, let λ1 > · · · > λd > 0 be the ordered eigenvalues of the covariance matrix Σ. Suppose that in order to estimate θ0 we use the exponential mechanism paired with the halfspace depth D = HD (see Definition 11) in conjunction with a misspecified Gaussian prior: π = N(θπ, σπ²I). In order to apply Theorem 5, we need to check Conditions 1 and 3 and Lipschitz continuity of HD. First, note that Condition 1 holds for any bounded depth function as defined in Definition 2. Next, one can show that (see Theorem 30) the halfspace depth is (1, F)-regular, where F is the set of closed halfspaces in R^d; thus, VC(F) = d + 2. Lastly, the map x ↦ HD(x, µ) is L-Lipschitz with L = (2πλd)^{−1/2}, implying Condition 2 is satisfied. We may now apply Theorem 5, for which we must compute α(t).
To this end, it holds that α(t) ≥ t·C_R/√λ1 for t ≤ R, for some large R > t, see Lemmas 21 and 40. Therefore, under halfspace depth, with probability at least 1 − γ, it holds that

∥θ0 − θ̃n∥ ≲ √λ1 ( √( (d log n ∨ log(1/γ)) / n ) ∨ ( log(1/γ) + ∥θ0 − θπ∥²/σπ² + d log( n ε σπ √d / √λd ) ) / (nε) ).

Suppose we would like to achieve a success rate of 1 − e^{−d}, i.e., γ = e^{−d}. This reduces the deviations bound to

∥θ0 − θ̃n∥ ≲ √λ1 ( √( d log n / n ) ∨ ( d + ∥θ0 − θπ∥²/σπ² + d log( n ε σπ √d / √λd ) ) / (nε) ).

Observe now that in order for the prior effect to be negligible, we require ∥θ0 − θπ∥/σπ ≲ √(d log d) and that σπ is at most polynomial in d. This suggests that, up to a point, a larger prior variance σπ² produces lower deviations. In this case, when the error level, ε and Σ are fixed, we only need n to grow slightly faster than the dimension, n ≳ C d log d, in order to maintain a consistent level of error. If the prior effect is negligible in the sense just described, then, omitting logarithmic terms for brevity, the deviations bound reduces to

∥θ0 − θ̃n∥ ≲ √(λ1 d/n) ∨ √λ1 d/(nε). (2.10)

We then have that the cost of privacy is eclipsed by the sampling error whenever n ≳ d/ε². Therefore, the cost of differential privacy is asymptotically "free", provided that d is not too large. That is, the error incurred by differential privacy is much smaller than the sampling error. Furthermore, (2.10) makes it easy to see that in this case, the private medians achieve the minimax lower bound for approximately differentially private sub-Gaussian mean estimation (Cai et al., 2021), if δ is taken to be n^{−k}. There exist other private estimators that are minimax optimal for private Gaussian mean estimation, e.g., (Kamath et al., 2019; Biswas et al., 2020). These estimators may be computable in polynomial time, whereas the private halfspace median would take exponential time. However, existing estimators may not be robust. For example, they may not perform well if the population measure is not multivariate Gaussian but instead is multivariate Cauchy.
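To illustrate the robustness phenomenon just described (non-privately), one can compare the sample mean with the coordinate-wise median on heavy-tailed data. The snippet below is our own illustration, not part of the paper's experiments: the population mean of poly-Cauchy data does not exist, so the sample mean does not converge, while the coordinate-wise median does.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.standard_cauchy((n, d))   # poly-Cauchy data, true location 0

# The sample mean is not consistent (the population mean does not exist),
# while the coordinate-wise median converges to the true location.
mean_err = float(np.linalg.norm(X.mean(axis=0)))
median_err = float(np.linalg.norm(np.median(X, axis=0)))
print(mean_err, median_err)
```

Running this, the median's error is small while the mean's error remains of constant order, since the average of n standard Cauchy variables is again standard Cauchy.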
Robustness against violations of assumptions is especially important in private data analysis, where we may not have enough privacy budget to check assumptions. In addition, the parameter of interest may be the population median. In that case, mean estimators would not be suitable if the population measure is not symmetric.

Note that a bound in terms of Mahalanobis distance (with respect to Σ), denoted ∥·∥_Σ, is implied by (2.10), through the inequality ∥θ0 − θ̃n∥_Σ ≤ ∥θ0 − θ̃n∥/√λd, viz.

∥θ0 − θ̃n∥_Σ ≲ √(λ1/λd) ( √(d/n) ∨ d/(nε) ).

However, this resulting bound is not optimal for Gaussian mean estimation, see, e.g., (Brown et al., 2021). This does not imply that this estimator is not optimal for this problem; it is just that the optimal Mahalanobis error is not implied by our bound. It is possible the techniques used to prove Theorem 12 could be used to prove a general deviations bound in terms of the Mahalanobis distance.

Example 2 (Cauchy marginals) Suppose that µ is the product probability measure constructed from independent Cauchy marginals with location parameter 0 and scale parameters σ1, . . . , σd, and we would like to use the exponential mechanism to privately estimate the halfspace median of µ. In this case, µ is d-version symmetric², with a(u) = Σ_{j=1}^d σj|uj|, where uj is the jth coordinate of u. A similar analysis to that of Example 1 gives that halfspace depth (under this µ) satisfies the conditions of Theorem 5. Note that in this setting,

2. Recall that µ is d-version symmetric about zero if for any unit vector u, X·u =d a(u)Z, where X ∼ µ and Z ∼ ν is such that Z =d −Z and a(u) = a(−u).

         HD              IDD               IRW               SMD               SD               MSD
VC(F)    O(d)            O(d)              O(d)              O(d² log d)       O(d)             O(d)
K        1               3                 4                 d + 1             d                1
L        sup_u ∥fu∥∞     3 sup_u ∥fu∥∞     2 sup_u ∥fu∥∞     sup_u ∥fu∥∞       2L′              2√d L′
A        M1(R^d)         C                 C                 N                 (see Remark 28)  (see Remark 28)

Table 2: Table of values for which different depth functions satisfy Theorem 5. Letting µu be the law of X·u for X ∼ µ, fu is the density of µu with respect to the Lebesgue measure and L′ denotes sup_y E∥y − X∥^{−1}.
Note A is the set of admissible measures for which D is a depth function, in the sense of Definition 2. Here, C ⊆ M1(R^d) denotes the set of centrally symmetric measures over R^d and N ⊆ M1(R^d) denotes the set of angularly symmetric measures over R^d that are absolutely continuous with respect to the Lebesgue measure.

the Lipschitz constant scales with the dimension: L = O(√d). The next step is to compute α(t). Let σ̄ = d^{−1} Σ_{j=1}^d σj. Observe that when the scale parameters are constant in d, sup_u a(u) = √d σ̄. It follows that under the halfspace depth,

α(t) = (1/π) arctan( t / (√d σ̄) ).

If t < R for some large R, then we can write α(t) ≳ t ( √d σ̄ (1 + R²/(d σ̄²)) )^{−1}. If we use a Gaussian prior with unit scale,³ then, omitting logarithmic terms, the deviations bound becomes

∥θ0 − θ̃n∥ ≲ √d σ̄ ( √(d/n) ∨ ( d^{1/2} ∥θπ∥₂ ∨ d ) / (nε) ).

Here, observe that the deviations are a factor of √d larger than in the Gaussian setting. The increase in the sampling error term is due to the fact that if X follows the poly-Cauchy distribution, then there is a direction u where X·u has scale proportional to √d. This causes the discrepancy function to be dependent on the dimension.

For the poly-Cauchy measure, considering all directions is actually a drawback of the halfspace median. On the other hand, the coordinate-wise median does not consider the direction with scale proportional to √d, and does not have this issue. Indeed, when we apply Theorem 5 to the univariate private median, the resulting deviations bound is O( √( d max_{i∈[d]} σi² / n ) ∨ d^{3/2}( ∥θπ∥₂ ∨ 1 )/(εn) ). Observe that the sampling error term is O(√(d/n)). (Note that any other optimal, univariate private median estimator would achieve the same deviations bound, e.g., (Tzamos et al., 2020).) On the other hand, the cost of privacy term in the deviations bound is O(d^{3/2}/(nε)) for both the halfspace median and the coordinate-wise median. This represents the difficulty of estimating location when µ is high-dimensional and heavy-tailed.
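The claim that the poly-Cauchy measure has a direction in which the projection has scale growing with the dimension is easy to check numerically: by the stability of the Cauchy distribution, X·u is Cauchy with scale a(u) = Σ_j σj|uj|, which equals √d for u = (1, . . . , 1)/√d when all scale parameters are 1. A small sanity check of ours, estimating the scale of the projection from the interquartile range:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 25
X = rng.standard_cauchy((n, d))   # sigma_j = 1 for every coordinate

u = np.ones(d) / np.sqrt(d)       # worst direction: a(u) = sum_j |u_j| = sqrt(d)
proj = X @ u                      # Cauchy with scale a(u), by stability

q1, q3 = np.quantile(proj, [0.25, 0.75])
scale_hat = (q3 - q1) / 2         # half the IQR estimates the Cauchy scale
print(scale_hat)                  # close to sqrt(25) = 5
```

By contrast, each individual coordinate has scale 1, which is what the coordinate-wise median exploits.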
In the Gaussian setting, the discrepancy function is relatively similar between the halfspace depth and the integrated depths. However, in the Cauchy setting, the discrepancy functions differ between the halfspace depth, the integrated rank-weighted depth and the integrated dual depth. To see this, we must evaluate the integrals in Table 1. For the IRW this integral is given by

2 ∫_{S^{d−1}} F0( t v·u / Σ_{j=1}^d σj|uj| ) ( 1 − F0( t v·u / Σ_{j=1}^d σj|uj| ) ) dν(u).

We evaluate these integrals via Monte Carlo simulation with 10,000 unit vectors. Figure 1 plots the logarithm of the inverted discrepancy function computed at t = 1, i.e., log(1/α(1)), for the three depth functions as the dimension increases. Observe that the halfspace depth's discrepancy function decreases the slowest. Thus, the halfspace depth gives the smallest sample complexity for heavy-tailed estimation when the dimension is large. The difference in accuracy between the halfspace depth and the integrated depths demonstrates the trade-off between computability and statistical accuracy.

3. We don't lose generality here; we can arbitrarily shrink the scale of µ.

Figure 1: Logarithm of the inverse of the value of the discrepancy function at t = 1, log(1/α(1)), as the dimension increases. Here, µ is made up of d independent standard Cauchy marginals. Notice that halfspace depth has the smallest values of 1/α(1), which implies it has a smaller bound on the sample complexity. This demonstrates the trade-off between accuracy and computability when choosing a depth function. For the integrated depths, the value of the integrand is approximated via Monte Carlo simulation with 10,000 unit vectors.

2.5 Simulation

To complement our theoretical results, we demonstrate the cost of privacy through simulation. In order to simulate a private median, we use (non-private) discretized Langevin dynamics.
Given that the discretized Langevin dynamics approximates a draw from the exponential mechanism, we don't expect the use of discretized Langevin dynamics to have a meaningful impact on our results. We recognize that privacy guarantees may be affected by the use of Markov chain Monte Carlo (MCMC) methods; the current implementation may not guarantee ε-differential privacy. This appears to be an open problem. The purpose of the current implementation is to investigate the statistical accuracy of the methods. The computation of specific private medians via private MCMC is an interesting direction of future research. For a discussion on the level of privacy guaranteed by MCMC, see (Seeman et al., 2021).

Figure 2: Empirical root mean squared error (ERMSE) of the location estimators under (left) Gaussian data and (right) contaminated Gaussian data when ε = 10. In the uncontaminated setting, the ERMSE of the private median grows at a slightly faster rate than its non-private counterpart and the mean estimators. This is due to the cost of privacy. On the other hand, in the contaminated setting, the ERMSE of the median estimators is much lower than that of the mean estimators. In particular, the private median outperforms the coin-press mean. This demonstrates the usual trade-off between accuracy and robustness between the mean and the median. For a zoomed-in version of the left plot, see Figure 4 in the appendix.

As a by-product of our work, we also provide a fast implementation of a modified version of the non-private integrated dual depth-based median. This implementation shows that this multivariate median can be computed quickly, even when the dimension is large. For example, we can compute the median of ten thousand 100-dimensional samples in less than one second on a personal computer.⁴ We call this modified version of the integrated dual depth the smoothed integrated dual depth, see Section 6.1 for more details.
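For concreteness, the following is a minimal sketch of the unadjusted (discretized) Langevin dynamics used to approximate a draw from a log-density of the exponential-mechanism form. The toy objective, the function name and all parameter choices are ours, it is not the paper's implementation, and, as discussed above, it carries no formal privacy guarantee.

```python
import numpy as np

def ula_sample(grad_log_target, theta0, step=1e-3, n_steps=2000, rng=None):
    """One draw from unadjusted (discretized) Langevin dynamics:
    theta <- theta + step * grad log p(theta) + sqrt(2 * step) * N(0, I).
    Approximates a draw from p when p is smooth and step is small."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_steps):
        theta += step * grad_log_target(theta)
        theta += np.sqrt(2 * step) * rng.standard_normal(theta.shape)
    return theta

# Toy exponential-mechanism-style target: log p(theta) = beta * phi(theta)
# with the smooth surrogate objective phi(theta) = -||theta - m||^2 and a
# flat prior (both chosen by us purely for illustration).
m = np.array([1.0, -2.0])
beta = 50.0
grad_log_p = lambda th: -2.0 * beta * (th - m)
draw = ula_sample(grad_log_p, np.zeros(2), rng=0)
```

With a smooth objective such as the smoothed integrated dual depth, the same loop applies with grad_log_target replaced by β times the gradient of the empirical depth plus the gradient of the log-prior.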
Our code is available on GitHub. Aside from the private smoothed integrated dual depth median, we also computed the non-private smoothed integrated dual depth median and the private coin-press mean of Biswas et al. (2020). When computing the coin-press mean, we used the existing GitHub implementation provided by the authors. The bounding ball for the coin-press algorithm had radius 10√d and was run for four iterations. We simulated fifty instances with n = 10,000 over a range of dimensions from two to one hundred. The data were either generated from a standard Gaussian measure or a contaminated standard Gaussian measure. In the contaminated model, 25% of the observations had mean (5, . . . , 5). We tested privacy parameters ε = 2, 5 and 10, the smoothing parameter s was fixed at 10 and π = N(0, 25d·I).

Figure 2 shows the empirical root mean squared error (ERMSE) for the different location estimators as the dimension increases when ε = 10. Results for the remaining values of ε can be seen in Appendix D. Notice that the private median ERMSE grows at a similar rate as that of the non-private ERMSE. The mean estimators perform very well in the uncontaminated setting, but poorly in the contaminated setting. In particular, the private median outperforms the coin-press mean. The median estimators do not perform as well as the mean estimators in the standard Gaussian setting, but do not become corrupted in the contaminated setting. This demonstrates the usual trade-off between accuracy and robustness between the mean and the median.

4. The code was run on a desktop computer with an Intel i7-8700K 3.70GHz chipset and an Nvidia GTX 1660 Super graphics card.

3. An elementary concentration bound

In this section, we introduce the underlying concentration bound used to produce Theorem 5, which we imagine will be useful more broadly.
Our result shows that for a general objective function φ, θ̃n concentrates around θ0 = argmax_{θ∈R^d} φ(θ, µ) with high probability. In particular, Theorem 5 is specifically for privately estimating multivariate medians, given normal or uniform priors; meanwhile, the concentration bound presented in this section applies more broadly to θ̃n drawn from the more general (2.2) as an estimate of θ0 = argmax_{θ∈R^d} φ(θ, µ), under a general prior and a relaxed version of Condition 2. The relaxed version of Condition 2 is stated below.

Condition 4 The map x ↦ φ(x, µ) is π-a.s. uniformly continuous with modulus of continuity ω.

We now introduce some important new functions. Replacing the depth function D with φ, we can define the generalized discrepancy function of the pair (φ, µ) as α(t) = φ(θ0, µ) − sup_{x∈B_t^c(E_{φ,µ})} φ(x, µ), where θ0 ∈ E_{φ,µ}. The calibration function of the triple (φ, µ, π) is the function

ψ(λ) = min_{r>0} [ λ·ω(r) − log π(B_r(E_{φ,µ})) ]. (3.1)

Note that ψ(λ) is increasing and tends to ∞ as λ → ∞. The rate function of the prior is given by

I(t) = − log π(B_t^c(E_{φ,µ})). (3.2)

The concentration/entropy function of the prior measures the rate at which the prior concentrates around the maximizers of the population risk; for large t, it is the rate of decay of the tails of π. The concentration bound is given in the following theorem, which quantifies the accuracy of θ̃n drawn from (2.2) as an estimate of θ0.

Theorem 12 Suppose that for the pair (φ, µ), Conditions 1, 3 and 4 hold for some K > 0. Then, there are universal constants c1, c2 > 0 such that the following holds. For any t, β > 0, we have that

Pr( max_{θ0∈E_{φ,µ}} ∥θ̃n − θ0∥ ≥ t ) ≤ (c1 n / VC(F))^{VC(F)} e^{−c2 n [ α(t) + I(t)/β ]² / K²} + e^{−β α(t)/2 − I(t)/2 + ψ(β)}.

Remark 13 (Comparison to some well-known bounds) It is helpful to compare our result to the widely used accuracy bound of McSherry and Talwar (2007).
The latter controls the accuracy of the estimator θ̃n as measured by the empirical objective value: it bounds the difference between the empirical objective value achieved by the exponential mechanism and the maximal empirical objective value. More precisely, if we let St = {x ∈ R^d : φ(x, µ̂n) ≥ sup_θ φ(θ, µ̂n) − 2t}, the result of McSherry and Talwar (2007) says that

Pr( sup_θ φ(θ, µ̂n) − φ(θ̃n, µ̂n) > 2t ) ≤ e^{−tε} / π(St).

As well, note that there is no analogous sample complexity bound because this approximation is not improving with the sample size. By contrast, (3.10) concerns the consistency of θ̃n as an estimator, namely the distance between the estimator and any maximizer of the population objective, that is ∥θ̃n − θ0∥. Furthermore, the quality of this concentration bound depends on n and thus a sample complexity bound can be read off readily.

In addition to McSherry and Talwar (2007), Chaudhuri and Hsu (2012) also provide an accuracy bound for M-estimators drawn from the exponential mechanism in the univariate setting. They give the number of samples required for the error of the private estimator to be within t > 0 of the error of the non-private estimator. Our bound can be applied to the same problem, and we also recover the requirement that n ≳ 1/(tε).

Proof We first prove that the regularity conditions imply that the empirical objective function concentrates around the population objective function. By Condition 3, there are some K > 0 and F with VC(F) < ∞ such that φ is (K, F)-regular, i.e.,

sup_x |φ(x, µ) − φ(x, µ̂n)| ≤ K sup_{g∈F} |E_{µ̂n} g(X) − E_µ g(X)|. (3.3)

Now, for a probability measure Q over B, let N(τ, F, L2(Q)) be the τ-covering number of F with respect to the L2(Q)-norm. By assumption, for any g ∈ F, ∥g∥∞ ≤ 1 and VC(F) < ∞. This fact implies that there is some universal c > 0 such that for any 0 < τ < 1, it holds that

sup_Q N( τ, F, L2(Q) ) ≤ (c/τ)^{C·VC(F)}.

See (Sen, 2022, Theorem 7.12), or (Kosorok, 2008, Theorem 9.3).
A straightforward manipulation of Talagrand's inequality (Talagrand, 1994, Theorem 1.1) implies that there is some universal c > 0 such that for all n ≥ 1 and for all t > 0, it holds that

Pr( sup_{g∈F} | n^{−1} Σ_{i=1}^n g(Xi) − E g(X) | > t ) ≤ (c n / VC(F))^{VC(F)} e^{−2nt²}.

Combining this with (3.3), applied at level t/K, yields that for all t > 0,

Pr( sup_{x∈R^d} |φ(x, µ̂n) − φ(x, µ)| > t ) ≤ (c n / VC(F))^{VC(F)} e^{−2nt²/K²}. (3.4)

We can now use the concentration result (3.4) to prove the rest of the theorem. For brevity, let D_{n,t} = {max_{θ0∈E_{φ,µ}} ∥θ̃n − θ0∥ > t}, and for any y > 0, define the event A_{n,y} = {∥φ(·, µ̂n) − φ(·, µ)∥∞ < y}. It follows from the law of total probability and (3.4) that for any y > 0 it holds that

Pr(D_{n,t}) = Pr(D_{n,t} ∩ A^c_{n,y}) + Pr(D_{n,t} ∩ A_{n,y}) ≤ Pr(A^c_{n,y}) + Pr(D_{n,t} ∩ A_{n,y}) ≤ (c n / VC(F))^{VC(F)} e^{−2ny²/K²} + Pr(D_{n,t} ∩ A_{n,y}). (3.5)

We now focus on the right-hand term above. By definition, we have that

(1/β) log Pr(D_{n,t} ∩ A_{n,y}) = (1/β) log ∫ 1_{A_{n,y}} [ ∫_{B^c_t(E_{φ,µ})} exp(βφ(x, µ̂n)) dπ(x) / ∫_{R^d} exp(βφ(x, µ̂n)) dπ(x) ] dµ. (3.6)

On A_{n,y} it holds that

exp(βφ(x, µ)) exp(−βy) ≤ exp(βφ(x, µ̂n)) ≤ exp(βφ(x, µ)) exp(βy).

Applying this to the right-hand side of (3.6) results in

(1/β) log Pr(D_{n,t} ∩ A_{n,y}) ≤ (1/β) log [ ∫_{B^c_t(E_{φ,µ})} exp(βφ(x, µ)) dπ / ∫_{R^d} exp(βφ(x, µ)) dπ ] + 2y. (3.7)

We now bound (1/β) log ∫_{R^d} exp(βφ(x, µ)) dπ from below. For any r > 0, Condition 4 implies that

(1/β) log ∫_{R^d} exp(βφ(x, µ)) dπ ≥ (1/β) log ∫_{B_r(E_{φ,µ})} exp(βφ(x, µ)) dπ = φ(θ0, µ) + (1/β) log ∫_{B_r(E_{φ,µ})} exp( β(φ(x, µ) − φ(θ0, µ)) ) dπ ≥ φ(θ0, µ) − ω(r) + β^{−1} log π(B_r(E_{φ,µ})).

Maximizing the right-hand side of the above over r, and recalling the definition of the calibration function ψ from (3.1), yields

(1/β) log ∫_{R^d} exp(βφ(x, µ)) dπ ≥ φ(θ0, µ) − ψ(β)/β.

Plugging this lower bound into (3.7) yields

(1/β) log Pr(D_{n,t} ∩ A_{n,y}) ≤ (1/β) log ∫_{B^c_t(E_{φ,µ})} exp(βφ(x, µ)) dπ − φ(θ0, µ) + ψ(β)/β + 2y ≤ sup_{x∈B^c_t(E_{φ,µ})} φ(x, µ) − φ(θ0, µ) + (1/β) log π(B^c_t(E_{φ,µ})) + ψ(β)/β + 2y = −α(t) − I(t)/β + ψ(β)/β + 2y,

where the last line follows from the definitions of α(t) and I(t). Rewriting the last inequality gives that

Pr(D_{n,t} ∩ A_{n,y}) ≤ exp( −βα(t) − I(t) + ψ(β) + 2βy ).
(3.8)

Now, setting y = f(t, β) = α(t)/4 + I(t)/(4β) and plugging (3.8) into (3.5) results in

Pr( max_{θ0∈E_{φ,µ}} ∥θ̃n − θ0∥ > t ) ≤ (c n / VC(F))^{VC(F)} e^{−2n f(t,β)²/K²} + e^{−βα(t) − I(t) + ψ(β) + 2βf(t,β)}. (3.9)

Next, note that 2βf(t, β) = βα(t)/2 + I(t)/2 and that f(t, β) = (α(t) + I(t)/β)/4. Plugging these two observations into (3.9) yields the desired result.

It is helpful to view the two terms in the above bound separately. The first term measures the sample complexity of estimation and, in the absence of sampling considerations, recovers the concentration properties of classical (non-private) estimators. (In particular, the non-private bound can be obtained from the above by formally setting β = ∞.) The second term is novel: it is a measure of the cost of sampling and encodes the trade-off between the prior π, the parameter β, and the discrepancy function α.

Taking β = nε/2K, Theorem 12 immediately provides a consistency bound (and thereby a sample complexity bound) for the exponential mechanism:

Pr( max_{θ0∈E_{φ,µ}} ∥θ̃n − θ0∥ ≥ t ) ≤ (c1 n / VC(F))^{VC(F)} e^{−c2 n [ α(t) + 2K I(t)/(nε) ]²/K²} + e^{−nε α(t)/4K − I(t)/2 + ψ(nε/2K)}. (3.10)

Let us pause here to interpret this result. The cost of consistent estimation is encoded in the first term of (3.10) and, in particular, can be related to the non-private setting as well. The cost of privacy is encoded in the second term in (3.10). Finally, we note that the second term in (3.10) encodes a trade-off between an effect of the prior and the privacy parameter. Equation (3.10) readily yields the sample complexity of an estimator drawn from the exponential mechanism.⁵ This yields Corollary 10, and, for instance, Lemma 15 below.

4. Proof of Theorem 5

We now prove Theorem 5 via a series of lemmas. Let us first note the following fact, whose proof is deferred to Appendix B.

Lemma 14 Let a, b, c ∈ R+. The function h(x) = ax − b log(cx/b) is increasing and positive for x ≥ (b/a)[ log(c/a) ∨ 1 ].

5.
To see this briefly, assume for simplicity that π is the uniform measure on a hypercube whose side lengths are at most polynomial in d. In addition, assume that θ0 is well captured by this hypercube, i.e., θ0 is at least e^{−d} far from any boundary of the cube. Suppose we want to estimate a maximizer of φ(·, µ) within some error level t with probability at least 1 − e^{−d}. Lemma 35 and Corollary 10 give that it is sufficient for VC(F) to be polynomial in d to ensure that the sample complexity is polynomial in d.

Lemma 15 Suppose that Conditions 1 and 3 hold with φ = D for some K > 0. In addition, suppose that the map x ↦ D(x, µ) is L-Lipschitz for some L > 0. Then, there exists a universal constant c > 0, such that for all n, d ≥ 1 and all 0 < γ < 1, with probability at least 1 − γ, it holds that

d(E_{D,µ}, θ̃n) ≤ α⁻¹( c ( √( K²( log(1/γ) ∨ VC(F) log n ) / n ) ∨ (K/nε)( log(1/γ) + 1 − log π( B_{2K/(Lnε)}(E_{φ,µ}) ) ) ) ). (4.1)

Proof Note that all of the conditions of Theorem 12 hold, and so we can apply Theorem 12 with β = nε/2K. Proving the deviation bound then amounts to finding a lower bound on t which ensures that

(c1 n / VC(F))^{VC(F)} e^{−c2 n [ α(t) + 2K I(t)/(nε) ]²/K²} + e^{−nε α(t)/4K − I(t)/2 + ψ(nε/2K)} =: I + II ≤ γ.

We will do this by finding t such that both I and II are at most γ/2. We start with finding t such that I ≤ γ/2. First, note that

(c1 n / VC(F))^{VC(F)} e^{−c2 n [ α(t) + 2K I(t)/(nε) ]²/K²} ≤ (c1 n / VC(F))^{VC(F)} e^{−c2 n α(t)²/K²}.

If we can find a lower bound on t which implies that

(c1 n / VC(F))^{VC(F)} ≤ e^{c2 n α(t)²/2K²}, (4.3)

then such a lower bound, together with the bound t ≥ α⁻¹( c √( K² log(2/γ)/n ) ), yields that I ≤ γ/2. It remains to find t such that (4.3) holds. To this end, define the function h(n) = c2 n α(t)²/2K² − VC(F) log(c1 n / VC(F)). The condition (4.3) is then equivalent to the condition h(n) ≥ 0. Note that h(n) is of the form of Lemma 14 with a = c2 α(t)²/2K², b = VC(F) and c = c1. Applying Lemma 14 yields that h(n) ≥ 0 provided that

n ≥ ( K² VC(F) / α(t)² ) [ log( K²/(c α(t)²) ) ∨ 1 ], (4.4)

for some universal c > 0.
Now, provided that α(t)² ≥ K²/n, (4.4) is satisfied when α(t)² ≳ K² VC(F) log n / n. It follows that there exists a universal constant c > 0 such that (4.3) holds provided that

t ≥ α⁻¹( c √( K²( log(1/γ) ∨ VC(F) log n ) / n ) ). (4.5)

We now turn to finding t such that II ≤ γ/2. The bound II ≤ γ/2 is satisfied when

nε α(t)/4K ≥ log(2/γ) − I(t)/2 + ψ(nε/2K),

and since I(t) ≥ 0 it suffices that

α(t) ≥ (4K/nε)( log(2/γ) + ψ(nε/2K) ). (4.6)

It remains to simplify ψ(nε/2K). To this end, given that by assumption D is L-Lipschitz, we have that ω(r) ≤ Lr. Plugging this into the definition of ψ yields

ψ(nε/2K) ≤ min_{r>0} [ (nε/2K) L r − log π(B_r(E_{φ,µ})) ].

Next, taking the specific choice r = 2K/(Lnε) yields

ψ(nε/2K) ≤ 1 − log π( B_{2K/(Lnε)}(E_{φ,µ}) ).

Plugging this inequality into (4.6), and combining the result with (4.5), yields the desired result.

We now note the following lemmas, whose proofs are deferred to Appendix B.

Lemma 16 If π = N(θπ, σπ²I) with σπ ≥ 1/4, then for all E ⊆ R^d, all d > 2 and all R ≤ σπ it holds that

−log π(B_R(E)) ≤ d(E, θπ)²/σπ² + d log( σπ√d / R ). (4.7)

Lemma 17 Suppose that π ∝ 1{x ∈ A_R(θπ)} where E ⊆ A_R(θπ). Then for all E ⊆ R^d, all d ≥ 1 and all r > 0 it holds that

−log π(B_r(E)) ≤ d log( R√d / ( d_{R,θπ}(E) ∨ r ) ).

We can now prove Theorem 5.

Proof [Proof of Theorem 5] First note that, by assumption, all the conditions of Lemma 15 are satisfied. To prove (2.4) and (2.5) we apply Lemma 16 and Lemma 17, respectively, in conjunction with Lemma 15. To apply Lemma 16, first observe that together, the assumptions that L > 1, σπ² ≥ 1/16 and n ≥ 8K/ε imply that 2K/(Lnε) ≤ σπ. Applying Lemma 16 with R = 2K/(Lnε) and E = E_{D,µ} in conjunction with (4.1) yields (2.4). Applying Lemma 17 with r = 2K/(Lnε) and E = E_{D,µ} in conjunction with (4.1) yields (2.5), as desired.

4.1 Optimality of Theorem 5

Lemma 18 The upper bound on the deviation of θ̃n about E_{D,µ} given in Theorem 5 is sharp, up to logarithmic factors. The upper bound on the sample complexity of θ̃n given in Corollary 10 is also sharp, up to logarithmic factors.
Proof We show that for a specific set of parameters, the upper bound given in Corollary 10 matches the lower bound of Theorem 6.5 of (Kamath et al., 2019) up to logarithmic factors in d and t⁻¹. Specifically, set D = HD, θ0 ∈ A_R(0), let π be the uniform measure on A_R(0), and let µ = N(θ0, σ²I). For t ≥ R, the sample complexity is 1. We then focus on the case where t ≤ R. We now check the conditions of Corollary 10 under these parameters. To this end, sup_x D(x, µ) = D(θ0, µ) and so Condition 1 is satisfied. Next, HD satisfies Condition 3 with K = 1 and VC(F) = d + 2, as shown in Lemma 44. Furthermore, we have that HD(·, µ) is σ⁻¹-Lipschitz. Therefore, the conditions of Corollary 10 are satisfied. Plugging these values into Corollary 10 and setting γ = 1/3, as in Theorem 6.5 of (Kamath et al., 2019), yields

n ≳ d log( 1/α(t) ) / α(t)² ∨ d log( R√d / ( d_{R,θπ}(θ0) ∨ σ α(t) ) ) / ( α(t) ε ). (4.8)

Note that in this case, we have that α(t) = Φ(t/σ) − 1/2. Let a(R) = ( Φ(R/σ) − 1/2 )/R. Now, since t ≤ R, concavity of Φ on R+ implies that α(t) ≥ a(R)·t. Applying this fact in conjunction with (4.8) results in

n ≳ d log( 1/(a(R)t) ) / ( a(R)² t² ) ∨ d log( R√d / ( d_{R,θπ}(θ0) ∨ σ a(R) t ) ) / ( a(R) t ε ), (4.9)

which matches the bound given in Theorem 6.5 of (Kamath et al., 2019) up to logarithmic factors in d and t⁻¹. A simple manipulation of the terms in (4.9) gives an equivalent deviations bound, which matches the minimax lower bounds given by Cai et al. (2021), up to logarithmic factors, if δ is taken to be n^{−k} for some k > 0.

5. Inverse discrepancy function for small t

As stated earlier, when applying Theorem 5, it is technically very convenient when α⁻¹ is at most linear, at least for small t, since this simplifies the bounds given in Theorem 5: in that case, we need not be concerned with inverting α directly. Indeed, if there exists Cµ > 0 such that α(t) ≥ Cµ·t for small t, then the α⁻¹(·) in Theorem 5 can be replaced by (·)/Cµ, simplifying the bound greatly.
The purpose of this section is to give conditions under which there exists Cµ > 0 such that α(t) ≥ Cµ·t for small t. We first give general conditions under which α(t) ≥ Cµ·t for an arbitrary depth function.

Lemma 19 Suppose that sup_{x∈R^d} D(x, µ) = D(θ0, µ), and that D(·, µ) is translation invariant, decreasing along rays, continuous, and continuously differentiable on B_r(θ0) for some r > 0. In addition, suppose that there exists 0 < a < R < r such that u* = argmax_{u∈S^{d−1}} D(θ0 + t·u, µ) is constant in t for a ≤ t ≤ R. Then, if (d/dt) D(θ0 + t·u*, µ) is non-decreasing on [a, R], we have that α(t) ≥ t·α(R)/R.

Note that the proofs of results in this section are deferred to Appendix B. The conditions of Lemma 19 hold when µ and D are such that the contours of the depth function do not change shape as we move away from the center. For example, the conditions of Lemma 19 hold for the class of elliptical distributions and the integrated depths. We give a more general version of this result below, using Lemma 19 to show a general bound for integrated depth functions. In addition, recall that ν denotes the uniform measure on the (d−1)-dimensional unit sphere.

Lemma 20 Suppose µ is centrally symmetric about θ0 and D(x, µ) = ∫_{S^{d−1}} g(x·u, µu) dν(u), where for all u ∈ S^{d−1}:

1. g(ay + b, aµu + b) = g(y, µu),
2. g(·, µu) is bounded, continuous and continuously differentiable,
3. g′(θ0·u, µu) = 0 and g(θ0·u − y, µu) = g(θ0·u + y, µu),
4. there exists R > 0 such that g(θ0·u + y, µu) is non-increasing for 0 ≤ y ≤ R, and
5. there exists v ∈ S^{d−1} such that g( c(θ0 + v)·u, µu ) ≥ g( c(θ0 + w)·u, µu ) for all c > 0 and w ∈ S^{d−1};

then for t ∈ [0, R] it holds that α(t) ≥ t·α(R)/R.

Observe that Items 1–4 are satisfied by the smoothed integrated dual depth (Definition 31) and the integrated rank-weighted depth (Definition 26), provided the projected cumulative distribution functions h(y, u) = µ(X·u ≤ y) are continuous and continuously differentiable in y for all u ∈ S^{d−1}.
Item 5 depends on the distribution, but is fairly general. For instance, it is easy to see that Item 5 is satisfied by the class of elliptical distributions, or any time µ is such that the depth function has nested, convex contours.

In the univariate setting, deviation bounds for the univariate median often only require a lower bound on the density of the population measure in a neighborhood of the median. We next show a result of this flavor for the halfspace depth. Recall that a measure µ ∈ M1(R^d) is halfspace symmetric about θ0 ∈ R^d if for every closed halfspace H which contains θ0, it holds that µ(X ∈ H) ≥ 1/2 (Zuo and Serfling, 2000).

Lemma 21 Suppose that D = HD, argmax_{x∈R^d} HD(x, µ) = θ0, and that there is some R > 0 such that for all u ∈ S^{d−1}, µu is halfspace symmetric and absolutely continuous. If there exists Cµ > 0 such that for all u ∈ S^{d−1}, inf_{x∈[θ0·u−R, θ0·u+R]} f_{µu}(x) ≥ Cµ, then for all 0 ≤ t ≤ R, we have that α(t) ≥ t·Cµ.

6. Depth functions and their properties

For the convenience of the reader, we collect here some basic facts about depth functions. We first review several common depth functions in the literature and their properties. We then introduce a new depth function, the smoothed integrated dual depth, which has desirable properties from a computational perspective and is used in our simulations. For proofs of the results from this section, see Appendix F.

One classical depth function is the simplicial depth (Liu, 1988, 1990). Let ∆(x1, . . . , x_{d+1}) be the simplex in R^d with vertices x1, . . . , x_{d+1}.

Definition 22 (Simplicial Depth) Suppose that Y1, . . . , Y_{d+1} are i.i.d. from µ ∈ M1(R^d). The simplicial depth of a point x ∈ R^d with respect to µ is SMD(x, µ) = Pr( x ∈ ∆(Y1, . . . , Y_{d+1}) ).

The simplicial depth of a point x ∈ R^d is the probability that a random simplex defined by d+1 draws from µ contains x.
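In low dimensions the empirical simplicial depth can be approximated directly from its definition by sampling random simplices. The following is a small sketch of ours for d = 2 (the function name and defaults are hypothetical; the triangle-containment check is the standard same-side orientation test).

```python
import numpy as np

def simplicial_depth_mc(x, X, m=20_000, rng=None):
    """Monte Carlo estimate of the empirical simplicial depth in R^2:
    the fraction of m random data triangles that contain the point x."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, X.shape[0], size=(m, 3))
    A, B, C = X[idx[:, 0]], X[idx[:, 1]], X[idx[:, 2]]

    def side(P, Q):  # z-component of the cross product (Q - P) x (x - P)
        return (Q[:, 0] - P[:, 0]) * (x[1] - P[:, 1]) \
             - (Q[:, 1] - P[:, 1]) * (x[0] - P[:, 0])

    s1, s2, s3 = side(A, B), side(B, C), side(C, A)
    # x lies in the triangle iff it is on the same side of all three edges
    inside = ((s1 >= 0) & (s2 >= 0) & (s3 >= 0)) | \
             ((s1 <= 0) & (s2 <= 0) & (s3 <= 0))
    return float(inside.mean())
```

For a measure that is angularly symmetric about a point in the plane, the population simplicial depth at that point is 1/4, which the estimate recovers at the center of symmetric data.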
The next depth function we introduce is the spatial depth (Vardi and Zhang, 2000; Serfling, 2002), which is maximized at the spatial median (Vardi and Zhang, 2000). Define the spatial sign function as S_R(y) = y/∥y∥ (with S_R(0) = 0). We can then define the spatial rank function as E_µ S_R(x − X). The norm of the spatial rank of x ∈ R^d is a measure of the outlyingness of x with respect to µ. With this in mind, the spatial depth is defined as follows:

Definition 23 The spatial depth SD of a point x ∈ R^d with respect to µ ∈ M1(R^d) is SD(x, µ) = 1 − ∥E_µ S_R(x − X)∥.

We will see below that the spatial depth is (K, F)-regular; however, K = O(d). Replacing the norm with its square eliminates this inconvenience, and still yields a depth function which is maximized at the spatial median. Therefore, we define the modified spatial depth as follows:

Definition 24 The modified spatial depth MSD of a point x ∈ R^d with respect to µ ∈ M1(R^d) is MSD(x, µ) = 1 − ∥E_µ S_R(x − X)∥².

Lastly, we define the integrated depth functions. Here, the idea is to define the depth of a point x ∈ R^d as the average univariate depth of x·u over all directions u. The first integrated depth function⁶ is the integrated dual depth of Cuevas and Fraiman (2009), which, letting ν be the uniform measure on S^{d−1}, is defined as follows:

Definition 25 (Integrated Dual Depth) The integrated dual depth of a point x ∈ R^d with respect to µ ∈ M1(R^d) is

IDD(x, µ) = ∫_{S^{d−1}} F(x, u, µ)( 1 − F(x, u, µ) ) dν(u). (6.1)

The integrated dual depth of x ∈ R^d is the average univariate simplicial depth of x·u over all directions u ∈ S^{d−1}. The integrated rank-weighted depth (Ramsay et al., 2019) replaces the simplicial depth in (6.1) with a slightly modified version of halfspace depth. For X ∼ µ, define F−(x, u, µ) = µ( X·u < x·u ).

Definition 26 (Integrated Rank-Weighted Depth) The integrated rank-weighted depth of a point x ∈ R^d with respect to µ ∈ M1(R^d) is IRW(x, µ) = 2 E_ν [ F(x, U, µ)( 1 − F−(x, U, µ) ) ].

We now turn to a brief discussion of whether these functions satisfy Definition 2.
To this end, recall the following classes of measures. Recall that a measure $\mu$ is angularly symmetric about a point $\theta_0 \in \mathbb{R}^d$ if $(X - \theta_0)/\|X - \theta_0\| \stackrel{d}{=} (\theta_0 - X)/\|X - \theta_0\|$ for $X \sim \mu$ (Liu, 1990). Let $\mathcal{N} \subset \mathcal{M}_1(\mathbb{R}^d)$ denote the set of angularly symmetric measures over $\mathbb{R}^d$ that are absolutely continuous with respect to the Lebesgue measure. The following proposition collects existing results in the literature:

6. The first integrated multivariate depth function, to be precise. The first integrated depth was developed for functional data by Fraiman and Muniz (2001).

Proposition 27 The aforementioned depth functions satisfy the following: Halfspace depth is a depth function with admissible set $\mathcal{M}_1(\mathbb{R}^d)$. Simplicial depth is a depth function with admissible set $\mathcal{N}$. Spatial depth and the modified spatial depth satisfy properties (1), (2) and (4) for all $\mu \in \mathcal{M}_1(\mathbb{R}^d)$. Integrated dual depth and integrated rank-weighted depth are depth functions with admissible set $\mathcal{C}$. Furthermore, they satisfy properties (1), (2) and (4) for any $\mu \in \mathcal{M}_1(\mathbb{R}^d)$.

Remark 28 Spatial depth does not satisfy Definition 2, as property (3) fails. However, spatial depth satisfies spatial angle monotonicity and spatial rank monotonicity (see Serfling, 2019, Section 4.3), which together provide a weaker alternative to property (3). Simplicial depth and halfspace depth are affine invariant, which is a stronger version of property (1).

Remark 29 Both the halfspace depth and the simplicial depth function have an exact computational running time of $O(n^{d-1})$; see, e.g., (Dyckerhoff and Mozharovskyi, 2016; Afshani et al., 2016) and the references therein. By contrast, the spatial depth and the integrated depth functions are computable in high dimensions (Chaudhuri, 1996; Ramsay et al., 2019). For instance, the integrated depth functions can be approximated easily via sampling, say, $m$ points from $\nu$. One can then approximate the sample depth value of a given point in $O(mnd)$ time (Ramsay et al., 2019).
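The sampling approximation mentioned in Remark 29 can be sketched for the integrated rank-weighted depth: draw $m$ directions from $\nu$ and average the univariate halfspace depths of the projections, at $O(mnd)$ cost. This is our own illustrative implementation, assuming the $\min(F, 1 - F)$ form of Definition 26:

```python
import numpy as np

def irw_depth_mc(x, X, m=2000, seed=None):
    """Approximate the empirical IRW depth of x w.r.t. the sample X (n x d)
    by averaging 2 * min(F_u, 1 - F_u) of the projection x'u over m
    directions u drawn uniformly from the unit sphere."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(m, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # u uniform on S^{d-1}
    proj_X = X @ U.T                                # (n, m) projected sample
    proj_x = np.asarray(x, dtype=float) @ U.T       # (m,) projected point
    F = (proj_X <= proj_x).mean(axis=0)             # empirical CDF at x'u
    Fm = (proj_X < proj_x).mean(axis=0)             # strict version, F(x^-, u)
    return 2.0 * np.minimum(F, 1.0 - Fm).mean()
```

Near the center of a symmetric sample each projected CDF is close to 1/2, so the approximation is close to its maximal value 1; far from the data it decays toward 0.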
Going further, it is easy to show via Talagrand's inequality (Talagrand, 1994; Sen, 2022) that if one wants to approximate a sample depth value to within error $t$ with probability at least 0.99, then one needs $m \gtrsim nd \log(1/t)/t^2$ unit vectors, which results in $O(n^2 d^2 \log(1/t)/t^2)$ time. The following theorem demonstrates that the depth functions defined in this section satisfy the conditions of Theorem 12. Let $\mu_u$ be the law of $X^\top u$ if $X \sim \mu$.

Theorem 30 The aforementioned depth functions satisfy the following: Conditions 1 and 3 hold for each of the depth functions defined in Section 6. Suppose that $\pi$ is absolutely continuous and that for all $u \in S^{d-1}$, the measure $\mu_u$ is absolutely continuous with density $f_u$ and $\sup_{\|u\|=1} \|f_u\|_\infty = L < \infty$. Then there exists a universal constant $C > 0$ such that for all $s \in (0, \infty]$, halfspace, simplicial, integrated rank-weighted and the smoothed integrated dual depth⁷ satisfy Condition 4 with $\omega(r) = C L r$. Suppose that $\mu$ has a bounded density and that $\sup_y \mathbb{E}\|y - X\|^{-1} = L' < \infty$. Then modified spatial depth satisfies Condition 4 with $\omega(r) = 2\sqrt{d}\, L' r$ and spatial depth satisfies Condition 4 with $\omega(r) = 2 L' r$. The above results are also summarized in Table 2.4.

7. The smoothed integrated dual depth is introduced in Section 6.1 below and equals the integrated dual depth when $s = \infty$.

6.1 The smoothed integrated dual depth

One issue with many of the aforementioned depth functions is that their empirical versions contain indicator functions, which are non-smooth. This fact is inconvenient from an optimization perspective; e.g., we cannot apply gradient descent to estimate the median. In addition, this non-smoothness implies that the empirical depth functions will possess flat contours. This can cause the estimated medians to perform poorly when the population measure has atoms near its center (Lalanne et al., 2023).
Therefore, to resolve this issue, recall that $\mathbf{1}\{X_i^\top u \le x^\top u\} \approx \sigma\big(s(x - X_i)^\top u\big)$ for large $s$, where $\sigma$ is the usual sigmoid function: $\sigma(x) = (1 + e^{-x})^{-1}$. This motivates the following definition of depth, which we use in our simulation study from Section 2.5. Recall that $\nu$ denotes the uniform measure on the $(d-1)$-dimensional unit sphere.

Definition 31 (Smoothed Integrated Dual Depth) The smoothed integrated dual depth with smoothing parameter $s > 0$ of a point $x \in \mathbb{R}^d$ with respect to $\mu \in \mathcal{M}_1(\mathbb{R}^d)$ is
$$\mathrm{IDD}(x, \mu, s) = \int_{S^{d-1}} \mathbb{E}\sigma\big(s(x - X)^\top u\big)\,\Big(1 - \mathbb{E}\sigma\big(s(x - X)^\top u\big)\Big)\, d\nu(u). \quad (6.2)$$

As $s \to \infty$, the smoothed integrated dual depth converges to the integrated dual depth. However, the smoothed integrated dual depth is convenient in that its empirical version is differentiable. Furthermore, the smoothed integrated dual depth is a depth function with admissible set $\mathcal{C}$.

Proposition 32 For any $s > 0$, the smoothed integrated dual depth is a depth function with admissible set $\mathcal{C}$. In addition, properties (1) and (4) hold for any $\mu \in \mathcal{M}_1(\mathbb{R}^d)$.

In the simulations presented in Section 2.5, we approximate (6.2) with
$$\widehat{\mathrm{IDD}}(x, \mu, s) = \frac{1}{M} \sum_{m=1}^{M} \mathbb{E}\sigma\big(s(x - X)^\top u_m\big)\,\Big(1 - \mathbb{E}\sigma\big(s(x - X)^\top u_m\big)\Big),$$
where $u_1, \dots, u_M$ are drawn i.i.d. from $\nu$. One can show that the number of unit vectors needed to maintain a fixed error level $t > 0$ with high probability $1 - \gamma$ is polynomial in $n$ and $d$.

Proposition 33 Let $u_1, \dots, u_M$ be i.i.d. from $\nu$. Then for all $d, s, t > 0$ and all $0 < \gamma < 1$, there exists a universal constant $c_1 > 0$ such that $\sup_x |\widehat{\mathrm{IDD}}(x, \hat\mu_n, s) - \mathrm{IDD}(x, \hat\mu_n, s)| \le t$ with probability $1 - \gamma$, provided that $M \ge c_1\big[\log(1/\gamma) \vee dn \log\big((1/t) \vee e\big)\big]\, t^{-2}$.

Observe that Proposition 33 is a sample complexity style result, where instead of bounding the number of samples needed to achieve some error tolerance $t > 0$, we bound the number of simulated unit vectors needed to achieve error tolerance $t > 0$. Specifically, the result says that we need, roughly, $O(nd/t^2)$ unit vectors for our approximation to be within $t$ of $\mathrm{IDD}(x, \hat\mu_n, s)$, with probability at least 0.99.
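A minimal numpy sketch of the Monte Carlo approximation $\widehat{\mathrm{IDD}}$ above (our own implementation; the clipping inside the sigmoid is a numerical-stability detail for large $s$, not part of the definition):

```python
import numpy as np

def sigmoid(z):
    # Clipping avoids overflow in exp for large |z| (e.g., when s = 100).
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def smoothed_idd(x, X, s=100.0, M=2000, seed=None):
    """Monte Carlo version of (6.2) for the empirical measure of X (n x d):
    average G(1 - G) over M uniform directions, where
    G(x, u) = mean_i sigmoid(s (x - X_i)'u)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(M, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # u uniform on S^{d-1}
    G = sigmoid(s * (np.asarray(x, dtype=float) - X) @ U.T).mean(axis=0)
    return float((G * (1.0 - G)).mean())
```

Since $G(1 - G) \le 1/4$, the value is at most $1/4$, attained when every projected smoothed CDF equals $1/2$, i.e., at the center of a symmetric sample.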
Lastly, note that the median estimators are relatively insensitive to the choice of $s$. For $s \ge 10$, the specific value of $s$ appears to have minimal effect on the estimation error and on the convergence of the gradient descent algorithm; see Appendix C for more details. We recommend choosing $s = 100$.

7. Computing private depth values

The depth values themselves can be of interest. For example, they are used in visualization and a host of inference procedures (Li and Liu, 2004; Chenouri et al., 2011). The theoretical analysis from the previous sections yields simple differentially private methods for estimating depth values. If we want to compute the depth of a known point, then we can use an additive noise mechanism, such as the Laplace mechanism or the Gaussian mechanism (Dwork et al., 2006); i.e., we can just add noise that is calibrated to ensure differential privacy. For example, it follows from privacy of the Laplace mechanism (Dwork et al., 2006) that if $D$ is $(K, \mathcal{F})$-regular, then for all $x \in \mathbb{R}^d$,
$$\widetilde{D}(x, \hat\mu_n) = D(x, \hat\mu_n) + W_1 K/n\epsilon, \qquad W_1 \sim \mathrm{Laplace}(1),$$
is $\epsilon$-differentially private. Theorem 12 then implies the concentration of private depth values generated from the Laplace mechanism.

Corollary 34 Suppose that $D$ satisfies Condition 3. For $x \in \mathbb{R}^d$ chosen independently of the data, $\widetilde{D}(x, \hat\mu_n) = D(x, \hat\mu_n) + W_1 K/n\epsilon$ is $\epsilon$-differentially private. In addition, there exist universal constants $c_1, c_2 > 0$ such that for all $t \ge 0$ it holds that
$$\Pr\big(|\widetilde{D}(x, \hat\mu_n) - D(x, \mu)| > t\big) \le (c_1 n/\mathrm{VC}(\mathcal{F}))^{\mathrm{VC}(\mathcal{F})}\, e^{-c_2 n t^2/K^2} + e^{-n\epsilon t/2K}.$$

Corollary 34 suggests taking $n\epsilon \gtrsim K/t$ when simulating private depth values. An obvious problem of interest is estimating the vector of depth values at the sample points; i.e., can we estimate $\widehat{D}(\hat\mu_n) = (D(X_1, \hat\mu_n), \dots, D(X_n, \hat\mu_n))$ privately? Define the global 1-sensitivity of $\widehat{D}(\hat\mu_n)$ to be
$$\mathrm{GS}_n\big(\widehat{D}(\hat\mu_n)\big) = \sup_{X^n \in \mathcal{D}_{n,d},\ \tilde\mu_n \in \widetilde{\mathcal{M}}(X^n)} \big\|\widehat{D}(\hat\mu_n) - \widehat{D}(\tilde\mu_n)\big\|_1,$$
where $\|\cdot\|_1$ denotes the $L_1$ norm. In general, $\mathrm{GS}_n(\widehat{D}(\hat\mu_n)) = O(1)$; see Lemma 39.
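The single-value release just described amounts to one line of noise addition. The following hedged sketch is our own (parameter names are illustrative): for a $(K, \mathcal{F})$-regular depth, replacing one of the $n$ points moves the depth value by at most $K/n$, so Laplace noise of scale $K/(n\epsilon)$ suffices for $\epsilon$-differential privacy:

```python
import numpy as np

def private_depth_value(depth_value, K, n, eps, seed=None):
    """Release one empirical depth value via the Laplace mechanism.

    The sensitivity of a (K, F)-regular depth value is at most K/n,
    so adding Laplace noise of scale K/(n * eps) gives eps-DP.
    """
    rng = np.random.default_rng(seed)
    return depth_value + rng.laplace(scale=K / (n * eps))
```

For halfspace depth, $K = 1$; with $n = 1000$ and $\epsilon = 1$ the noise scale is $10^{-3}$, negligible next to the sampling error. Releasing the full $n$-vector of sample depth values is a different matter: its $L_1$ sensitivity is $O(1)$ rather than $O(1/n)$, so each coordinate needs noise of constant scale in $n$.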
As a result, if $\widetilde{D}$ is the vector of private depth values generated from the Laplace mechanism, then $\|\widetilde{D}(\hat\mu_n) - \widehat{D}(\hat\mu_n)\| \gtrsim \sqrt{n}/\epsilon$ with high probability; the level of noise is greater than that of the sampling error. Here, $\|\cdot\|$ denotes the $L_2$ norm. This result is intuitive; these vectors reveal more information about the population as $n$ grows, which differs markedly from the single depth value case, where the amount of information received is fixed in $n$. In fact, for large $n$ the vector of depth values at the sample points contains a significant amount of information about $\mu$; the depth function can, under certain conditions, characterize the distribution of the input measure $\mu$ (see Struyf and Rousseeuw, 1999; Nagy, 2021, and the references therein). In order to release so much information about the population privately, we must inject non-negligible noise. We cannot, then, simply plug the $n$ private sample depth values into an inference procedure and proceed. An interesting topic for future research is how to extend depth-based inference procedures to the private setting.

Acknowledgments

The authors acknowledge Gautam Kamath for his helpful comments and discussion, which improved the paper. The authors acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). A.J. acknowledges the support of the Canada Research Chairs Program. Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG) [DGECR-2023-00311, RGPIN-2020-04597, DGECR-2020-00199] et par le Programme des chaires de recherche du Canada [CRC-2022-00142].

Appendix A. Notation conventions

We collect here some notation we will use throughout the paper. All notation is introduced throughout the paper, but having it collected in one place may be a useful reference. For sets $A, B$, we let $A \triangle B$ be the symmetric difference between $A$ and $B$.
Given $v \in \mathbb{R}^d$, let $\|v\|$ denote the Euclidean norm, and for a function $f: \mathbb{R} \to \mathbb{R}$, let $\|f\|_\infty = \sup_{x \in \mathbb{R}} |f(x)|$. For a set $A$ and $r > 0$, let $B_r(A) = \{x: \min_{y \in A} \|x - y\| \le r\}$. Let $A_R(y)$ be the $d$-dimensional cube of side-length $R$ centered at $y$, and let $d_{R,y}(x)$ denote the minimum distance from a point $x = (x_1, \dots, x_d)$ to a face of the cube $A_R(y)$:
$$d_{R,y}(x) = \min_{1 \le i \le d} \big(|x_i - y_i - R/2| \wedge |x_i - y_i + R/2|\big).$$
Similarly, denote the minimum distance from a set $B \subset \mathbb{R}^d$ to a face of the cube $A_R(y)$ by $d_{R,y}(B) = \inf_{x \in B} d_{R,y}(x)$, and let $d(x, B) = \inf_{y \in B} \|x - y\|$ denote the usual point-to-set distance. For two numbers $a, b \in \mathbb{R}$, we have that $a \wedge b = \min(a, b)$ and $a \vee b = \max(a, b)$, and we write $a \lesssim b$ ($a \gtrsim b$) if $a \le Cb$ ($a \ge Cb$) for some universal constant $C > 0$. For a space $S$, $\mathcal{M}_1(S)$ denotes the space of probability measures on $S$. We say that a data set of size $n \times d$ is a collection of $n$ points in $\mathbb{R}^d$, $X^n = (X_\ell)_{\ell=1}^n$, with repetitions allowed. We assume that we have observed a data set of size $n \times d$, which consists of realizations of $n$ independent and identically distributed random variables from a population measure $\mu \in \mathcal{M}_1(\mathbb{R}^d)$. The empirical measure of these observations is denoted by $\hat\mu_n$. We let $\mathcal{B}$ denote the space of Borel functions from $\mathbb{R}^d$ to $[0, 1]$. For a family of functions $\mathcal{F} \subset \mathcal{B}$, let $\mathrm{VC}(\mathcal{F})$ denote the Vapnik–Chervonenkis dimension of $\mathcal{F}$. We sometimes use the following pseudometric on $\mathcal{M}_1(\mathbb{R}^d)$: $d_{\mathcal{F}}(\mu, \nu) = \sup_{g \in \mathcal{F}} |\int g\, d(\mu - \nu)|$, where $\mu, \nu \in \mathcal{M}_1(\mathbb{R}^d)$. Next, we let $N(\mu, \Sigma)$ denote the Gaussian measure with mean $\mu$ and covariance matrix $\Sigma$. We let $S^{d-1}$ denote the $(d-1)$-dimensional unit sphere and we let $\nu$ be the uniform measure on $S^{d-1}$, where the dimension is clear from the context. We write $X \sim \mu$ if the random vector $X$ is drawn from $\mu \in \mathcal{M}_1(\mathbb{R}^d)$. Furthermore, we let $\mu_u$ denote the law of $X^\top u$ for $u \in S^{d-1}$ and $f_u$ be the associated density function (when it exists). For two random vectors $X, Y$, we write $X \stackrel{d}{=} Y$ if $X$ is equal in distribution to $Y$. For a random vector $X \sim \mu$, we write its expectation as $\mathbb{E}_\mu X = \int X d\mu$ and omit the measure $\mu$ when it is clear from the context.
Recall that a measure $\mu \in \mathcal{M}_1(\mathbb{R}^d)$ is centrally symmetric about a point $x \in \mathbb{R}^d$ if $X - x \stackrel{d}{=} x - X$ for $X \sim \mu$, and a measure $\mu$ is angularly symmetric about a point $\theta_0 \in \mathbb{R}^d$ if $(X - \theta_0)/\|X - \theta_0\| \stackrel{d}{=} (\theta_0 - X)/\|X - \theta_0\|$ for $X \sim \mu$ (Liu, 1990). We let $\mathcal{C} \subset \mathcal{M}_1(\mathbb{R}^d)$ denote the set of centrally symmetric measures over $\mathbb{R}^d$ and we let $\mathcal{N} \subset \mathcal{M}_1(\mathbb{R}^d)$ denote the set of angularly symmetric measures over $\mathbb{R}^d$ that are absolutely continuous with respect to the Lebesgue measure.

Appendix B. Technical proofs

Proof [Proof of Corollary 10] First note that by assumption, all of the conditions of Lemma 35 are satisfied. To prove (2.6) and (2.7) we apply Lemma 16 and Lemma 17, respectively, in conjunction with Lemma 35. To apply Lemma 16, first observe that together, the assumptions that $L > 1$, $\sigma_\pi^2 \ge 1/16$ and $D(x; \mu) \le 1$ imply that $\alpha(t)/4L\sigma_\pi \le 1$. Applying Lemma 16 with $R = \alpha(t)/4L$ and $E = E_{D,\mu}$ in conjunction with (B.11) yields that there exists a universal constant $c > 0$ such that for all $t \ge 0$, all $d > 2$ and all $0 < \gamma < 1$, we have that $d(E_{D,\mu}, \theta_n) < t$ with probability at least $1 - \gamma$, provided that
$$n \gtrsim \frac{K^2\Big[\log(1/\gamma) \vee \Big(\mathrm{VC}(\mathcal{F}) \log\big(\tfrac{K}{c\alpha(t)}\big) \vee 1\Big)\Big]}{\alpha(t)^2} \ \vee\ \frac{K\Big[\log(1/\gamma) \vee \tfrac{d(E_{D,\mu}, \theta_\pi)^2}{\sigma_\pi^2} \vee d \log\big(\tfrac{L\sigma_\pi \sqrt{d}}{\alpha(t)}\big)\Big]}{\epsilon\,\alpha(t)}.$$
To show (2.7), we apply Lemma 17 with $r = \alpha(t)/4L$ and $E = E_{D,\mu}$ in conjunction with (B.11), which yields that there exists a universal constant $c_1 > 0$ such that for all $t \ge 0$, all $d \ge 1$ and all $0 < \gamma < 1$, we have that $d(E_{D,\mu}, \theta_n) < t$ with probability at least $1 - \gamma$, provided that
$$n \gtrsim \frac{K^2\Big[\log(1/\gamma) \vee \Big(\mathrm{VC}(\mathcal{F}) \log\big(\tfrac{K}{c\alpha(t)}\big) \vee 1\Big)\Big]}{\alpha(t)^2} \ \vee\ \frac{K\Big[\log(1/\gamma) \vee d \log\Big(\tfrac{R}{d_{R,\theta_\pi}(E_{D,\mu}) \wedge \alpha(t)/L}\Big)\Big]}{\epsilon\,\alpha(t)},$$
which completes the proof.

Proof [Proof of Lemma 19] Without loss of generality, assume that $\theta_0 = 0$. It suffices to show that $\alpha(t)$ is concave on $[a, R]$. This holds if and only if $g(t) = \sup_{\|x\| \ge t} D(x, \mu)$ is convex on $[0, R]$. By the decreasing along rays property and the fact that $D$ is maximized at 0, it holds that $g(t) = \sup_{\|u\|=1} D(tu, \mu)$. Next, let $u_{*,t} = \mathrm{argmax}_{\|u\|=1} D(tu, \mu)$. Observe that by assumption $u_{*,t} = u_*$; i.e., $u_{*,t}$ is constant in $t$. Therefore, $\sup_{\|u\|=1} D(tu, \mu) = D(tu_*, \mu)$.
Next, it holds that d dtg(t) = d dt D(t u , µ) = u dx D(x, µ) x=t u which is non-decreasing by assumption. This implies that g(t) is convex on [a, R]. It follows that α(t) tα(R)/R. Proof [Proof of Lemma 20] Again, without loss of generality, take θ0 = 0. We have that Sd 1 g(x u, µu)dν = Z Sd 1 d dxg(x u, µu)dν = Z Sd 1 g(1)(x u, µu) u dν. dx D(x, µ) x=t u Sd 1 u ug(1)(t u u, µu)dν = 2 Z Sd 1 |u u|g(1)(t |u u|, µu)dν, which is non-decreasing by the assumptions on g. Now, assumptions 1 5 and the preceding observation imply the conditions of Lemma 19 are satisfied, and the result follows. Differentially private multivariate medians Proof [Proof of Lemma 21] Given that HD is translation invariant, there is no loss of generality in assuming that θ0 = 0. First, by definition and the decreasing along rays property of HD, we have that α(t) = HD(0, µ) sup x t HD(x, µ) = 1/2 sup x =t inf u =1 F(x, u, µ). (B.1) Next, we bound the term sup x =t inf u =1 F(x, u, µ) above. To this end, we have that sup x =t inf u =1 F(x, u, µ) sup u =1 2 1 2 h(t, u) This inequality, in combination with (B.1) and the mean value theorem yields that for all 0 t R, it holds α(t) 1 2 h(t, u) t Cµ, which completes the proof. Proof [Proof of Lemma 14] First, we use the derivative test to determine when the function h(x) is increasing. Doing so yields that h(x) is increasing for x b/a. Now, it remains to find x b/a such that h(x) 0. In other words, for yr = rb/a, we want to find r 1 such that h(yr) 0. By definition, h(yr) = b (r log (cr/a)) . It follows that h(yr) 0 is satisfied for r such that r log r log(c/a). Applying the inequality x log x x/2 for x 1 yields that r log r log(c/a) is satisfied when r 2 log (c/a) . Therefore, h(x) is positive and increasing for Proof [Proof of Lemma 16] Let R = R2/σ2 π, θ0 E and let W χ2 d( θ0 θπ 2 /σ2 π). We can lower bound π(BR(E)) as follows: π(BR(E)) π( X θ0 R) = Pr (W R ) . 
(B.2) Let G(x, k, λ) be the cumulative distribution function for the non-central chi-squared measure with k degrees of freedom and non-centrality parameter λ. Recall that for all x, k, λ > 0, it holds that G(x, k, λ) = e λ2 X j! G(x, k + 2j) e λ2G(x, k, 0). Ramsay, Jagannath and Chenouri Therefore, Pr (W R ) e θ0 θπ 2/2σ2 πG(R , d, 0). (B.3) We now lower bound G(x, d, 0) for x 1. Let γ be the lower incomplete gamma function. It follows from the properties of the gamma function that log G(x, d, 0) = log γ(x/2, d/2) + log Γ(d/2) log γ(x/2, d/2) + d log d. Note that the assumption d > 2 implies that supy γ(y, d/2) 1. This implies that yd/2 1e y/2 is increasing on (0, x). Thus, for any 0 < r < 1 it holds that log G(x, d, 0) log Z x 0 (y/2)d/2 1e y/2/2 dy + d log d log(x r)(r/2)d/2 1/2 + r/2 + d log d. Setting r = x/2 and using the fact that x 1 results in log G(x, d, 0) x + d log d d log 4 x d , (B.4) Note that R2 = R2/σ2 π 1 by assumption. Thus, applying (B.4) results in log G(R, d, 0) d log σπ R d . (B.5) Combining (B.2), (B.3) and (B.5) implies that log π(BR(E)) θ0 θπ 2 σ2π + d log σπ Now, note that this bound holds for all θ0 E, therefore, it holds that log π(BR(E)) d(E, θπ)2 σ2π + d log σπ Proof [Proof of Lemma 17] First, note that for any ball Br(x) such that Br(x) AR(θπ), we have that π(Br(x)) = rdπd/2 RdΓ(d/2 + 1) r d (d/2 + 1)d/2+1 πd/2e d/2 r Taking the negated logarithm of both sides implies that log π(Br(x)) d log R d log πd/2 + d d log R If there exists θ0 E such that Br(θ0) AR(θπ) then (B.6) implies that log π(Br(θ0)) d log R Differentially private multivariate medians If instead there exists θ0 E such that AR(θπ) Br(θ0) then log π(Br(θ0)) = 0. (B.8) Suppose finally that for all θ0 E it holds that both Br(θ0) AR(θπ) and AR(θπ) Br(θ0). Let θ0,j and θp,j be the jth coordinate of θ0 and θπ respectively. where r = min 1 j d (|θ0,j θp,j R/2| |θ0,j θp,j + R/2|) = d R,θπ(θ0). Now, by assumption, there exists θ0 E such that θ0 AR. 
For such θ0 AR, the definition of d R,θπ immediately implies that Bd R,θπ (θ0)(θ0) AR(θπ). Therefore, (B.6) implies that log π(Br(E)) log π(Br(θ0)) d log R d R,θπ(θ0) It follows from (B.7), (B.8) and (B.9) that for all d > 0 and all r it holds that log π(Br(E)) d log R d R,θπ(θ0) r Note that (B.10) holds for all θ0 E, thus, for all d > 0 and all r > 0 it holds that log π(Br(E)) d log R d R,θπ(E) r Lemma 35 Suppose that Conditions 1 and 3 hold with φ = D for some K > 0. In addition, suppose that the map x 7 D(x, µ) is L-Lipschitz for some L > 0. There is a universal c > 0 such that for all t 0, all d 1 and all 0 < γ < 1, we have that θn ED,µ < t with probability at least 1 γ provided that K2 log(1/γ) (VC(F) log( K cα(t)) 1) # _ K log (1/γ) log π(Bα(t)/4L(ED,µ)) Proof The proof is similar to that of Lemma 15. Note that all of the conditions of Theorem 12 hold, and so we can apply Theorem 12 with β = nϵ/2K. Proving the sample complexity bound then amounts to finding n which ensures that I + II γ, where I, II are defined in the proof of Lemma 15. Now, following the same argument in the proof of Lemma 15, see the arguments between (4.3) and (4.5), we have that I γ/2 whenever n K2 log(2/γ) α(t)2 _ K2 log(1/γ) VC(F) log( K cα(t)) 1 α(t)2 . (B.12) Furthermore, following (4.6), II γ/2 when n > 4K log (2/γ) ψ(nϵ/2K) ϵα(t) , (B.13) Ramsay, Jagannath and Chenouri from which it then remains to simplify the bound ψ(nϵ/2K) nϵα(t)/4K. (B.14) Given that by assumption D is L-Lipschitz, we have that ω(r) Lr. Plugging this in to the definition of ψ from (3.1), we have that ψ(nϵ/2K) min r>0 2K L r log π(Br(Eφ,µ)) o . By taking the specific choice r = α(t)/4L, we see that a sufficient condition for (B.14) to hold is nϵα(t) 8K log π(Bα(t)/4L(ED,µ)) nϵα(t) 4K , (B.15) which is equivalent to n > 4K log π(Bα(t)/4L(ED,µ)) ϵα(t) . (B.16) Combining (B.12), (B.13) and (B.16) yields the desired result. Appendix C. 
On the smoothing parameter in the smoothed integrated dual depth

We executed a small simulation study to assess the effect of the smoothing parameter $s$ on the non-private estimation error of the smoothed integrated dual depth median, as well as the effect of $s$ on the convergence of the gradient descent algorithm. The simulation set-up was the same as described in Section 2.5. Figure 3 shows the empirical root-mean-square error for different values of $s$ and the dimension $d$. We see that the choice of $s$ does not particularly depend on the dimension of the data. Furthermore, we see that choosing $s \ge 10$ produces the best results in terms of balancing convergence and estimation error. Given that larger $s$ theoretically gives a closer approximation to the integrated dual depth, we recommend choosing $s = 100$ in practice.

Appendix D. Extra plots from simulations

Figure 3: ERMSE of the non-private smoothed integrated dual depth median for different values of $s$ under Gaussian data (left) and contaminated Gaussian data (center). Convergence of one simulation run of the gradient descent algorithm in dimension 2 is also presented (right). It can be seen that choosing $s$ large gives the lowest error and best convergence, though any value $s \ge 10$ produces similar results.

Figure 4: Empirical root mean squared error (ERMSE) of the location estimators under Gaussian data. In the uncontaminated setting, the ERMSE of the private median grows at a slightly faster rate than that of its non-private counterpart and the mean estimators. This is due to the cost of privacy.

Figure 5: Empirical root mean squared error (ERMSE) of the location estimators under (left) Gaussian data and (right) contaminated Gaussian data when $\epsilon = 5$.

Figure 6: Empirical root mean squared error (ERMSE) of the location estimators under (left) Gaussian data and (right) contaminated Gaussian data when $\epsilon = 2$.
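The gradient descent algorithm discussed in Appendix C can be sketched as follows. This is our own minimal implementation, not the authors' code: directions $u_1, \dots, u_M$ are drawn once and held fixed, and we ascend the empirical smoothed depth $\frac{1}{M}\sum_m G_m(1 - G_m)$, whose gradient is $\frac{1}{M}\sum_m (1 - 2G_m)\,\nabla G_m$ with $\nabla G_m = \frac{s}{n}\sum_i \sigma'\big(s(x - X_i)^\top u_m\big)\,u_m$. The step size and iteration count are illustrative, not the tuned values from the paper.

```python
import numpy as np

def sigmoid(z):
    # Clipping avoids overflow in exp for large |z| (e.g., when s = 100).
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def sidd_median_gd(X, s=100.0, M=500, steps=300, lr=5.0, init=None, seed=None):
    """Gradient ascent on the empirical smoothed integrated dual depth.

    Our own sketch: u_1..u_M are drawn once and held fixed, then x moves
    along the exact gradient of (1/M) sum_m G_m (1 - G_m), where
    G_m = mean_i sigmoid(s (x - X_i)'u_m) and sigma' = sigma (1 - sigma).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.normal(size=(M, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    x = X.mean(axis=0) if init is None else np.asarray(init, dtype=float).copy()
    for _ in range(steps):
        Z = sigmoid(s * (x - X) @ U.T)       # (n, M): sigma(s (x - X_i)'u_m)
        G = Z.mean(axis=0)                   # (M,)
        dG = (s / n) * (Z * (1.0 - Z)).sum(axis=0)[:, None] * U   # (M, d)
        x = x + lr * ((1.0 - 2.0 * G)[:, None] * dG).mean(axis=0)
    return x
```

Starting from a point offset from the center of a symmetric sample, the iterates drift toward the region where each $G_m \approx 1/2$, i.e., toward the depth median.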
Differentially private multivariate medians Appendix E. Auxiliary lemmas Lemma 36 Suppose Un is an upper bound on GSn(φ). Setting β = ϵ/2Un implies θn is ϵ-differentially private. Proof From the definition of differential privacy, it is immediate that for any 0 < δ1 < δ2, we have that δ1-differential privacy implies δ2-differential privacy. Now, it remains to show that θn is δ-differentially private for some δ < ϵ. To see this, note that ϵ/2Un ϵ/2GSn(φ), which implies that there exists δ < ϵ such that ϵ/2Un = δ/2GSn(φ). This implies that θn is δ-differentially private and the result follows. Lemma 37 If φ satisfies Condition 3 then GSn(φ) K/n. Proof Definition 4 gives that sup Xn Dn d, µn f M(Xn) sup x |φ(x, ˆµn) φ(x, µn)| K sup Xn Dn d, µn f M(Xn) sup g F | Z gd(ˆµn µn)| K sup g F g /n = K/n. Lemma 38 Let D be the projection depth with location measure MED and scale measure MAD, i.e., D(x, µ) = 1 1 + O(x, µ), with O(x, µ) = sup u Sd 1 |x u MED(µu)|/ MAD(µu). If ϵ = O(1) and π is absolutely continuous, and π 1 {x ED,µ} dx, then GSn(D) = 1 and for a median drawn from the exponential mechanism based on D with β = ϵ/2GSn(D), θn, there exists t > 0 such that lim n Pr d(Eµ,φ, θn) t = 0. Proof The univariate sample median has infinite global sensitivity (Brunel and Avella Medina, 2020), which implies that the projection depth with location measure MED and scale measure MAD has sensitivity 1. It follows that β = ϵ/2. We first show that there exists t > 0 such that Qµ,ϵ/2(d(ED,µ, X) t) > 0. Since π is absolutely continuous and π 1 {x ED,µ} dx, there exists A and t > 0 such that infx A d(x, ED,µ) t > 0 and π(A) > 0. Since D(x, µ) 0, Qµ,ϵ/2(A) > 0, so Qµ,ϵ/2(d(Eµ,φ, X) t) > 0 as well. Next, we show that limn Pr d(ED,µ, θn) t = Qµ,ϵ/2(d(ED,µ, X) t). To this end, we have that Zuo (2003) gives that D(x, ˆµn) D(x, µ) µ a.s. as n . Thus, Ramsay, Jagannath and Chenouri d Qˆµn,ϵ/2/dπ d Qµ,ϵ/2/dπ µ a.s. as n . This implies that, µ a.s., Qˆµn,ϵ/2 Qµ,ϵ/2 in total variation as n . 
We have that this implies lim n Pr d(Eµ,φ, θn) t = lim n E(Qˆµn,ϵ/2(d(Eµ,φ, X) t)) = E( lim n Qˆµn,ϵ/2(d(Eµ,φ, X) t)) = Qµ,ϵ/2(d(Eµ,φ, X) t)) = 0, which completes the proof. Lemma 39 Suppose that D is non-negative and satisfies properties (2) and (4) in Definition 2. Then there exists a universal constant C > 0 such that GSn( b D(ˆµn)) C. Furthermore, if D satisfies Condition 3, then GSn( b D(ˆµn)) = O(1). Proof Consider µn to be centrally symmetric about X1. Property (2) and non-negativity imply that D(X1, µn) = C > 0. Let µn(Y1) be the empirical measure corresponding to Y1, . . . , Xn with Y1 = X1. Given that µn(Y1) f M(Xn), property (4) implies that GSn( b D(ˆµn)) sup Y1 Rd |D(Y1, µn(Y1)) D(X1, ˆµn)| D(X1, ˆµn) inf Y1 Rd D(Y1, µn(Y1)) C. Now, if D satisfies Condition 3, then Lemma 37 implies that GSn( b D(ˆµn))2 K2 n + sup ˆµn,y |D(y, ˆµn) D(X1, ˆµn)|2 K2 n + D = O(1). Lemma 40 If µ = N(θ0, Σ) then for all t > 0 and positive definite Σ, sup x =t HD(x, µ) = Φ( t/ λ1). Proof From the fact that HD is affine invariant, without loss of generality we can set θ0 = 0. From the fact that Φ is increasing, it holds that sup x =t HD(x, µ) = sup x =t inf u =1 Φ( x u = 1 inf v =1 sup u =1 Φ( tv u u Σu ) = 1 Φ(t inf v =1 sup u =1 It suffices to compute inf v =1 sup u =1 First, note that inf v =1 sup u =1 u Σu inf v =1 (v v)2 v Σv = 1/λ1. Differentially private multivariate medians Showing the reverse inequality, setting v = e1 and setting z = Σ1/2u gives that inf v =1 sup u =1 u Σu sup u =1 u Σu = sup Σ 1/2z =1 (e 1 Σ 1/2z)2 z z = sup Σ 1/2z =1 (e 1 z/ z )2/λ1 1 sup x =t HD(x, µ) = 1 Φ(t/ p λ1) Φ( t/ p Appendix F. Proofs of the results from Section 6 We now prove the various results presented in Section 6. Proof [Proof of Proposition 27] With the exception of the modified spatial depth, these results were shown by (Zuo and Serfling, 2000; Liu, 1988; Serfling, 2002; Cuevas and Fraiman, 2009; Ramsay et al., 2019), respectively. 
For the modified spatial depth, properties (1), (2) and (4) follow from the fact that properties (1), (2) and (4) for spatial depth. Proof [Proof of Proposition 32] We prove properties (1)-(4) in order. Let A Rd d be any orthogonal matrix and b Rd. For any integrable function h: Sd 1 R+ , it holds that Z Sd 1 h(u)dν = Z Sd 1 h(Au)dν. (F.1) It also holds that (Ax + b AX b) u = A (x X) u = A u (x X). Let Y Aµ + b. Applying the above identity implies that Eσ s(Ax + b Y ) u = Eσ s A u(x X) . The result follows from the definition of the smoothed integrated dual depth and (F.1). For property (2), by definition supx IDD(x, µ, s) 1/4. It remains to show IDD(θ0, µ, s) = 1/4. We will use the fact that (1 σ(x X)) = σ(X x). (F.2) Now, consider the integrand in the definition of smoothed integrated dual depth at any fixed u. Central symmetry of µ and (F.2) yield that Eσ s(θ0 X) u 1 Eσ s(θ0 X) u = Eσ s(X θ0) u 2 . Central symmetry of µ and (F.2) also yield that Eσ s(θ0 X) u 1 Eσ s(θ0 X) u = 1 Eσ s(X θ0) u 2 . Ramsay, Jagannath and Chenouri These two equalities yield that Eσ s(X θ0) u = 1/2. Given that this holds for any u Sd 1, it follows that IDD(θ0; µ, s) = 1/4 = supx IDD(x; µ, s). For property (3), at any fixed u, central symmetry of µ and the fact that σ is increasing yields that either Eσ s(x X) u Eσ s(pθ0 + (1 p)x X) u 1/2, 1/2 Eσ s(pθ0 + (1 p)x X) u Eσ s(x X) u , holds. The result follows immediately from the fact that f(x) = x(1 x) is monotone on both [0, 1/2] and [1/2, 1]. Lastly, we show property (4). For any fixed u, the fact that limx σ(x)σ( x) = 0 and the bounded convergence theorem imply that lim c IDD(cu; µ, s) = lim c Eσ s(cu X) u 1 Eσ s(cu X) u dν(u) Eσ s(cu X) u 1 Eσ s(cu X) u dν(u) Proof [Proof of Proposition 33] Let d IDD(x, ˆµn, s) IDD(x, ˆµn, s) . We first prove a concentration result for Z. 
To this end, let Gn(x, u, s) = 1 i=1 σ s(x Xi) u m=1 Gn(x, um, s) Z Sd 1 Gn(x, u, s)dν(u) m=1 Gn(x, um, s)2 Z Sd 1 Gn(x, u, s)2dν(u) From the fact that Z W1 + W2, we need only prove concentration results for W1 and W2. Let G = n σ(y u + b)/n: y Rd, b R o . Differentially private multivariate medians Note that VC(G ) d+2. Therefore, it follows from Theorem 7.12 in (Sen, 2022) that there exists a universal constant K > 0 such that for all ϵ (0, 1), it holds that sup Q N(ϵ/n, G , L2(Q)) K i=1 σ(y i u + bi): y1, . . . , yn Rd, , b1, . . . , bn R It follows from Lemma 7.21 in (Sen, 2022) that for all ϵ (0, 1), it holds that i=1 Gi, L2(Q) It follows from (Talagrand, 1994; Sen, 2022), that there exists a universal constant K > 0 such that for all t > 0, it holds that MW1 > t K t 2dn+4n e 2t2. However, note that |W1| 1/4, therefore, for t > M/4, it holds that Pr MW1 > t = 0. Therefore, it holds that with probability 1, and so, Pr (W1 > t) K M The same logic applied to W2 gives that there exists a universal constant K > 0 such that for all t > 0, it holds that Pr (W2 > t) K M Therefore, there exists universal constants K, K > 0 such that for all t > 0, it holds that Pr (Z > t) KM dn e K Mt2. Now, to prove the sample complexity bound, we want to find M such that KM dn e K Mt2 γ, Ramsay, Jagannath and Chenouri which is equivalent to the inequality dn log(M/nd) + K Mt2 log(1/γ) + dn log (K) . (F.3) First, we show that dn log(M/nd) K Mt2/2. Note that the function h(M) = K Mt2/2 dn log(M/nd) satisfies the conditions of Lemma 14. Applying Lemma 14 yields that h(M) 0 for K t2 log 2 K t2 e dn t e . (F.4) When (F.4) is satisfied, we have that (F.3) is satisfied for M 2log(1/γ) + dn log K K t2 . (F.5) Thus, for all t > 0 and all 0 < γ < 1, there exists a universal constant c1 > 0 such that d IDD(x, ˆµn, s) IDD(x, ˆµn, s) t, with probability 1 γ, provided that M c1 log(1/γ) dn log 1 We prove Theorem 30 with a series of Lemmas. 
Lemma 41 Condition 1 holds for each of the depth functions defined in Section 6. Proof Recall that all of the depth functions satisfy property (4) given in Definition 2. Property (4) implies that there exists a compact set E such that supx Rd D(x, µ) = supx E D(x, µ). It follows from boundedness of D(x, µ) and compactness of E that there exists a point in y E such that D(y, µ) = supx E D(x, µ) = supx Rd D(x, µ). Lemma 42 Suppose that π is absolutely continuous and that for all u Sd 1, the measure µu is absolutely continuous with density fu and sup u =1 fu = L < . Then there exists a universal constant C > 0 such that for all s (0, ], halfspace, simplicial, integrated rank-weighted and smoothed integrated dual depth satisfy Condition 4 with ω(r) = C L r . Proof We prove the result for each depth in turn. Halfspace depth: Consider two points x, y Rd. By assumption, F( , u, µ) is L-Lipschitz function a.e. . It follows that | HD(x, µ) HD(y; µ)| = | inf u F(x, u, µ) inf u F(y, u, µ)| sup u |F(x, u, µ) F(y, u, µ)| L x y a.e. . Differentially private multivariate medians Integrated rank-weighted depth: For integrated rank-weighted depth, note that absolute continuity of µu implies that F(x, u, µ) = F(x , u, µ), where one recalls that F(x , u, µ) = µ({X u < x u}). With this in mind, for any two points x, y Rd the reverse triangle inequality yields that | IRW(x, µ) IRW(y, µ)| Z Sd 1 2|F(x, u, µ) F(y, u, µ)|dν(u) 2L x y a.e. . Integrated dual depth: For any h : S [0, 1], h : S [0, 1], for any x S it holds that |h(x)(1 h(x)) h (x)(1 h (x))| 3|h(x) h (x)|. (F.6) Using this inequality, | IDD(x, µ, s) IDD(y, µ, s)| 3 sup u Eµσ(s(x X) u) Eµσ(s(y X) u) . (F.7) It remains to bound Eµσ(s(x X) u) Eµσ(s(y X) u) . To this end, first assume that s and let w, z R. 
Using the fact that σ is a 1/4-Lipschitz function, it holds that |Eµuσ(s(w X)) Eµuσ(s(z X))| = Z σ(s(w t)) σ(s(z t))fu(t)dt Z σ(v) σ(sz sw + v)fu( v/s w)dv |z w|/4 Z |fu( v/s w)| dv |z w| fu /4, where in the second line we use the substitution v = s(w t). The preceding inequality and (F.7) imply that | IDD(x, µ, s) IDD(y, µ, s)| 3 sup u Eµσ(s(x X) u) Eµσ(s(y X) u) Now assume that s = . It follows from absolute continuity of µu and (F.7) that | IDD(x, µ) IDD(y, µ)| 3 sup u |F(x, u, µ) F(y, u, µ)| 3L x y a.e. Simplicial depth: For simplicial depth, we must show that Pr(x (X1, . . . , Xd+1) is Lipschitz continuous. It is easy to begin with two dimensions. Consider Pr(x (X1, X2, X3)) Pr(y (X1, X2, X3)), as per (Liu, 1990), we need to show that Pr(X1X2 intersects xy) L x y . In order for this event to occur, we must have that X1 is above xy and X2 is below xy, but both are projected onto the line segment xy when projected onto the line running through xy. Affine invariance of simplicial depth implies we can assume, without Ramsay, Jagannath and Chenouri loss of generality, that x and y lie on the axis of the first coordinate. Let x1 and y1 be the first coordinates of x and y. Suppose that X11 is the first coordinate of X1. It then follows from a.e. Lipschitz continuity of F(x, u, µ) that Pr(X1X2 intersects xy) Pr(x1 < X11 < y1) L |x1 y1| L x y a.e. . In dimensions greater than two, a similar line of reasoning can be used. We can again assume, without loss of generality, that x and y lie on the axis of the first coordinate. It holds that Pr(x (X1, X2, X3)) Pr(y (X1, X2, X3)) d + 1 d where Ad is the event that the d 1-dimensional face of the random simplex, formed by d points randomly drawn from F, intersects the line segment xy. It is easy to see that Pr(Ad) Pr(x1 < X11 < y1) fu |x1 y1| L x y a.e. . This completes the proof. Lemma 43 Suppose that µ has a bounded density and that supy E y X 1 = L . 
Then the modified spatial depth satisfies Condition 4 with $\omega(r)=2\sqrt{d}\,L' r$ and spatial depth satisfies Condition 4 with $\omega(r)=2L' r$.

Proof Taking the derivative of MSD and using the assumption of a bounded density results in
\[ \frac{d\,\mathrm{MSD}(y,\mu)}{dy} = -2\, \mathrm{E}_\mu\left[ \frac{I_d}{\|y-X\|} - \frac{(y-X)(y-X)'}{\|y-X\|^3} \right] \mathrm{E}_\mu\left(\mathrm{SR}(y-X)\right), \]
so that
\[ \left\|\frac{d\,\mathrm{MSD}(y,\mu)}{dy}\right\| \le 2\sqrt{d}\, \sup_y \mathrm{E}_\mu\|y-X\|^{-1} = 2\sqrt{d}\,L'. \]
Thus, the map $y\mapsto \mathrm{MSD}(y,\mu)$ is $2\sqrt{d}\,L'$-Lipschitz. Define the spatial rank function $h(y,\mu)=\mathrm{E}_\mu(\mathrm{SR}(y-X))$. Proposition 4.1 of Koltchinskii (1997) says that if $\mu$ has a bounded density, then the map $y\mapsto h(y,\mu)$ is continuously differentiable in $\mathbb{R}^d$ with derivative
\[ \frac{d\,h(y,\mu)}{dy} = \mathrm{E}_\mu\left[ \frac{I_d}{\|y-X\|} - \frac{(y-X)(y-X)'}{\|y-X\|^3} \right], \qquad \left\| \frac{d\,h(y,\mu)}{dy} \right\| \le 2\sup_y \mathrm{E}_\mu\|y-X\|^{-1} = 2L', \]
where $\|\cdot\|$ denotes the operator norm. Thus, the map $y\mapsto h(y,\mu)$ is $2L'$-Lipschitz, which implies that the map $y\mapsto \mathrm{SD}(y,\mu)$ is also $2L'$-Lipschitz.

Lemma 44 Condition 3 holds for each of the depth functions defined in Section 6.

Proof We prove the result for each depth in turn.

Halfspace depth: Let $\mathcal{F} = \left\{ \mathbb{1}\{X'u \le y\} : u\in S^{d-1},\ y\in\mathbb{R} \right\}$ and recall that the set of subgraphs of $\mathcal{F}$ is the set of closed halfspaces in $\mathbb{R}^d$. Furthermore, it holds that
\[ \sup_x |\mathrm{HD}(x,\mu_1)-\mathrm{HD}(x,\mu_2)| \le \sup_{g\in\mathcal{F}} |\mathrm{E}_{\mu_1} g(X) - \mathrm{E}_{\mu_2} g(X)|. \]
Thus, it follows that HD is $(1,\mathcal{F})$-regular with $\mathrm{VC}(\mathcal{F}) = d+2$.

Integrated rank-weighted depth: First observe that
\[ \sup_x |\mathrm{IRW}(x,\mu_1)-\mathrm{IRW}(x,\mu_2)| \le 4 \sup_{u,x} \left[ |F(x,u,\mu_1)-F(x,u,\mu_2)| \vee |F(x^-,u,\mu_1)-F(x^-,u,\mu_2)| \right]. \]
Define
\[ \mathcal{G} = \left\{ g(X) = \mathbb{1}\{X'u < y\} : u\in S^{d-1},\ y\in\mathbb{R} \right\}. \]
We see that the integrated rank-weighted depth function is $(4, \mathcal{G}\cup\mathcal{F})$-regular. Furthermore, since the subgraphs of $\mathcal{G}$ are contained in those of $\mathcal{F}$, we have that $\mathrm{VC}(\mathcal{G}\cup\mathcal{F}) = d+2$.

Integrated dual depth: If $s=\infty$ then it follows immediately from (F.6) and the analysis of halfspace depth that the integrated dual depth is $(3,\mathcal{F})$-regular with $\mathrm{VC}(\mathcal{F})=d+2$. Suppose that $s<\infty$. It follows from (F.6) that
\[ \sup_x |\mathrm{IDD}(x,\mu_2,s)-\mathrm{IDD}(x,\mu_1,s)| \le 3 \sup_{x\in\mathbb{R}^d,\ u\in S^{d-1}} \left|\mathrm{E}_{\mu_1}\sigma(s(x-X)'u) - \mathrm{E}_{\mu_2}\sigma(s(x-X)'u)\right|. \]
We have that
\[ \sup_{x\in\mathbb{R}^d,\ u\in S^{d-1}} \left|\mathrm{E}_{\mu_1}\sigma(s(x-X)'u) - \mathrm{E}_{\mu_2}\sigma(s(x-X)'u)\right| \le \sup_{g\in\mathcal{F}'} |\mathrm{E}_{\mu_1} g(X) - \mathrm{E}_{\mu_2} g(X)|, \qquad \mathcal{F}' = \left\{ \sigma(a'X + b) : a\in\mathbb{R}^d,\ b\in\mathbb{R} \right\}. \]
The class of functions $\mathcal{F}'$ is constructed from a monotone function applied to a finite-dimensional vector space of measurable functions. It follows from Lemma 7.15 and Lemma 7.19 of Sen (2022) that $\mathrm{VC}(\mathcal{F}')=d+2$. Therefore, the smoothed integrated dual depth is $(3,\mathcal{F}')$-regular with $\mathrm{VC}(\mathcal{F}')=d+2$.

Simplicial depth: Dümbgen (1992) gives that
\[ \sup_x |\mathrm{SMD}(x,\mu_1)-\mathrm{SMD}(x,\mu_2)| \le (d+1) \sup_{A\in\mathscr{A}} |\mu_1(A)-\mu_2(A)|, \]
where $\mathscr{A}$ is the set of intersections of $d$ open half-spaces in $\mathbb{R}^d$. Define the set of Boolean functions $\mathcal{A} = \{ g(x)=\mathbb{1}\{x\in A\} : A\in\mathscr{A} \}$. Thus, simplicial depth is $(d+1,\mathcal{A})$-regular with $\mathrm{VC}(\mathcal{A}) = O(d^2\log d)$.

Spatial depth: Applying the reverse triangle inequality yields
\[ \left| \left\| \mathrm{E}_{\mu_1}\frac{x-X}{\|x-X\|} \right\| - \left\| \mathrm{E}_{\mu_2}\frac{x-X}{\|x-X\|} \right\| \right| \le \left\| \mathrm{E}_{\mu_1}\frac{x-X}{\|x-X\|} - \mathrm{E}_{\mu_2}\frac{x-X}{\|x-X\|} \right\| \le \sqrt{d}\, \sup_{x,\ 1\le j\le d} \left| \mathrm{E}_{\mu_1}\frac{x_j-X_j}{\|x-X\|} - \mathrm{E}_{\mu_2}\frac{x_j-X_j}{\|x-X\|} \right|. \]
It remains to compute the VC-dimension of
\[ \mathcal{G}' = \left\{ g(X) = \frac{x_j-X_j}{\|x-X\|} : x\in\mathbb{R}^d,\ j\in[d] \right\}. \]
Now, let $\mathcal{E}$ be the set of standard basis vectors for $\mathbb{R}^d$. We can then write
\[ \mathcal{G}' = \left\{ g(X) = \frac{e'(x-X)}{\|x-X\|} : x\in\mathbb{R}^d,\ e\in\mathcal{E} \right\}. \]
It is now easy to see that
\[ \mathcal{G}' \subseteq \left\{ g(X) = \frac{y'(x-X)}{\|x-X\|} : x\in\mathbb{R}^d,\ y\in\mathbb{R}^d \right\}. \]
It follows that $\mathrm{VC}(\mathcal{G}') \le O(d)$ and that spatial depth is $(\sqrt{d},\mathcal{G}')$-regular.

Modified spatial depth: We may write
\[ \sup_x |\mathrm{MSD}(x,\mu_1)-\mathrm{MSD}(x,\mu_2)| \le \sup_x \left\| \mathrm{E}_{\mu_1}\mathrm{SR}(x-X) - \mathrm{E}_{\mu_2}\mathrm{SR}(x-X) \right\| = \sup_x \sup_{u\in S^{d-1}} \left| \mathrm{E}_{\mu_1}\frac{u'(x-X)}{\|x-X\|} - \mathrm{E}_{\mu_2}\frac{u'(x-X)}{\|x-X\|} \right|. \]
It remains to compute the VC-dimension of
\[ \mathcal{G}'' = \left\{ g(X) = \frac{u'(x-X)}{\|x-X\|} : x\in\mathbb{R}^d,\ u\in S^{d-1} \right\}. \]
It is now easy to see that
\[ \mathcal{G}'' \subseteq \left\{ g(X) = y'X + b : b\in\mathbb{R},\ y\in\mathbb{R}^d \right\}. \]
It follows that $\mathrm{VC}(\mathcal{G}'') \le d+2$ and that the modified spatial depth is $(1,\mathcal{G}'')$-regular.

Proof [Proof of Theorem 30] The result follows from Lemmas 41-44.

Proof [Proof of Corollary 34] Together, Lemma 37 and the definition of the Laplace mechanism imply that $\widetilde{D}(x,\hat\mu_n)$ is $\epsilon$-differentially private.
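The Laplace-mechanism release analyzed in Corollary 34 can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the depth value $D(x,\hat\mu_n)$ is approximated by a Monte Carlo version of halfspace depth (minimizing the empirical directional CDF over random directions), and the released value is $\widetilde{D}(x,\hat\mu_n) = D(x,\hat\mu_n) + \tfrac{K}{n\epsilon}W_1$ with $W_1$ standard Laplace. The function names, the random-direction approximation, and the choice $K=1$ for the sensitivity constant of Lemma 37 are all illustrative assumptions.

```python
import numpy as np

def approx_halfspace_depth(x, data, n_dirs=500, rng=None):
    """Monte Carlo approximation of halfspace (Tukey) depth at x:
    minimum, over random unit directions u, of the fraction of
    sample points X with u'X <= u'x. (Assumed approximation scheme,
    not the exact computation used in the paper.)"""
    rng = np.random.default_rng(rng)
    d = data.shape[1]
    u = rng.standard_normal((n_dirs, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj_data = data @ u.T            # shape (n, n_dirs)
    proj_x = x @ u.T                  # shape (n_dirs,)
    frac = (proj_data <= proj_x).mean(axis=0)
    return frac.min()

def private_depth_value(x, data, epsilon, K=1.0, rng=None):
    """Laplace-mechanism release of a single depth value:
    D~(x, mu_n) = D(x, mu_n) + (K / (n * epsilon)) * W_1,
    with W_1 a standard Laplace variable; K plays the role of the
    sensitivity constant from Lemma 37 (here an assumed placeholder)."""
    rng = np.random.default_rng(rng)
    n = data.shape[0]
    depth = approx_halfspace_depth(x, data, rng=rng)
    noise = rng.laplace(loc=0.0, scale=K / (n * epsilon))
    return depth + noise
```

Because the added noise has scale $K/(n\epsilon)$, the privacy cost vanishes at rate $1/(n\epsilon)$, which is the second term in the displayed tail bound.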
Furthermore, the tail properties of the Laplace distribution give that
\[ \Pr\left( \left|\widetilde{D}(x,\hat\mu_n) - D(x,\mu)\right| > t \right) \le \Pr\left( |D(x,\hat\mu_n)-D(x,\mu)| > t/2 \right) + \Pr\left( \frac{K}{n\epsilon}|W_1| > t/2 \right) \le \Pr\left( |D(x,\hat\mu_n)-D(x,\mu)| > t/2 \right) + e^{-n\epsilon t/(2K)}. \]
Applying (3.4) yields
\[ \Pr\left( \left|\widetilde{D}(x,\hat\mu_n)-D(x,\mu)\right| > t \right) \le e^{\mathrm{VC}(\mathcal{F})\log\left( c_1 n / \mathrm{VC}(\mathcal{F}) \right) - c_2 n t^2} + e^{-n\epsilon t/(2K)}. \]

References

John Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Micah Heineck, Christine Heiss, Robert Johns, Daniel Kifer, Philip Leclerc, Ashwin Machanavajjhala, Brett Moran, William Sexton, Matthew Spence, and Pavel Zhuravlev. The 2020 census disclosure avoidance system TopDown algorithm. Harvard Data Science Review, (Special Issue 2), 6 2022. https://hdsr.mitpress.mit.edu/pub/7evz361i.

Peyman Afshani, Donald R. Sheehy, and Yannik Stein. Approximating the simplicial depth in high dimensions. In European Workshop on Computational Geometry, 2016.

Hilal Asi, Jonathan Ullman, and Lydia Zakynthinou. From robustness to privacy and back. In International Conference on Machine Learning, 2023.

Marco Avella-Medina. The role of robust statistics in private data analysis. CHANCE, 33(4):37-42, 2020. doi: 10.1080/09332480.2020.1847958.

Marco Avella-Medina and Victor-Emmanuel Brunel. Differentially private sub-Gaussian location estimators. arXiv e-prints, 2019. arXiv:1906.11923.

Jordan Awan and Aleksandra Slavković. Structure and sensitivity in differential privacy: Comparing k-norm mechanisms. Journal of the American Statistical Association, 116(534):935-954, 2021. doi: 10.1080/01621459.2020.1773831.

Amos Beimel, Shay Moran, Kobbi Nissim, and Uri Stemmer. Private center points and learning of halfspaces. In Conference on Learning Theory, volume 99, pages 269-282. PMLR, 6 2019.

Omri Ben-Eliezer, Dan Mikulincer, and Ilias Zadik. Archimedes meets privacy: On privately estimating quantiles in high dimensions under minimal assumptions. In Advances in Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.
Sourav Biswas, Yihe Dong, Gautam Kamath, and Jonathan Ullman. CoinPress: Practical private mean and covariance estimation. Advances in Neural Information Processing Systems, 33:14475-14485, 2020.

Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman, and Lydia Zakynthinou. Covariance-aware private mean estimation without private covariance estimation. Advances in Neural Information Processing Systems, 34:7950-7964, 2021.

Victor-Emmanuel Brunel and Marco Avella-Medina. Propose, test, release: Differentially private estimation with high probability. arXiv e-prints, 2020. arXiv:2002.08774.

Mark Bun and Thomas Steinke. Average-case averages: Private algorithms for smooth sensitivity and mean estimation. Advances in Neural Information Processing Systems, 32, 2019.

Mark Bun, Gautam Kamath, Thomas Steinke, and Steven Z Wu. Private hypothesis selection. Advances in Neural Information Processing Systems, 32, 2019.

Tony Cai, Yichen Wang, and Linjun Zhang. The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. The Annals of Statistics, 49(5):2825-2850, 2021. doi: 10.1214/21-AOS2058.

Kamalika Chaudhuri and Daniel Hsu. Convergence rates for differentially private statistical estimation. In International Conference on Machine Learning, volume 2012, page 1327, 2012.

Probal Chaudhuri. On a geometric notion of quantiles for multivariate data. Journal of the American Statistical Association, 91:862-872, 1996.

Mengjie Chen, Chao Gao, and Zhao Ren. Robust covariance and scatter matrix estimation under Huber's contamination model. The Annals of Statistics, 46(5):1932-1960, 2018.

Zhiqiang Chen and David E. Tyler. The influence function and maximum bias of Tukey's median. Annals of Statistics, 30(6):1737-1759, 2002. doi: 10.1214/aos/1043351255.

Shojaeddin Chenouri, Christopher G. Small, and Thomas J. Farrar. Data depth-based nonparametric scale tests. Canadian Journal of Statistics, 39(2):356-369, 2011. doi: 10.1002/cjs.10099.
Antonio Cuevas and Ricardo Fraiman. On depth measures and dual statistics. A methodology for dealing with general data. Journal of Multivariate Analysis, 100(4):753-766, 2009.

Differential Privacy Team. Learning with privacy at scale. Apple, 2017.

David L Donoho. Breakdown properties of multivariate location estimators. Technical report, Harvard University, 1982.

Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In ACM Symposium on Theory of Computing - STOC '09, page 371, 2009.

Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014. ISSN 1551-305X. doi: 10.1561/0400000042.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265-284, 2006.

Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. Exposed! A survey of attacks on private data. Annual Review of Statistics and Its Application, 4(1):61-84, 4 2017. doi: 10.1146/annurev-statistics-060116-054123.

Rainer Dyckerhoff and Pavlo Mozharovskyi. Exact computation of the halfspace depth. Computational Statistics & Data Analysis, 98:19-30, 2016.

Lutz Dümbgen. Limit theorems for the simplicial depth. Statistics & Probability Letters, 14(2):119-128, 1992. doi: 10.1016/0167-7152(92)90075-G.

Rina Foygel Barber and John C. Duchi. Privacy and statistical risk: Formalisms and minimax bounds. arXiv e-prints, 2014. arXiv:1412.4451.

Ricardo Fraiman and Graciela Muniz. Trimmed means for functional data. Test, 10(2):419-440, 12 2001. ISSN 1133-0686. doi: 10.1007/BF02595706.

Yue Gao and Or Sheffet. Private approximations of a convex hull in low dimensions. arXiv e-prints, 2020. arXiv:2007.08110.

Kristian Georgiev and Samuel Hopkins. Privacy induces robustness: Information-computation gaps and sparse mean estimation.
Advances in Neural Information Processing Systems, 35:6829-6842, 2022.

Miguel Guevara. How we're helping developers with differential privacy. Technical report, Google, 2021.

Samuel B Hopkins, Gautam Kamath, and Mahbod Majid. Efficient mean estimation with pure differential privacy via a sum-of-squares exponential mechanism. In ACM SIGACT Symposium on Theory of Computing, pages 1406-1417, 2022.

Samuel B Hopkins, Gautam Kamath, Mahbod Majid, and Shyam Narayanan. Robustness implies privacy in statistical estimation. In ACM Symposium on Theory of Computing, pages 497-506, 2023.

Gautam Kamath, Jerry Li, Vikrant Singhal, and Jonathan Ullman. Privately learning high-dimensional distributions. In Conference on Learning Theory, pages 1853-1902. PMLR, 2019.

Gautam Kamath, Vikrant Singhal, and Jonathan Ullman. Private mean estimation of heavy-tailed distributions. In Conference on Learning Theory, pages 2204-2235. PMLR, 2020.

Vishesh Karwa and Salil Vadhan. Finite sample differentially private confidence intervals. arXiv e-prints, 2017. arXiv:1711.03908.

Vladimir Koltchinskii. M-estimation, convexity and quantiles. The Annals of Statistics, 25:435-477, 1997.

Michael R Kosorok. Introduction to Empirical Processes and Semiparametric Inference. Springer, 2008.

Clément Sébastien Lalanne, Clément Gastaud, Nicolas Grislain, Aurélien Garivier, and Rémi Gribonval. Private quantiles estimation in the presence of atoms, 2023.

Jun Li and Regina Y. Liu. New nonparametric tests of multivariate locations and scales using data depth. Statistical Science, 19(4):686-696, 11 2004. doi: 10.1214/088342304000000594.

Regina Y. Liu. On a notion of simplicial depth. Proceedings of the National Academy of Sciences, 85(6):1732-1734, 1988. doi: 10.1073/pnas.85.6.1732.

Regina Y. Liu. On a notion of data depth based on random simplices. Annals of Statistics, 18(1):405-414, 3 1990. doi: 10.1214/aos/1176347507.

Regina Y Liu, Robert Joseph Serfling, and Diane L Souvaine.
Data Depth: Robust Multivariate Analysis, Computational Geometry, and Applications, volume 72. American Mathematical Society, 2006.

Xiyang Liu, Weihao Kong, Sham Kakade, and Sewoong Oh. Robust and differentially private mean estimation. Advances in Neural Information Processing Systems, 34:3887-3901, 2021.

Xiyang Liu, Weihao Kong, and Sewoong Oh. Differential privacy and robust statistics in high dimensions. In Conference on Learning Theory, pages 1167-1246. PMLR, 2022.

Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In IEEE Symposium on Foundations of Computer Science, pages 94-103. IEEE, 10 2007. doi: 10.1109/FOCS.2007.4389483.

Karl Mosler. Data depth, pages 105-131. Springer New York, New York, NY, 2002. ISBN 978-1-4613-0045-8. doi: 10.1007/978-1-4613-0045-8_4.

Stanislav Nagy. Halfspace depth does not characterize probability distributions. Statistical Papers, 62(3):1135-1139, 2021.

Kelly Ramsay and Dylan Spicker. Differentially private projection-depth-based medians. Electronic Journal of Statistics, 19(2):5457-5506, 2025. ISSN 1935-7524. doi: 10.1214/25-EJS2464.

Kelly Ramsay, Stéphane Durocher, and Alexandre Leblanc. Integrated rank-weighted depth. Journal of Multivariate Analysis, 173:51-69, 2019. doi: 10.1016/j.jmva.2019.02.001.

Kelly Ramsay, Aukosh Jagannath, and Shojaeddin Chenouri. An elementary concentration bound for Gibbs measures arising in statistical learning theory. Transactions on Machine Learning Research, 2025. ISSN 2835-8856. URL https://openreview.net/forum?id=ZInwrlkQ3f.

Jeremy Seeman, Matthew Reimherr, and Aleksandra Slavković. Exact privacy guarantees for Markov chain implementations of the exponential mechanism with artificial atoms. Advances in Neural Information Processing Systems, 34:13125-13136, 2021.

Bodhisattva Sen. A gentle introduction to empirical process theory and applications. Lecture Notes, Columbia University, 2022.

Robert Serfling.
A depth function and a scale curve based on spatial quantiles. In Statistical Data Analysis Based on the L1-Norm and Related Methods, pages 25-38. Birkhäuser Basel, 2002.

Robert Serfling. Depth functions on general data spaces, II. Formulation and maximality, with consideration of the Tukey, projection, spatial, and contour depths. Technical report, Serfling & Thompson Statistical Consulting, 2019.

Robert J Serfling. Depth functions in nonparametric multivariate inference. In Data Depth: Robust Multivariate Analysis, Computational Geometry, and Applications, pages 1-16, 2006.

Christopher G. Small. A survey of multidimensional medians. International Statistical Review / Revue Internationale de Statistique, 58(3):263, 12 1990. doi: 10.2307/1403809.

Werner Stahel. Breakdown of covariance estimators. Technical report, Eidgenössische Technische Hochschule, 1981.

Anja Struyf and Peter J Rousseeuw. Halfspace depth and regression depth characterize the empirical distribution. Journal of Multivariate Analysis, 69:135-153, 1999.

Michel Talagrand. Sharper bounds for Gaussian and empirical processes. The Annals of Probability, 22(1):28-76, 1994. ISSN 0091-1798.

John W Tukey. Mathematics and the picturing of data. In International Congress of Mathematicians, 1974.

Christos Tzamos, Emmanouil-Vasileios Vlatakis-Gkaragkounis, and Ilias Zadik. Optimal private median estimation under minimal distributional assumptions. Advances in Neural Information Processing Systems, 33:3301-3311, 2020.

Yehuda Vardi and Cun-Hui Zhang. The multivariate L1-median and associated data depth. Proceedings of the National Academy of Sciences, 97:1423-1426, 3 2000. doi: 10.1073/pnas.97.4.1423.

Banghua Zhu, Jiantao Jiao, and Jacob Steinhardt. When does the Tukey median work? In 2020 IEEE International Symposium on Information Theory, pages 1201-1206, 2020. doi: 10.1109/ISIT44484.2020.9173995.

Yijun Zuo. Projection-based depth functions and associated medians.
The Annals of Statistics, 31(5):1460-1490, 10 2003. doi: 10.1214/aos/1065705115.

Yijun Zuo. Influence function and maximum bias of projection depth based estimators. Annals of Statistics, 32(1):189-218, 2004.

Yijun Zuo and Robert Serfling. General notions of statistical depth function. Annals of Statistics, 28(2):461-482, 2000.