Published as a conference paper at ICLR 2025

DIFFERENTIALLY PRIVATE MECHANISM DESIGN VIA QUANTILE ESTIMATION

Yuanyuan Yang (University of Washington, yyangh@cs.washington.edu), Tao Xiao (Shanghai Jiao Tong University, xt_1992@sjtu.edu.cn), Bhuvesh Kumar (Snap Inc., bhuvesh@snap.com), Jamie Morgenstern (University of Washington, jamiemmt@cs.washington.edu)

ABSTRACT

We investigate the problem of designing differentially private (DP), revenue-maximizing single-item auctions. Specifically, we consider broadly applicable settings in mechanism design where agents' valuation distributions are independent, non-identical, and either bounded or unbounded. Our goal is to design such auctions with pure, i.e., (ϵ, 0), privacy in polynomial time. In this paper, we propose two computationally efficient auction learning frameworks that achieve pure privacy under bounded and unbounded distribution settings, respectively. These frameworks reduce the problem of privately releasing a revenue-maximizing auction to the private estimation of pre-specified quantiles. Our solutions increase the running time by only polylog factors compared to the non-private version. As an application, we show how to extend our results to the multi-round online auction setting with non-myopic bidders. To the best of our knowledge, this paper is the first to efficiently deliver a Myerson auction with pure privacy and near-optimal revenue, and the first to provide such auctions for unbounded distributions.

1 INTRODUCTION

Though prior-dependent auctions, which adjust parameters based on samples of value distributions, often yield better revenue than prior-independent auctions, they risk leaking information about the bids they were trained upon.
To address this issue, differential privacy (DP) offers a promising solution (Dwork, 2006; 2008; McSherry and Talwar, 2007; Pai and Roth, 2013), ensuring that a single data point minimally affects the algorithm's output, thus preventing inference of any specific data point.

We study the problem of learning a single-item auction with near-optimal revenue from samples of independent and non-identical value distributions. In this context, the optimal auction (i.e., Myerson's auction (Myerson, 1981)), which relies on the value distributions (i.e., is prior-dependent), achieves optimal revenue. However, releasing the learned Myerson auction raises privacy concerns, as the output mechanism may inadvertently reveal sensitive information about the distributions. To provably mitigate this risk, our goal is to integrate pure DP into the learning process of such auctions.

Pure Differential Privacy. Given two datasets D, D′ that differ in one data point, we say an algorithm A satisfies (ϵ, δ)-approximate DP if for any given output s: Pr[A(D) = s] ≤ e^ϵ Pr[A(D′) = s] + δ. We say A satisfies pure DP if δ = 0. Pure DP allows no slack in privacy protection, and hence is more challenging to achieve than approximate DP. Previous attempts (McSherry and Talwar, 2007; Nissim et al., 2012) to integrate DP with prior-dependent auctions are computationally inefficient or guarantee approximate rather than pure DP. To our knowledge, no algorithm guarantees pure DP for Myerson's auction in polynomial time.

Efficiency. Incorporating DP into the mechanism often sacrifices efficiency, as achieving privacy guarantees typically incurs additional computational overhead (e.g., random noise addition or extra sampling procedures). In our context, to achieve pure DP, implementing the exponential mechanism over all possible mechanisms would incur exponential time (see Appendix D).

* Work done at Georgia Institute of Technology.
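To make the running-time concern concrete, the following is a minimal sketch (not the paper's algorithm) of the exponential mechanism selecting one candidate from a finite set with pure (ϵ, 0)-DP. Its cost is linear in the number of candidates, which is why running it over all possible mechanisms is intractable, while running it over a small grid of quantiles is not. The utility function below is a hypothetical stand-in for illustration only.

```python
import math
import random

def exponential_mechanism(candidates, utility, eps, sensitivity=1.0):
    """Sample a candidate with prob. proportional to exp(eps * u / (2 * sensitivity)).

    This is (eps, 0)-DP when `utility` has the stated sensitivity; the cost is
    one utility evaluation per candidate, i.e., linear in the candidate set.
    """
    weights = [math.exp(eps * utility(c) / (2 * sensitivity)) for c in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]
    r, acc = random.random(), 0.0
    for c, p in zip(candidates, probs):
        acc += p
        if r <= acc:
            return c, probs
    return candidates[-1], probs

# Hypothetical utility: prefer candidates close to 0.5 (purely illustrative).
cands = [0.1, 0.3, 0.5, 0.7, 0.9]
u = lambda c: -abs(c - 0.5)
_, probs = exponential_mechanism(cands, u, eps=2.0)
```

The sampling distribution is deterministic given the utilities; only the drawn candidate is random.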
To obtain pure DP more efficiently, we apply recent advances (Durfee, 2023; Kaplan et al., 2022) in private quantile estimation. Our algorithm's running time increases by only polylog factors compared to the non-private version.

Notation. We use M_A to denote the optimal mechanism for distribution A, and we use Rev(M, A) to denote the revenue of deploying mechanism M on distribution A. We restrict ourselves to single-item auctions; hence M_A denotes the Myerson auction fitted on distribution A, and we denote OPT(A) := Rev(M_A, A) as the optimal revenue one could obtain from a distribution A. We use 1_k to denote a k-dimensional vector with all entries equal to 1. We use Õ and Θ̃ to hide polylog factors.

1.1 RESULTS

Formally, we define the problem of learning a near-optimal auction with pure DP:

Problem 1.1 (Optimal Auction with (ϵ_p, 0)-DP). Given n samples of a k-dimensional distribution D, the goal is to learn a single-item auction M with (ϵ_p, 0)-DP whose expected revenue on D is close to the optimal revenue, i.e., with probability 1 − δ₁, |E[Rev(M, D)] − OPT(D)| ≤ ϵ for some small ϵ.

Insight. To address this problem, we leverage the insight that the expected optimal revenue of a value distribution is insensitive to small statistical shifts and to discretization in the quantile and value spaces. Additionally, we observe that the accuracy of the points returned by private quantile estimation (QE), assuming the data points follow a distribution, directly corresponds to the statistical distance between the distribution formed by the returned points and the true distribution. Thus, we can reduce privately fitting a Myerson auction from samples to the private estimation of pre-specified quantiles.

Achieving pure DP while maintaining meaningful revenue guarantees is challenging. A crucial aspect is to ensure that the values (hence the distribution) returned by DP quantile estimation (QE) possess meaningful and provable accuracy guarantees. To obtain such accuracy, our algorithm (Alg.
1) first additively discretizes the empirical distribution in the value space into a distribution D̂_ϵ, then estimates the pre-specified quantiles with DPQE. We improve the accuracy bound of DPQE (DPQUANT, Kaplan et al. (2022)) to accommodate cases with duplicate values. This improved bound allows us to upper bound the statistical distance between the output distribution and D̂_ϵ, thus upper bounding the revenue loss incurred from fitting a Myerson auction on the output distribution. Theorem 1.2 briefly presents the near-optimal revenue of our proposed mechanism. The final privacy parameter has a dependency on k since the output of mechanism M is of dimension 2k. We present complete details in Section 3 and the complete theorem statements in Theorems 3.2 and 3.3.

Theorem 1.2 (Revenue Guarantee of Private Myerson, Bounded). Given n = Θ̃(ϵ⁻²) samples V̂ of the joint distribution D over [0, h]^k, there exists a mechanism M that is 2kϵ_p-differentially private, runs in Θ̃(kn) time, and takes Θ̃(1) passes over the data. With probability 1 − δ, this mechanism M satisfies: |E[Rev(M, D)] − OPT(D)| ≤ Õ((ϵ + ϵ²/ϵ_p)kh).

The prior algorithm does not work for unbounded distributions. Our second algorithm (Alg. 9) addresses the case of η-strongly regular value distributions by efficiently truncating them to bounded distributions with small expected revenue loss. This approach enables the application of our previous mechanism (Alg. 1), designed for the bounded case. Since the truncation point is a function of the optimal revenue, we develop Alg. 7 to approximate this point by achieving a Θ̃(k)-approximation of the optimal revenue, where k denotes the dimension of the product distribution. Theorem 1.3 outlines the accuracy of our proposed mechanism for certain parameter settings. Since the truncation point depends adaptively on the desired accuracy, the revenue gap exceeds that of the bounded case, and the tradeoff between privacy and revenue is more pronounced.
We present more details in Section 4, and the complete theorem statements are in Theorems 4.1 and I.13.

Theorem 1.3 (Revenue Guarantee of Private Myerson, Unbounded). Given n = Θ̃(ϵ⁻²) samples V̂ of an η-strongly regular joint distribution D over R^k, there exists a mechanism M for unbounded distributions that is 2kϵ_p-differentially private, runs in Θ̃(kn) time, and takes O(n) passes. With probability 1 − δ, this mechanism M satisfies: |E[Rev(M, D)] − Rev(M_D, D)| ≤ Õ(k²√ϵ + k²ϵ^1.5/ϵ_p).

¹This failure probability δ is inevitable due to the inherent uncertainty in learning from a finite sample set; see Chapter 1 of Kearns and Vazirani (1994).

Application: Online auctions with non-myopic bidders. We now describe how our mechanisms incentivize truthful bidding from non-myopic bidders in practical online auction settings.² In the online setting, auctions are deployed iteratively, and later auctions are informed by previous bids. Since future auctions can be affected by earlier bids, non-myopic bidders may strategically bid in earlier rounds to increase their winning chances and/or secure lower prices, increasing their utility. To prevent such strategic bidding, we integrate our previous solutions (Alg. 1, Alg. 9) with a commitment mechanism. Our DP Myerson mechanism naturally upper bounds the utility gain (in future rounds) by definition, in that changing one bid affects the outcome's probability by at most a factor governed by the privacy parameter ϵ_p.

Our algorithm operates in two stages. In the first stage, it employs a commitment mechanism that penalizes strategic bids. In the second stage, the algorithm fits a DP Myerson auction from the collected bids and generates revenue in the remaining rounds. This approach ensures that strategic bids lie only in a small neighborhood of the true value; otherwise, the bidder's utility becomes negative. We present the regret (i.e., the time-averaged revenue of the proposed mechanism compared to the optimal one) of our proposed mechanism (Alg.
3) in Theorem 1.4, which shows the accuracy of our algorithm in terms of regret. We refer readers to Section 5 and Theorem 5.4 for further details.

Theorem 1.4 (Revenue Guarantee of Online Mechanism). Given ϵ ∈ [0, 1/4], under the online auction setting described in Section 5.1, there exists an algorithm (Alg. 3) run with parameter T = Θ̃(ϵ⁻²) that, with probability 1 − δ, achieves diminishing regret, i.e., REGRET = Õ[(ϵ + √(ηϵ))kh], where η is a constant specific to the bidders' utility model.

1.2 PRIOR WORK

DP Mechanism Design. Beginning with McSherry and Talwar (2007), there has been interest in delivering mechanisms with DP guarantees (Nissim et al., 2012; Huang et al., 2018a; Zhang and Zhong, 2022; Huh and Kandasamy, 2024). These mechanisms are either no longer optimal in our setting or do not generalize to the unbounded distribution setting.

Online Learning in Repeated Auctions. For the single-item online auction setting, Kanoria and Nazerzadeh (2014); Huang et al. (2018a) established near-optimal solutions when bidders' utility is discounted and valuations are i.i.d. Deng et al. (2020); Abernethy et al. (2019) introduced specific incentive metrics to quantify bidders' willingness to bid other than their true values and developed mechanisms that minimize incentives for strategic bidding under these metrics in large markets. For a detailed list of related work, please see Appendix C.

1.3 CONTRIBUTIONS

Revenue-Maximizing Auctions with Pure Privacy Guarantees. Our work is the first to develop a mechanism with pure DP that obtains near-optimal revenue for single-item auctions with independent and non-identical bidders, for both bounded and unbounded η-strongly regular distributions. For bounded distributions, our mechanism achieves optimal time complexity within polylog factors.

Application to the Online Auction Setting. We apply our mechanism to the online auction setting with non-myopic, independent and non-identical bidders.
Combined with our designed commitment strategy, the integrated solution restricts bids to a small neighborhood around the corresponding value. Consequently, these approximately truthful bids enable our solution to generate a revenue guarantee that converges to the optimal revenue over time, for time-discounted or large-market bidders. We generalize the i.i.d. bidder setting in Huang et al. (2018a) and solve the open problem they proposed.

Extended Analysis of Private Quantile Algorithms. We extend the analysis of the quantile estimation oracles employed in this paper. For quantile estimation on bounded datasets (Kaplan et al., 2022), the original paper assumes that all data points are distinct and derives accuracy bounds dependent on the dataset's range. We generalize their analysis to accommodate cases where multiple data points may share identical values. Additionally, for quantile estimation on unbounded distributions (Durfee, 2023), we provide theoretical accuracy guarantees, complementing that paper's focus on empirical performance.

²Recognizable non-i.i.d. value distributions are common; e.g., the Meta Ad platform (met) requires that each advertiser select one of six objectives, corresponding to different distributions based on the industry or ad topic.

2 PRELIMINARIES

In this section, we outline the preliminaries on mechanism design, differential privacy, and quantile estimation. Additional information can be found in Appendix E.

2.1 MECHANISM DESIGN BASICS

We now formally define the allocation rule and payment rule of a single-item auction.

Definition 2.1 (Allocation Rule and Payment Rule). Given k bidders with bids b := (b₁, . . . , b_k), a single-item auction M consists of an allocation rule x(b) := (x₁(b), . . . , x_k(b)) ∈ [0, 1]^k and a payment rule p(b) := (p₁(b), . . . , p_k(b)) ∈ [0, 1]^k, where x_j denotes the probability that the j-th bidder gets the item, and p_j denotes her payment.
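As a concrete instance of Definition 2.1 (an illustrative sketch, not the paper's mechanism), the second-price auction allocates to the highest bidder and charges the second-highest bid; bids here are scaled into [0, 1] to match the definition:

```python
def second_price_auction(bids):
    """Allocation rule x(b) and payment rule p(b) of a second-price auction.

    Returns (x, p): x[j] is bidder j's winning probability (0 or 1 here),
    p[j] is bidder j's payment (the second-highest bid for the winner, else 0).
    """
    k = len(bids)
    winner = max(range(k), key=lambda j: bids[j])
    second = max(b for j, b in enumerate(bids) if j != winner)
    x = [1.0 if j == winner else 0.0 for j in range(k)]
    p = [second if j == winner else 0.0 for j in range(k)]
    return x, p

x, p = second_price_auction([0.3, 0.8, 0.5])
```

Here bidder 1 (bid 0.8) wins and pays the second-highest bid, 0.5.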
Under truthful sample access, Myerson's auction maximizes the expected revenue.

Definition 2.2 (Myerson's Single-Item Auction (Myerson, 1981)). For a discrete product distribution D = D₁ × . . . × D_k (Elkind, 2007), the virtual value for D_j at value v_i^j with support V_j = {v_1^j, . . . , v_n^j} is ϕ_j(v_i^j) = v_i^j − (v_{i+1}^j − v_i^j) · (1 − F_j(v_i^j)) / f_j(v_i^j), where the v_i^j are ordered increasingly in i, f_j(v_i^j) = P[v^j = v_i^j], and F_j(v_i^j) = Σ_{ℓ=1}^{i} f_j(v_ℓ^j). We say the product distribution D is η-strongly regular if for all j, ϕ_j(v_i) − ϕ_j(v_ℓ) ≥ η(v_i − v_ℓ) for every v_i > v_ℓ ∈ V_j and some η > 0. For distributions D with nondecreasing virtual values, Myerson's allocation rule is x_i(v_i) = 1{ϕ_i(v_i) ≥ max(0, max_{j≠i} ϕ_j(v_j))}, where 1{·} denotes the indicator function. The payment rule is p_i(v_i) = 1{ϕ_i(v_i) ≥ max(0, max_{j≠i} ϕ_j(v_j))} · ϕ_i^{−1}(max(0, max_{j≠i} ϕ_j(v_j))).³

2.2 DIFFERENTIAL PRIVACY BASICS

We present the definitions of pure DP and approximate DP below.

Definition 2.3 (Differential Privacy). An algorithm A : R₊ⁿ → R is (ϵ, δ)-approximate DP if for neighboring datasets V, V′ ∈ R₊ⁿ that differ in only one data point, and any possible output O, we have: Pr[A(V) = O] ≤ exp(ϵ) Pr[A(V′) = O] + δ. We say A satisfies pure DP if δ = 0.

A key property we leverage from differential privacy is its immunity to post-processing. Post-processing refers to any computation or transformation applied to the output of a DP algorithm after the data has been privatized. In our context, Myerson's auction can be seen as a post-processing step. Therefore, applying Myerson's auction to a differentially private release of the empirical distribution preserves the original privacy guarantees.

Lemma 2.4 (Immunity to Post-Processing). Let A : R₊ⁿ → R be an (ϵ, δ)-DP algorithm, and let f : R → R be a random function. Then f ∘ A : R₊ⁿ → R is also (ϵ, δ)-DP.

2.3 QUANTILE ESTIMATION

Quantile estimation (QE) is used to estimate the values of specified quantiles from samples.
Given samples from a distribution, an accurate QE directly translates to an accurate CDF estimate of the underlying distribution. Below, we formally define QE.

Definition 2.5 (Quantile Estimation). Given a data range H, a dataset X ∈ Hⁿ containing n points from range H, and a set of m quantiles 0 ≤ q₁, . . . , q_m < 1, identify quantile estimates v₁, . . . , v_m such that for every j ∈ [m], |{x ∈ X | x ≤ v_j}| ≥ q_j n.⁴

We now present the definitions of stochastic dominance and KS distance.

Definition 2.6 (Stochastic Dominance and KS Distance). Given distributions D and D′, denote their CDFs by F_D and F_{D′}, respectively. Distribution D stochastically dominates distribution D′ (denoted D ⪰ D′) if: (1) for any outcome x, F_D(x) ≤ F_{D′}(x); and (2) for some x, F_D(x) < F_{D′}(x). The KS distance between D and D′ is d_ks(D, D′) = sup_{x∈R} |F_D(x) − F_{D′}(x)|.

³We define the virtual value inverse ϕ_i^{−1}(ϕ) as the minimum v ∈ V with ϕ_i(v) ≥ ϕ.

⁴More formally, v_j ∈ X is the minimum value such that this quantity exceeds q_j n.

3 PRIVATE MYERSON'S AUCTION FOR BOUNDED DISTRIBUTIONS

In this section, we introduce the algorithm for fitting a Myerson auction with a pure privacy guarantee. Since DP is immune to post-processing, to ensure pure privacy it suffices to feed a privately estimated distribution into Myerson's auction. The challenge lies in finding such distributions that still yield near-optimal revenue. Our approach leverages private quantile estimation (QE) over samples to achieve the desired guarantee. However, the standard guarantees of DPQE collapse when the dataset contains points that are extremely close. This is a critical issue in our setting, as increasing the sample size n from continuous value distributions inherently causes the minimum distance between samples to approach zero.
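The non-private version of Definition 2.5 can be sketched directly: for each quantile q_j, return the smallest data point v_j with |{x ∈ X : x ≤ v_j}| ≥ q_j · n. A minimal illustration (not the paper's DP algorithm):

```python
def quantile_estimates(data, quantiles):
    """Non-private quantile estimation in the sense of Definition 2.5.

    For each q, return the smallest v in the dataset such that
    at least q * n data points are <= v.
    """
    xs = sorted(data)
    n = len(xs)
    out = []
    for q in quantiles:
        idx = 0
        # advance until the (idx+1)-th smallest point has rank >= q * n
        while (idx + 1) < q * n:
            idx += 1
        out.append(xs[idx])
    return out

vals = quantile_estimates([0.2, 0.9, 0.4, 0.7, 0.5], [0.2, 0.5, 0.8])
```

On the sorted data [0.2, 0.4, 0.5, 0.7, 0.9] this returns [0.2, 0.5, 0.7]: e.g., for q = 0.5, the value 0.5 is the smallest point with at least 2.5 of the 5 points below or equal to it.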
To address this, we introduce additional discretization steps to prevent non-identical points from being too close together, and we develop new DPQE guarantees specifically tailored to handle samples with identical values.

3.1 PRIVATE MYERSON FOR BOUNDED DISTRIBUTIONS

Next, we present the DPMYER algorithm (Alg. 1). The algorithm first value-discretizes the samples of the distribution additively by ϵ_a, then quantile-discretizes these samples by ϵ_q with a pure privacy guarantee. Specifically, the quantile discretization estimates the values of the quantile set [ϵ_q, 2ϵ_q, . . . , 1] with pure privacy. Next, DPMYER uses the estimated quantile values and the quantile set to construct a distribution, then perturbs it into a final distribution that is stochastically dominated by the ground truth. Finally, this distribution is used to implement Myerson's mechanism.

Algorithm 1 DP Myerson, Bounded Distribution DPMYER(V, ϵ_q, ϵ_a, h, ϵ_p)
Input: n samples V ∈ R₊^{k×n}, discretization parameters ϵ_q, ϵ_a, upper bound h, privacy parameter ϵ_p
1: Discretize all values into multiples of ϵ_a; let the resulting samples be V̂.
2: Prepare the quantiles to be estimated: Q ← {ϵ_q, 2ϵ_q, . . . , ⌊1/ϵ_q⌋ · ϵ_q, 1}.
3: For each dimension i ∈ [k], decide the prices Ŝ[i,:] ← QESTIMATE(Q, V[i,:], ϵ_p).
4: ▷ Estimate the quantiles by DPQUANT (Alg. 4).
5: Construct distribution D̃ based on Ŝ, treating the valuations in Ŝ as if each has probability ϵ_q.
6: For each i ∈ [k], shift the top ϵ_q quantile of D̃_i to the bottom, and fit Myerson's auction on this distribution.

3.2 REVENUE OPTIMALITY AND RUNNING TIME

Next, we show the revenue optimality and the efficiency of our algorithm. To upper bound the revenue loss, we derive the revenue shift theorem, which upper bounds the revenue difference between two distributions by a linear function of their statistical distance.

Theorem 3.1 (Revenue Shift Theorem).
Given two product distributions D ⪰ D′ whose valuations are bounded by h, with d_ks(D_i, D′_i) ≤ α_i for every bidder i, the optimal revenues of these distributions satisfy: 0 ≤ E[Rev(M_D, D) − Rev(M_{D′}, D′)] ≤ (Σ_{i∈[k]} α_i) · h.

We apply this theorem to upper bound the revenue loss between 1) the quantile-discretized distribution and its pre-quantized counterpart, and 2) the distribution obtained from private quantile estimation and that from the ground-truth quantile estimation. The first is evident, while the second arises from DPQUANT's ability to control the KS distance between the estimate and the ground truth.

We now present the accuracy guarantee of the private Myerson algorithm. Provided the privacy parameter is not too small (i.e., ϵ_p = Ω(ϵ)), our guarantee implies that the optimal revenue of the distribution does not exceed the revenue of our algorithm on its samples by more than Θ̃(ϵkh).

Theorem 3.2 (Revenue Guarantee of Private Myerson (Alg. 1)). Given n samples V̂ ∈ [0, h]^{k×n} of the joint distribution D, DPMYER (Alg. 1) is (2kϵ_p, 0)-DP, and the expected revenue of this mechanism is close to the optimal revenue of distribution D, i.e., with probability 1 − δ: |E[Rev(M_DPMYER, D)] − OPT(D)| ≤ Õ((ϵ + ϵ²/ϵ_p)kh), under parameters ϵ_a = ϵ_q = ϵ and n = Θ̃(ϵ⁻²), where we hide polylog factors in Θ̃ and Õ.

Proof Sketch. We begin by deriving the privacy guarantee of our algorithm. Next, we establish an upper bound on the distance between the private distribution D̂_p and the additively discretized distribution D̂_ϵ. This enables us to apply the revenue shift theorem (Thm. 3.1) to upper bound the revenue loss from private quantile estimation. Aggregating this loss with the revenue loss due to value discretization yields the final result. In this proof sketch, we omit the polylog factors that depend on k, n, δ, ϵ_a, ϵ_p, ϵ_q for clarity. Further details are provided in Appendix H.2.

Privacy Guarantee.
We know that the quantile estimates from DPQUANT are (ϵ_p, 0)-private (Lem. H.2). Since DP is immune to post-processing (Lem. E.4), and the output of the allocation and payment combination is 2k-dimensional, by the composition theorem (Lem. E.5), our algorithm is (2kϵ_p, 0)-DP.

Upper Bounding the Statistical Distance. The distribution D̂_p is obtained from the distribution D through the intermediate distributions D̂, D̂_ϵ, and D̂_q (Figure 1). We upper bound the KS distances between these distributions: 1) by the DKW inequality, the KS distance between D̂ and D is at most Θ̃(1/√n) in each coordinate i (with probability 1 − δ/2); 2) by definition, the KS distance between D̂_ϵ and D̂_q is at most kϵ_q; 3) by developing and converting the bound of the DP quantile algorithm (Lem. H.3) into a bound on the CDF, the KS distance between D̂_q and D̂_p is at most kϵ̂ for ϵ̂ := Θ̃(1/(ϵ_p n)) (with probability 1 − δ/2).

Upper Bounding the Revenue Loss. We then upper bound the optimal revenue loss from D to D̂_p. This bound is obtained by combining the revenue losses between the aforementioned distributions (via the revenue shift theorem) with an additive ϵ_a revenue loss from discretization (Lem. F.1). The revenue loss from the statistical shift aggregates to Θ̃((1/√n + ϵ_q + ϵ̂)kh) with probability 1 − δ.

Putting it all together. Finally, conditioned on DPQUANT succeeding and the samples being close to the underlying distribution (with probability 1 − δ), the expected revenue of the fitted mechanism on the underlying distribution is at least the optimal revenue of this distribution minus the revenue difference between D and D̂_p, by the following inequality: 0 ≥ E[Rev(M_{D̂_p}, D) − OPT(D)] ≥ E[Rev(M_{D̂_p}, D) − OPT(D̂_p)] − |OPT(D̂_p) − OPT(D)|, where the first inequality follows from the optimality of M_D on D, and the second follows from adding and subtracting OPT(D̂_p).
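The KS distances bounded in the steps above can be computed directly once the distributions are discrete; a small illustrative sketch (not part of the paper's algorithm), with each distribution given as a value-to-probability table:

```python
def ks_distance(dist_a, dist_b):
    """KS distance between two finite discrete distributions.

    Each distribution is a dict mapping value -> probability mass.
    Returns sup_x |F_A(x) - F_B(x)|, evaluated at all support points
    (the supremum of the step-function difference is attained there).
    """
    points = sorted(set(dist_a) | set(dist_b))
    cdf_a = cdf_b = 0.0
    worst = 0.0
    for x in points:
        cdf_a += dist_a.get(x, 0.0)
        cdf_b += dist_b.get(x, 0.0)
        worst = max(worst, abs(cdf_a - cdf_b))
    return worst

d = ks_distance({1: 0.5, 2: 0.5}, {1: 0.25, 2: 0.75})
```

Here the CDFs differ by 0.25 at value 1 and agree at value 2, so d = 0.25.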
By our construction of D̂_p, this distribution is stochastically dominated by D; thus, by strong revenue monotonicity (Lem. F.3), we get E[Rev(M_{D̂_p}, D) − OPT(D̂_p)] ≥ 0. We conclude that the revenue gap is upper bounded by Θ̃((1/√n + ϵ_q + ϵ̂)kh + ϵ_a). We set δ in the statement to 1/k of the δ used in this proof to obtain the final revenue guarantee.

Figure 1: Distributions analyzed for DPMYER (Alg. 1): the original distribution D, the empirical distribution D̂, the value-discretized D̂_ϵ (line 1 in Alg. 1), the quantile-discretized D̂_q, and the private distribution D̂_p returned by DPQUANT (line 4 in Alg. 1). We establish connections between the accuracy/revenue guarantees of these distributions.

Next, we demonstrate the efficiency of our algorithm, which is achieved through an organized implementation of the DP quantile algorithm. Intuitively, given m ordered quantiles, the algorithm iteratively identifies and estimates the median (the m/2-th) quantile, followed by the m/4-th and 3m/4-th quantiles, and so on. This hierarchical structure ensures that each data point is used in at most log m estimates of a single quantile. For more details, we refer readers to Appendix H.1.

Theorem 3.3 (Time Complexity of Private Myerson, Bounded). Given the same parameters as in Theorem 3.2, DPMYER (Alg. 1) runs in Θ̃(kn) time and requires Θ̃(1) passes over the samples.

Proof Sketch. The dominant step is quantile estimation, which requires log(⌊1/ϵ_q⌋ + 1) passes over the dataset. It takes O(k log(⌊1/ϵ_q⌋ + 1)/(ϵ_a ϵ_q)) = Θ̃(kn) time, since n = Θ̃(ϵ⁻²). This step calculates the utility of k⌊h/ϵ_a⌋ values over ⌊1/ϵ_q⌋ quantiles at most Θ̃(1) times. For the full version of this proof, please refer to Appendix H.3.

4 GENERALIZATION TO UNBOUNDED DISTRIBUTIONS

Generalizing the DP Myerson mechanism to unbounded distributions introduces new challenges.
The revenue loss upper bounds produced by the previously introduced quantile estimation algorithm and the revenue shift theorem both depend (positively) on the range of the distribution. Without a finite range, these upper bounds become infinite and fail to control the revenue loss. We consider the widely studied η-strongly regular distributions, which decay at least as fast as exponential distributions. A key element of our approach is appropriately truncating the distribution, which enables us to extend the discretize-then-DP-quantile method to the unbounded setting. Specifically, we apply the property of regular distributions (Devanur et al., 2016) that truncating the distribution at (1/ϵ) · OPT(D) costs at most a 2ϵ fraction of the optimal revenue (Lem. I.1). Hence, for the truncation to work, it is essential to approximate the optimal revenue from sample data. Meanwhile, incorporating the truncation with pure DP introduces additional complexities.

We are now ready to present our approach for a k-approximation of the optimal revenue with pure DP for η-strongly regular product distributions. Our DPKOPT algorithm (Alg. 2) approximates the optimal revenue by running an empirical reserve (ER) over each bidder's distribution truncated at the top η^{1/(1−η)}/4 quantile.⁵ Summing these estimates gives a Θ(k)-approximation of the optimal revenue, by the fact that k · OPT(D) ≥ Σ_{i∈[k]} OPT(D_i) ≥ OPT(D).

Algorithm 2 DP Estimation of the Optimal Revenue DPKOPT(V, ϵ_q, ϵ_a, ϵ_p, η)
Input: n samples V = {v₁, . . . , v_n}, quantile discretization ϵ_q, additive discretization ϵ_a, privacy parameter ϵ_p, regularity parameter η.
1: for d = 1, . . . , k do
2:   q̂ ← (1/4) · η^{1/(1−η)}
3:   Let ub_d ← DPQUANTU(V[d,:], 1 − q̂). ▷ Estimate the truncation point of D_d.
4:   Truncate distribution D_d at ub_d to obtain D̂_d, and discretize D̂_d by additive ϵ_a in the value space.
5:   Prepare the quantiles to be estimated, Q ← {1 − q̂, 1 − q̂ − ϵ_q, . . . }.
6:   Ŝ[d,:] ← QESTIMATE(Q, V[d,:], ϵ_p). ▷ Apply DP quantile estimation (Alg. 4).
7:   Let F̂_d be the distribution generated by the value profile Ŝ[d,:] and the quantile set Q.
8:   SREV_d ← max_{r∈Ŝ} r(1 − F̂_d(r)). ▷ Estimate the optimal revenue from F̂_d (Alg. 6).
9: end for
10: KREV ← Σ_{d∈[k]} SREV_d
11: return KREV

To guarantee pure privacy, our algorithm estimates the optimal revenue using a DP-estimated proxy F̂_{[k]} derived from the sample data. This proxy is obtained by truncating the distribution via DPQUANTU (Alg. 7) and quantile-discretizing it via DPQUANT. During this process, the truncation by DPQUANTU costs at most a constant fraction of the optimal revenue, and DPQUANT costs at most an additional Θ̃(1/(ϵ_p n k) + ϵ_a). Aggregating these revenue losses shows that the output is a Θ(k)-approximation of the optimal revenue. See Appendix I.4 for more details.

Our private Myerson algorithm for unbounded distributions (Alg. 9) integrates DPKOPT and yields the following accuracy bound. See Appendix I.5 for formal statements and more details.

Theorem 4.1 (Revenue Guarantee of Private Myerson, Unbounded). Given ϵ ∈ [0, 1/4] and n samples V̂ of the η-strongly regular joint distribution D, the Myerson auction fitted by DPMYERU (Alg. 9) is (2kϵ_p, 0)-DP, and under ϵ_a = ϵ_q = ϵ, n = Θ̃(ϵ⁻²), n₁ = Θ̃(ϵ⁻²), ϵ_t = ϵ, with probability 1 − δ: |E[Rev(M_DPMYERU, D)] − Rev(M_D, D)| ≤ Õ(k²√ϵ + k²ϵ^1.5/ϵ_p).

⁵Without privacy constraints, truncating at the top η^{1/(1−η)} quantile suffices by Lem. I.4. Our algorithm adopts a looser truncation since the DPQUANTU algorithm only returns the values of the given quantiles approximately.

5 APPLICATION: ONLINE MECHANISM DESIGN FROM BIDS

We now study how to integrate our previous solutions into the online auction setting such that the algorithm produces a time-averaged revenue guarantee converging to the optimal one. The auction now spans multiple rounds, where each auction is informed by the bids from previous rounds.
We consider the setting where bidders are non-myopic and have incentives to bid strategically in the current round to increase their utilities over future auctions.

5.1 APPLICATION BACKGROUND

Before presenting our algorithm, we first give the formal problem definition of the online auction setting. We study online mechanism design over a time horizon T, where an identical item is sold at each iteration. Each bidder has a publicly observable attribute, and bidders with the same attribute have the same valuation distribution. We are now ready to describe the interactions between bidders and the auctioneer over the time horizon T, as shown in Figure 2. We defer to Appendix J.2 for more details on how bidders generate the samples.

For each time t ∈ [T]:
- The learner/auctioneer sells a fresh copy of the item.
- The learner collects the bids in the form (b_j, a_j), where b_j and a_j denote the bid and the attribute of the j-th bidder, j ∈ [d_t], respectively.
- The learner decides the allocation rule x_t and payment p_t accordingly.

Figure 2: Online Auction with k Attributes. Each item the auctioneer sells is identical, and each bidder has an additive (discounted) utility over the items across rounds.

We consider bidders that either have discounted utility or are in a large market.

Definition 5.1 (Bidder's Utility). Each bidder j has a quasi-linear utility function at time t: u_j^t = x_j^t (v_j^t − p_j^t), where x_j^t, v_j^t, p_j^t are the allocation, value, and price for bidder j at time t, respectively. We consider two non-myopic bidder utility models:
- Discounted Utility: For a discount factor γ ∈ [0, 1], the bidders seek to maximize the sum of utilities discounted by γ. At the t-th iteration, the discounted utility is û_j^t = Σ_{r=t}^{T} u_j^r γ^{r−t}.
- Large Market (Anari et al., 2014; Jalaly Khalilabadi and Tardos, 2018; Chen et al., 2016): The bidder only participates in a subset S_j of auctions, i.e., u_j^{1:T} = Σ_{t∈S_j} u_j^t, with |S_j| < l.
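The two utility models of Definition 5.1 can be sketched as follows (a minimal illustration; per-round realized utilities u_j^t are passed in as a list):

```python
def discounted_utility(utilities, gamma, t=0):
    """Discounted utility from round t on: sum over r >= t of u[r] * gamma^(r - t)."""
    return sum(u * gamma ** (r - t) for r, u in enumerate(utilities) if r >= t)

def large_market_utility(utilities, participation):
    """Large-market utility: sum of utilities over the rounds the bidder joins."""
    return sum(utilities[t] for t in participation)

u = [1.0, 1.0, 1.0, 1.0]
du = discounted_utility(u, gamma=0.5)   # 1 + 0.5 + 0.25 + 0.125
lm = large_market_utility(u, {0, 2})    # participates in rounds 0 and 2 only
```

With γ = 0.5 the discounted sum is 1.875, while the large-market bidder participating in two rounds collects 2.0.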
Ideally, the learner's objective is to maximize the time-averaged revenue with high probability. Our regret compares this revenue against the optimal revenue of the (unobservable) value history.

Definition 5.2 (Learner's Objective). Given δ, the learner's objective is to decide allocations x^{1:T} and payments p^{1:T} that achieve sublinear regret, i.e., with probability 1 − δ, REGRET := (1/T) Σ_{t∈[T]} (E[OPT(v^t)] − E[Rev(x^t, p^t, b^t)]) = o(1), with the expectation taken over the value distribution.

5.2 TWO-STAGE MECHANISM FOR BOUNDED DISTRIBUTIONS

The two-stage algorithm (Alg. 3) consists of repeated auctions over T rounds, where the participating bidders' values in each round are upper bounded by a known constant h. The algorithm first collects samples for the first T₁ rounds by running a commitment algorithm (Alg. 10) that punishes non-truthful bids. Then the algorithm deploys our previously developed DP Myerson algorithm (Alg. 1 or Alg. 9) for the remaining rounds to obtain near-optimal revenue. In addition to these two steps, our algorithm includes a step in which all samples are reduced by ν (line 4 of Alg. 3) and projected onto the nonnegative value space. This step is designed to offset the impact of strategic bidding.

Algorithm 3 Two-Stage Algorithm ABOUNDED
Input: rounds T, learning rounds T₁, parameters ϵ_a, ϵ_q, ϵ_p, ν, upper bound h.
1: for t ← 1, . . . , T₁ do ▷ Collection Stage
2:   Receive bids b^t and attributes a^t.
3:   Return (x^t, p^t) ← COMMIT(b^t). ▷ Commitment algorithm (Alg. 10)
4:   b̂^t ← Proj_{[0,h]}[b^t − ν·1_k]
5: end for
6: (x̃(·), p̃(·)) ← DPMYER(b̂^{1:T₁}, a^{1:T₁}, ϵ_q, ϵ_a, h, ϵ_p) ▷ Fit Myerson's auction (Alg. 1 or Alg. 9)
7: for t ← T₁ + 1, . . . , T do ▷ Revenue Stage
8:   Receive bids b^t and attributes a^t.
9:   (x^t, p^t) ← MYERSON(x̃(·), p̃(·))
10: end for

Specifically, the parameter ν is carefully calibrated to ensure that the bid distribution fed into the private Myerson mechanism is stochastically dominated by the empirical distribution.
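The ν-shift and projection step (line 4 of Alg. 3) can be sketched in a few lines, assuming each bid is a scalar in [0, h]:

```python
def shift_and_project(bids, nu, h):
    """Subtract nu from each bid, then project onto [0, h] (cf. line 4 of Alg. 3)."""
    return [min(h, max(0.0, b - nu)) for b in bids]

shifted = shift_and_project([0.05, 0.5, 1.2], nu=0.1, h=1.0)
```

Bids below ν are clipped to 0 and bids above h + ν are clipped to h, so the shifted profile stays in the value space while being pointwise dominated by the original bids.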
Our algorithm provides an incentive guarantee that bids lie within a small, controllable neighborhood of the true values. The range of this neighborhood is determined by the privacy parameter $\epsilon_p$ (and is hence controlled by our algorithm) and by the bidders' utility functions. Setting $\nu$ to match the range of this neighborhood makes the resulting distribution dominated by the empirical distribution.

5.3 REVENUE GUARANTEE OF THE ALGORITHM

Before presenting the revenue guarantee of our main algorithm, we first introduce a lemma that upper bounds how far a bidder's bid can deviate from its true value during the collection stage. Intuitively, by the design of our commitment algorithm, the bidder incurs a loss that scales (positively) with the bid deviation, compared to truthful bidding. Furthermore, our private Myerson mechanism ensures that the bidder's future utility gain is upper bounded (Lem. J.5). Thus, bidders are incentivized to report bids within a certain range of their true values to optimize their overall utility. More details are in Appendix J.4.

Lemma 5.3 (Bid Deviation). For any $t \in [0, T_1]$, the bidder will only bid $b^t$ such that $|b^t - v^t| \le 2\alpha$, where $\alpha = \sqrt{2(l-1)\epsilon_p}\, hk$ for bidders in a large market, and $\alpha = \sqrt{\tfrac{\gamma \epsilon_p}{1-\gamma}}\, kh$ for discounting bidders.

From this lemma, we see that selecting a small $\epsilon_p$ incentivizes bid distributions that are close to the ground truth. Setting $\nu = 2\alpha$ in our algorithm (line 4, Alg. 3) yields a distribution that is stochastically dominated by, yet close in revenue guarantee to, the true distribution. Running our DP Myerson algorithm on this distribution then gives sublinear regret, as stated below.

Theorem 5.4 (Accuracy Guarantee of Two-Stage Mechanism). Given $\epsilon \in [0, 1/4]$, n samples of the joint distribution $D$ over $[0, h]^k$, and $T_1 = \Theta(\epsilon^{-2} \log(k/\delta))$, $T = \Omega(T_1)$, $\epsilon_a = \epsilon_q = \epsilon_p = \epsilon$, with probability $1 - \delta$, Alg. 3 achieves sublinear regret, i.e.:

- Under a large market, the regret is upper bounded by $\tilde{O}[(\epsilon + \sqrt{l\epsilon})kh]$, for $\nu = 2\sqrt{2(l-1)\epsilon_p}\, hk$.
- Under discounting bidders, the regret is upper bounded by $\tilde{O}\big[\big(\epsilon + \sqrt{\tfrac{\gamma\epsilon}{1-\gamma}}\big)kh\big]$, for $\nu = 2\sqrt{\tfrac{\gamma\epsilon_p}{1-\gamma}}\, kh$.

Proof Sketch. We denote the empirical distribution by $\hat{D}$, the distribution after the subtraction in line 4 of Alg. 3 by $\tilde{D}$, and the (final) output distribution by $\hat{D}_p$. These distributions satisfy $\hat{D} \succeq \tilde{D} \succeq \hat{D}_p$. By strong monotonicity (Lem. F.3), we know that $\mathbb{E}[\mathrm{Rev}(M_{\hat{D}_p}, D)] \ge \mathbb{E}[\mathrm{OPT}(\hat{D}_p)]$. Since $M_{\hat{D}_p}$ need not be optimal over $D$, we have:

$$0 \le \mathbb{E}[\mathrm{OPT}(D) - \mathrm{Rev}(M_{\hat{D}_p}, D)] \le \mathbb{E}[\mathrm{OPT}(D) - \mathrm{OPT}(\hat{D}_p)] \le \mathbb{E}[\mathrm{OPT}(\hat{D}) - \mathrm{OPT}(\hat{D}_p)] + \big|\mathbb{E}[\mathrm{OPT}(\hat{D}) - \mathrm{OPT}(D)]\big| \le \tilde{\Theta}\big((\epsilon + \epsilon^2/\epsilon_p)kh + \nu\big),$$

where in the last inequality we apply the revenue shift theorem (Thm. 3.1) to upper bound the first term and Lemma J.9 to upper bound the second. Please refer to Appendix J.3 for more details.

6 EXPERIMENTS

In this section, we present experimental results for the Differentially Private (DP) Myerson mechanism, comparing its performance against two standard mechanism design baselines: the Myerson (optimal) auction and the Vickrey (second-price) auction. The former is designed to achieve near-optimal revenue for a given value distribution, whereas the latter, while strategy-proof, offers no revenue guarantees in settings with independent and non-identical value distributions. Our experiments are conducted on normal and lognormal distributions truncated to positive domains. For each value profile, we test various hyperparameters (additive discretization $\epsilon_a$, quantile discretization $\epsilon_q$, and the privacy parameter $\epsilon_p$) and select the configuration with the best performance. For details on DP Myerson's sensitivity to hyperparameters, see Appendix A.

| Bidder Profile | DP Myerson | Second Price | Myerson | Ref. |
| Normal N(0.3, 0.5); Lognormal $(\mu, \sigma) = (-1.87, 1.15)$ | 0.25272 | 0.15154 (66.7%) | 0.32598 | Table 2 |
| Normal N(0.3, 0.5); Normal N(0.5, 0.7) | 0.37691 | 0.33741 (11.7%) | 0.50204 | Table 3 |
| Lognormal $(\mu, \sigma) = (-1.87, 1.15)$; Lognormal $(\mu, \sigma) = (-1.24, 1.04)$ | 0.13912 | 0.11578 (20.2%) | 0.21292 | Table 4 |

Table 1: Empirical revenue of DPMyerson (Alg. 1) under 2-dimensional non-identical value distributions. Each DPMyerson configuration is averaged over 50 draws, with revenue evaluated on 10,000 samples. Percentages in parentheses represent the improvement of DPMyerson over the second-price mechanism.

In Table 1, under non-i.i.d. distribution settings where there is a significant revenue gap between the Vickrey auction and the Myerson auction, DPMyerson achieves a notable revenue increase (at least 11%) over the second-price mechanism.

7 CONCLUSION

We investigate the problem of learning a single-item auction (i.e., Myerson's) from samples with pure DP. We consider the broader setting where the agents' valuations are independent, non-identical, and can be either bounded or unbounded. By recognizing that the optimal auction mechanism is robust to small statistical perturbations in the underlying distribution, we reduce the challenge of privately learning an optimal auction from sample data to the task of privately approximating pre-specified quantiles. Specifically, our approach ensures pure privacy while generating a distribution that is closely aligned with the underlying distribution in terms of expected revenue. We then extend this framework to the online auction setting, where later auctions are fitted on bids from previous auctions. In this setting, non-myopic bidders reason about their utility across rounds and can bid strategically under (one-shot) truthful auctions.
By leveraging our private Myerson mechanisms with an extra commitment mechanism, we achieve near-optimal revenue outcomes over the bidders' (unobservable) value samples, despite the strategic complexity introduced by non-myopic behavior. This result highlights the robustness of our approach in both protecting privacy and maintaining near-optimal expected revenue in dynamic, strategic environments. Several interesting directions remain in this line of work.

Private Correlation-Robust Auction Design. Correlation-robust auctions apply to the partial-information setting where the distributions are correlated but only their marginal distributions can be observed (He and Li, 2022; Bei et al., 2019). Designing a correlation-robust auction with privacy guarantees based on samples would be a valuable future direction.

Other Bidder Utility Models. Other bidder utility functions are also widely considered in the context of online auctions and mechanism design, including budget-constrained value-maximizing bidders (Conitzer et al., 2022) and submodular utility functions (Assadi and Singla, 2020). Advancing privacy-preserving mechanisms for these settings would also be important.

ACKNOWLEDGEMENTS

We sincerely thank the anonymous reviewers from ICLR 2025 for their valuable discussions and suggestions on our experiments, which have improved this work. Yuanyuan Yang and Jamie Morgenstern are supported by NSF Awards CCF-2045402 and CCF-2019844. We gratefully acknowledge Bhuvesh Kumar for his contributions to the early discussions of this work prior to 2022, at a time when privacy techniques had not yet been developed. However, Bhuvesh Kumar was not involved in the substantial revisions and restructuring of the manuscript that took place from 2022 onward.

REFERENCES

The ad auction explained. URL https://www.facebook.com/business/ads/ad-auction.
Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318, 2016.
Jacob Abernethy, Chansoo Lee, Abhinav Sinha, and Ambuj Tewari. Online linear optimization via smoothing. In Conference on Learning Theory, pages 807–823. PMLR, 2014.
Jacob D. Abernethy, Rachel Cummings, Bhuvesh Kumar, Sam Taggart, and Jamie Morgenstern. Learning auctions with robust incentive guarantees. In NeurIPS, pages 11587–11597, 2019.
Daniel Alabi, Omri Ben-Eliezer, and Anamay Chaturvedi. Bounded space differentially private quantiles. arXiv preprint arXiv:2201.03380, 2022.
Kareem Amin, Afshin Rostamizadeh, and Umar Syed. Learning prices for repeated auctions with strategic buyers. Advances in Neural Information Processing Systems, 26, 2013.
Nima Anari, Gagan Goel, and Afshin Nikzad. Mechanism design for crowdsourcing: An optimal 1-1/e competitive budget-feasible mechanism for large markets. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 266–275. IEEE, 2014.
Sepehr Assadi and Sahil Singla. Improved truthful mechanisms for combinatorial auctions with submodular bidders. ACM SIGecom Exchanges, 18(1):19–27, 2020.
M.-F. Balcan, A. Blum, J. D. Hartline, and Y. Mansour. Mechanism design via machine learning. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pages 605–614, 2005. doi: 10.1109/SFCS.2005.50.
Maria-Florina Balcan, Avrim Blum, and Yishay Mansour. Item pricing for revenue maximization. In Proceedings of the 9th ACM Conference on Electronic Commerce, pages 50–59, 2008.
Maria-Florina Balcan, Tuomas Sandholm, and Ellen Vitercik. Estimating approximate incentive compatibility. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 867–867, 2019.
Maria-Florina F. Balcan, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of automated mechanism design. In Advances in Neural Information Processing Systems, pages 2083–2091, 2016.
Santiago R. Balseiro, Omar Besbes, and Francisco Castro. Mechanism design under approximate incentive compatibility. Operations Research, 72(1):355–372, 2024.
Xiaohui Bei, Nick Gravin, Pinyan Lu, and Zhihao Gavin Tang. Correlation-robust analysis of single item auction. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 193–208. SIAM, 2019.
Sébastien Bubeck, Nikhil Devanur, Zhiyi Huang, and Rad Niazadeh. Multi-scale online learning: Theory and applications to online auctions and pricing. Journal of Machine Learning Research, 2019.
Yang Cai and Constantinos Daskalakis. Extreme-value theorems for optimal multidimensional pricing. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, pages 522–531. IEEE, 2011.
Ning Chen, Xiaotie Deng, Bo Tang, and Hongyang Zhang. Incentives for strategic behavior in Fisher market games. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, pages 243–252, 2014.
Vincent Conitzer, Christian Kroer, Eric Sodomka, and Nicolas E. Stier-Moses. Multiplicative pacing equilibria in auction markets. Operations Research, 70(2):963–989, 2022.
Xiaotie Deng, Ron Lavi, Tao Lin, Qi Qi, Wenwei Wang, and Xiang Yan. A game-theoretic analysis of the empirical revenue maximization algorithm with endogenous sampling. Advances in Neural Information Processing Systems, 33:5215–5226, 2020.
Yuan Deng, Sébastien Lahaie, Vahab Mirrokni, and Song Zuo. Revenue-incentive tradeoffs in dynamic reserve pricing. In International Conference on Machine Learning, pages 2601–2610. PMLR, 2021.
Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of auctions with side information. In Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, pages 426–439, 2016.
John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Minimax optimal procedures for locally private estimation. Journal of the American Statistical Association, 113(521):182–201, 2018.
David Durfee. Unbounded differentially private quantile and maximum estimation. Advances in Neural Information Processing Systems, 36, 2023.
Paul Dütting, Zhe Feng, Harikrishna Narasimhan, David C. Parkes, and Sai Srivatsa Ravindranath. Optimal auctions through deep learning: Advances in differentiable economics. Journal of the ACM, 71(1):1–53, 2024.
Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics, pages 642–669, 1956.
Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages, and Programming, pages 1–12. Springer, 2006.
Cynthia Dwork. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation, pages 1–19. Springer, 2008.
Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology, EUROCRYPT 2006: 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques, St. Petersburg, Russia, May 28–June 1, 2006, Proceedings 25, pages 486–503. Springer, 2006.
Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 715–724, 2010.
Edith Elkind. Designing and learning optimal finite support auctions. 2007.
European Commission. Regulation of the European Parliament and of the Council on the transparency and targeting of political advertising, 2024. URL https://data.consilium.europa.eu/doc/document/PE-90-2023-INIT/en/pdf.
Jennifer Gillenwater, Matthew Joseph, and Alex Kulesza. Differentially private quantiles. In International Conference on Machine Learning, pages 3713–3722. PMLR, 2021.
Elena Gribelyuk, Pachara Sawettamalya, Hongxun Wu, and Huacheng Yu. Near-optimal relative error streaming quantile estimation via elastic compactors. In Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3486–3529. SIAM, 2025.
Chenghao Guo, Zhiyi Huang, and Xinzhi Zhang. Settling the sample complexity of single-parameter revenue maximization. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 662–673, 2019.
Wenshuo Guo, Michael Jordan, and Emmanouil Zampetakis. Robust learning of optimal auctions. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.
Meghal Gupta, Mihir Singhal, and Hongxun Wu. Optimal quantile estimation: beyond the comparison model. arXiv preprint arXiv:2404.03847, 2024.
Jason Hartline, Vahab Mirrokni, and Mukund Sundararajan. Optimal marketing strategies over social networks. In Proceedings of the 17th International Conference on World Wide Web, pages 189–198, 2008.
Jason D. Hartline and Tim Roughgarden. Simple versus optimal mechanisms. In Proceedings of the 10th ACM Conference on Electronic Commerce, pages 225–234, 2009.
Wei He and Jiangtao Li. Correlation-robust auction design. Journal of Economic Theory, 200:105403, 2022.
Zhiyi Huang, Jinyan Liu, and Xiangning Wang. Learning optimal reserve price against non-myopic bidders. In Advances in Neural Information Processing Systems, 2018a.
Zhiyi Huang, Yishay Mansour, and Tim Roughgarden. Making the most of your samples. SIAM Journal on Computing, 47(3):651–674, 2018b.
Joon Suk Huh and Kirthevasan Kandasamy. Nash incentive-compatible online mechanism learning via weakly differentially private online learning. ICML, 2024.
Prateek Jain, Pravesh Kothari, and Abhradeep Thakurta. Differentially private online learning. In Conference on Learning Theory, pages 24 1. JMLR Workshop and Conference Proceedings, 2012.
Pooya Jalaly Khalilabadi and Éva Tardos. Simple and efficient budget feasible mechanisms for monotone submodular valuations. In International Conference on Web and Internet Economics, pages 246–263. Springer, 2018.
Yash Kanoria and Hamid Nazerzadeh. Dynamic reserve prices for repeated auctions: Learning from bids. In Web and Internet Economics: 10th International Conference, WINE 2014, Beijing, China, December 14–17, 2014, Proceedings, volume 8877, page 232. Springer, 2014.
Yash Kanoria and Hamid Nazerzadeh. Incentive-compatible learning of reserve prices for repeated auctions. Operations Research, 69(2):509–524, 2021.
Haim Kaplan, Shachar Schnapp, and Uri Stemmer. Differentially private approximate quantiles. In International Conference on Machine Learning, pages 10751–10761. PMLR, 2022.
Michael Kearns, Mallesh Pai, Aaron Roth, and Jonathan Ullman. Mechanism design in large games: Incentives and privacy. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, pages 403–410, 2014.
Michael J. Kearns and Umesh Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
Clément Lalanne, Aurélien Garivier, and Rémi Gribonval. Private statistical estimation of many quantiles. In International Conference on Machine Learning, pages 18399–18418. PMLR, 2023.
Yi Liu, Qirui Hu, Lei Ding, and Linglong Kong. Online local differential private quantile inference via self-normalization. In International Conference on Machine Learning, pages 21698–21714. PMLR, 2023.
Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07), pages 94–103. IEEE, 2007.
Jamie H. Morgenstern and Tim Roughgarden. On the pseudo-dimension of nearly optimal auctions. Advances in Neural Information Processing Systems, 28, 2015.
Roger B. Myerson. Incentive compatibility and the bargaining problem. Econometrica: Journal of the Econometric Society, pages 61–73, 1979.
Roger B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981.
Kobbi Nissim, Rann Smorodinsky, and Moshe Tennenholtz. Approximately optimal mechanism design via differential privacy. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 203–213, 2012.
Mallesh M. Pai and Aaron Roth. Privacy and mechanism design. ACM SIGecom Exchanges, 12(1):8–29, 2013.
Tim Roughgarden and Okke Schrijvers. Ironing in the dark. In Proceedings of the 2016 ACM Conference on Economics and Computation, pages 1–18, 2016.
Adam Smith. Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, pages 813–822, 2011.
Tran Tran, Matthew Reimherr, and Aleksandra Slavkovic. Differentially private quantile regression. In International Conference on Privacy in Statistical Databases, pages 18–34. Springer, 2024.
Junhua Zhang and Caiming Zhong. Differential privacy-based double auction for data market in blockchain-enhanced internet of things. Wireless Communications and Mobile Computing, 2022(1):8038846, 2022.
Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, and Qiang Yang. Trading off privacy, utility, and efficiency in federated learning. ACM Transactions on Intelligent Systems and Technology, 14(6):1–32, 2023.
Ying Zhang, Xuemin Lin, Jian Xu, Flip Korn, and Wei Wang. Space-efficient relative error order sketch over data streams. In 22nd International Conference on Data Engineering (ICDE'06), pages 51–51. IEEE, 2006.