# Adaptive Privacy Composition for Accuracy-first Mechanisms

Ryan Rogers (LinkedIn), Gennady Samorodnitsky (Cornell University), Zhiwei Steven Wu (Carnegie Mellon University), Aaditya Ramdas (Carnegie Mellon University)

37th Conference on Neural Information Processing Systems (NeurIPS 2023).

**Abstract.** In many practical applications of differential privacy, practitioners seek to provide the best privacy guarantees subject to a target level of accuracy. A recent line of work by Ligett et al. [2017] and Whitehouse et al. [2022] has developed such accuracy-first mechanisms by leveraging the idea of noise reduction, which adds correlated noise to the sufficient statistic in a private computation and produces a sequence of increasingly accurate answers. A major advantage of noise reduction mechanisms is that the analyst only pays the privacy cost of the least noisy, or most accurate, answer released. Despite this appealing property in isolation, there has not been a systematic study of how to use noise reduction in conjunction with other differentially private mechanisms. A fundamental challenge is that the privacy guarantee for noise reduction mechanisms is (necessarily) formulated as ex-post privacy, which bounds the privacy loss as a function of the released outcome. Furthermore, there has yet to be any study of how ex-post private mechanisms compose, which would allow us to track the accumulated privacy loss over several mechanisms. We develop privacy filters [Rogers et al., 2016, Feldman and Zrnic, 2021, Whitehouse et al., 2023] that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms subject to an overall differential privacy guarantee.

## 1 Introduction

Although differential privacy has been recognized by the research community as the de facto standard for ensuring the privacy of a sensitive dataset while still allowing useful insights, it has yet to become widely applied in practice despite its promise of formal privacy guarantees. There are notable applications of differential privacy, including the U.S. Census [Abowd et al., 2022], yet few would argue that differential privacy has become standard practice. One common objection to differential privacy is that it injects noise and can cause spurious results in data analyses. A recent line of work in differential privacy has focused on developing accuracy-first mechanisms that aim to ensure a target accuracy guarantee while achieving the best possible privacy guarantee [Ligett et al., 2017, Whitehouse et al., 2022]. In particular, these accuracy-first mechanisms do not ensure a predetermined level of privacy, but instead provide ex-post privacy, which allows the resulting privacy loss to depend on the outcome of the mechanism. This is in contrast to the prevalent paradigm of differential privacy, which fixes the scale of privacy noise in advance and hopes the result is accurate. With accuracy-first mechanisms, practitioners instead specify the levels of accuracy that would ensure useful data analyses and then aim to achieve such utility with the strongest privacy guarantee.

However, one limitation of this line of work is that it is not clear how ex-post privacy mechanisms compose: if we combine multiple ex-post privacy mechanisms, what is the overall privacy guarantee? Composition is one of the key properties of differential privacy (when used in a privacy-first manner), so it is important to develop a composition theory for ex-post privacy mechanisms.
Moreover, how do we analyze the privacy guarantee when we compose ex-post privacy mechanisms with differentially private mechanisms? Our work seeks to answer these questions by connecting with another line of work in differential privacy on fully adaptive privacy composition. Traditional differential privacy composition results require the analyst to fix the privacy loss parameters, which are inversely proportional to the scale of noise, for each analysis in advance, prior to any interaction. Knowing that there will be noise, the data scientist may want to select different levels of noise for different analyses, subject to some overall privacy budget. Privacy filters and odometers, introduced in Rogers et al. [2016], provide a way to bound the overall privacy guarantee despite adaptively selected privacy loss parameters. There have since been other works that have improved on the privacy loss bounds in this adaptive setting, to the point of matching (including constants) what one achieves in the nonadaptive setting [Feldman and Zrnic, 2021, Whitehouse et al., 2023]. A natural next step is to allow an analyst some overall privacy loss budget to interact with the dataset, letting the analyst determine the accuracy metric they want to set with each new query.

As a motivating example, consider an accuracy metric of α% relative error for different counts, with overall privacy loss parameters ε, δ, so that the entire interaction will be (ε, δ)-differentially private. The first true count might be very large, so the amount of noise that can be added while still ensuring the target α% relative error can be huge, and hence very little of the privacy budget should be used for that query, allowing for potentially more results to be returned than an approach that sets an a priori noise level. A baseline approach to adding relative noise would be to add a large amount of noise and then check whether the noisy count is within some tolerance based on the scale of noise added; if the noisy count is deemed good enough we stop, otherwise we scale the privacy loss up by some factor and repeat. We refer to this approach as the doubling approach (see Section 7.1 for more details, and the sketch below), which was also used in Ligett et al. [2017]. The primary issue with this approach is that the accumulated privacy loss needs to combine the privacy loss from each time we add noise, even though we are only interested in the outcome at which we stopped. Noise reduction mechanisms from Ligett et al. [2017], Whitehouse et al. [2022] show how it is possible to only pay for the privacy of the last noise addition. However, it is not clear how the privacy loss will accumulate over several noise reduction mechanisms, since each one ensures ex-post privacy, not differential privacy.
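To make the baseline concrete, the following is a minimal sketch (our own illustration, not code from any of the cited works) of the "good enough" check described above, using the specific form of the check that appears later in Section 7.2: a noisy count $\hat{y}$ released with privacy parameter $\varepsilon$ (noise scale $1/\varepsilon$) is accepted only if the implied interval is within the relative error tolerance. The function name and signature are hypothetical.

```python
def accept_noisy_count(y_hat: float, eps: float, alpha: float) -> bool:
    """Check from Section 7.2: using only the noisy count y_hat and the privacy
    parameter eps (noise scale ~ 1/eps), decide whether y_hat is plausibly
    within alpha relative error of the true count."""
    scale = 1.0 / eps
    if abs(y_hat) <= scale:
        return False
    ratio = (y_hat + scale) / (y_hat - scale)
    return 1.0 - alpha < ratio < 1.0 + alpha
```

The doubling baseline repeatedly runs a check of this form with an increasing $\varepsilon$, paying for every attempt, whereas a noise reduction mechanism only pays for the final, accepted release.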
We make the following contributions in this work:

- We present a general (basic) composition result for ex-post privacy mechanisms that can be used to create a privacy filter when an analyst can select arbitrary ex-post privacy mechanisms and (concentrated) differentially private mechanisms.
- We develop a unified privacy filter that combines noise reduction mechanisms, specifically the Brownian Mechanism [Whitehouse et al., 2022], with traditional (concentrated) differentially private mechanisms.
- We apply our results to the task of releasing counts from a dataset subject to a relative error bound, comparing the unified privacy filter with the baseline doubling approach, which uses the privacy filters from Whitehouse et al. [2023].

Our main technical contribution is the unified privacy filter for noise reduction mechanisms and differentially private mechanisms. Prior work [Whitehouse et al., 2022] showed that the privacy loss of the Brownian noise reduction can be written in terms of a scaled Brownian motion at the time where the noise reduction was stopped. We present a new analysis for the ex-post privacy guarantee of the Brownian mechanism that considers a reverse time martingale, based on a scaled standard Brownian motion. Composition bounds for differential privacy consider a forward time martingale and apply a concentration bound, such as Azuma's inequality [Dwork et al., 2010], so we show how we can construct a forward time martingale from the stopped Brownian motions, despite the stopping times being adaptive, not predictable, at each time. See Figure 1 for a sketch of how we combine reverse time martingales into an overall forward time filtration.

Figure 1: Our privacy filter tracks an accumulated privacy loss over many different mechanisms (forward filtration, x-axis), stopping when it exceeds a predefined ε (dotted line). Each mechanism satisfies approximate zCDP (blue dot) or is a noise reduction mechanism (black dots). The latter itself involves several rounds of interaction in a reverse filtration (red arrow) until a stopping criterion based on utility is met (red box). Later queries/mechanisms can depend on past ones.

There have been other works that have considered adding relative noise subject to differential privacy. In particular, iReduct [Xiao et al., 2011] was developed to release a batch of queries subject to a relative error bound. The primary difference between that work and our setting here is that we do not want to fix the number of queries in advance and we want to allow the queries to be selected in an adaptive way. Xiao et al. [2011] consider a batch of m queries, initially add a lot of noise to each count, and iteratively check whether the noisy counts are good enough. The counts that should have smaller noise are then identified and a Laplace-based noise reduction algorithm is used to decrease the noise on the identified set. They continue in this way until either all counts are good enough or a target privacy loss is exhausted. Two scenarios might arise: (1) all counts satisfy the relative error condition and we should have tried more counts because some privacy loss budget remains, or (2) the procedure stopped before some results had a target relative error, so we should have selected fewer queries. In either case, the number of queries becomes a parameter that the data analyst would need to select in advance, which would be difficult to do a priori. In our setting, no such parameter arises. Furthermore, they add up the privacy loss parameters for each count to see if it is below a target privacy loss bound at each step (based on the ℓ1-general sensitivity); however, we show that adding up the privacy loss parameters can be significantly improved on. Other works have modified the definition of differential privacy to accommodate relative error, e.g. Wang et al. [2022].

## 2 Preliminaries

We start with some basic definitions from differential privacy, beginning with the standard definition from Dwork et al. [2006b,a].
We first need to define what we mean by neighboring datasets, which can mean adding or removing one record from a dataset or changing an entry in a dataset. We will leave the neighboring relation arbitrary, and write $x, x' \in \mathcal{X}$ to be neighbors as $x \sim x'$.

Definition 2.1. An algorithm $A : \mathcal{X} \to \mathcal{Y}$ is $(\varepsilon, \delta)$-differentially private if, for any measurable set $E \subseteq \mathcal{Y}$ and any neighboring inputs $x \sim x'$,
$$\Pr[A(x) \in E] \le e^{\varepsilon} \Pr[A(x') \in E] + \delta.$$
If $\delta = 0$, we say $A$ is $\varepsilon$-DP or simply pure DP.

We define a statistic's $\ell_p$-sensitivity as $\Delta_p(f) := \max_{x, x' : x \sim x'} \|f(x) - f(x')\|_p$.

A central actor in our analysis will be the privacy loss of an algorithm, which is a random variable that depends on the outcomes of the algorithm under a particular dataset.

Definition 2.2 (Privacy Loss). Let $A : \mathcal{X} \to \mathcal{Y}$ be an algorithm and fix neighbors $x \sim x'$ in $\mathcal{X}$. Let $p_x$ and $p_{x'}$ be the respective densities of $A(x)$ and $A(x')$ on the space $\mathcal{Y}$ with respect to some reference measure. Then, the privacy loss between $A(x)$ and $A(x')$ evaluated at a point $y \in \mathcal{Y}$ is
$$\mathcal{L}_A(y; x, x') := \log \frac{p_x(y)}{p_{x'}(y)}.$$
Further, we refer to the privacy loss random variable as $\mathcal{L}_A(x, x') := \mathcal{L}_A(A(x); x, x')$. When the algorithm $A$ and the neighboring datasets are clear from context, we drop them from the privacy loss, i.e. $\mathcal{L}(y) = \mathcal{L}_A(y; x, x')$ and $\mathcal{L} = \mathcal{L}_A(x, x')$.

Many existing privacy composition analyses leverage a variant of DP called (approximate) zero-concentrated DP (zCDP), introduced by Bun and Steinke [2016]. Recall that the Rényi divergence of order $\lambda \ge 1$ between two distributions $P$ and $Q$ on the same domain is written as follows, where $p(\cdot)$ and $q(\cdot)$ are the respective probability mass/density functions:
$$D_\lambda(P \,\|\, Q) := \frac{1}{\lambda - 1} \log \mathbb{E}_{y \sim Q}\left[\left(\frac{p(y)}{q(y)}\right)^{\lambda}\right].$$

Since we study fully adaptive privacy composition (where privacy parameters can be chosen adaptively), we will use the following conditional extension of approximate zCDP, in which the zCDP parameters of a mechanism $A$ can depend on prior outcomes.

Definition 2.3 (Approximate zCDP [Whitehouse et al., 2023]). Suppose $A : \mathcal{X} \times \mathcal{Z} \to \mathcal{Y}$ with outputs in a measurable space $(\mathcal{Y}, \mathcal{G})$. Suppose $\delta, \rho : \mathcal{Z} \to \mathbb{R}_{\ge 0}$. We say the algorithm $A$ satisfies conditional $\delta(z)$-approximate $\rho(z)$-zCDP if, for all $z \in \mathcal{Z}$ and any neighboring datasets $x \sim x'$, there exist probability transition kernels $P', P'', Q', Q'' : \mathcal{Z} \times \mathcal{G} \to [0, 1]$ such that the conditional outputs are distributed according to the following mixture distributions:
$$A(x; z) \sim (1 - \delta(z)) P'(\cdot \mid z) + \delta(z) P''(\cdot \mid z), \qquad A(x'; z) \sim (1 - \delta(z)) Q'(\cdot \mid z) + \delta(z) Q''(\cdot \mid z),$$
where for all $\lambda \ge 1$, $D_\lambda(P'(\cdot \mid z) \,\|\, Q'(\cdot \mid z)) \le \rho(z)\lambda$ and $D_\lambda(Q'(\cdot \mid z) \,\|\, P'(\cdot \mid z)) \le \rho(z)\lambda$ for all $z \in \mathcal{Z}$.

The following results establish the relationship between zCDP and DP and the composition of zCDP.

Lemma 2.1 (Bun and Steinke [2016]). If $M : \mathcal{X} \to \mathcal{Y}$ is $(\varepsilon, \delta)$-DP, then $M$ is $\delta$-approximate $\varepsilon^2/2$-zCDP. If $M$ is $\delta$-approximate $\rho$-zCDP, then $M$ is $(\rho + 2\sqrt{\rho \log(1/\delta')},\, \delta + \delta')$-DP for any $\delta' > 0$.

Lemma 2.2 (Bun and Steinke [2016]). If $M_1 : \mathcal{X} \to \mathcal{Y}$ is $\delta_1$-approximate $\rho_1$-zCDP and $M_2 : \mathcal{X} \times \mathcal{Y} \to \mathcal{Y}'$ is $\delta_2$-approximate $\rho_2$-zCDP in its first coordinate, then $M : \mathcal{X} \to \mathcal{Y} \times \mathcal{Y}'$ where $M(x) = (M_1(x), M_2(x, M_1(x)))$ is $(\delta_1 + \delta_2)$-approximate $(\rho_1 + \rho_2)$-zCDP.
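As a quick illustration of how Lemmas 2.1 and 2.2 are used in our accounting, the following sketch (our own helper functions, assuming only the lemma statements above; the names are hypothetical) composes approximate zCDP parameters and converts the result into an $(\varepsilon, \delta)$-DP guarantee.

```python
import math

def compose_zcdp(params):
    """Lemma 2.2: approximate zCDP parameters (rho_i, delta_i) add up under
    adaptive composition."""
    rho = sum(r for r, _ in params)
    delta = sum(d for _, d in params)
    return rho, delta

def zcdp_to_dp(rho, delta, delta_prime):
    """Lemma 2.1: a delta-approximate rho-zCDP mechanism is
    (rho + 2*sqrt(rho*log(1/delta_prime)), delta + delta_prime)-DP."""
    eps = rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta_prime))
    return eps, delta + delta_prime

# Example: two Gaussian-style releases with rho = eps^2 / 2 each.
rho_total, delta_total = compose_zcdp([(0.005, 0.0), (0.02, 0.0)])
print(zcdp_to_dp(rho_total, delta_total, delta_prime=1e-6))
```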
## 3 Privacy Filters

In order to reason about the overall privacy loss of an interaction with a sensitive dataset, we will use the framework of privacy filters, introduced in Rogers et al. [2016]. Privacy filters allow an analyst to adaptively select privacy parameters as a function of previous outcomes until some stopping time that determines whether a target privacy budget has been exhausted.

To denote that privacy parameters may depend on previous outcomes, we will write $\rho_n(x)$ to mean the privacy parameter selected at round $n$ that could depend on the previous outcomes $A_{1:n-1}(x)$, and similarly for $\delta_n(x)$. We now state the definition of privacy filters in the context of approximate zCDP mechanisms.

Definition 3.1 (Privacy Filter). Let $(A_n : \mathcal{X} \to \mathcal{Y})_{n \ge 1}$ be an adaptive sequence of algorithms such that, for all $n \ge 1$, $A_n(\cdot; y_{1:n-1})$ is $\delta_n(y_{1:n-1})$-approximate $\rho_n(y_{1:n-1})$-zCDP for all $y_{1:n-1} \in \mathcal{Y}^{n-1}$. Let $\varepsilon > 0$ and $\delta \ge 0$ be fixed privacy parameters. Then, a function $N : \mathcal{Y}^\infty \to \mathbb{N}$ is an $(\varepsilon, \delta)$-privacy filter if

1. for all $(y_1, y_2, \dots) \in \mathcal{Y}^\infty$, $N(y_1, y_2, \dots)$ is a stopping time with respect to the natural filtration generated by $(A_n(x))_{n \ge 1}$, and
2. $A_{1:N(\cdot)}(\cdot)$ is $(\varepsilon, \delta)$-differentially private, where $N(x) = N(A_1(x), A_2(x), \dots)$.

Whitehouse et al. [2023] showed that we can use composition bounds from traditional differential privacy (which required setting privacy parameters for the entire interaction prior to any interaction) in the more adaptive setting, where privacy parameters can be set before each query.

Theorem 1 (Whitehouse et al. [2023]). Suppose $(A_n : \mathcal{X} \to \mathcal{Y})_{n \ge 1}$ is a sequence of algorithms such that, for any $n \ge 1$, $A_n(\cdot; y_{1:n-1})$ is $\delta_n(y_{1:n-1})$-approximate $\rho_n(y_{1:n-1})$-zCDP for all $y_{1:n-1}$. Let $\varepsilon > 0$ and $\delta \ge 0$, $\delta' > 0$ be fixed privacy parameters. Consider the function $N : \mathcal{Y}^\infty \to \mathbb{N}$ given by
$$N(y_1, y_2, \dots) := \inf\left\{ n : \varepsilon < \sqrt{2 \log(1/\delta') \sum_{m \le n+1} \rho_m(y_{1:m-1})} + \sum_{m \le n+1} \rho_m(y_{1:m-1}) \ \text{ or } \ \delta < \sum_{m \le n+1} \delta_m(y_{1:m-1}) \right\}.$$
Then, the algorithm $A_{1:N(\cdot)}(\cdot)$ is $(\varepsilon, \delta + \delta')$-DP, where $N(x) := N((A_n(x))_{n \ge 1})$. In other words, $N$ is an $(\varepsilon, \delta + \delta')$-privacy filter.
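The stopping rule in Theorem 1 is straightforward to operationalize: before running the next mechanism, check whether its requested zCDP parameter still fits under the $(\varepsilon, \delta')$ budget. The sketch below is our own illustration of that check; the function and variable names are hypothetical.

```python
import math

def zcdp_filter_continues(rho_spent, rho_next, eps, delta_prime):
    """Theorem 1 (zCDP part): continue only if, after adding the requested
    parameter rho_next, we still have
        sqrt(2 * log(1/delta_prime) * sum_rho) + sum_rho <= eps."""
    total = rho_spent + rho_next
    return math.sqrt(2.0 * math.log(1.0 / delta_prime) * total) + total <= eps

# Adaptively request parameters until the filter halts.
eps, delta_prime, spent = 1.0, 1e-6, 0.0
for rho_request in [0.01, 0.02, 0.005, 0.02]:
    if not zcdp_filter_continues(spent, rho_request, eps, delta_prime):
        break  # budget would be exceeded; the requested mechanism is not run
    spent += rho_request  # run the mechanism and account for its parameter
```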
## 4 Ex-post Private Mechanisms

Although privacy filters allow a lot of flexibility to a data analyst in how they interact with a sensitive dataset while still guaranteeing a fixed privacy budget, there are some algorithms that ensure a bound on privacy that is adapted to the dataset. Ex-post private mechanisms define privacy loss as a probabilistic bound which can depend on the algorithm's outcomes, so some outcomes might contain more information about an individual than others [Ligett et al., 2017, Whitehouse et al., 2022]. Note that ex-post private mechanisms do not have any fixed a priori bound on the privacy loss, so by default they cannot be composed in a similar way to differentially private mechanisms.

Definition 4.1. Let $A : \mathcal{X} \times \mathcal{Y} \to \mathcal{Z}$ be an algorithm and $E : \mathcal{Z} \times \mathcal{Y} \to \mathbb{R}_{\ge 0}$ a function. We say $A(\cdot; y)$ is $(E(\cdot; y), \delta(y))$-ex-post private for all $y \in \mathcal{Y}$ if, for any neighboring inputs $x \sim x'$, we have
$$\Pr\left[\mathcal{L}(A(x; y)) > E(A(x; y); y)\right] \le \delta(y) \quad \text{for all } y \in \mathcal{Y}.$$

We next define a single noise reduction mechanism, which will interactively apply sub-mechanisms and stop at some time $k$, which can be random. Each iterate will use a privacy parameter from an increasing sequence of privacy parameters $(\varepsilon^{(n)} : n \ge 1)$ set in advance, and the overall privacy will only depend on the last privacy parameter $\varepsilon^{(k)}$, despite releasing noisy values with parameters $\varepsilon^{(i)}$ for $i \le k$. Noise reduction algorithms will allow us to form ex-post private mechanisms because the privacy loss will only depend on the final outcome. We will write $M : \mathcal{X} \to \mathcal{Y}^\infty$ to be any algorithm mapping databases to sequences of outputs in $\mathcal{Y}$, with intermediate mechanisms written as $M^{(k)} : \mathcal{X} \to \mathcal{Y}$ for the $k$-th element of the sequence and $M^{(1:k)} : \mathcal{X} \to \mathcal{Y}^k$ for the first $k$ elements. Let $\mu$ be a probability measure on $\mathcal{Y}$, for each $k \ge 1$ let $\mu_k$ be a probability measure on $\mathcal{Y}^k$, and let $\mu_\infty$ be a probability measure on $\mathcal{Y}^\infty$.

We assume that the law of $M^{(k)}(x)$ on $\mathcal{Y}$ is equivalent to $\mu$, the law of $M(x)$ on $\mathcal{Y}^\infty$ is equivalent to $\mu_\infty$, and the law of $M^{(1:k)}(x)$ on $\mathcal{Y}^k$ is equivalent to $\mu_k$, for every $k$ and every $x$. Furthermore, we will write $\mathcal{L}$ for the privacy loss of $M$, $\mathcal{L}^{(k)}$ for the privacy loss of $M^{(k)}$, and $\mathcal{L}^{(1:k)}$ for the privacy loss of the sequence of mechanisms $M^{(1:k)}$. We then define noise reduction mechanisms more formally.

Definition 4.2 (Noise Reduction Mechanism). Let $M : \mathcal{X} \to \mathcal{Y}^\infty$ be a mechanism mapping databases to sequences of outcomes and let $x, x'$ be any neighboring datasets. We say $M$ is a noise reduction mechanism if for any $k \ge 1$, $\mathcal{L}^{(1:k)} = \mathcal{L}^{(k)}$.

We will assume there is a fixed grid of time values $t^{(1)} > t^{(2)} > \cdots > t^{(k)} > 0$. We will typically think of the time values as being inversely proportional to the noise we add, i.e. $t^{(i)} = 1/(\varepsilon^{(i)})^2$ where $\varepsilon^{(1)} < \varepsilon^{(2)} < \cdots$. An analyst will not have a particular stopping time set in advance and will instead want to stop interacting with the dataset as a function of the noisy answers that have been released. It might also be the case that the analyst wants to stop based on the outcome and the privatized dataset, but in this work we consider stopping times that can only depend on the noisy outcomes or possibly some public information, not the underlying dataset.

Definition 4.3 (Stopping Function). Let $A : \mathcal{X} \to \mathcal{Y}^\infty$ be a noise reduction mechanism. For $x \in \mathcal{X}$, let $(\mathcal{F}^{(k)}(x))_{k \in \mathbb{N}}$ be the filtration given by $\mathcal{F}^{(k)}(x) := \sigma(A^{(i)}(x) : i \le k)$. A function $T : \mathcal{Y}^\infty \to \mathbb{N}$ is called a stopping function if for any $x \in \mathcal{X}$, $T(x) := T(A(x))$ is a stopping time with respect to $(\mathcal{F}^{(k)}(x))_{k \ge 1}$. Note that this property does not depend on the choice of measures $\mu$, $\mu_\infty$, and $\mu_k$.

We now recall the noise reduction mechanism with Brownian noise [Whitehouse et al., 2022].

Definition 4.4 (Brownian Noise Reduction). Let $f : \mathcal{X} \to \mathbb{R}^d$ be a function and $(t^{(k)})_{k \ge 1}$ be a sequence of time values. Let $(B(t))_{t \ge 0}$ be a standard $d$-dimensional Brownian motion and $T : (\mathbb{R}^d)^\infty \to \mathbb{N}$ be a stopping function. The Brownian mechanism associated with $f$, time values $(t^{(k)})_{k \ge 1}$, and stopping function $T$ is the algorithm $BM : \mathcal{X} \to ((0, t^{(1)}] \times \mathbb{R}^d)^\infty$ given by
$$BM(x) := \left( t^{(k)},\; f(x) + B(t^{(k)}) \right)_{k \ge 1}.$$

We then have the following result.

Lemma 4.1 (Whitehouse et al. [2022]). The Brownian Noise Reduction mechanism $BM$ is a noise reduction algorithm for a constant stopping function $T(x) = k$. Furthermore, for any stopping function $T : \mathcal{Y}^\infty \to \mathbb{N}$, the noise reduction property still holds, i.e. $\mathcal{L}^{(1:T(x))} = \mathcal{L}^{(T(x))}$.

Another noise reduction mechanism uses Laplace noise from Koufogiannis et al. [2017], Ligett et al. [2017], which we consider in Appendix C.
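For intuition, the Brownian noise reduction can be sampled lazily: the first release uses the largest time value $t^{(1)}$ (most noise), and each subsequent release at a smaller time $t^{(k+1)}$ is drawn from the conditional law of the Brownian motion at $t^{(k+1)}$ given its value at $t^{(k)}$ and $B(0) = 0$, i.e. a Brownian bridge. The sketch below is our own one-dimensional illustration based on that standard bridge formula, not the authors' implementation.

```python
import numpy as np

def brownian_noise_reduction(f_x, times, rng=None):
    """Yield (t, f_x + B(t)) for decreasing time values t(1) > t(2) > ...,
    sharing a single Brownian path so that later answers refine earlier ones."""
    rng = rng or np.random.default_rng()
    assert all(a > b for a, b in zip(times, times[1:])), "time values must decrease"
    t_prev, b_prev = None, 0.0
    for t in times:
        if t_prev is None:
            b = rng.normal(0.0, np.sqrt(t))        # B(t(1)) ~ N(0, t(1))
        else:
            mean = (t / t_prev) * b_prev           # Brownian bridge between 0 and B(t_prev)
            var = t * (t_prev - t) / t_prev
            b = rng.normal(mean, np.sqrt(var))
        yield t, f_x + b
        t_prev, b_prev = t, b

# Example stopping rule: stop once the noise standard deviation sqrt(t) drops below 0.5.
for t, answer in brownian_noise_reduction(42.0, [4.0, 1.0, 0.25, 0.0625]):
    if np.sqrt(t) < 0.5:
        break
```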
## 5 General Composition of Ex-post Private Mechanisms

We consider combining zCDP mechanisms with mechanisms that satisfy ex-post privacy. We consider a sequence of mechanisms $(A_n : \mathcal{X} \to \mathcal{Y})_{n \ge 1}$ where each mechanism may depend on the previous outcomes. At each round, an analyst will use either an ex-post private mechanism or an approximate zCDP mechanism; in either case the privacy parameters may depend on the previous results as well. We will write $\delta_n(x) := \delta_n(A_1(x), \dots, A_{n-1}(x))$, $\rho_n(x) := \rho_n(A_1(x), \dots, A_{n-1}(x))$, and $E_n(A_n(x); x) := E_n(A_n(x); A_1(x), \dots, A_{n-1}(x))$.

Definition 5.1 (Approximate zCDP and Ex-post Private Sequence). Consider a sequence of mechanisms $(A_n)_{n \ge 1}$, where $A_n : \mathcal{X} \to \mathcal{Y}$. The sequence $(A_n)_{n \ge 1}$ is called a sequence of approximate zCDP and ex-post private mechanisms if for each round $n$, either the analyst selects $A_n(\cdot)$ to be $\delta_n(\cdot)$-approximate $\rho_n(\cdot)$-zCDP given previous outcomes $A_i(\cdot)$ for $i < n$, or the analyst selects $A_n(\cdot)$ to be $(E_n(A_n(\cdot); \cdot), \delta'_n(\cdot))$-ex-post private conditioned on $A_i(\cdot)$ for $i < n$. In rounds where zCDP is selected, we will simply write $E_i(A_i(\cdot); \cdot) \equiv 0$, while in rounds where an ex-post private mechanism is selected, we will set $\rho_i(\cdot) = 0$.

We now state a composition result that allows an analyst to combine ex-post private and zCDP mechanisms adaptively, while still ensuring a target level of privacy. Because we have two different interactive systems that are differentially private, one that uses only zCDP mechanisms and the other that uses only ex-post private mechanisms, we can use concurrent composition; see Appendix A for more details.

Theorem 2. Let $\varepsilon, \varepsilon', \delta, \delta', \delta'' > 0$. Let $(A_n)_{n \ge 1}$ be a sequence of approximate zCDP and ex-post private mechanisms. We require the ex-post private mechanisms selected at each round $n$ to have ex-post privacy functions $E_n$ that do not exceed the remaining budget from $\varepsilon'$. Consider the function $N : \mathcal{Y}^\infty \to \mathbb{N}$ given, for any sequence of outcomes $(y_n)_{n \ge 1}$, by
$$N((y_n)_{n \ge 1}) = \min\left\{ N_{\mathrm{zCDP}}((y_n)_{n \ge 1}),\; N_{\mathrm{post}}((y_n)_{n \ge 1}) \right\},$$
where $N_{\mathrm{zCDP}}((y_n)_{n \ge 1})$ is the stopping rule given in Theorem 1 with privacy parameters $\varepsilon, \delta, \delta''$, and $N_{\mathrm{post}}((y_n)_{n \ge 1})$ is the stopping rule where the sum of realized $E_n(y_n; y_1, \dots, y_{n-1})$ values cannot go above $\varepsilon'$ nor can the sum of $\delta'_n(y_1, \dots, y_{n-1})$ go above $\delta'$ (see Lemma B.2 with privacy parameters $\varepsilon', \delta'$). Then, the algorithm $A_{1:N(\cdot)}(\cdot)$ is $(\varepsilon + \varepsilon', \delta + \delta' + \delta'')$-DP, where $N(x) = N((A_n(x))_{n \ge 1})$.

Although we are able to leverage advanced composition bounds from traditional differential privacy for the mechanisms that are approximate zCDP, we are simply adding up the ex-post privacy guarantees, which seems wasteful. Next, we consider how we can improve on this composition bound for certain ex-post private mechanisms.

## 6 Brownian Noise Reduction Mechanisms

We now consider composing specific types of ex-post private mechanisms, specifically the Brownian Noise Reduction mechanism. From Theorem 3.4 in Whitehouse et al. [2022], we can decompose the privacy loss as an uncentered Brownian motion, even when the stopping time is adaptively selected.

Theorem 3 (Whitehouse et al. [2022]). Let $BM$ be the Brownian noise reduction mechanism associated with time values $(t^{(k)})_{k \ge 1}$ and a function $f$. All reference measures generated by the mechanism are those generated by the Brownian motion without shift (starting at $f(x) = 0$). For neighbors $x \sim x'$ and stopping function $T$, the privacy loss between $BM^{(1:T(x))}(x)$ and $BM^{(1:T(x'))}(x')$ is given by
$$\mathcal{L}^{(1:T(x))}_{BM}(x, x') = \frac{\|f(x) - f(x')\|_2^2}{2\, t^{(T(x))}} + \frac{\|f(x) - f(x')\|_2}{t^{(T(x))}} \left\langle \frac{f(x) - f(x')}{\|f(x) - f(x')\|_2},\; B\big(t^{(T(x))}\big) \right\rangle.$$

This decomposition of the privacy loss will be very useful in analyzing the overall privacy loss of a combination of Brownian noise reduction mechanisms. In Appendix C we provide a more general analysis than in Whitehouse et al. [2022] to show that both the Laplace noise reduction from Ligett et al. [2017] and the Brownian noise reduction from Whitehouse et al. [2022] are indeed noise reduction mechanisms, and we give the form of the privacy loss at an adaptive stopping time.
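Theorem 3 can be sanity-checked directly: for a release $y = f(x) + B(t)$, the log-density ratio of $N(f(x), tI)$ to $N(f(x'), tI)$ at $y$ equals the expression above. The snippet below is our own check of this algebra in one dimension, with arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 0.5                               # stopped time value t^(T(x))
f_x, f_xp = 3.0, 4.0                  # f(x) and f(x') for neighbors with sensitivity 1
b = rng.normal(0.0, np.sqrt(t))       # realized Brownian noise B(t)
y = f_x + b                           # released value under input x

# Privacy loss as the log-density ratio of N(f_x, t) to N(f_xp, t) at y ...
loss_direct = ((y - f_xp) ** 2 - (y - f_x) ** 2) / (2.0 * t)
# ... equals the decomposition in Theorem 3: ||D||^2/(2t) + <D, B(t)>/t with D = f(x) - f(x').
loss_decomposed = (f_x - f_xp) ** 2 / (2.0 * t) + (f_x - f_xp) * b / t
assert np.isclose(loss_direct, loss_decomposed)
```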
### 6.1 Backward Brownian Motion Martingale

We now present a key result for our analysis of composing Brownian noise reduction mechanisms. Although in Whitehouse et al. [2022] the ex-post privacy proof of the Brownian mechanism applied Ville's inequality [for a proof, cf. Howard et al., 2020, Lemma 1] to the (unscaled) standard Brownian motion $(B(t))_{t > 0}$, it turns out that the scaled standard Brownian motion $(B(t)/t)$ forms a backward martingale [cf. Revuz and Yor, 2013, Exercise 2.16], and this fact is crucial to our analysis.

Lemma 6.1 (Backward martingale). Let $(B(t))$ be a standard Brownian motion. Define the reverse filtration $\mathcal{G}^{(t)} = \sigma(B(u); u \ge t)$, meaning that $\mathcal{G}^{(s)} \supseteq \mathcal{G}^{(t)}$ if $s < t$. For every real $\lambda$, the process
$$M^{(t)} := \exp\left(\lambda B(t)/t - \lambda^2/(2t)\right), \quad t > 0,$$
is a nonnegative (reverse) martingale with respect to the filtration $\mathcal{G} = (\mathcal{G}^{(t)})$. Further, at any $t > 0$, $\mathbb{E}[M^{(t)}] = 1$, $M^{(\infty)} = 1$ almost surely, and $\mathbb{E}[M^{(\tau)}] \le 1$ for any stopping time $\tau$ with respect to $\mathcal{G}$ (equality holds with some restrictions). In short, $M^{(\tau)}$ is an e-value for any stopping time $\tau$; an e-value is a nonnegative random variable with expectation at most one.

Let $B_1 = (B_1^{(t)})_{t \ge 0}, B_2 = (B_2^{(t)})_{t \ge 0}, \dots$ be independent standard Brownian motions, with corresponding backward martingales $M_1 = (M_1^{(t)})_{t \ge 0}, M_2 = (M_2^{(t)})_{t \ge 0}, \dots$ and (internal to each Brownian motion) filtrations $\mathcal{G}_1 = (\mathcal{G}_1^{(t)})_{t \ge 0}, \mathcal{G}_2 = (\mathcal{G}_2^{(t)})_{t \ge 0}, \dots$ as defined in the previous lemma. Select time values $(t_1^{(k)})_{k \ge 1}$. Let a Brownian noise reduction mechanism $BM_1$ be run using $B_1$ and stopped at $\tau_1$. Then $\mathbb{E}[M_1^{(\tau_1)}] \le 1$ as per the previous lemma. Based on outputs from $BM_1$, we choose time values $(t_2^{(k)})_{k \ge 1}$ measurably with respect to $\sigma(B_1^{(t)}; t \ge \tau_1) =: \mathcal{F}_1$. Now run the second Brownian noise reduction $BM_2$ using $B_2$, stopping at time $\tau_2$. Since $B_1, B_2$ are independent, we still have that $\mathbb{E}[M_2^{(\tau_2)} \mid \mathcal{F}_1] \le 1$. Let $\mathcal{F}_2 := \sigma((B_1^{(t)})_{t \ge \tau_1}, (B_2^{(t)})_{t \ge \tau_2})$ be the updated filtration, based on which we choose time values $(t_3^{(k)})_{k \ge 1}$. Because $B_3$ is independent of the earlier two, at the next step we still have $\mathbb{E}[M_3^{(\tau_3)} \mid \mathcal{F}_2] \le 1$. Proceeding in this fashion, it is clear that the product of the stopped e-values,
$$E_m := \prod_{s=1}^{m} M_s^{(\tau_s)} = \exp\left( \lambda \sum_{s=1}^{m} \frac{B_s^{(\tau_s)}}{\tau_s} - \frac{\lambda^2}{2} \sum_{s=1}^{m} \frac{1}{\tau_s} \right),$$
is itself a (forward) nonnegative supermartingale with respect to the filtration $\mathcal{F} = (\mathcal{F}_n)_{n \ge 1}$, with initial value $E_0 := 1$. Applying the Gaussian mixture method [cf. Howard et al., 2021, Proposition 5], we get that for any $\gamma, \delta > 0$, with
$$\psi(t; \gamma, \delta) := \frac{t}{2} + \sqrt{(t + \gamma)\log\left(\frac{t + \gamma}{\gamma \delta^2}\right)},$$
$$\Pr\left[\exists\, m \ge 1 : \sum_{s \in [m]} \left( \frac{1}{2\tau_s} + \frac{B_s^{(\tau_s)}}{\tau_s} \right) \ge \psi\left( \sum_{s \in [m]} 1/\tau_s;\ \gamma, \delta \right)\right] \le \delta.$$
This then provides an alternate way to prove Theorem 3.6 in Whitehouse et al. [2022].

Theorem 4. Let $(T_i)_{i \ge 1}$ be a sequence of stopping functions, as in Definition 4.3, and consider a sequence of time values $(t_i^{(j)} : j \in [k_i])_{i \ge 1}$. Let $BM_i$ denote a Brownian noise reduction with statistic $f_i$ that can be adaptively selected based on outcomes of previous Brownian noise reductions, where $f_i$ has $\ell_2$-sensitivity 1. We then have, for any $\gamma, \delta > 0$, with probability at least $1 - \delta$ simultaneously for all $m \ge 1$,
$$\sum_{i=1}^{m} \mathcal{L}^{(T_i(x))}_{BM_i} \le \psi\left( \sum_{i=1}^{m} 1/t_i^{(T_i(x))};\ \gamma, \delta \right).$$
In other words, $\big(BM_i^{(1:T_i(\cdot))}\big)_{i \ge 1}$ is $\left( \psi\big( \sum_{i \ge 1} 1/t_i^{(T_i(\cdot))};\ \gamma, \delta \big),\ \delta \right)$-ex-post private.
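The key property behind Lemma 6.1 and the product construction above is that each stopped factor is an e-value. The following Monte Carlo snippet (our own illustration, not part of the paper's experiments) checks the fixed-time version of this property, $\mathbb{E}[\exp(\lambda B(t)/t - \lambda^2/(2t))] = 1$, which holds because $B(t)/t \sim N(0, 1/t)$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n = 0.5, 1.0, 1_000_000
b_t = rng.normal(0.0, np.sqrt(t), size=n)           # B(t) ~ N(0, t)
m_t = np.exp(lam * b_t / t - lam ** 2 / (2.0 * t))   # backward martingale evaluated at time t
print(m_t.mean())  # ~1.0: the e-value property used to build the forward supermartingale E_m
```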
### 6.2 Privacy Filters with Brownian Noise Reduction Mechanisms

Given Lemma 6.1 and the decomposition of the privacy loss for the Brownian mechanism given in Theorem 3, we will be able to get tighter composition bounds for multiple Brownian noise reduction mechanisms rather than resorting to the general ex-post privacy composition (see Lemma B.2 in the appendix). It will be important to only use time values for the Brownian noise reduction mechanisms that cannot exceed the remaining privacy loss budget.

We then make the following condition on the time values $(t_n^{(j)})_{j=1}^{k_n}$ that are used for each Brownian noise reduction, given prior outcomes $y_1, \dots, y_{n-1}$ from the earlier Brownian noise reductions with time values $(t_i^{(j)})_{j=1}^{k_i}$ and stopping functions $T_i$ for $i < n$, and overall budget $\rho > 0$:
$$\frac{1}{2 t_n^{(k_n)}} \le \rho - \sum_{i < n} \frac{1}{2 t_i^{(T_i(y_{1:i}))}}. \tag{2}$$

Lemma 6.2. Let $\rho > 0$ and consider a sequence $(BM_n)_{n \ge 1}$, each with statistic $f_n : \mathcal{X} \to \mathbb{R}^{d_n}$ with $\ell_2$-sensitivity 1, stopping function $T_n$, and time values $(t_n^{(j)})_{j=1}^{k_n}$, which can be adaptively selected and satisfy (2). Consider the function $N : \mathcal{Y}^\infty \to \mathbb{N} \cup \{\infty\}$, where $\mathcal{Y}^\infty$ contains all possible outcome streams from $(BM_n)_{n \ge 1}$, given for any sequence of outcomes $(y_n)_{n \ge 1}$ by
$$N((y_n)_{n \ge 1}) = \inf\left\{ n : \rho = \sum_{i \in [n+1]} \frac{1}{2 t_i^{(T_i(y_{1:i}))}} \right\}.$$
Then, for $\delta > 0$, $BM_{1:N(\cdot)}(\cdot)$ is $(\rho + 2\sqrt{\rho \log(1/\delta)},\ \delta)$-DP, where $N(x) = N((BM_n(x))_{n \ge 1})$.

One approach to defining a privacy filter for both approximate zCDP and Brownian noise reduction mechanisms would be to use concurrent composition, as we did in Lemma B.2 in the appendix. However, this would require us to set separate privacy budgets for approximate zCDP mechanisms and Brownian noise reduction, which is an extra (nuisance) parameter to set. We now show how we can combine Brownian noise reduction and approximate zCDP mechanisms with a single privacy budget. We will need a condition on the time values selected at each round for the Brownian noise reduction mechanisms similar to (2). Note that at each round either an approximate zCDP or Brownian noise reduction mechanism will be selected. Given prior outcomes $y_1, \dots, y_{n-1}$ and previously selected zCDP parameters $\rho_1, \rho_2(y_1), \dots, \rho_n(y_{1:n-1})$ -- noting that at a round $n$ where a $BM$ is selected we have $\rho_n(y_{1:n-1}) = 0$, while if a zCDP mechanism is selected we simply set $k_n = 1$ and $1/t_n^{(1)} = 0$ -- we have the following condition on $\rho_n(y_{1:n-1})$ and the time values $(t_n^{(j)})_{j=1}^{k_n}$ if we select a $BM$ at round $n$ with overall budget $\rho > 0$:
$$\frac{1}{2 t_n^{(k_n)}} + \rho_n(y_{1:n-1}) \le \rho - \sum_{i < n} \left( \rho_i(y_{1:i-1}) + \frac{1}{2 t_i^{(T_i(y_{1:i}))}} \right). \tag{3}$$

Theorem 5. Let $\rho > 0$ and $\delta \ge 0$. Let $(A_n)_{n \ge 1}$ be a sequence of approximate zCDP and ex-post private mechanisms where each ex-post private mechanism at round $n$ is a Brownian Mechanism with an adaptively chosen stopping function $T_n$, a statistic $f_n$ with $\ell_2$-sensitivity equal to 1, and time values $(t_n^{(j)})_{j=1}^{k_n}$ that satisfy the condition in (3). Consider the function $N : \mathcal{Y}^\infty \to \mathbb{N}$ given, for any sequence of outcomes $(y_n)_{n \ge 1}$, by
$$N((y_n)_{n \ge 1}) = \inf\left\{ n : \delta < \sum_{i \in [n+1]} \delta_i(y_{1:i-1}) \ \text{ or } \ \rho = \sum_{i \in [n+1]} \left( \rho_i(y_{1:i-1}) + \frac{1}{2 t_i^{(T_i(y_{1:i}))}} \right) \right\}.$$
Then for $\delta' > 0$, the algorithm $A_{1:N(\cdot)}(\cdot)$ is $(\rho + 2\sqrt{\rho \log(1/\delta')},\ \delta + \delta')$-DP, where the stopping function is $N(x) = N((A_n(x))_{n \ge 1})$.
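In practice, condition (3) is simple budget bookkeeping: before launching the next mechanism we compute how much of the zCDP budget $\rho$ remains and require that the worst-case cost of the next choice ($\rho_n$ for a zCDP mechanism, or $1/(2 t_n^{(k_n)})$ for the smallest planned time value of a Brownian noise reduction) still fits. A minimal sketch of that accounting, with function names of our own choosing:

```python
def remaining_budget(rho_total, zcdp_spent, stopped_times):
    """Budget left under condition (3): rho minus all realized costs so far,
    where a Brownian noise reduction stopped at time value t costs 1/(2t)."""
    return rho_total - sum(zcdp_spent) - sum(1.0 / (2.0 * t) for t in stopped_times)

def can_run_brownian(smallest_time, rho_total, zcdp_spent, stopped_times):
    """A Brownian noise reduction may be launched only if its smallest (most
    accurate) planned time value satisfies 1/(2t) <= remaining budget."""
    return 1.0 / (2.0 * smallest_time) <= remaining_budget(rho_total, zcdp_spent, stopped_times)

def can_run_zcdp(rho_next, rho_total, zcdp_spent, stopped_times):
    """A zCDP mechanism may be launched only if rho_next fits in what remains."""
    return rho_next <= remaining_budget(rho_total, zcdp_spent, stopped_times)
```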
## 7 Application: Bounding Relative Error

Our motivating application is returning as many counts as possible subject to each count being within α% relative error, i.e. if $y$ is the true count and $\hat{y}$ is the noisy count, we require $\big|\, |\hat{y}/y| - 1 \,\big| < \alpha$. It is common for practitioners to be able to tolerate some relative error in some statistics and to prefer not to be shown counts that are outside a certain accuracy. Typically, DP requires adding noise with a predetermined standard deviation to counts, but it would be advantageous to be able to add large noise to large counts so that more counts could be returned subject to an overall privacy budget.

### 7.1 Doubling Method

A simple baseline approach would be to use the "doubling method," as presented in Ligett et al. [2017]. This approach uses differentially private mechanisms and checks whether each outcome satisfies some condition, in which case we stop; otherwise the analyst continues with a larger privacy loss parameter. The downside of this approach is that the analyst needs to pay for the accrued privacy loss of all the rejected values. However, handling composition in this case turns out to be straightforward given Theorem 1, due to Whitehouse et al. [2023]. We then compare the "doubling method" against using Brownian noise reduction and applying Theorem 5.

We now present the doubling method formally. We take privacy loss parameters $\varepsilon^{(1)} < \varepsilon^{(2)} < \cdots$, where $\varepsilon^{(i+1)} = \sqrt{2}\, \varepsilon^{(i)}$. Similar to the argument in Claim B.1 in Ligett et al. [2017], we use the $\sqrt{2}$ factor because the privacy loss depends on the sum of squares of the privacy loss parameters, i.e. $\sum_{i=1}^{m} (\varepsilon^{(i)})^2$ up to some iterate $m$, in Theorem 1, as $\rho_i = (\varepsilon^{(i)})^2/2$ is the zCDP parameter. This means that if $\varepsilon^*$ is the privacy loss parameter at which the algorithm would ideally have halted, then we might overshoot it by a factor of $\sqrt{2}$. Further, the overall sum of squared privacy losses will be no more than $4(\varepsilon^*)^2$. Hence, we refer to the doubling method as doubling the square of the privacy loss parameter.

### 7.2 Experiments

We perform experiments to return as many results as possible subject to a relative error tolerance α and a privacy budget ε, δ > 0. We generate synthetic data from a Zipf distribution, i.e. from a density $f(k) \propto k^{-a}$ for $a > 0$ and $k \in \mathbb{N}$. We set the max value of the distribution to be 300 and $a = 0.75$. We assume that a user can modify each count by at most 1, so that the $\ell_0$-sensitivity is 300 and the $\ell_\infty$-sensitivity is 1. See Figure 3 in Appendix B for the data distribution we used.

In our experiments, we first find the top count and then try to add the least amount of noise to it while still achieving the target relative error, which we set to be α = 10%. To find the top count, we apply the Exponential Mechanism [McSherry and Talwar, 2007] by adding Gumbel noise with scale $1/\varepsilon_{EM}$ to each sensitivity-1 count (all 300 elements' counts, even if they are zero in the data) and take the element with the top noisy count. From Cesar and Rogers [2021], we know that the Exponential Mechanism with parameter $\varepsilon_{EM}$ is $\varepsilon_{EM}^2/8$-zCDP, which we will use in our composition bounds. For the Exponential Mechanism, we select a fixed parameter $\varepsilon_{EM} = 0.1$.

After we have selected a top element via the Exponential Mechanism, we then need to add some noise to it in order to return its count. Whether we use the doubling method and apply Theorem 1 for composition, or the Brownian noise reduction mechanism and apply Theorem 5 for composition, we need a stopping condition. Note that we cannot use the true count to determine when to stop, but we can use the noisy count and the privacy loss parameter that was used. Hence we use the following condition based on the noisy count $\hat{y}$ and the corresponding privacy loss parameter $\varepsilon^{(i)}$ at iterate $i$:
$$1 - \alpha < \frac{\hat{y} + 1/\varepsilon^{(i)}}{\hat{y} - 1/\varepsilon^{(i)}} < 1 + \alpha \quad \text{and} \quad |\hat{y}| > 1/\varepsilon^{(i)}.$$
Note that for the Brownian noise reduction mechanism at round $n$, we use time values $t_n^{(i)} = 1/(\varepsilon_n^{(i)})^2$. We also set an overall privacy budget of $(\varepsilon, \delta') = (10, 10^{-6})$. To determine when to stop, we simply consider the sum of squared privacy parameters and stop if it is more than roughly 2.705, which corresponds to the overall privacy budget of ε = 10 with δ′ = 10⁻⁶. If the noisy value from the largest privacy loss parameter does not satisfy the condition above, we discard the result.
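Putting the pieces of this section together, here is a compact sketch (our own code with simplified bookkeeping, not the authors' experiment scripts) of one round of the doubling baseline as used in the experiments: select a candidate element with the Gumbel-noise implementation of the Exponential Mechanism described above, then repeatedly release Gaussian-noised counts, doubling the squared privacy parameter until the noisy-count check passes or the budget allotted to this count is exhausted.

```python
import numpy as np

rng = np.random.default_rng(2)

def select_top_element(counts, eps_em):
    """Exponential-Mechanism-style selection: add Gumbel noise with scale
    1/eps_em to each sensitivity-1 count and take the argmax."""
    noisy = np.asarray(counts, float) + rng.gumbel(scale=1.0 / eps_em, size=len(counts))
    return int(np.argmax(noisy))

def doubling_round(true_count, eps_sq_min, eps_sq_budget, alpha):
    """Doubling baseline for a single count: the squared privacy parameter
    doubles each attempt, and every attempt's parameter accrues
    (rho_i = eps^2 / 2 sums under the filter of Theorem 1)."""
    eps_sq, spent = eps_sq_min, 0.0
    while spent + eps_sq <= eps_sq_budget:
        eps = np.sqrt(eps_sq)
        y_hat = true_count + rng.normal(0.0, 1.0 / eps)  # Gaussian noise, std 1/eps
        spent += eps_sq                                  # pay for this attempt
        good = abs(y_hat) > 1.0 / eps and (
            1.0 - alpha < (y_hat + 1.0 / eps) / (y_hat - 1.0 / eps) < 1.0 + alpha)
        if good:
            return y_hat, spent
        eps_sq *= 2.0                                    # double the squared parameter
    return None, spent                                   # discard the result for this count
```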
We pick the smallest privacy parameter squared to be $(\varepsilon_n^{(1)})^2 = 0.0001$ for each $n$ in both the noise reduction and the doubling method, and the largest value changes as we update the remaining sum of squared privacy loss terms that have been used. We then set 1000 equally spaced parameters in the noise reduction to select between 0.0001 and the largest value for the square of the privacy loss parameter. We then vary the sample size of the data in {8000, 16000, 32000, 64000, 128000} and record how many results are returned and, of those returned, how many satisfy the actual relative error, which we refer to as precision. Note that if 0 results are returned, then we consider the precision to be 1. Our results are given in Figure 2, where we give the empirical average and standard deviation over 1000 trials for each sample size.

Figure 2: Precision and number of results returned on Zipfian data for various sample sizes, under (10, 1e-06)-DP.

We also evaluated our approach on real data consisting of Reddit comments from https://github.com/heyyjudes/differentially-private-set-union/tree/ea7b39285dace35cc9e9029692802759f3e1c8e8/data. This data consists of comments from Reddit authors. To find the most frequent words from distinct authors, we take the set of all distinct words contributed by each author, which can be arbitrarily large, and form the resulting histogram, which has $\ell_\infty$-sensitivity 1 yet unbounded $\ell_2$-sensitivity. To get a domain of words to select from, we take the top-1000 words from this histogram. We note that this step should also be done with DP, but it will not impact the relative performance between using the Brownian noise reduction and the Doubling Gaussian Method. We then follow the same approach as on the synthetic data, using the Exponential Mechanism with $\varepsilon_{EM} = 0.01$, minimum privacy parameter $\varepsilon_n^{(1)} = 0.0001$, relative error α = 0.01, and an overall $(\varepsilon = 1, \delta = 10^{-6})$-DP guarantee. In 1000 trials, the Brownian noise reduction precision (proportion of results that had noisy counts within 1% of the true count) was on average 97% (with minimum 92%), while the Doubling Gaussian Method precision was on average 98% (with minimum 93%). Furthermore, the number of results returned by the Brownian noise reduction in 1000 trials was on average 152 (with minimum 151), while the number of results returned by the Doubling Gaussian Method was on average 109 (with minimum 108).

## 8 Conclusion

We have presented a way to combine approximate zCDP mechanisms and ex-post private mechanisms while achieving an overall differential privacy guarantee, allowing for more general and flexible types of interactions between an analyst and a sensitive dataset. Furthermore, we showed how this type of composition can be used to provide overall privacy guarantees subject to outcomes satisfying strict accuracy requirements, like relative error. We hope that this will help extend the practicality of private data analysis by allowing the release of counts with relative error bounds subject to an overall privacy bound.

## 9 Acknowledgements

We would like to thank Adrian Rivera Cardoso and Saikrishna Badrinarayanan for helpful comments. This work was done while G.S. was a visiting scholar at LinkedIn. ZSW was supported in part by NSF awards 2120667 and 2232693 and a Cisco Research Grant.

## References

J. M. Abowd, R. Ashmead, R. Cumings-Menon, S. Garfinkel, M. Heineck, C. Heiss, R. Johns, D. Kifer, P.
Leclerc, A. Machanavajjhala, B. Moran, W. Sexton, M. Spence, and P. Zhuravlev. The 2020 census disclosure avoidance system topdown algorithm, 2022. ar Xiv: 2204.08986. M. Bun and T. Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography, pages 635 658. Springer Berlin Heidelberg, 2016. URL https://link.springer.com/chapter/10.1007/978-3-662-53641-4_24. M. Cesar and R. Rogers. Bounding, concentrating, and truncating: Unifying privacy loss composition for data analytics. In Proceedings of the 32nd International Conference on Algorithmic Learning Theory, volume 132 of Proceedings of Machine Learning Research, pages 421 457. PMLR, 2021. URL https://proceedings.mlr.press/v132/cesar21a.html. C. Dwork, K. Kenthapadi, F. Mc Sherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology - EUROCRYPT 2006, pages 486 503. Springer Berlin Heidelberg, 2006a. ISBN 978-3-540-34547-3. URL https://www.iacr.org/ archive/eurocrypt2006/40040493/40040493.pdf. C. Dwork, F. Mc Sherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Third Conference on Theory of Cryptography, TCC 06, page 265 284. Springer-Verlag, 2006b. ISBN 3540327312. doi: 10.1007/11681878_14. URL https://doi.org/10.1007/11681878_14. C. Dwork, G. N. Rothblum, and S. P. Vadhan. Boosting and differential privacy. In 51st Annual Symposium on Foundations of Computer Science, pages 51 60, 2010. V. Feldman and T. Zrnic. Individual privacy accounting via a rényi filter. In Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=PBctz6_ 47ug. S. R. Howard, A. Ramdas, J. Mc Auliffe, and J. Sekhon. Time-uniform Chernoff bounds via nonnegative supermartingales. Probability Surveys, 17:257 317, 2020. S. R. Howard, A. Ramdas, J. Mc Auliffe, and J. Sekhon. Time-uniform, nonparametric, nonasymptotic confidence sequences. The Annals of Statistics, 49(2):1055 1080, 2021. P. Kairouz, S. Oh, and P. Viswanath. The composition theorem for differential privacy. volume 63, pages 4037 4049, 2017. doi: 10.1109/TIT.2017.2685505. URL https://doi.org/10.1109/ TIT.2017.2685505. F. Koufogiannis, S. Han, and G. J. Pappas. Gradual release of sensitive data under differential privacy. Journal of Privacy and Confidentiality, 7(2), 2017. K. Ligett, S. Neel, A. Roth, B. Waggoner, and S. Z. Wu. Accuracy first: Selecting a differential privacy level for accuracy constrained ERM. In Advances in Neural Information Processing Systems, volume 30, 2017. URL https://proceedings.neurips.cc/paper/2017/file/ 86df7dcfd896fcaf2674f757a2463eba-Paper.pdf. X. Lyu. Composition theorems for interactive differential privacy. In Advances in Neural Information Processing Systems, volume 35, pages 9700 9712, 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/ 3f52b555967a95ee850fcecbd29ee52d-Paper-Conference.pdf. F. Mc Sherry and K. Talwar. Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 07), pages 94 103, 2007. doi: 10.1109/ FOCS.2007.66. URL http://dx.doi.org/10.1109/FOCS.2007.66. J. Murtagh and S. Vadhan. The complexity of computing the optimal composition of differential privacy. In Proceedings, Part I, of the 13th International Conference on Theory of Cryptography - Volume 9562, TCC 2016-A, pages 157 175, Berlin, Heidelberg, 2016. Springer-Verlag. ISBN 978-3-662-49095-2. 
doi: 10.1007/978-3-662-49096-9_7. URL https://doi.org/10.1007/ 978-3-662-49096-9_7. D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293. Springer Science & Business Media, 2013. R. M. Rogers, A. Roth, J. Ullman, and S. P. Vadhan. Privacy odometers and filters: Pay-as-you-go composition. In Advances in Neural Information Processing Systems, pages 1921 1929, 2016. URL http://papers.nips.cc/paper/ 6170-privacy-odometers-and-filters-pay-as-you-go-composition. S. Vadhan and T. Wang. Concurrent composition of differential privacy. In K. Nissim and B. Waters, editors, Theory of Cryptography, pages 582 604, Cham, 2021. Springer International Publishing. ISBN 978-3-030-90453-1. S. Vadhan and W. Zhang. Concurrent composition theorems for differential privacy. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing. ACM, jun 2023. doi: 10.1145/ 3564246.3585241. URL https://doi.org/10.1145%2F3564246.3585241. H. Wang, X. Peng, Y. Xiao, Z. Xu, and X. Chen. Differentially private data aggregating with relative error constraint, 2022. URL https://doi.org/10.1007/s40747-021-00550-3. J. Whitehouse, A. Ramdas, S. Wu, and R. Rogers. Brownian noise reduction: Maximizing privacy subject to accuracy constraints. In Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=J-IZQLQZd Yu. J. Whitehouse, A. Ramdas, R. Rogers, and Z. S. Wu. Fully adaptive composition in differential privacy. In International Conference on Machine Learning. PMLR, 2023. X. Xiao, G. Bender, M. Hay, and J. Gehrke. Ireduct: Differential privacy with reduced relative errors. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, page 229 240. Association for Computing Machinery, 2011. doi: 10.1145/1989323.1989348. A Concurrent Composition We will use concurrent composition for differential privacy, introduced by Vadhan and Wang [2021] and with subsequent work in Lyu [2022], Vadhan and Zhang [2023], in our analysis, which we define next. We first define an interactive system. Definition A.1. An interactive system is a randomized algorithm S : (Q Y) Q Y with input an interactive history (q1, y1), (q2, y2), , (qt, yt) (Q Y)t with a query qt+1 Q. The output of S is denoted yt+1 S((qi, yi)i [t], qt+1). Note that an interactive system may also consist of an input dataset x that the queries are evaluated on, which will then induce an interactive system. In particular, we will consider two neighboring datasets x(0) and x(1), which will then induce interactive systems S(b) i corresponding to input data x(b) for b {0, 1} and i [k]. We will use the concurrent composition definition from Lyu [2022]. Definition A.2 (Concurrent Composition). Suppose S1, , Sk are k interactive systems. The concurrent composition of them is an interactive system COMP(S1, , Sk) with query domain [k] Q and response domain Y. An adversary is a (possibly randomized) query algorithm A : ([k] Q Y) [k] Q . The interaction between A and COMP(S1, , Sk) is a stochastic process that runs as follows. A first computes a pair (i1, q1) [k] Q , sends a query q1 to Si1 and gets the response y1. In the t-th step, A calculates the next pair (it, qt) based on the history, sends the t-th query qt to Sit and receives yt. There is no communication or interaction between the interactive systems. Each system Si can only see its own interaction with A. Let IT(A : S1, , Sk) denote the random variable recording the transcript of the interaction. 
We will be interested in how much the distribution of the transcript of interaction changes for interactive systems with neighboring inputs. We will say COMP(S1, , Sk) is (ε, δ)-DP if for all neighboring datasets x(0), x(1), we have for any outcome set of transcripts E, we have for b {0, 1} Pr[IT(A : S(b) 1 , , S(b) k ) E] eε Pr[IT(A : S(1 b) 1 , , S(1 b) k ) E] + δ. We then have the following result from Lyu [2022]. Theorem 6. Let A1, , Ak be k interactive mechanisms that run on the same data set. Suppose that each mechanism Ai satisfies (εi, δi)-DP. Then COMP(A : A1, , Ak) is (ε, δ)-DP, where ε, δ are given by the optimal (sequential) composition theorem Kairouz et al. [2017], Murtagh and Vadhan [2016]. This result is very powerful because we can consider the privacy of each interactive system separately and still be able to provide a differential privacy guarantee for the more complex setting that allows an analyst to interweave queries to different interactive systems. B Missing Proofs and Figures B.1 Missing Proofs from Section 5 We start with a simple result that states that ex-post mechanisms compose by just adding up the ex-post privacy functions. Similar to our notation for privacy filters, we will denote En( ; x) as the privacy loss bound for algorithm An conditioned on the outcomes of A1:n 1(x). Lemma B.1. Fix a sequence δ1, , δm 0. Let there be a probability measure µn on Yn for each n and the product measure on Y1 Ym. Consider mechanisms An : X Qn 1 i=1 Yi Yn for n [m] where each An( ; y1:n 1) is (En( ; y1:n 1), δn)-ex-post private for all prior outcomes y1:n 1). Then the overall mechanism A1:m( ) is (Pm i=1 Ei(Ai( ) ; ), Pm i=1 δi)-ex-post private with respect to the product measure. Proof. We consider neighboring inputs x, x and write the privacy loss random variable LA(A(x)) for A in terms of the privacy losses Li(Ai(x)) of each Ai for i [m] i=1 Ei(Ai(x); x) i=1 Li(Ai(x)) < i=1 Ei(Ai(x); x) Because each mechanism Ai is (Ei( ; x), δi)-ex post private, we have Pr [Li(A(x)) Ei(Ai(x); x)] δi and hence i=1 Li(Ai(x)) i=1 Ei(Ai(x); x) i [m] {Li(Ai(x)) Ei(Ai(x); x)} i [m] {Li(Ai(x)) > Ei(Ai(x); x)} i [m] Pr [Li(Ai(x)) > Ei(Ai(x); x)] Hence, we have Pr [LA(A(x)) Pm i=1 Ei(Ai(x); x)] P i [m] δi, as desired. For general ex-post private mechanisms, this basic composition cannot be improved. We can simply pick Ei to be the same as the privacy loss Li at each round with independently selected mechanisms Ai at each round i [m]. We now show how we can obtain a privacy filter from a sequence of ex-post mechanisms as long as each selected ex-post privacy mechanism selected at each round cannot exceed the remaining privacy budget. We will write δn(x) to denote the parameter δn selected as a function of prior outcomes from A1(x), , An 1(x). Lemma B.2. Let ε > 0 and δ 0 be fixed privacy parameters. Let (An : X Y)n 1 be a sequence of (En( ; x), δn(x))-ex-post private conditioned on prior outcomes y1:n 1 = A1:n 1(x) where for all yi we have i=1 Ei(yi; x) ε. Consider the function N : Y N where N((yn)n 1) = inf i [n] Ei(yi; x) or δ < X i [n+1] δi(y1:i 1) Then the algorithm A1:N( )( ) is (ε, δ)-DP where N(x) = N((An(x))n 1). Proof. We follow a similar analysis in Whitehouse et al. [2023] where they created a filter for probabilistic DP mechanisms, that is the privacy loss of each mechanism can be bounded with high probability. We will write the corresponding privacy loss variables of An to be Ln and for the full sequence of algorithms A1:n = (A1, , An), the privacy loss is denoted as L1:n. 
Define the events D := { n N(x) : L1:n > ε} , and E := { n N(x) : Ln > En(An(x); x)} . Using Bayes rule, we have that Pr[D] = Pr[D Ec] + Pr[D E] Pr[D|Ec] + Pr[E] = Pr[E], where the last inequality follows from how we defined the stopping function N(x). Now, we show that Pr[E] δ. Define the modified privacy loss random variables ( e Ln)n N by e Ln := Ln n N(x) 0 otherwise . Likewise, define the modified privacy parameter random variables e En( ; x) and eδn(x) in an identical manner. Then, we can bound Pr[E] in the following manner: Pr[ n N(x) : Ln > En(An(x); x)] = Pr h n N : e Ln > e En(An(x); x) i n=1 Pr h e Ln > e En(An(x); x) i n=1 E h Pr h e Ln > e En(An(x); x) | Fn 1 ii n=1 E h eδn(x) i = E n N(x) δn(x) as desired. Proof of Theorem 2. We will separate out the approximate z CDP mechanisms, (Az CDP n )n 1 from the ex-post private mechanisms (Apost n )n 1 to form two separate interactive systems. In this case, the parameters that are selected can only depend on the outcomes from the respective interactive system, e.g. ρn(x) can only depend on prior outcomes to mechanisms Az CDP i (x) for i < n. From Theorem 1, we know that Az CDP 1:Nz CDP( )( ) is (ε, δ + δ )-DP. We denote Lpost n to be the privacy loss random variable for the ex-post private mechanism at round n. We will also write the stopping time for the ex-post private mechanisms as Npost(x). From Lemma B.2, we know that Apost 1:Npost( )( ) is (ε , δ )-DP. From Theorem 6, we know that the concurrent composition, which allows for both Apost 1:Npost( )( ) and Az CDP 1:Nz CDP( )( ) to interact arbitrarily, will still be (ε + ε , δ + δ + δ )-DP. Remark 1. Note that although DP is closed under post-processing, ex-post privacy is not. We typically say that any post processing function of a DP outcome is also DP with the same parameter. Our privacy analysis depends on getting actual outcomes from BM, rather than a post-processed value of it. However, we also say post processing ensures that no one can compromise privacy by taking the outcome and trying to learn more about the dataset. With ex-post privacy, we are not trying to ensure privacy of each outcome, but rather the full interaction, so the full interaction will still be DP under any post processing of any intermediate outcomes. B.2 Missing Remarks from Section 6 We now write out some remarks regarding Theorem 4. Remark 2. We note here that the stopping time of each Brownian noise reduction is with respect to H = (H(t)) where H(t) = σ(f(x) + B(u); u > t), but f(x) is constant so the shifted process is equivalent to the of the non-shifted process G as defined above. Remark 3. For the multivariate case we have B(u) = B(u)[i] : i [d] where each coordinate is an independent Brownian motion, and we will write the filtration G[i] to be the natural reverse filtration corresponding to the Brownian motion for index i. We then define G(t) = σ B(u)[1], , B(u)[d] : u t . From Lemma 6.1, we have the following for all λ R, i [d], and 0 < s < t E[exp(λB(t)[i]/t λ2/(2t)) | G(s)[i]] 1. We then consider the full d-dimensional Brownian motion so that with a unit vector v = (v[1], , v[d]) Rd we have i=1 v[i] B(t)[i]/t λ2 i=1 exp λ v[i] B(t)[i]/t λ2 v[i]2 i=1 E exp λ v[i] B(t)[i]/t λ2 v[i]2 B.3 Missing Proofs from Section 6 Proof of Lemma 6.2. We use the decomposition of the privacy loss L(1:k) n at round n 1 for Brownian noise reduction in Theorem 3 that is stopped at time value t(k) n to get the following with the natural filtration (Fn(x))n 1 generated by (An(x))n 1. 
Recall that we have for the Brownian motion (B(t) n )t>0 used in the Brownian noise reduction L(1:k) n = L(k) n = 1 2t(k) n + B(t(k) n ) n t(k) n . From Lemma 6.1 we have for all λ R and n 1 with time value t(k) n E exp λX(k) n λ2 | Fn 1(x) 1, where X(k) n = L(k) n 1 We then form the following process M (k1, ,kn) n = exp i=1 X(ki) i Hence, with fixed time values (t(kn) n )n 1 for n 1 we have for all λ, E h M (k1, ,kn) n | Fn 1(x) i 1. We then replace (ki)i n with an adaptive stopping time functions (Ti)i n, rather than fixing them in advance, in which case we rename M (k1, ,kn) n as M BM n . We know from Lemma 6.1 that M BM n is still an e-value for any n. We then apply the optional stopping theorem to conclude that with the stopping time N(x) that E h M BM N(x) i 1. By the definition of our stopping time, so that PN(x) i=1 1 2t Ti(x) i = ρ, we have for all λ , i=1 L(Ti(x)) i We then set λ = 2ρ+2 ρ log(1/δ) 2ρ to get i=1 L(Ti(x)) i ρ + 2 p We then have a high probability bound on the overall privacy loss, which then implies differential privacy. Proof of Theorem 5. We will show that A1:N( )( ) is δ-approximate ρ-z CDP and then use Lemma 2.1 to obtain a DP guarantee stopping at N(x) as defined in the theorem statement. We follow a similar analysis to the proof of Theorem 1 in Whitehouse et al. [2023]. Let P1:n and Q1:n denote the joint distributions of (A1, . . . , An) with inputs x and x , respectively. We overload notation and write P1:n(y1, . . . , yn) and Q1:n(y1, . . . , yn) for the likelihood of y1, . . . , yn under input x and x respectively. We similarly write Pn(yn | y1:n 1) and Qn(yn | y1:n 1) for the corresponding conditional densities. For any n N, we have P1:n(y1, , yn) = m=1 Pm(ym | y1:m 1), Q1:n(y1, , yn) = m=1 Qm(ym | y1:m 1). When then show that the two likelihoods can be decomposed as weighted mixtures of P and P , as well as Q and Q , respectively such that the mixture weights on P and Q are at least (1 δ), and for all λ 1, max n Dλ (P Q ) , Dλ (Q P ) o ρλ. (4) By our assumption of approximate z CDP at each step n, we can write the conditional likelihoods of Pn and Qn as the following convex combinations: Pn(yn | y1:n 1) = (1 δn(y1:n 1))P n(yn | y1:n 1) + δn(y1:n 1)P n (yn | y1:n 1), Qn(yn | y1:n 1) = (1 δn(y1:n 1))Q n(yn | y1:n 1) + δn(y1:n 1)Q n(yn | y1:n 1), such that for all λ 1 and all prior outcomes y1:n 1, we have both Dλ (P n( | y1:n 1) Q n( | y1:n 1)) ρn(y1:n 1)λ, (5) Dλ (Q n( | y1:n 1) P n( | y1:n 1)) ρn(y1:n 1)λ. (6) Note that at each round, we either select an approximate z CDP mechanism or select a Brownian noise reduction, and in the latter case δn(x) 0 and ρn(x) 0, which then means P n Pn and Q n Qn at those rounds n. We will write the distribution P1:n for any prefix of outcomes from A1(x), , An(x) and similarly we will write the distribution Q1:n for the prefix of outcomes from A1(x ), An(x ). We can then write these likelihood as a convex combination of likelihoods, using the fact that P n=1 δn(y1:n 1) δ for all y1, y2, . P1:n(y1, , yn) = (1 δ) ℓ=1 P ℓ(yℓ|y1:ℓ 1) | {z } P 1:n(y1, ,yn) +δP 1:n(y1, , yn). (7) Q1:n(y1, , yn) = (1 δ) ℓ=1 Q ℓ(yℓ|y1:ℓ 1) | {z } Q 1:n(y1, ,yn) +δQ 1:n(y1, , yn). (8) For any fixed λ 1 and k 1, consider the following filtration F = (F n)n 1 where F n = σ(Y 1, , Y n), with Y 1 P 1, Y 2 | Y 1 P 2( | Y 1), , Y k | Y 1:k 1 P k( | Y 1:k 1). 
We will first consider the Brownian noise reduction mechanisms with time values (t(k) n : k 1)n 1 to be stopped at fixed rounds (kn)n 1, although not every round will have a Brownian noise reduction mechanism selected with Brownian motion (B(t) n )t>0. We will write out the privacy loss for the Brownian noise reduction mechanisms stopped at rounds (kn)n 1 as L(1:kn) n where from Theorem 3 we have, L(1:kn) n = L(kn) n = 1 2t(kn) n + B(kn) n t(kn) n . We will add the noise reduction outcomes to the filtration, so that F n = σ(Y 1, , Y n 1, (B(t) n )t kn). From Lemma 6.1, we know for all λ R E exp λ L(1:kn) n 1 = E exp λL(1:kn) n λ(λ + 1) We then replace (ki)i n with an adaptive stopping time functions (Ti)i n with corresponding stopping times (Ti(x))i n, rather than fixing them in advance, in which case we rename L(1:kn) n as LBMn n and the same inequality still holds with the filtration F n = σ(Y 1, , Y n 1, (B(t) n )t Tn(x)). Note that we will call Y n = (f(x) + B(t) n )t Tn(x) so that F n = F n. At rounds n where a Brownian noise reduction mechanism is not selected, we simply have 1/t(kn) n = 0 and L(1:kn) n = 0. We also consider the modified privacy losses for the approximate z CDP mechanisms L n where L n = L n(y1:n) = log P n(yn|y 0 E[M (λ) N(x)] 1 = E h eλ PN(x) n=1 (L n+LBM n )i eλ(λ+1)ρ. We then still need to convert to DP, which we do with the stopping rule N(x) and the conversion lemma between approximate z CDP and DP in Lemma 2.1. 0 50 100 150 200 250 300 Element Probability Data Distibution Figure 3: Data from a Zipf Distribution with parameter a = 0.75 and max value of 300. B.4 Data Distribution Plot from Section 7 In Section 7 we generate data from a Zipfian distribution, which we present in Figure 3. C Noise Reduction Mechanisms We provide a more general approach to handle both the Laplace and Brownian noise reduction mechanisms from Koufogiannis et al. [2017], Ligett et al. [2017], Whitehouse et al. [2022]. Consider a discrete set of times {a(j)} with 0 < a(j) b. We will allow the stopping time and the number of steps to be adaptive. We will handle the univariate case and note that the analysis extends to the multivariate case. We will write time functions (ϕ(k), k 1) to satisfy the following ϕ(1) b > 0, ϕ(k) : {a(j)} R {a(j)} such that ϕ(k)(t, z) < t for all (t, z) {a(j)} R and k 2. Consider the noise reduction mechanisms where we have M (1,p) = t(1) = b, M (1,q)(x) = f(x) + Z(t(1)) where Z(t) is either a standard Brownian motion or a Laplace process Koufogiannis et al. [2017], and for k > 1 we have M (k,p)(x) = ϕ(k)(M (k 1,p)(x), M (k 1,q)(x)), M (k,q)(x) = f(x) + Z(M (k,p)(x)). We will then write M (1:k)(x) = (M (k,p)(x), M (k,q)(x)), , (M (1,p)(x), M (1,q)(x)) . We then consider the extended mechanism M (x) = (T(x), M (1:T (x))(x)), where T(x) is a stopping function. Hence, M (x) takes values in Z ((0, b) R) , where ((0, b] R) , is a collection of all finite sequences of the type ((t(i), z(i)) : i [k]) for k = 1, 2, and 0 < t(k) < , t(1) = b and z(i) R for i [k]. Let m be the probability measure on this space generated by M (x ) where x is a point where f(x ) = 0. Note that x might be a fake value that needs to be added to make the function equal zero. Let m(x) be the probability measure on this space generated by M (x). We will then compute densities with respect to m . Let A be a measurable set in Z ((0, b] R) , . Let A(j) A on which the last (smallest) p-coordinate of every point is equal to a(j). Then Pr[M (x) A] = X j Pr[M (x) A(j)]. 
By construction, $M^*(x)$ is a function of $(Z(t) + f(x), \, 0 < t \le b)$. Therefore, for each $j$, we have
$$\Pr[M^*(x) \in A^{(j)}] = \Pr\big[(Z(t) + f(x), \, 0 < t \le b) \in (M^*)^{-1}(A^{(j)})\big],$$
where the set of paths is of the following form for a measurable subset $D^{(j)} \subseteq C([a^{(j)}, b])$ of continuous functions on $[a^{(j)}, b]$:
$$(M^*)^{-1}(A^{(j)}) = \Big\{ (y(t) : 0 < t \le b) : (y(t) : a^{(j)} \le t \le b) \in D^{(j)} \Big\}.$$
The standard Brownian motion and the Laplace process both have independence of increments, so we have the following, with $p_t$ as the density of $Z(t)$:
$$\Pr\big[(Z(t) + f(x) : 0 < t \le b) \in \{(y(t) : 0 < t \le b) : (y(t) : a^{(j)} \le t \le b) \in D^{(j)}\}\big]$$
$$= \Pr\big[(Z(t) + f(x) : a^{(j)} \le t \le b) \in D^{(j)}\big]$$
$$= \mathbb{E}\left[ \mathbf{1}\big\{ (Z(t) : a^{(j)} \le t \le b) \in D^{(j)} \big\} \, \frac{p_{a^{(j)}}\big(Z(a^{(j)}) - f(x)\big)}{p_{a^{(j)}}\big(Z(a^{(j)})\big)} \right]$$
$$= \mathbb{E}\left[ \mathbf{1}\big\{ M^*(x^*) \in A^{(j)} \big\} \, \frac{p_{M^{(T(x^*),p)}(x^*)}\big(Z(M^{(T(x^*),p)}(x^*)) - f(x)\big)}{p_{M^{(T(x^*),p)}(x^*)}\big(Z(M^{(T(x^*),p)}(x^*))\big)} \right],$$
where the second equality follows from the fact that the law of a shifted process with independent increments on $(a^{(j)}, b)$ is equivalent to the law of the non-shifted process, with density given by the ratio of the two densities evaluated at its left-most point. We then conclude that
$$\Pr[M^*(x) \in A] = \mathbb{E}\left[ \mathbf{1}\{M^*(x^*) \in A\} \, \frac{p_{M^{(T(x^*),p)}(x^*)}\big(Z(M^{(T(x^*),p)}(x^*)) - f(x)\big)}{p_{M^{(T(x^*),p)}(x^*)}\big(Z(M^{(T(x^*),p)}(x^*))\big)} \right].$$
Therefore $m(x)$ is absolutely continuous with respect to $m^*$. To write the density (the Radon-Nikodym derivative), recall that the density is evaluated at some point $s \in \mathbb{Z} \times (\{a^{(j)}\} \times \mathbb{R})^*$. Let $t(s)$ be the smallest $p$-value in $s$ and $z(s)$ the corresponding space value. Then we have
$$p_x^{(1:T(x))}(s) = \frac{dm(x)}{dm^*}(s) = \frac{p_{t(s)}(z(s) - f(x))}{p_{t(s)}(z(s))}.$$
Similarly, we consider $M_{\mathrm{last}}(x) = \big(T(x), M^{(T(x))}(x)\big)$, so that the state space is $\mathbb{Z} \times \{a^{(j)}\} \times \mathbb{R}$. We will write $\hat m^*$ for the probability measure on this space generated by $M_{\mathrm{last}}(x^*)$ and $\hat m(x)$ for the probability measure on this space generated by $M_{\mathrm{last}}(x)$. We then compute densities with respect to $\hat m^*$. Let $A$ now be a measurable subset of $\mathbb{Z} \times \{a^{(j)}\} \times \mathbb{R}$ and let $A^{(j)} \subseteq A$ have $p$-coordinate equal to $a^{(j)}$. Then we follow a similar argument to the one above to show that
$$\Pr[M_{\mathrm{last}}(x) \in A] = \mathbb{E}\left[ \mathbf{1}\{M_{\mathrm{last}}(x^*) \in A\} \, \frac{p_{M^{(T(x^*),p)}(x^*)}\big(Z(M^{(T(x^*),p)}(x^*)) - f(x)\big)}{p_{M^{(T(x^*),p)}(x^*)}\big(Z(M^{(T(x^*),p)}(x^*))\big)} \right].$$
Hence, for $(k, t, z) \in \mathbb{Z} \times \{a^{(j)}\} \times \mathbb{R}$ we have
$$p_x^{(T(x))}(k, t, z) = \frac{d\hat m(x)}{d\hat m^*}(k, t, z) = \frac{p_t(z - f(x))}{p_t(z)}.$$
We then consider the privacy loss for both $M^*$, denoted $L^{(1:T(x))}$, and $M_{\mathrm{last}}$, denoted $L^{(T(x))}$, under neighboring data $x$ and $x'$. We have
$$L^{(1:T(x))} = \log \frac{p_{M^{(T(x),p)}(x)}\big(Z(M^{(T(x),p)}(x)) - f(x)\big)}{p_{M^{(T(x),p)}(x)}\big(Z(M^{(T(x),p)}(x)) - f(x')\big)} = L^{(T(x))}. \tag{11}$$
We then instantiate the Brownian noise reduction, in which case
$$\frac{p_t(z - f(x))}{p_t(z)} = \exp\left( -\frac{1}{2t}(z - f(x))^2 + \frac{1}{2t} z^2 \right) = \exp\left( \frac{f(x)}{t}\, z - \frac{f(x)^2}{2t} \right).$$
Without loss of generality, consider the function $g(y) = f(y) - f(x)$, so that we apply
$$\log \frac{p_t(z - g(x))}{p_t(z - g(x'))} = -\frac{g(x')}{t}\, z + \frac{g(x')^2}{2t}.$$
We then use the Brownian motion $(B(t))_{t \ge 0}$ to get the privacy loss $L_{BM}^{(1:T(x))}$,
$$L_{BM}^{(1:T(x))} = \frac{(f(x) - f(x'))^2}{2\, M^{(T(x),p)}(x)} + \frac{f(x) - f(x')}{|f(x) - f(x')|} \cdot \frac{|f(x) - f(x')|}{M^{(T(x),p)}(x)} \, B\big(M^{(T(x),p)}(x)\big) = \frac{(f(x) - f(x'))^2}{2\, M^{(T(x),p)}(x)} + \frac{|f(x) - f(x')|}{M^{(T(x),p)}(x)} \, W\big(M^{(T(x),p)}(x)\big),$$
where $W(t) = \frac{f(x) - f(x')}{|f(x) - f(x')|}\, B(t)$ is again a standard Brownian motion.
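As a quick numerical illustration of the Brownian instantiation above, the following sketch checks that the density-ratio form of the privacy loss in (11) agrees with the closed form of $L_{BM}^{(1:T(x))}$ at a stopped time. The neighboring values $f(x)$, $f(x')$ and the stopped time are hypothetical choices, not taken from the paper's experiments.

```python
# Check that the density-ratio form of (11) matches the closed Brownian form
# (f(x)-f(x'))^2/(2t) + (f(x)-f(x'))*B(t)/t at a stopped time. Values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
f_x, f_xp = 10.0, 9.0                   # hypothetical neighboring query values
t_stop = 0.5                            # hypothetical stopped time M^{(T(x),p)}(x)
b = rng.normal(0.0, np.sqrt(t_stop))    # B(t_stop), since Var(B(t)) = t
z = f_x + b                             # released value under input x

log_density = lambda w, t: -w * w / (2.0 * t)   # log N(0, t) density up to a constant
loss_from_ratio = log_density(z - f_x, t_stop) - log_density(z - f_xp, t_stop)
loss_closed_form = (f_x - f_xp) ** 2 / (2.0 * t_stop) + (f_x - f_xp) * b / t_stop
print(loss_from_ratio, loss_closed_form)        # the two agree up to float error
```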
There is another noise reduction mechanism based on Laplace noise, originally from Koufogiannis et al. [2017] and shown to be ex-post private in Ligett et al. [2017]. We first show that it is indeed a noise reduction mechanism with a stopping function and consider the resulting privacy loss random variable. We focus on the univariate case, yet the analysis extends to the multivariate case.

We construct a Markov process $(X(t))_{t \ge 0}$ with $X(0) = 0$ such that, for each $t > 0$, $X(t)$ has the Laplace distribution with parameter $t$, which has density $p(z) = \frac{1}{2t} \exp(-|z|/t)$ for $z \in \mathbb{R}$. The process we construct has independent increments, i.e. for any $0 \le t^{(0)} < t^{(1)} < \cdots < t^{(k)}$ the following differences are independent:
$$X(t^{(0)}), \; X(t^{(1)}) - X(t^{(0)}), \; \ldots, \; X(t^{(k)}) - X(t^{(k-1)}).$$
Hence, the process is Markovian. The idea behind constructing such a process is that a Laplace random variable with parameter $t > 0$ is a symmetric, infinitely divisible random variable without a Gaussian component whose Lévy measure has density
$$g(z; t) = \frac{e^{-|z|/t}}{|z|}, \quad z \ne 0.$$
Let $U$ be an infinitely divisible random measure on $[0, \infty)$ with Lebesgue control measure and local Lévy density
$$\psi(z; t) = t^{-2} e^{-|z|/t}, \quad z \ne 0, \; t > 0. \tag{12}$$
We define $X(t) = U([0, t])$ for $t \ge 0$. Then $X(0) = 0$ and, for $t > 0$, $X(t)$ is a symmetric, infinitely divisible random variable without a Gaussian component, whose Lévy measure has density equal to
$$\int_0^t \psi(z; u)\, du = \int_0^t u^{-2} e^{-|z|/u}\, du = \int_{1/t}^{\infty} e^{-|z| s}\, ds = g(z; t).$$
That is, $X(t)$ is a Laplace random variable with parameter $t$, and the resulting process has independent increments by construction. Note that the process $(X(t))_{t \ge 0}$ has infinitely many jumps near $0$. On an interval $[t^{(1)}, t^{(2)}]$ with $0 < t^{(1)} < t^{(2)}$, it is flat with probability $(t^{(1)}/t^{(2)})^2$. We can then describe the Laplace noise reduction algorithm in a way similar to the Brownian noise reduction.

Definition C.1 (Laplace Noise Reduction). Let $f : \mathcal{X} \to \mathbb{R}^d$ be a function and $(t^{(k)})_{k \ge 1}$ a sequence of time values. Let $(X(t))_{t \ge 0}$ be the Markov process described above, run independently in each coordinate, and let $T : (\mathbb{R}^d)^{\mathbb{N}} \to \mathbb{N}$ be a stopping function. The Laplace noise reduction associated with $f$, time values $(t^{(k)})_{k \ge 1}$, and stopping function $T$ is the algorithm $\mathrm{LNR} : \mathcal{X} \to \big((0, t^{(1)}] \times \mathbb{R}^d\big)^*$ given by
$$\mathrm{LNR}(x) = \mathrm{LNR}^{(1:T(x))}(x) = \Big( t^{(k)}, \, f(x) + X(t^{(k)}) \Big)_{k \le T(x)}.$$
We then aim to prove the following.

Lemma C.1. The privacy loss $L_{\mathrm{LNR}}$ for the Laplace noise reduction associated with $f$, time values $(t^{(k)})_{k \ge 1}$, and stopping function $T$ can be written as $L_{\mathrm{LNR}} = L_{\mathrm{LNR}}^{(1:T(x))} = L_{\mathrm{LNR}}^{(T(x))}$. Furthermore we have
$$L_{\mathrm{LNR}}^{(1:T(x))} = \frac{1}{M^{(T(x),p)}(x)} \Big( -\big|X\big(M^{(T(x),p)}(x)\big)\big| + \big|X\big(M^{(T(x),p)}(x)\big) - \Delta_1(f)\big| \Big).$$

Proof. Because the Laplace process has independent increments, we get the same expression for the privacy loss as in (11), where we substitute the Laplace process for $Z(t)$, which has the following density ratio:
$$\frac{p_t(z - f(x))}{p_t(z)} = \exp\left( \frac{1}{t}\big( -|z - f(x)| + |z| \big) \right).$$
We will again use $g(y) = f(y) - f(x)$ for neighboring datasets $x, x'$ with $f(x') > f(x)$, without loss of generality, to get
$$\log\left( \frac{p_t(z - g(x))}{p_t(z - g(x'))} \right) = \frac{1}{t}\big( -|z| + |z - g(x')| \big).$$
Therefore, we have from (11) that
$$L_{\mathrm{LNR}}^{(1:T(x))} = L_{\mathrm{LNR}}^{(T(x))} = \frac{1}{M^{(T(x),p)}(x)} \Big( -\big|X\big(M^{(T(x),p)}(x)\big)\big| + \big|X\big(M^{(T(x),p)}(x)\big) - |f(x) - f(x')|\big| \Big),$$
as desired.

We now want to show a similar result to the one we had for the Brownian noise reduction mechanism in Lemma 6.1, but for the Laplace process. Instead of allowing general time step functions $\phi^{(j)}$, we will simply look at time values $0 < t^{(k)} < \cdots < t^{(1)} = b$, so that
$$L_{\mathrm{LNR}}^{(1:T(x))} = \frac{1}{t^{(T(x))}} \Big( -\big|X(t^{(T(x))})\big| + \big|X(t^{(T(x))}) - |f(x) - f(x')|\big| \Big).$$
This form of the privacy loss for the Laplace noise reduction might be helpful in determining whether we can get a similar backward martingale as in the Brownian noise reduction case in Lemma 6.1. We leave open the problem of obtaining a composition bound for Laplace noise reduction mechanisms that improves over simply adding up the ex-post privacy bounds as in Theorem B.1.
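To make Lemma C.1 concrete, the sketch below evaluates the Laplace noise-reduction privacy loss at a single stopped time from a realized noise value, and checks the deterministic ex-post bound $|L_{\mathrm{LNR}}| \le |f(x) - f(x')| / t$ that follows from the reverse triangle inequality. The stopped time, sensitivity, and sample size are illustrative assumptions, and only the Laplace marginal at the stopped time is simulated, not the full Markov process.

```python
# Evaluate the Laplace noise-reduction privacy loss from Lemma C.1 at a stopped
# time, using only the marginal X(t) ~ Laplace(scale=t) at that time.
# All values below are illustrative.
import numpy as np

rng = np.random.default_rng(2)
t_stop = 0.5                 # hypothetical stopped time
delta_f = 1.0                # |f(x) - f(x')|, e.g. a sensitivity-1 count query
trials = 100_000

x_noise = rng.laplace(loc=0.0, scale=t_stop, size=trials)      # X(t_stop)
loss = (-np.abs(x_noise) + np.abs(x_noise - delta_f)) / t_stop

print("max |loss|:", np.abs(loss).max(), "<= ex-post bound:", delta_f / t_stop)
print("mean loss (a KL divergence, hence nonnegative):", loss.mean())
```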