# Discrete Distribution Estimation under Local Privacy

Peter Kairouz KAIROUZ2@ILLINOIS.EDU
Keith Bonawitz BONAWITZ@GOOGLE.COM
Daniel Ramage DRAMAGE@GOOGLE.COM
Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043; University of Illinois, Urbana-Champaign, 1308 W Main St, Urbana, IL 61801

Abstract

The collection and analysis of user data drives improvements in the app and web ecosystems, but comes with risks to privacy. This paper examines discrete distribution estimation under local privacy, a setting wherein service providers can learn the distribution of a categorical statistic of interest without collecting the underlying data. We present new mechanisms, including hashed k-ary Randomized Response (k-RR), that empirically meet or exceed the utility of existing mechanisms at all privacy levels. New theoretical results demonstrate the order-optimality of k-RR and the existing RAPPOR mechanism at different privacy regimes.

1. Introduction

Software and service providers increasingly see the collection and analysis of user data as key to improving their services. Datasets of user interactions give insight to analysts and provide training data for machine learning models. But the collection of these datasets comes with risk: can the service provider keep the data secure from unauthorized access? Misuse of data can violate the privacy of users and substantially tarnish the provider's reputation.

One way to minimize risk is to store less data: providers can methodically consider what data to collect and how long to store it. However, even a carefully processed dataset can compromise user privacy. In a now famous study, Narayanan & Shmatikov (2008) showed how to deanonymize watch histories released in the Netflix Prize, a public recommender system competition. While most providers do not intentionally release anonymized datasets, security breaches can mean that even internal, anonymized datasets have the potential to become privacy problems.

Fortunately, mathematical formulations exist that can give the benefits of population-level statistics without the collection of raw data. Local differential privacy (Duchi et al., 2013a;b) is one such formulation, requiring each device (or session for a cloud service) to share only a noised version of its raw data with the service provider's logging mechanism. No matter what computation is done to the noised output of a locally differentially private mechanism, any attempt to impute properties of a single record will have a significant probability of error. But not all differentially private mechanisms are equal when it comes to utility: some mechanisms have better accuracy than others for a given analysis, amount of data, and desired privacy level.

Private distribution estimation. This paper investigates the fundamental problem of discrete distribution estimation under local differential privacy. We focus on discrete distribution estimation because it enables a variety of useful capabilities, including usage statistics breakdowns and count-based machine learning models, e.g. naive Bayes (McCallum et al., 1998). We consider empirical, maximum likelihood, and minimax distribution estimation, and study the price of local differential privacy under a variety of loss functions and privacy regimes.
In particular, we compare the performance of two recent local privacy mechanisms, (a) the Randomized Aggregatable Privacy-Preserving Ordinal Response (RAPPOR) (Erlingsson et al., 2014) and (b) the k-ary Randomized Response (k-RR) (Kairouz et al., 2014), from a theoretical and empirical perspective. Our contributions are:

1. For binary alphabets, we prove that Warner's randomized response model (Warner, 1965) is globally optimal for any loss function and any privacy level (Section 3).

2. For k-ary alphabets, we show that RAPPOR is order optimal in the high privacy regime and strictly sub-optimal in the low privacy regime for ℓ1 and ℓ2 losses using an empirical estimator. Conversely, k-RR is order optimal in the low privacy regime and strictly sub-optimal in the high privacy regime (Section 4.1).

3. Large scale simulations show that the optimal decoding algorithm for both k-RR and RAPPOR depends on the shape of the true underlying distribution. For skewed distributions, the projected estimator (introduced here) offers the best utility across a wide variety of privacy levels and sample sizes (Section 4.4).

4. For open alphabets, in which the set of input symbols is not enumerable a priori, we construct the O-RR mechanism (an extension of k-RR using hash functions and cohorts) and provide empirical evidence that the performance of O-RR meets or exceeds that of RAPPOR over a wide range of privacy settings (Section 5).

5. We apply the O-RR mechanism to closed k-ary alphabets, replacing hash functions with permutations. We provide empirical evidence that the performance of O-RR meets or exceeds that of k-RR and RAPPOR in both low and high privacy regimes (Section 5.4).

Related work. There is a rich literature on distribution estimation under local privacy (Chan et al., 2012; Hsu et al., 2012; Bassily & Smith, 2015), of which several works are particularly relevant herein. Warner (1965) was the first to study the local privacy setting and propose the randomized response model that will be detailed in Section 3. Kairouz et al. (2014) introduced k-RR and showed that it is optimal in the low privacy regime for a rich class of information theoretic utility functions; k-RR will be extended to open alphabets in Section 5.1. Duchi et al. (2013a;b) were the first to apply differential privacy to the local setting, to study the fundamental trade-off between privacy and minimax distribution estimation in the high privacy regime, and to introduce the core of k-RAPPOR. Erlingsson et al. (2014) proposed RAPPOR, systematically addressing a variety of practical issues for private distribution estimation, including robustness to attackers with access to multiple reports over time, and estimating distributions over open alphabets. RAPPOR has been deployed in the Chrome browser to allow Google to privately monitor the impact of malware on homepage settings. RAPPOR will be investigated in Sections 4.2 and 5.2. Private distribution estimation also appears in the global privacy context, where a trusted service provider releases randomized data (e.g., NIH releasing medical records) to protect sensitive user information (Dwork, 2006; Dwork et al., 2006; Dwork & Lei, 2009; Dwork, 2008; Diakonikolas et al., 2015; Blocki et al., 2016).

2. Preliminaries

2.1. Local differential privacy

Let X be a private source of information defined on a discrete, finite input alphabet X = {x1, ..., xk}.
A statistical privatization mechanism is a family of distributions Q that map X = x to Y = y with probability Q(y|x). Y, the privatized version of X, is defined on an output alphabet Y = {y1, ..., yl} that need not be identical to the input alphabet X. In this paper, we will represent a privatization mechanism Q via a k × l row-stochastic matrix. A conditional distribution Q is said to be ε-locally differentially private if for all x, x′ ∈ X and all E ⊆ Y, we have that

Q(E|x) ≤ e^ε Q(E|x′),   (1)

where Q(E|x) = P(Y ∈ E | X = x) and ε ∈ [0, ∞) (Duchi et al., 2013a). In other words, by observing Y ∈ E, the adversary cannot reliably infer whether X = x or X = x′ (for any pair x and x′). Indeed, the smaller ε is, the closer the likelihood ratio of X = x to X = x′ is to 1. Therefore, when ε is small, the adversary cannot recover the true value of X reliably.

2.2. Private distribution estimation

The private multinomial estimation problem is defined as follows. Given a vector p = (p1, ..., pk) on the probability simplex S_k, samples X1, ..., Xn are drawn i.i.d. according to p. An ε-locally differentially private mechanism Q is then applied independently to each sample Xi to produce Y^n = (Y1, ..., Yn), the sequence of private observations. Observe that the Yi's are distributed according to m = pQ and not p. Our goal is to estimate the distribution vector p from Y^n.

Privacy vs. utility. There is a fundamental trade-off between utility and privacy: the more private you want to be, the less utility you can get. To formally analyze the privacy-utility trade-off, we study the following constrained minimization problem

r_{ℓ,ε,k,n} = inf_{Q ∈ D_ε} r_{ℓ,ε,k,n}(Q),   (2)

where r_{ℓ,ε,k,n}(Q) = inf_{p̂} sup_{p} E_{Y^n ∼ pQ}[ℓ(p, p̂)] is the minimax risk under Q, ℓ is an application-dependent loss function, and D_ε is the set of all ε-locally differentially private mechanisms. This problem, though of great value, is intractable in general. Indeed, finding minimax estimators in the non-private setting is already hard for several loss functions; for instance, the minimax estimator under ℓ1 loss remains unknown to this day. However, in the high privacy regime, we are able to bound the minimax risk of any differentially private mechanism Q.

Proposition 1 For the private distribution estimation problem in (2), for any ε-locally differentially private mechanism Q, there exist universal constants 0 < c_l ≤ c_u < 5 such that for all ε ∈ [0, 1],

c_l min{1, 1/√(nε²), k/(nε²)} ≤ r_{ℓ2²,ε,k,n} ≤ c_u min{1, k/(nε²)},
c_l min{1, k/√(nε²)} ≤ r_{ℓ1,ε,k,n} ≤ c_u min{1, k/√(nε²)}.

Proof See (Duchi et al., 2013b).

This result shows that in the high privacy regime (ε ≤ 1), the effective sample size of a dataset decreases from n to nε²/k. In other words, a factor of k/ε² extra samples is needed to achieve the same minimax risk. This is problematic for large alphabets. Our work shows that (a) this problem can be (partially) circumvented using a combination of cohort-style hashing and k-RR (Section 5), and (b) the dependence on the alphabet size vanishes in the moderate to low privacy regime (Section 4.3).
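Since a mechanism over finite alphabets is represented here by a row-stochastic matrix, the ε-LDP condition (1) can be checked numerically on singleton outputs, which implies the bound for every event E ⊆ Y. A minimal NumPy sketch; the function name and the example matrix are our own illustrations, not from the paper:

```python
import numpy as np

def is_locally_dp(Q, eps, tol=1e-12):
    """Check the epsilon-LDP condition (1) for a finite mechanism.

    Q is a k x l row-stochastic matrix with Q[x, y] = Q(y|x).  For finite
    alphabets it suffices to check singleton events: for every output y and
    every pair of inputs x, x', we need Q(y|x) <= exp(eps) * Q(y|x').
    """
    Q = np.asarray(Q, dtype=float)
    for y in range(Q.shape[1]):
        col = Q[:, y]
        if col.max() > np.exp(eps) * col.min() + tol:
            return False
    return True

# Example: the binary randomized response of Section 3 with eps = ln 3,
# i.e. answer truthfully with probability 3/4.
eps = np.log(3.0)
Q_wrr = np.array([[3.0, 1.0], [1.0, 3.0]]) / 4.0
assert is_locally_dp(Q_wrr, eps)
```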
3. Binary Alphabets

In this section, we study the problem of private distribution estimation under binary alphabets. In particular, we show that Warner's randomized response model (W-RR) is optimal for binary distribution minimax estimation (Warner, 1965). In W-RR, interviewees flip a biased coin (that only they can see the result of), such that a fraction η of participants answer the question "Is the predicate P true (of you)?" while the remaining participants answer the negation ("Is ¬P true (of you)?"), without revealing which question they answered. For η = e^ε/(e^ε + 1) (ε ≥ 0), W-RR can be described by the following 2 × 2 row-stochastic matrix

Q_WRR = (1/(e^ε + 1)) [ e^ε  1 ; 1  e^ε ].   (3)

It is easy to check that the above mechanism satisfies the constraints imposed by local differential privacy.

Theorem 2 For all binary distributions p, all loss functions ℓ, and all privacy levels ε, Q_WRR is the optimal solution to the private minimax distribution estimation problem in (2).

Proof sketch. Kairouz et al. (2014) showed that W-RR dominates all other differentially private mechanisms in a strong Markovian sense: for any binary differentially private mechanism Q, there exists a stochastic mapping W such that Q = Q_WRR W. Therefore, for any risk function r(·) that obeys the data processing inequality (r(Q) ≤ r(QW) for any stochastic mappings Q and W), we have that r(Q_WRR) ≤ r(Q) for any binary differentially private mechanism Q. In Supplementary Section A, we prove that r_{ℓ,ε,k,n}(Q) obeys the data processing inequality; thus W-RR achieves the optimal privacy-utility trade-off under minimax distribution estimation.

4. k-ary Alphabets

Above, we saw that W-RR is optimal for all privacy levels and all loss functions. However, it can only be applied to binary alphabets. In this section, we study optimal privacy mechanisms for k-ary alphabets. We show that under ℓ1 and ℓ2 losses, k-RAPPOR is order optimal in the high privacy regime and sub-optimal in the low privacy regime. Conversely, k-RR is order optimal in the low privacy regime and sub-optimal in the high privacy regime.

4.1. The k-ary Randomized Response

The k-ary randomized response (k-RR) mechanism is a locally differentially private mechanism that maps X stochastically onto itself (i.e., Y = X), given by

Q_KRR(y|x) = e^ε/(e^ε + k − 1) if y = x, and 1/(e^ε + k − 1) if y ≠ x.   (4)

k-RR can be viewed as a multiple-choice generalization of the W-RR mechanism (note that k-RR reduces to W-RR for k = 2). In (Kairouz et al., 2014), the k-RR mechanism was shown to be optimal in the low privacy regime for a large class of information theoretic utility functions.

Empirical estimation under k-RR. It is easy to see that under Q_KRR, outputs are distributed according to

m = ((e^ε − 1) p + 1)/(e^ε + k − 1).   (5)

The empirical estimate of p under Q_KRR is given by

p̂ = m̂ Q_KRR^{-1} = ((e^ε + k − 1) m̂ − 1)/(e^ε − 1),   (6)

where m̂ is the empirical estimate of m and, via the Sherman-Morrison formula,

Q_KRR^{-1}(y|x) = (e^ε + k − 2)/(e^ε − 1) if y = x, and −1/(e^ε − 1) if y ≠ x.   (7)

Observe that because m̂ → m almost surely, p̂ → p almost surely.

Proposition 3 For the private distribution estimation problem under k-RR and its empirical estimator given in (6), for all ε, n, and k, we have that

E[ℓ2²(p̂, p)] = (1 − Σ_{i=1}^k p_i²)/n + (k − 1)(k + 2(e^ε − 1))/(n (e^ε − 1)²),

and for large n,

E[ℓ1(p̂, p)] ∼ Σ_{i=1}^k √( 2((e^ε − 1)p_i + 1)((e^ε − 1)(1 − p_i) + k − 1) / (π n (e^ε − 1)²) ),

where a_n ∼ b_n means lim_{n→∞} a_n/b_n = 1.

Proof See Supplementary Section B.

Observe that for p_U = (1/k, ..., 1/k), we have that

E[ℓ2²(p̂, p)] ≤ E[ℓ2²(p̂, p_U)] = (1 + k(k + 2(e^ε − 1))/(e^ε − 1)²) (1 − 1/k)/n,   (8)

E[ℓ1(p̂, p)] ≲ E[ℓ1(p̂, p_U)] ≈ ((e^ε + k − 1)/(e^ε − 1)) √(2(k − 1)/(π n)).   (9)

Constraining empirical estimates to S_k. It is easy to see that the entries of p̂_KRR sum to 1. However, some of the entries of p̂_KRR can be negative (especially for small values of n). Several remedies are available, including (a) truncating the negative entries to zero and renormalizing the entire vector to sum to 1, or (b) projecting p̂_KRR onto the probability simplex. We evaluate both approaches in Section 4.4.
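A minimal sketch of the k-RR mechanism (4) and its empirical estimator (6), assuming symbols are encoded as integers 0, ..., k−1; the helper names and the toy distribution are ours, not from the paper:

```python
import numpy as np

def krr_encode(x, k, eps, rng):
    """k-RR (Equation 4): report x w.p. e^eps/(e^eps + k - 1); otherwise
    report one of the other k-1 symbols uniformly at random."""
    if rng.random() < np.exp(eps) / (np.exp(eps) + k - 1):
        return x
    y = rng.integers(k - 1)
    return y if y < x else y + 1            # uniform over symbols != x

def krr_empirical_estimate(reports, k, eps):
    """Empirical estimator (6): invert the k-RR channel applied to m_hat."""
    m_hat = np.bincount(reports, minlength=k) / len(reports)
    p_hat = ((np.exp(eps) + k - 1) * m_hat - 1) / (np.exp(eps) - 1)
    return p_hat                             # sums to 1, entries may be negative

# Toy run: geometric-shaped distribution over k symbols, eps = ln k.
rng = np.random.default_rng(0)
k, n = 16, 100_000
eps = np.log(k)
p = 0.8 ** np.arange(k)
p /= p.sum()
xs = rng.choice(k, size=n, p=p)
ys = np.array([krr_encode(x, k, eps, rng) for x in xs])
p_hat = krr_empirical_estimate(ys, k, eps)
print(np.abs(p_hat - p).sum())               # ell_1 error of the raw estimate
```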
4.2. k-RAPPOR

The randomized aggregatable privacy-preserving ordinal response (RAPPOR) is an open source Google technology for collecting aggregate statistics from end-users with strong local differential privacy guarantees (Erlingsson et al., 2014). The simplest version of RAPPOR, called the basic one-time RAPPOR and referred to herein as k-RAPPOR, first appeared in (Duchi et al., 2013a;b). k-RAPPOR maps the input alphabet X of size k to an output alphabet Y of size 2^k. In k-RAPPOR, we first map X deterministically to a vector X̃ in R^k, the k-dimensional Euclidean space. Precisely, X = x_i is mapped to X̃ = e_i, the ith standard basis vector in R^k. We then randomize the coordinates of X̃ independently to obtain the private vector Y ∈ {0, 1}^k. Formally, the jth coordinate of Y is given by: Y^(j) = X̃^(j) with probability e^{ε/2}/(1 + e^{ε/2}) and 1 − X̃^(j) with probability 1/(1 + e^{ε/2}). The randomization Q_k-RAPPOR is ε-locally differentially private (Duchi et al., 2013a; Erlingsson et al., 2014).

Under k-RAPPOR, Y_i = [Y_i^(1), ..., Y_i^(k)] is a k-dimensional binary vector, which implies that

P(Y_i^(j) = 1) = ((e^{ε/2} − 1) p_j + 1)/(e^{ε/2} + 1),   (10)

for all i ∈ {1, ..., n} and j ∈ {1, ..., k}.

Empirical estimation under k-RAPPOR. Let Y^n be the n × k matrix formed by stacking the row vectors Y_1, ..., Y_n on top of each other. The empirical estimator of p under k-RAPPOR is

p̂_j = ((e^{ε/2} + 1) T_j/n − 1)/(e^{ε/2} − 1),   (11)

where T_j = Σ_{i=1}^n Y_i^(j). Because T_j/n converges to m_j almost surely, p̂_j converges to p_j almost surely. As with k-RR, we can constrain p̂ to S_k through truncation and normalization or through projection (described in Section 4.1), both of which will be evaluated in Section 4.4.

Proposition 4 For the private distribution estimation problem under k-RAPPOR and its empirical estimator given in (11), for all ε, n, and k, we have that

E[ℓ2²(p̂, p)] = (1 − Σ_{i=1}^k p_i²)/n + k e^{ε/2}/(n (e^{ε/2} − 1)²),

and for large n,

E[ℓ1(p̂, p)] ∼ Σ_{i=1}^k √( 2((e^{ε/2} − 1)p_i + 1)((e^{ε/2} − 1)(1 − p_i) + 1) / (π n (e^{ε/2} − 1)²) ),

where a_n ∼ b_n means lim_{n→∞} a_n/b_n = 1.

Proof See Supplementary Section C.

Observe that for p_U = (1/k, ..., 1/k), we have that

E[ℓ2²(p̂, p)] ≤ E[ℓ2²(p̂, p_U)] = (1 + k² e^{ε/2}/((k − 1)(e^{ε/2} − 1)²)) (1 − 1/k)/n,   (12)

E[ℓ1(p̂, p)] ≲ E[ℓ1(p̂, p_U)] ≈ √( (e^{ε/2} + k − 1)(e^{ε/2}(k − 1) + 1) / ((e^{ε/2} − 1)²(k − 1)) ) · √(2(k − 1)/(π n)).   (13)
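A corresponding sketch of the k-RAPPOR encoder and the empirical estimator (11), under the same assumptions and with the same illustrative helper names as the k-RR sketch above:

```python
import numpy as np

def krappor_encode(x, k, eps, rng):
    """Basic one-time RAPPOR (k-RAPPOR): one-hot encode x, then flip each bit
    independently, keeping it w.p. e^{eps/2}/(1 + e^{eps/2})."""
    keep = np.exp(eps / 2) / (1 + np.exp(eps / 2))
    bits = np.zeros(k, dtype=int)
    bits[x] = 1
    flip = rng.random(k) >= keep             # coordinates whose bit is negated
    return np.where(flip, 1 - bits, bits)

def krappor_empirical_estimate(reports, eps):
    """Empirical estimator (11): p_hat_j = ((e^{eps/2}+1) T_j/n - 1)/(e^{eps/2}-1),
    where T_j counts the 1s observed in coordinate j."""
    t = np.asarray(reports).mean(axis=0)     # T_j / n
    s = np.exp(eps / 2)
    return ((s + 1) * t - 1) / (s - 1)

# Toy run, same geometric-shaped distribution as the k-RR sketch above.
rng = np.random.default_rng(0)
k, n, eps = 16, 100_000, 1.0
p = 0.8 ** np.arange(k)
p /= p.sum()
xs = rng.choice(k, size=n, p=p)
ys = np.stack([krappor_encode(x, k, eps, rng) for x in xs])
print(np.abs(krappor_empirical_estimate(ys, eps) - p).sum())
```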
4.3. Theoretical Analysis

We now analyze the performance of k-RR and k-RAPPOR relative to maximum likelihood estimation (which is equivalent to empirical estimation) on the non-privatized data X^n. In the non-private setting, the maximum likelihood estimator has a worst-case risk of √(2(k − 1)/(π n)) under the ℓ1 loss, and a worst-case risk of (1 − 1/k)/n under the ℓ2² loss (Lehmann & Casella, 1998; Kamath et al., 2015).

Performance under k-RR. Comparing Equation (8) to the observation above, we can see that an extra factor of 1 + k(k + 2(e^ε − 1))/(e^ε − 1)² samples is needed to achieve the same ℓ2² loss as in the non-private setting. Similarly, from Equation (9), a factor of ((e^ε + k − 1)/(e^ε − 1))² samples is needed under the ℓ1 loss. For small ε, the sample size n is effectively reduced to nε²/k² (under both losses). When compared to Proposition 1, this result implies that k-RR is not optimal in the high privacy regime. However, for ε ≈ ln k, the sample size n is reduced to n/4 (under both losses). This result suggests that, while k-RR is not optimal for small values of ε, it is order optimal for ε on the order of ln k. Note that k-RR provides a natural interpretation of this low privacy regime: specifically, setting ε = ln k translates to telling the truth with probability 1/2 and lying uniformly over the remainder of the alphabet with probability 1/2, an intuitively reasonable notion of plausible deniability.

Performance under k-RAPPOR. Comparing Equation (12) to the observation at the beginning of this subsection, we can see that an extra factor of 1 + k² e^{ε/2}/((k − 1)(e^{ε/2} − 1)²) samples is needed to achieve the same ℓ2² loss as in the non-private case. Similarly, from Equation (13), an extra factor of (e^{ε/2} + k − 1)(e^{ε/2}(k − 1) + 1)/((e^{ε/2} − 1)²(k − 1)) samples is needed under the ℓ1 loss. For small ε, n is effectively reduced to nε²/4k (under both losses). When compared to Proposition 1, this result implies that k-RAPPOR is order optimal in the high privacy regime. However, for ε ≈ ln k, n is reduced to n/√k (under both losses). This suggests that k-RAPPOR is strictly sub-optimal in the moderate to low privacy regime.

Proposition 5 For all p ∈ S_k and all ε ≥ ln(k/2),

E[||p̂_KRR − p||₂²] ≤ E[||p̂_RAPPOR − p||₂²],   (14)

where p̂_KRR is the empirical estimate of p under k-RR and p̂_RAPPOR is the empirical estimate of p under k-RAPPOR.

Proof See Supplementary Section D.
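The closed forms in (8) and (12) make this regime comparison easy to reproduce numerically: the two extra-sample factors derived above can be evaluated directly to see where the ordering flips. A small sketch with helper names of our own choosing:

```python
import numpy as np

def krr_factor(k, eps):
    """Extra-sample factor for k-RR under ell_2^2 loss (from Equation 8)."""
    t = np.exp(eps) - 1
    return 1 + k * (k + 2 * t) / t**2

def krappor_factor(k, eps):
    """Extra-sample factor for k-RAPPOR under ell_2^2 loss (from Equation 12)."""
    s = np.exp(eps / 2)
    return 1 + k**2 * s / ((k - 1) * (s - 1) ** 2)

k = 64
for eps in [0.5, 1.0, np.log(k) / 2, np.log(k / 2), np.log(k)]:
    print(f"eps={eps:5.2f}  k-RR x{krr_factor(k, eps):10.1f}"
          f"  k-RAPPOR x{krappor_factor(k, eps):10.1f}")
# For small eps, k-RR needs roughly k^2/eps^2 times more samples while
# k-RAPPOR needs only about 4k/eps^2; by eps = ln(k/2) the ordering has
# flipped, consistent with Proposition 5.
```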
4.4. Simulation Analysis

To complement the theoretical analysis above, we ran simulations of k-RR and k-RAPPOR varying the alphabet size k, the privacy level ε, the number of users n, and the true distribution p from which the samples were drawn. In all cases, we report the mean over 10,000 evaluations of ||p̂ − p̂_decoded||₁, where p̂ is the empirical distribution of the ground-truth sample drawn from the true distribution and p̂_decoded is the decoded k-RR or k-RAPPOR distribution. We vary ε over a range that corresponds to the moderate-to-low privacy regimes in our theoretical analysis above, observing that even large values of ε can provide plausible deniability impossible under un-noised logging. We compare using the ℓ1 distance of the two distributions because in most applications we want to estimate all values well, emphasizing neither very large values (as an ℓ2 or higher metric might) nor very small values (as information theoretic metrics might). Supplementary Figures 5 and 6, analogous to the ones in this section, demonstrate that the choice of distance metric does not qualitatively affect our conclusions on the decoding strategies for k-RR or k-RAPPOR nor on the regimes in which each is superior.

The distributions we considered in simulation were binomial distributions with parameter in {.1, .2, .3, .4, .5}, Zipf distributions with parameter in {1, 2, 3, 4, 5}, multinomial distributions drawn from a symmetric Dirichlet distribution with parameter 1, and the geometric distribution with mean k/5. The geometric distribution is shown in Supplementary Figure 4. We focus primarily on the geometric distribution here because qualitatively it shows the same patterns for decoding as the full set of binomial and Zipf distributions and it is sufficiently skewed to represent many real-world datasets. It is also the distribution for which k-RAPPOR does the best relative to k-RR over the largest range of k and ε in our simulations.

4.4.1. DECODING

We first consider the impact of the choice of decoding mechanism used for k-RR and k-RAPPOR. We find that the best decoder in practice for both k-RR and k-RAPPOR on skewed distributions is the projected decoder, which projects p̂_KRR or p̂_RAPPOR onto the probability simplex S_k using the method described in Algorithm 1 of (Wang & Carreira-Perpiñán, 2013). For k-RR, we compare the projected empirical decoder to the normalized empirical decoder (which truncates negative values and renormalizes) and to the maximum likelihood decoder (see Supplementary Section F.1). For k-RAPPOR, we compare the standard decoder, normalized decoder, and projected decoder.

Figure 1 shows that the projected decoder is substantially better than the other decoders for both k-RR and k-RAPPOR for the whole range of k and ε for the geometric distribution. We find this result holds as we vary the number of users from 30 to 10^6 and for all distributions we evaluated except for the Dirichlet distribution, which is the least skewed. For the Dirichlet distribution, the normalized decoder variant is best for both k-RR and k-RAPPOR. Because the projected decoder is best on all the skewed distributions we expect to see in practice, we use it exclusively for the open-alphabet experiments in Section 5.

4.4.2. k-RR VS k-RAPPOR

To construct a fair, empirical comparison of k-RR and k-RAPPOR, we employ the same methodology used above in selecting decoders. Figure 2 shows the difference between the best k-RR decoder and the best k-RAPPOR decoder (for a particular k and ε). For most cells, the best decoder is the projected decoder described above. Note that the best k-RAPPOR decoder is consistently better than the best k-RR decoder for relatively large k and low ε. However, k-RR is slightly better than k-RAPPOR in all conditions where k < e^ε (bottom-right triangle), an empirical result for ℓ1 that complements Proposition 5's statement about empirical estimators in ℓ2. All of the skewed distributions manifest the same pattern as the geometric distribution. As the number of users increases, k-RR's advantage over k-RAPPOR in the low privacy environment shrinks. In the next sections, we will examine the use of cohorts to improve decoding and to handle larger, open alphabets.
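The projected decoder relies on Euclidean projection onto the probability simplex. A minimal sketch of the sort-and-threshold projection of Wang & Carreira-Perpiñán (2013), which can post-process any of the raw estimates above; the function name and toy input are ours:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {p : p >= 0, sum(p) = 1}, via the sort-and-threshold algorithm of
    Wang & Carreira-Perpinan (2013)."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    lam = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + lam, 0.0)

# The projected decoder post-processes a raw empirical estimate, e.g. from (6):
p_hat_raw = np.array([0.62, 0.31, 0.12, -0.05])   # sums to 1, one negative entry
print(project_to_simplex(p_hat_raw))              # nonnegative, still sums to 1
```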
5. Open Alphabets, Hashing, and Cohorts

In practice, the set of values that may need to be collected may not be easily enumerable in advance, preventing a direct application of the binary and k-ary formulations of private distribution estimation. Consider a population of n users, where each user i possesses a symbol s_i drawn from a large set of symbols S whose membership is not known in advance. This scenario is common in practice; for example, in Chrome's estimation of the distribution of home page settings (Erlingsson et al., 2014). Building on this intuitive example, we assume for the remainder of the paper that symbols s_i are strings, but we note that the methods described are applicable to any hashable structures.

5.1. O-RR: k-RR with hashing and cohorts

k-RR is effective for privatizing over known alphabets. Inspired by (Erlingsson et al., 2014), we extend k-RR to open alphabets by combining two primary intuitions: hashing and cohorts. Let HASH(s) be a function mapping S → N with a low collision rate, i.e. HASH(s) = HASH(s′) with very low probability for s ≠ s′. With hashing, we could use k-RR to guarantee local privacy over an alphabet of size k by having each client report Q_KRR(HASH(s) mod k). However, as we will see, hashing alone is not enough to provide high utility because of the increased rate of collisions introduced by the modulus.

Complementing hashing, we also apply the idea of hash cohorts: each user i is assigned to a cohort c_i sampled i.i.d. from the uniform distribution over C = {1, ..., C}. Each cohort c ∈ C provides an independent view of the underlying distribution of strings by projecting the space of strings S onto a smaller space of symbols X using an independent hash function HASH_c. The users in a cohort use their cohort's hash function to partition S into k disjoint subsets by computing x_i = HASH_{c_i}(s_i) mod k = HASH_{c_i}^{(k)}(s_i). Each subset contains approximately the same number of strings, and because each cohort uses a different hash function, the induced partitions for different cohorts are orthogonal: P(x_i = x_j | c_i ≠ c_j) ≈ 1/k even when s_i = s_j.

5.1.1. ENCODING AND DECODING

For encoding, the O-RR privatization mechanism can be viewed as applying k-RR to HASH_c^{(k)}(s), with the cohort c sampled uniformly from C independently of s. Therefore, Q_ORR(y, c|s) is given by

Q_ORR(y, c|s) = (1/(C(e^ε + k − 1))) · e^ε if HASH_c^{(k)}(s) = y, and (1/(C(e^ε + k − 1))) otherwise.   (15)

For decoding, fix a candidate set S and interpret the privatization mechanism Q_ORR as an |S| × kC row-stochastic matrix

Q_ORR = (1/(C(e^ε + k − 1))) (1 + (e^ε − 1) H),   (16)

where 1 denotes the all-ones matrix and

H(y, c|s) = 1{HASH_c^{(k)}(s) = y}.   (17)

Note that H is a sparse |S| × kC binary matrix encoding the hashed outputs for each cohort, wherein each row of H has exactly C non-zero entries (one per cohort). Now m = p Q_ORR is the expected output distribution for true probability vector p, allowing us to form an empirical estimator by using standard least-squares techniques to solve the linear system

p̂_ORR H = (1/(e^ε − 1)) (C(e^ε + k − 1) m̂ − 1).   (18)

Note that when C = 1 and H is the identity matrix, (18) reduces to the standard k-RR empirical estimator as seen in (6). As with the k-RR empirical estimator, p̂_ORR may have negative entries. Section 4.1 describes methods for constraining p̂_ORR to S_k, of which simplex projection is demonstrated to offer superior performance in Section 4.4. The remainder of the paper assumes that O-RR uses the simplex projection strategy.
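A minimal sketch of O-RR encoding (15) and its least-squares decoder (18) for a fixed candidate set. The per-cohort hash (salting a standard hash with the cohort id) and the helper names are illustrative choices, not the exact construction used in the experiments:

```python
import hashlib
import numpy as np

def cohort_hash(s, cohort, k):
    """HASH_c^{(k)}(s): a per-cohort hash of string s, reduced mod k.
    Salting sha1 with the cohort id is an illustrative choice."""
    digest = hashlib.sha1(f"{cohort}:{s}".encode()).hexdigest()
    return int(digest, 16) % k

def orr_encode(s, cohort, k, eps, rng):
    """O-RR: hash s under the user's cohort, then apply k-RR (Equation 15)."""
    x = cohort_hash(s, cohort, k)
    if rng.random() < np.exp(eps) / (np.exp(eps) + k - 1):
        return x
    y = rng.integers(k - 1)
    return y if y < x else y + 1

def orr_decode(reports, cohorts, candidates, C, k, eps):
    """Least-squares decoder (18): solve p_hat @ H = rhs, where H encodes each
    candidate's hashed cell in every cohort."""
    n = len(reports)
    H = np.zeros((len(candidates), k * C))
    for i, s in enumerate(candidates):
        for c in range(C):
            H[i, c * k + cohort_hash(s, c, k)] = 1.0
    m_hat = np.zeros(k * C)                    # empirical distribution over (y, c)
    for y, c in zip(reports, cohorts):
        m_hat[c * k + y] += 1.0 / n
    rhs = (C * (np.exp(eps) + k - 1) * m_hat - 1.0) / (np.exp(eps) - 1)
    p_hat, *_ = np.linalg.lstsq(H.T, rhs, rcond=None)
    return p_hat        # post-process with the simplex projection sketched above
```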
Figure 1: The improvement in ℓ1 decoding of the projected k-RR decoder (left) and projected k-RAPPOR decoder (right). Each grid varies the size of the alphabet k (rows) and privacy parameter ε (columns). Each cell shows the difference in ℓ1 magnitude that the projected decoder has over the ML and normalized k-RR decoders (left) or the standard and normalized k-RAPPOR decoders (right). Negative values mean improvement of the projected decoder over the next best alternative.

Figure 2: The improvement (negative values, blue) of the best k-RR decoder over the best k-RAPPOR decoder, varying the size of the alphabet k (rows) and privacy parameter ε (columns). The left charts focus on small numbers of users (100); the right charts show a large number of users (30000, also representative of larger numbers of users). The top charts show the geometric distribution (skewed) and the bottom charts show the Dirichlet distribution (flat).

5.2. O-RAPPOR

RAPPOR also extends from k-ary alphabets to open alphabets using hashing and cohorts (Erlingsson et al., 2014); we refer to this extension herein as O-RAPPOR. However, the k-RAPPOR mechanism uses a size |X̃| = 2^k input representation as opposed to k-RR's size |X| = k representation. Taking advantage of the larger input space, O-RAPPOR uses an independent h-hash Bloom filter BLOOM_c^{(k)} for each cohort before applying the k-RAPPOR mechanism, i.e. the jth bit of x_i is 1 if HASH_{c,h′}^{(k)}(s_i) = j for any h′ ∈ {1, ..., h}, where the HASH_{c,h′}^{(k)} are a set of hC mutually independent hash functions modulo k. Decoding for O-RAPPOR is described in (Erlingsson et al., 2014) and follows a similar strategy as for O-RR. However, because this paper focuses on distribution estimation rather than heavy hitter detection, we eliminate both the Lasso regression stage and the filtering of imputed frequencies relative to Bonferroni-corrected thresholds, retaining just the regular least-squares regression.
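For completeness, a sketch of the per-cohort Bloom-filter encoding used by O-RAPPOR. The hash construction mirrors the illustrative one above, and the per-bit flip probability is calibrated with exponent ε/(2h) so that the full report remains ε-locally differentially private for any h (for h = 1 this reduces to the k-RAPPOR randomization of Section 4.2); this calibration is our own accounting rather than a statement from the paper:

```python
import hashlib
import numpy as np

def bloom_encode(s, cohort, k, h):
    """h-hash Bloom filter over k bits for the given cohort (illustrative
    per-cohort hashing: salt a standard hash with cohort and hash index)."""
    bits = np.zeros(k, dtype=int)
    for j in range(h):
        digest = hashlib.sha1(f"{cohort}:{j}:{s}".encode()).hexdigest()
        bits[int(digest, 16) % k] = 1
    return bits

def orappor_encode(s, cohort, k, h, eps, rng):
    """One-time O-RAPPOR report.  Two strings differ in at most 2h Bloom bits,
    so keeping each bit w.p. e^{eps/(2h)}/(1 + e^{eps/(2h)}) makes the whole
    report eps-LDP; with h = 1 this is exactly the k-RAPPOR randomization."""
    bits = bloom_encode(s, cohort, k, h)
    keep = np.exp(eps / (2 * h)) / (1 + np.exp(eps / (2 * h)))
    flip = rng.random(k) >= keep
    return np.where(flip, 1 - bits, bits)
```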
Figure 3: ℓ1 loss of O-RR and O-RAPPOR for n = 10^6 on the geometric distribution when applied to unknown input alphabets (via hash functions, panel (a), "Open alphabets") and to known input alphabets (via perfect hashing, panel (b), "Closed alphabets"). Lines show median ℓ1 loss with 90% confidence intervals over 50 samples. Free parameters are set via grid search over k ∈ {2, 4, 8, ..., 2048, 4096}, C ∈ {1, 2, 4, ..., 512, 1024}, and h ∈ {1, 2, 4, 8, 16} for each ε. Note that the k-RAPPOR and O-RAPPOR lines in (b) are nearly indistinguishable. Baselines indicate expected loss from (1) using an empirical estimator directly on the input symbols and (2) using the uniform distribution as the p̂ estimate.

5.3. Simulation Analysis

We ran simulations of O-RR and O-RAPPOR for n = 10^6 users with input drawn from an alphabet of |S| = 256 symbols under a geometric distribution with mean |S|/5 (see Supplementary Figure 4). As described in Section 4.4, the geometric distribution is representative of actual data and relatively easy for k-RAPPOR and challenging for k-RR. Free parameters were set to minimize the median ℓ1 loss. Similar results for |S| = 4096 and for n = 10^6 and 10^8 are included in the Supplementary Material. In Figure 3(a), we see that under these conditions, O-RR matches the utility of O-RAPPOR in both the very low and high privacy regimes and exceeds the utility of O-RAPPOR over midrange privacy settings.

For O-RR, we find that the optimal k depends directly on ε, that increasing C consistently improves performance in the low-to-mid privacy regime, and that C = 1 noticeably underperforms across the range of privacy levels. For O-RAPPOR, we find that performance improves as k increases (with k = 4096 near the asymptotic limit), and that C = 1 noticeably underperforms across the range of privacy values, with all C ≥ 2 performing indistinguishably. Finally, we find that the optimal value for h is consistently 1, indicating that Bloom filters provide no utility improvement beyond simple hashing. See Supplementary Figure 11 for details.

5.4. Improved Utility for Closed Alphabets

O-RR and O-RAPPOR extend k-ary mechanisms to open alphabets through the use of hash functions and cohorts. These same mechanisms may also be applied to closed alphabets known a priori. While direct application is possible, the reliance on hash functions exposes both mechanisms to unnecessary risk of hash collision. Instead, we modify the O-RR and O-RAPPOR mechanisms, replacing each cohort's generic hash functions with minimal perfect hash functions mapping S to {0, ..., |S| − 1} before applying the modulo k operation. In most closed-alphabet applications, S = {0, ..., |S| − 1}, in which case these minimal perfect hash functions are simply permutations. Also note that in this setting, O-RR and O-RAPPOR reduce to exactly their k-ary counterparts when C and h are both 1, except that the output symbols are permuted.

In Figure 3(b), we evaluate these modified mechanisms using the same method described in Section 5.3 (note that the utilities of k-RAPPOR and O-RAPPOR are nearly indistinguishable). O-RAPPOR benefits little from the introduction of minimal perfect hash functions. In contrast, O-RR's utility improves significantly, meeting or exceeding the utility of all other mechanisms at all considered ε.

6. Conclusion

Data improves products, services, and our understanding of the world. But its collection comes with risks to the individuals represented in the data as well as to the institutions responsible for the data's stewardship. This paper's focus on distribution estimation under local privacy takes one step toward a world where the benefits of data-driven insights are decoupled from the collection of raw data. Our new theoretical and empirical results show that combining cohort-style hashing with the k-ary extension of the classical randomized response mechanism admits practical, state of the art results for locally private logging.

In many applications, data is collected to enable the making of a specific decision. In such settings, the nature of the decision frequently determines the required level of utility, and the number of reports to be collected n is predetermined by the size of the existing user base. Thus, the differential privacy practitioner's role is often to offer users as much privacy as possible while still extracting sufficient utility at the given n. Our results suggest that O-RR may play a crucial role for such a practitioner, offering a single mechanism that provides maximal privacy at any desired utility level simply by adjusting the mechanism's parameters.

In future work, we plan to examine estimation of non-stationary distributions as they change over time, a common scenario in data logged from user interactions. We will also consider what utility improvements may be possible when some responses need more privacy than others, another common scenario in practice. Much more work remains before we can dispel the collection of un-noised data altogether.

Acknowledgements. Thanks to Ulfar Erlingsson, Ilya Mironov, and Andrey Zhmoginov for their comments on drafts of this paper.

References

Bassily, Raef and Smith, Adam. Local, private, efficient protocols for succinct histograms. arXiv preprint arXiv:1504.04686, 2015.

Blocki, Jeremiah, Datta, Anupam, and Bonneau, Joseph. Differentially private password frequency lists. 2016.

Boyd, Stephen and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, 2004.

Chan, T-H Hubert, Li, Mingfei, Shi, Elaine, and Xu, Wenchang. Differentially private continual monitoring of heavy hitters from distributed streams. In Privacy Enhancing Technologies, pp. 140–159. Springer, 2012.

Diakonikolas, Ilias, Hardt, Moritz, and Schmidt, Ludwig. Differentially private learning of structured discrete distributions. In Advances in Neural Information Processing Systems, pp. 2557–2565, 2015.

Duchi, John, Wainwright, Martin J., and Jordan, Michael I. Local privacy and minimax bounds: Sharp rates for probability estimation. In Advances in Neural Information Processing Systems, pp. 1529–1537, 2013a.

Duchi, John C., Jordan, Michael I., and Wainwright, Martin J. Local privacy, data processing inequalities, and statistical minimax rates. arXiv preprint arXiv:1302.3203, 2013b.
Dwork, C. Differential privacy. In Automata, Languages and Programming, pp. 1–12. Springer, 2006.

Dwork, C. and Lei, J. Differential privacy and robust statistics. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 371–380. ACM, 2009.

Dwork, C., McSherry, F., Nissim, K., and Smith, A. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pp. 265–284. Springer, 2006.

Dwork, Cynthia. Differential privacy: A survey of results. In Theory and Applications of Models of Computation, pp. 1–19. Springer, 2008.

Erlingsson, Ulfar, Pihur, Vasyl, and Korolova, Aleksandra. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 1054–1067. ACM, 2014.

Hsu, Justin, Khanna, Sanjeev, and Roth, Aaron. Distributed private heavy hitters. In Automata, Languages, and Programming, pp. 461–472. Springer, 2012.

Kairouz, Peter, Oh, Sewoong, and Viswanath, Pramod. Extremal mechanisms for local differential privacy. In Advances in Neural Information Processing Systems, pp. 2879–2887, 2014.

Kamath, Sudeep, Orlitsky, Alon, Pichapati, Venkatadheeraj, and Suresh, Ananda Theertha. On learning distributions from their samples. In Proceedings of The 28th Conference on Learning Theory, pp. 1066–1100, 2015.

Lehmann, Erich Leo and Casella, George. Theory of Point Estimation, volume 31. Springer Science & Business Media, 1998.

McCallum, Andrew, Nigam, Kamal, et al. A comparison of event models for naive Bayes text classification. In AAAI-98 Workshop on Learning for Text Categorization, volume 752, pp. 41–48. Citeseer, 1998.

Narayanan, Arvind and Shmatikov, Vitaly. Robust de-anonymization of large sparse datasets. In IEEE Symposium on Security and Privacy (SP 2008), pp. 111–125. IEEE, 2008.

Wang, Weiran and Carreira-Perpiñán, Miguel Á. Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application. CoRR, abs/1309.1541, 2013. URL http://arxiv.org/abs/1309.1541.

Warner, Stanley L. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63–69, 1965.