# Differentially Private Fair Division

Pasin Manurangsi (Google Research) and Warut Suksompong (National University of Singapore)

Abstract. Fairness and privacy are two important concerns in social decision-making processes such as resource allocation. We study privacy in the fair allocation of indivisible resources using the well-established framework of differential privacy. We present algorithms for approximate envy-freeness and proportionality when two instances are considered to be adjacent if they differ only on the utility of a single agent for a single item. On the other hand, we provide strong negative results for both fairness criteria when the adjacency notion allows the entire utility function of a single agent to change.

1 Introduction

Fairness is a principal concern in numerous social decision-making processes, not least when it comes to allocating scarce resources among interested parties. Whether we divide equipment between healthcare personnel, assign facility time slots to potential users, or distribute office space among working groups in an organization, it is desirable that all parties involved feel fairly treated. While fair division has been studied in economics for several decades (Brams and Taylor 1996; Moulin 2003, 2019), the subject has received substantial interest from computer scientists in recent years, much of which has concentrated on the fair allocation of indivisible resources (Bouveret, Chevaleyre, and Maudet 2016; Markakis 2017; Walsh 2020; Suksompong 2021; Amanatidis et al. 2022; Aziz et al. 2022).

The fair division literature typically focuses on satisfying concrete fairness criteria. Two of the most prominent criteria are envy-freeness and proportionality. In an envy-free allocation, no agent prefers to have another agent's bundle instead of her own. In a proportional allocation, every agent receives value at least 1/n of her value for the entire resource, where n denotes the number of agents. As neither of these criteria is always satisfiable,[1] researchers have proposed the relaxations envy-freeness up to c items (EFc) and proportionality up to c items (PROPc); here, c is a non-negative integer parameter. Under additive utilities, EFc implies PROPc for every c,[2] and an EF1 allocation (which must also be PROP1) is guaranteed to exist (Lipton et al. 2004).

[1] For example, imagine two siblings fighting over one toy.
[2] See the proof of Proposition 2.2(b) in our prior work (Manurangsi and Suksompong 2022a).

In addition to fairness, another consideration that has become increasingly important, as large amounts of data are constantly collected, processed, and analyzed, is privacy. Indeed, an agent participating in a resource allocation procedure may not want other participants to know her preferences if she considers these as sensitive information, for example, if these preferences correspond to the times when she is available to use a facility, or if they represent her valuations for potential team members when distributing employees in an organization. Consequently, a desirable procedure should ensure that an individual participant's preferences cannot be inferred based on the output of the procedure. Achieving privacy alone is trivial, as the procedure can simply ignore the agents' preferences and always output a fixed allocation that it announces publicly in advance.
However, it is clear that such a procedure can be highly unfair for certain preferences of the agents. Despite its significance, the issue of privacy has been largely unaddressed in the fair division literature as far as we are aware.[3]

In this paper, we investigate the fundamental question of whether fairness and privacy can be attained simultaneously in the allocation of indivisible resources. We use the well-established framework of differential privacy (DP), which has been widely adopted not only in academia but also in industry (Erlingsson, Pihur, and Korolova 2014; Shankland 2014; Greenberg 2016; Apple Differential Privacy Team 2017; Ding, Kulkarni, and Yekhanin 2017) as well as government sectors (Abowd 2018). Intuitively, the output distribution of a (randomized) DP[4] algorithm should not change by much when a single entry of the input is modified. DP provides privacy protection for individual entries by ensuring that an adversary with access to the output can only gain limited information about each individual entry. At the same time, DP algorithms often still provide useful outputs based on the aggregated information. We outline the tools and concepts from DP used in this work in Section 2.2.[5]

[3] Sun and Yang (2009) studied a setting in which agents have reservation values over items, and considered a mechanism to be private if it does not require agents to reveal these values.
[4] We use the abbreviation DP for both "differential privacy" and "differentially private".
[5] For in-depth treatments of the subject, we refer to the surveys by Dwork (2008) and Dwork and Roth (2014).

|  |  | Agent-Level DP | (Agent × Item)-Level DP |
|---|---|---|---|
| EF | Upper | O(m/n) (Trivial) | O(n log m) (Theorem 4.1) |
| EF | Lower | Ω(√((m/n) log n)) (Theorem 3.1) | Ω(log m) (Theorem 4.8) |
| PROP | Upper | O(m/n) (Trivial) | O(log m) (Theorem 4.7) |
| PROP | Lower | Ω(√(m/n)) (Theorem 3.2) | Ω((log m)/n) (Theorem 4.10) |

Table 1: Overview of our results. We display the smallest c such that there is an ε-DP algorithm that outputs an EFc or PROPc allocation with probability at least 1 − β. For simplicity of the bounds, we assume that m > n^2, ε is a small constant, and β is sufficiently small (depending only on ε and n, not on m). All upper bounds hold even for connected allocations. Lower bounds for (agent × item)-level DP hold only for connected allocations, but those for agent-level DP hold for arbitrary allocations. The bounds O(m/n) follow trivially from outputting a fixed allocation in which each agent receives O(m/n) items.

As alluded to above, DP is defined with respect to what one considers to be an entry of the input, or equivalently, in terms of adjacent inputs. We consider two notions of adjacency between fair division instances. For agent-level adjacency, two instances are considered to be adjacent if they differ on the utility function of at most one agent. For (agent × item)-level adjacency, two instances are adjacent if they differ at most on the utility of a single agent for a single item. We work with the standard definition of ε-differential privacy (ε-DP): for a parameter ε ≥ 0, an algorithm is said to satisfy ε-DP if the probability that it outputs a certain allocation for an input and the corresponding probability for an adjacent input differ by a factor of at most e^ε. Note that, for the same ε, agent-level DP offers a stronger privacy protection, covering the entire utility function of an individual agent, whereas (agent × item)-level DP only protects the utility of a single agent for a single item.
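To make the two adjacency notions concrete, here is a minimal sketch that checks them for additive utilities stored as an n × m matrix (one row per agent, one column per item). The matrix representation and function names are our own illustration rather than anything from the paper, and the definitions in Section 2.2 also cover non-additive utilities.

```python
import numpy as np

def agent_level_adjacent(U, V):
    """True if the utility matrices U, V (agents x items) differ on at most
    one row, i.e., on the utility function of at most one agent."""
    assert U.shape == V.shape
    changed_agents = np.where((U != V).any(axis=1))[0]
    return len(changed_agents) <= 1

def agent_item_level_adjacent(U, V):
    """True if U and V differ in at most one (agent, item) entry."""
    assert U.shape == V.shape
    return int((U != V).sum()) <= 1

# Changing one agent's value for one item keeps the inputs adjacent under both
# notions; changing her entire row is adjacent only at the agent level.
U = np.array([[1, 0, 1], [0, 1, 1]])
V = U.copy(); V[0, 2] = 0   # agent 0 changes her value for item 2
W = U.copy(); W[0, :] = 0   # agent 0 changes her whole utility function
print(agent_item_level_adjacent(U, V), agent_level_adjacent(U, V))  # True True
print(agent_item_level_adjacent(U, W), agent_level_adjacent(U, W))  # False True
```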
Our goal is to devise ε-DP algorithms that output an EFc or PROPc allocation for a small value of c with sufficiently high probability, or to prove that this task is impossible. Denote by n and m the number of agents and items, respectively.

We begin in Section 3 by considering agent-level DP. For this demanding benchmark, we establish strong lower bounds with respect to both approximate envy-freeness and proportionality (Theorems 3.1 and 3.2). In both cases, our lower bounds imply that, for fixed n and ε, no ε-DP algorithm can output an EFc or PROPc allocation for c = o(√m) with some large constant probability. Our results hold even when the agents have binary additive utilities, and indicate that agent-level DP is too stringent to permit algorithms with significant fairness guarantees.

In Section 4, we turn our attention to (agent × item)-level DP, and deliver encouraging news for this more relaxed notion. In contrast to the previous lower bounds, we present ε-DP algorithms for EFc and PROPc where c only grows logarithmically in m for fixed n and ε (Theorems 4.1 and 4.7). Our EFc algorithm works for arbitrary monotone utility functions, whereas our PROPc algorithm requires (not necessarily binary) additive utilities. Moreover, our algorithms always output allocations that are connected.[6] We complement these results by showing a tight lower bound of Ω(log m) for connected allocations (Theorems 4.8 and 4.10), even with binary additive utilities. A summary of our results can be found in Table 1.

[6] See Section 2 for the definition. Connectivity can be desirable when there is a spatial or temporal order of the items, for instance, when allocating time slots to facility users or offices along a corridor to research groups (Bouveret et al. 2017; Suksompong 2019; Bei et al. 2022; Bilò et al. 2022).

2 Preliminaries

2.1 Fair Division

In fair division of indivisible items, there is a set N = [n] of agents and a set M = [m] of items, where [k] denotes the set {1, 2, . . . , k} for each positive integer k. The utility function of agent i is given by u_i : 2^M → R≥0. Throughout this work, we assume that utility functions are monotone, that is, u_i(S) ≤ u_i(T) for any S ⊆ T ⊆ M. For a single item j ∈ M, we write u_i(j) instead of u_i({j}). We seek to output an allocation A = (A_1, . . . , A_n), which is an ordered partition of M into n bundles.

We consider two important fairness notions. Let c be a non-negative integer. An allocation A is said to be envy-free up to c items (EFc) if, for any i, i' ∈ N, there exists S ⊆ A_{i'} with |S| ≤ c such that u_i(A_i) ≥ u_i(A_{i'} \ S). An allocation A is said to be proportional up to c items (PROPc) if, for any i ∈ N, there exists S ⊆ M \ A_i with |S| ≤ c such that u_i(A_i) ≥ u_i(M)/n − u_i(S). We say that an allocation A is connected if each A_i corresponds to an interval, i.e., A_i = {ℓ, ℓ+1, . . . , r} for some ℓ, r ∈ M; it is possible that A_i is empty. Let Pconn(m, n) denote the set of all connected allocations. We will use the following result on the existence of connected EF2 allocations.[7]

Theorem 2.1 (Bilò et al. 2022). For any m, n ∈ N and any monotone utility functions, there exists a connected EF2 allocation.

A utility function u_i is additive if u_i(S) = Σ_{j∈S} u_i(j) for all S ⊆ M. Furthermore, an additive utility function is said to be binary if u_i(j) ∈ {0, 1} for all j ∈ M.

[7] Recently, Igarashi (2023) improved this guarantee to EF1.

2.2 Differential Privacy

Let us start by recalling the general definition of DP. Denote by X the set of all possible inputs to the algorithm.
Definition 2.2 (Differential Privacy (Dwork et al. 2006b)). Let ε ≥ 0 be a non-negative real number. A randomized algorithm M is said to be ε-differentially private (ε-DP) if, for every pair of adjacent inputs X, X', it holds that Pr[M(X) = o] ≤ e^ε · Pr[M(X') = o] for all o ∈ range(M).

An input in our fair division context consists of the agents' utility functions. Different notions of adjacency lead to different levels of privacy protection. We consider two natural notions: agent-level DP and (agent × item)-level DP.

Definition 2.3 (Agent-Level DP). Two inputs (u_i)_{i∈N} and (u'_i)_{i∈N} are said to be agent-level adjacent if they coincide on all but a single agent, i.e., there exists i* ∈ N such that u_i = u'_i for all i ∈ N \ {i*}. An algorithm that is ε-DP against this adjacency notion is said to be agent-level ε-DP.

Definition 2.4 ((Agent × Item)-Level DP). Two inputs (u_i)_{i∈N} and (u'_i)_{i∈N} are said to be (agent × item)-level adjacent if they coincide on all but the utility of a single agent for a single item, i.e., there exist i* ∈ N, j* ∈ M such that u_i = u'_i for all i ∈ N \ {i*}, and u_{i*}(S) = u'_{i*}(S) for all S ⊆ M \ {j*}. An algorithm that is ε-DP against this adjacency notion is said to be (agent × item)-level ε-DP.

It is clear that agent-level DP is a more demanding notion than (agent × item)-level DP. Specifically, the former provides a stronger privacy protection than the latter, and designing an algorithm for the former is more difficult. Indeed, we will prove strong lower bounds for agent-level DP and present algorithms for (agent × item)-level DP.

Next, we outline several tools from the DP literature that will be useful for our proofs.

Basic Composition. The first tool that we will use is the composition of DP: the result of running multiple DP algorithms remains DP, but with a worse privacy parameter.

Theorem 2.5 (Basic Composition of DP, e.g., Dwork and Roth (2014)). An algorithm that is a result of running two algorithms (possibly in an adaptive manner) that are ε1-DP and ε2-DP, respectively, is (ε1 + ε2)-DP.

Group Privacy. While differential privacy offers protection primarily against the adjacency notion for which it is defined, it also offers protection against more general adjacency notions. Below we state one such protection, which is often referred to as group differential privacy. Let ≈ be any adjacency relation. For k ∈ N, let us define ≈^k as the adjacency relation where X ≈^k X' if and only if there exists a sequence X_0, . . . , X_k such that X_0 = X, X_k = X', and X_{i−1} ≈ X_i for all i ∈ [k].

Lemma 2.6 (Group Differential Privacy, e.g., Vadhan (2017)). Let k ∈ N. If an algorithm is ε-DP with respect to an adjacency notion ≈, it is also (kε)-DP with respect to the adjacency notion ≈^k.

As an immediate consequence of Lemma 2.6, any (agent × item)-level ε-DP algorithm is also agent-level (mε)-DP. However, the blow-up of the privacy parameter to mε makes the latter guarantee rather weak, especially as m grows.

Sensitivity. We next define the sensitivity of a function, which will be used multiple times in this work. Note that the definition depends on the adjacency notion, but we do not explicitly include it in the notation for convenience.

Definition 2.7. The sensitivity of a function f : X → R (with respect to adjacency notion ≈) is defined as Δ(f) := max_{X ≈ X'} |f(X) − f(X')|.

Sensitivity is a key notion in DP. As shown by Dwork et al. (2006b), the Laplace mechanism, which outputs f(X) + Z where Z is drawn from the Laplace distribution[8] with scale Δ(f)/ε, satisfies ε-DP.

[8] The Laplace distribution with scale b is the distribution whose probability density function is proportional to exp(−|x|/b).
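As a concrete illustration, here is a minimal sketch of the Laplace mechanism under the assumption that we release a single real-valued statistic with a known sensitivity bound; the function name and the example query are ours, not part of the paper.

```python
import numpy as np

def laplace_mechanism(f_value, sensitivity, epsilon, rng=None):
    """Release f(X) + Z with Z ~ Laplace(scale = sensitivity / epsilon).

    If `sensitivity` upper-bounds Delta(f) under the chosen adjacency notion,
    the released value satisfies epsilon-DP (Dwork et al. 2006b)."""
    rng = rng or np.random.default_rng()
    return f_value + rng.laplace(scale=sensitivity / epsilon)

# Example: privately estimate how many items an agent values at 1.  Under
# (agent x item)-level adjacency, this count has sensitivity 1.
values = np.array([1, 0, 1, 1, 0, 1])
noisy_count = laplace_mechanism(values.sum(), sensitivity=1.0, epsilon=0.5)
```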
This means that if a function has low sensitivity, then we can estimate it to within a small error (with high probability).

Sparse Vector Technique. We will use the so-called sparse vector technique (SVT). The setting is that there are low-sensitivity functions f_1, . . . , f_H : X → R. We want to find the first function f_h whose value is above a target threshold. A straightforward approach would be to add Laplace noise to each f_h(X) and then select the function accordingly; due to the basic composition theorem, this would require us to add noise of scale O(H/ε) to each function. SVT allows us to reduce the dependency on H to merely O(log H). The technique was first introduced by Dwork et al. (2009), and the convenient version below is due to Dwork and Roth (2014, Theorem 3.24).

Theorem 2.8. There exists a constant υ > 0 such that the following holds. Let f_1, . . . , f_H : X → R be functions with Δ(f_1), . . . , Δ(f_H) ≤ 1. For any ε > 0, β ∈ (0, 1), and τ ∈ R, there exists an ε-DP algorithm such that, if max_h f_h(X) ≥ τ, then, with probability at least 1 − β, the algorithm outputs h* ∈ [H] with the following properties:
- f_{h*}(X) ≥ τ − υ log(H/β)/ε;
- for all h < h*, f_h(X) ≤ τ + υ log(H/β)/ε.

Exponential Mechanism. We will also use the exponential mechanism (EM) of McSherry and Talwar (2007). In its generic form, EM allows us to select a solution from a candidate set H. Specifically, we may define (low-sensitivity) scoring functions scr_h : X → R for each h ∈ H. Then, EM outputs a solution that approximately maximizes the score. The precise statement is given below.[9]

Theorem 2.9 (McSherry and Talwar 2007). For any ε > 0 and β ∈ (0, 1), a finite set H, and a set of scoring functions {scr_h}_{h∈H} such that Δ(scr_h) ≤ 1 for each h ∈ H, there is an ε-DP algorithm that, on every input X, outputs h* such that

scr_{h*}(X) ≥ max_{h∈H} scr_h(X) − 2 log(|H|/β)/ε

with probability at least 1 − β.

[9] The formulation here can be derived, e.g., by plugging t = log(1/β) into Corollary 3.12 of Dwork and Roth (2014).

2.3 Anti-Concentration Inequality

Denote by Ber(1/2) the distribution that is 0 with probability 1/2, and 1 otherwise. The following type of anti-concentration inequality is well-known; for completeness, we provide a proof for this one in the full version of our paper (Manurangsi and Suksompong 2022b).

Lemma 2.10. If k ≥ 100 and X_1, . . . , X_k are independent random variables drawn from Ber(1/2), then

Pr[X_1 + · · · + X_k < k/2 − 0.1√k] ≥ 1/4.

3 Agent-Level DP

We begin by considering the demanding notion of agent-level DP, and provide strong negative results for this notion. For EFc, we show a lower bound that,[10] when m > n log n, holds even against c = Θ(√((m/n) log n)).

[10] Unless specified otherwise, log refers to the natural logarithm.

Theorem 3.1. There exists a constant ζ > 0 such that, for any ε > 0, there is no agent-level ε-DP algorithm that, for any input binary additive utility functions, outputs an EFc allocation with probability higher than 1 − e^{−ε}/200, where c = ⌊ζ √((m/n) · min{log n, m/n})⌋.

For proportionality, we prove a slightly weaker bound where c = Θ(√(m/n)) and the failure probability required for the lower bound to apply is also smaller at O_ε(1/n) (compared to O_ε(1) for envy-freeness).

Theorem 3.2. There exists a constant ζ > 0 such that, for any ε > 0, there is no agent-level ε-DP algorithm that, for any input binary additive utility functions, outputs a PROPc allocation with probability higher than 1 − e^{−ε}/(8n), where c = ⌊ζ √(m/n)⌋.

Due to space constraints, we only present the proof of Theorem 3.2 here.
The proof of Theorem 3.1, which uses similar arguments but requires a more delicate anti-concentration inequality, can be found in the full version of our paper (Manurangsi and Suksompong 2022b).

3.1 Proof of Theorem 3.2

We let ζ = 0.01. If m < 100n, then c = 0 and the theorem holds trivially even without the privacy requirement. Hence, we may assume that m ≥ 100n. Throughout the proof, we consider random utility functions u = (u_i)_{i∈N} where each u_i(j) is an independent Ber(1/2) random variable. For brevity, we will not repeatedly state this in the calculations below.

We start by proving the following auxiliary lemma: if A_i is small, then, for a random utility u as above, the allocation fails to be PROPc for agent i with a constant probability.

Lemma 3.3. For ζ = 0.01, let c be as in Theorem 3.2 and A be an allocation such that |A_i| ≤ m/n. Then, we have Pr_u[A is not PROPc for agent i] ≥ 1/8.

Proof. Let c' = 2c. Since the utilities are binary, u_i(S) ≤ c for every S ⊆ M with |S| ≤ c, so A fails to be PROPc for agent i whenever u_i(A_i) < u_i(M)/n − c. We therefore have

Pr_u[A is not PROPc for agent i]
≥ Pr_u[u_i(A_i) < u_i(M)/n − c]
= Pr_u[u_i(A_i) < u_i(M \ A_i)/(n − 1) − cn/(n − 1)]
≥ Pr_u[u_i(A_i) < u_i(M \ A_i)/(n − 1) − c']
≥ Pr_u[u_i(M \ A_i) ≥ m(n − 1)/(2n)] · Pr_u[u_i(A_i) < m/(2n) − c']
≥ (1/2) · Pr_u[u_i(A_i) < m/(2n) − c'],   (1)

where the equality uses u_i(M) = u_i(A_i) + u_i(M \ A_i), the second-to-last inequality uses the independence of u_i(A_i) and u_i(M \ A_i), and the last inequality follows from the fact that |M \ A_i| ≥ m(n − 1)/n and symmetry.

Since |A_i| is an integer, |A_i| ≤ ⌊m/n⌋. Moreover, since m/n ≥ 100 and the function f(k) = k/2 − 0.1√k is increasing on [1, ∞), applying Lemma 2.10 with k = ⌊m/n⌋ gives Pr_u[u_i(A_i) < m/(2n) − c'] ≥ 1/4. Plugging this back into (1) yields the desired bound.

We are now ready to prove Theorem 3.2.

Proof of Theorem 3.2. Let ζ = 0.01 and let M be any agent-level ε-DP algorithm. Consider the input utility functions u* = (u*_i)_{i∈N} where the utility functions are all-zero, and consider the distribution M(u*). For any allocation A, if i is drawn uniformly from N, then Pr_i[|A_i| ≤ m/n] ≥ 1/n, since at least one bundle A_i must have size at most m/n. This implies that Pr_{i, A∼M(u*)}[|A_i| ≤ m/n] ≥ 1/n. Thus, there exists i* ∈ N such that Pr_{A∼M(u*)}[|A_{i*}| ≤ m/n] ≥ 1/n. Recalling the definition of u from earlier and applying Lemma 3.3, we have[11]

Pr_{u, A∼M(u*)}[A is not PROPc for agent i*]
≥ Pr_{A∼M(u*)}[|A_{i*}| ≤ m/n] · Pr_{u, A∼M(u*)}[A is not PROPc for agent i* | |A_{i*}| ≤ m/n]
≥ (1/n) · (1/8) = 1/(8n).

Hence, there exists ũ_{i*} such that[12]

Pr_{A∼M(u*)}[A is not PROPc for agent i* w.r.t. ũ_{i*}] ≥ 1/(8n).

Now, let ũ be the input utility such that ũ_i is all-zero for each i ≠ i* while ũ_{i*} is as above. Notice that ũ is adjacent to u* under agent-level adjacency. Thus, applying the ε-DP guarantee of M, we get

Pr_{A∼M(ũ)}[A is not PROPc for agent i* w.r.t. ũ_{i*}] ≥ e^{−ε} · Pr_{A∼M(u*)}[A is not PROPc for agent i* w.r.t. ũ_{i*}] ≥ e^{−ε}/(8n).

This completes the proof.

[11] Here, PROPc is with respect to u.
[12] The abbreviation w.r.t. stands for "with respect to".

4 (Agent × Item)-Level DP

In this section, we turn our attention to (agent × item)-level DP, which is a more relaxed notion than agent-level DP. We explore both the possibilities (Section 4.1) and limits (Section 4.2) of private algorithms with respect to this notion.

4.1 Algorithms

In contrast to agent-level DP, we will show that O_{ε,n}(log m) upper bounds can be attained in the (agent × item)-level DP setting. Before we do so, let us first explain why straightforward approaches do not work. To this end, assume that utilities are additive and u_i(j) ∈ [0, 1] for all i ∈ N, j ∈ M. One may want to estimate u_i(S) for each S using the Laplace mechanism.
While the Laplace mechanism guarantees that the estimate has an expected additive error of O(1/ε), this is not useful for obtaining approximate envy-freeness or proportionality guarantees: it is possible that (almost) every good yields utility much less than 1. In this case, additive errors do not translate to any non-trivial EFc or PROPc guarantees. We will therefore develop different and more robust comparison methods, which ultimately allow us to overcome the aforementioned issue.

Approximate Envy-Freeness. Our main algorithmic result for approximate envy-freeness is stated below. Note that this result holds even for non-additive utility functions.

Theorem 4.1. For any ε > 0 and β ∈ (0, 1], there exists an (agent × item)-level ε-DP algorithm that, for any input monotone utility functions, outputs a connected EFc allocation with probability at least 1 − β, where c = O(1 + (n log(mn) + log(1/β))/ε).

The high-level idea of our algorithm is to apply the exponential mechanism (Theorem 2.9) to select an allocation among the (mn)^n connected allocations. This gives us an error in the score of O_ε(log((mn)^n)) = O_ε(n log(mn)). The question is how to set up the score so that (i) it has low sensitivity and (ii) such an error translates to an approximate envy-freeness guarantee. Our insight is to define the score based on the following modified utility function that removes a certain number of most valuable items.

Definition 4.2. For u = (u_1, . . . , u_n) and k ∈ N ∪ {0}, we define u^{−k} = (u^{−k}_1, . . . , u^{−k}_n) by

u^{−k}_i(S) := min_{T⊆M, |T|≤k} u_i(S \ T) for all i ∈ N, S ⊆ M.

It is clear that u^{−k} inherits the monotonicity of u. We next list some simple but useful properties of such utility functions. The first property, whose proof is trivial, relates Definition 4.2 to approximate envy-freeness.

Observation 4.3. Let k, d ∈ N ∪ {0}. Any allocation that is EFd with respect to u^{−k} is EF(d + k) with respect to u.

The second property is that the d, k values are robust with respect to (agent × item)-level adjacency.

Lemma 4.4. Let k ∈ N, d ∈ N ∪ {0}, and u, ũ be any two (agent × item)-level adjacent inputs. If an allocation A is EFd with respect to u^{−k}, then it is EF(d + 2) with respect to ũ^{−(k−1)}.

Proof. By definition of (agent × item)-level adjacency, there exist i* ∈ N, j* ∈ M such that u_i = ũ_i for all i ∈ N \ {i*}, and u_{i*}(S) = ũ_{i*}(S) for all S ⊆ M \ {j*}. In particular, u_i(S) = ũ_i(S) for every i ∈ N and every S ⊆ M \ {j*}.

Consider any i, i' ∈ N. Since A is EFd with respect to u^{−k}, we have

u^{−k}_i(A_i) ≥ min_{S⊆M, |S|≤d} u^{−k}_i(A_{i'} \ S) = min_{T⊆M, |T|≤d+k} u_i(A_{i'} \ T).   (2)

Furthermore, we have

ũ^{−(k−1)}_i(A_i) = min_{T⊆M, |T|≤k−1} ũ_i(A_i \ T) ≥ min_{T⊆M, |T|≤k−1} ũ_i(A_i \ (T ∪ {j*})) = min_{T⊆M, |T|≤k−1} u_i(A_i \ (T ∪ {j*})) ≥ min_{T⊆M, |T|≤k} u_i(A_i \ T) = u^{−k}_i(A_i),   (3)

and

min_{T⊆M, |T|≤d+k} u_i(A_{i'} \ T) ≥ min_{T⊆M, |T|≤d+k} u_i(A_{i'} \ (T ∪ {j*})) = min_{T⊆M, |T|≤d+k} ũ_i(A_{i'} \ (T ∪ {j*})) ≥ min_{T⊆M, |T|≤d+k+1} ũ_i(A_{i'} \ T) = min_{S⊆M, |S|≤d+2} ũ^{−(k−1)}_i(A_{i'} \ S).   (4)

Thus, we can conclude that

ũ^{−(k−1)}_i(A_i) ≥ u^{−k}_i(A_i) ≥ min_{T⊆M, |T|≤d+k} u_i(A_{i'} \ T) ≥ min_{S⊆M, |S|≤d+2} ũ^{−(k−1)}_i(A_{i'} \ S),

where the three inequalities follow from (3), (2), and (4), respectively. It follows that A is EF(d + 2) with respect to ũ^{−(k−1)}.

We can now define our scoring function.

Definition 4.5. For an allocation A, u = (u_1, . . . , u_n), and g ∈ N, define

scr^g_A(u) := min{t ∈ [g] | A is EF(2t) w.r.t. u^{−(g−t)}}.

We let scr^g_A(u) = g if the set above is empty.

Algorithm 1: (Agent × item)-level ε-DP algorithm for EFc
Parameters: ε > 0, β ∈ (0, 1]
1: g ← 4⌈1 + log((mn)^n/β)/ε⌉
2: return the allocation output by the ε-DP exponential mechanism using the scoring function scr^g_A (Definition 4.5) with the candidate set Pconn(m, n) (Section 2.1).
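The following is a minimal sketch of Algorithm 1 for additive utilities, where the input is a list of per-item values for each agent. For additive utilities, u^{−k}_i(S) reduces to the value of S with its k most valuable items removed, and min over |S| ≤ d of u^{−k}_i(A_{i'} \ S) equals the value of A_{i'} with its d + k most valuable items removed, which keeps the code short; the helper names are our own, and the paper's algorithm handles arbitrary monotone utilities. Sampling with weight exp(−ε · scr^g_A / 2) corresponds to running the exponential mechanism on the negated score, so allocations with lower scores are preferred.

```python
import itertools, math, random

def truncated_value(values_i, bundle, k):
    """u_i^{-k}(S) for additive utilities: value of the bundle after removing
    its k most valuable items (Definition 4.2 specialized to additive u)."""
    vals = sorted((values_i[j] for j in bundle), reverse=True)
    return sum(vals[k:])

def is_efd_wrt_truncated(alloc, values, d, k):
    """Check whether an allocation is EFd with respect to u^{-k}."""
    n = len(values)
    return all(
        truncated_value(values[i], alloc[i], k)
        >= truncated_value(values[i], alloc[ip], d + k)
        for i in range(n) for ip in range(n) if i != ip)

def score(alloc, values, g):
    """scr^g_A(u): smallest t in [g] such that A is EF(2t) w.r.t. u^{-(g-t)}
    (Definition 4.5); defaults to g if no such t exists."""
    for t in range(1, g + 1):
        if is_efd_wrt_truncated(alloc, values, d=2 * t, k=g - t):
            return t
    return g

def connected_allocations(m, n):
    """All connected allocations of items 0..m-1: cut the line into n
    (possibly empty) intervals and assign them to the agents in any order.
    (Empty intervals produce duplicate allocations; harmless for this sketch.)"""
    for cuts in itertools.combinations_with_replacement(range(m + 1), n - 1):
        borders = (0,) + cuts + (m,)
        blocks = [list(range(borders[i], borders[i + 1])) for i in range(n)]
        for perm in itertools.permutations(range(n)):
            yield [blocks[p] for p in perm]

def dp_efc_allocation(values, epsilon, beta):
    """Sketch of Algorithm 1: exponential mechanism over connected allocations."""
    n, m = len(values), len(values[0])
    g = 4 * math.ceil(1 + (n * math.log(m * n) - math.log(beta)) / epsilon)
    candidates = list(connected_allocations(m, n))
    weights = [math.exp(-epsilon * score(A, values, g) / 2) for A in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Tiny example with two agents and four items.
values = [[0.9, 0.1, 0.2, 0.8], [0.2, 0.7, 0.6, 0.1]]
allocation = dp_efc_allocation(values, epsilon=1.0, beta=0.2)
```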
The following lemma shows that this scoring function has low sensitivity.

Lemma 4.6. For any allocation A and g ∈ N, Δ(scr^g_A) ≤ 1.

Proof. Let u, ũ be any pair of (agent × item)-level adjacent inputs. Assume without loss of generality that scr^g_A(u) ≤ scr^g_A(ũ). Let t* := scr^g_A(u). If t* = g, then scr^g_A(ũ) ≥ scr^g_A(u) = g, so scr^g_A(ũ) = g. Otherwise, t* ≤ g − 1, and A is EF(2t*) with respect to u^{−(g−t*)}. Lemma 4.4 ensures that A is EF(2(t* + 1)) with respect to ũ^{−(g−t*−1)}. Thus, scr^g_A(ũ) ≤ t* + 1. Since t* = scr^g_A(u) ≤ scr^g_A(ũ), we have |scr^g_A(u) − scr^g_A(ũ)| ≤ 1 in both cases, completing the proof.

With all the ingredients ready, we can prove Theorem 4.1 by applying the exponential mechanism with appropriate parameters (see Algorithm 1).

Proof of Theorem 4.1. Let g = 4⌈1 + log((mn)^n/β)/ε⌉. We run the exponential mechanism using the scoring function scr^g_A with the candidate set Pconn(m, n); since a lower score is better here, we apply Theorem 2.9 to the negated scores. By Theorem 2.9, this is an ε-DP algorithm that, for each u, with probability at least 1 − β, outputs an allocation A* such that

scr^g_{A*}(u) ≤ min_{A∈Pconn(m,n)} scr^g_A(u) + 2 log(|Pconn(m, n)|/β)/ε.

Fix any u, and define A* as above. By Theorem 2.1, there exists a connected allocation A^{EF2} that is EF2 with respect to u^{−(g−1)}. This means that scr^g_{A^{EF2}}(u) = 1. Furthermore, we have[13] |Pconn(m, n)| ≤ (mn)^n. Plugging these into the inequality above, we get

scr^g_{A*}(u) ≤ 1 + 2 log((mn)^n/β)/ε ≤ g/2,

where the latter inequality follows from our choice of g. Hence, A* is EF(2c') with respect to u^{−(g−c')} for some c' ≤ g/2. Invoking Observation 4.3, we find that A* is EF(g + c'), and therefore EF(3g/2), with respect to u. This concludes our proof.

[13] Indeed, from the set M = {1, 2, . . . , m}, we can allocate one block of items at a time starting from items with lower indices. There are at most m possibilities for the size of the next block, this block can be allocated to one of the (at most n) remaining agents, and we allocate n blocks in total, hence the bound (mn)^n.

Approximate Proportionality. Next, we present an improved result for approximate proportionality, where the dependence on n is reduced to O(log n).

Theorem 4.7. For any ε > 0 and β ∈ (0, 1], there exists an (agent × item)-level ε-DP algorithm that, for any input additive utility functions, outputs a connected PROPc allocation with probability at least 1 − β, where c = O(log n + log(mn/β)/ε).

Our algorithm is based on the well-known moving-knife procedure from cake cutting (Dubins and Spanier 1961). A natural way to implement this idea in our setting is to place the items on a line, put a knife at the left end, and move it rightwards until some agent values the subset of items to the left of the knife at least 1/n of her value for the whole set of items. We give this subset to this agent, and proceed similarly with the remaining agents and items. To make this procedure DP, we can replace the check of whether each agent receives sufficiently high value with the SVT algorithm (Theorem 2.8), where the usual utility is modified similarly to Definition 4.5 to achieve low sensitivity. While this approach is feasible, it does not establish the bound we want: since the last agent has to participate in n rounds of this protocol, the basic composition theorem (Theorem 2.5) implies that we can only allot a privacy budget of ε/n in each round. This results in a guarantee of the form c = O(log m/(ε/n)) = O((n log m)/ε), which does not distinctly improve upon the guarantee in Theorem 4.1.
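As an illustration of this baseline, here is a simplified sketch of one round of the DP moving-knife procedure, built on the standard AboveThreshold instantiation of the SVT. For simplicity it assumes additive values in [0, 1], so that each prefix-sum query has sensitivity at most 2 under (agent × item)-level adjacency; the paper instead modifies the utilities along the lines of Definition 4.5 so that the error is measured in items, and all names below are our own. The full baseline repeats such a round for each agent with budget ε/n, which is exactly the composition cost discussed above.

```python
import numpy as np

def above_threshold(queries, data, threshold, epsilon, sensitivity=1.0, rng=None):
    """Sparse vector technique (AboveThreshold): return the index of the first
    query whose value (approximately) exceeds the threshold, spending only an
    epsilon budget in total no matter how many queries are examined."""
    rng = rng or np.random.default_rng()
    noisy_threshold = threshold + rng.laplace(scale=2 * sensitivity / epsilon)
    for j, q in enumerate(queries):
        if q(data) + rng.laplace(scale=4 * sensitivity / epsilon) >= noisy_threshold:
            return j
    return None

def dp_knife_cut(values_i, n, epsilon):
    """One round of the DP moving-knife sketch for a single agent: privately find
    the first prefix of the line of items whose value reaches the agent's
    proportional share.  Each query (prefix value minus share) changes by at
    most 1 + 1/n <= 2 when a single item's value changes, hence sensitivity 2."""
    m = len(values_i)
    queries = [(lambda vals, r=r: sum(vals[:r]) - sum(vals) / n)
               for r in range(1, m + 1)]
    cut = above_threshold(queries, values_i, threshold=0.0,
                          epsilon=epsilon, sensitivity=2.0)
    return m if cut is None else cut + 1  # number of items left of the knife

# Example: one agent out of n = 3 scanning a line of six items.
prefix_len = dp_knife_cut([0.2, 0.1, 0.4, 0.3, 0.0, 0.5], n=3, epsilon=1.0)
```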
To overcome this issue, notice that instead of targeting a single agent, we can continue moving our knife until at least n/2 agents value the subset of items to the left of the knife at least half as much as the entire set.[14] This allows us to recurse on both sides, thereby reducing the number of rounds to log n. Hence, we may allot a privacy budget of ε/log n in each round. Unfortunately, this only results in a bound of the form c = O(log m/(ε/log n)) = O((log n · log m)/ε), which is still worse than what we claim in Theorem 4.7. Our last observation is that we can afford to make more mistakes in earlier rounds: for example, in the first round, we would be fine with making an error of roughly O(n) in the knife position because the subsets on both sides will be subdivided into Ω(n) parts later. As a result, our strategy is to allot less privacy budget in earlier rounds and more in later rounds. By letting the privacy budgets form a geometric sequence, we can achieve our claimed O(log(mn)/ε) bound. The detailed proof can be found in the full version of our paper (Manurangsi and Suksompong 2022b).

[14] Even and Paz (1984) used a similar idea in cake cutting.

4.2 Lower Bounds

Next, we prove lower bounds for (agent × item)-level DP via the packing method (Hardt and Talwar 2010). This involves constructing inputs that are close to one another (with respect to the corresponding adjacency notion) such that the acceptable solutions (i.e., EFc or PROPc allocations) are different for different inputs. The DP requirement can then be used to rule out the existence of algorithms with strong utility guarantees. We reiterate that our lower bounds hold only against connected allocations. In our constructions, we design the utility functions so that each input forces us to pick particular positions to cut in order to get an EFc or PROPc allocation. We start with the proof for envy-freeness.

Theorem 4.8. There exists ζ > 0 such that, for any ε ∈ (0, 1], there is no ε-DP algorithm that, for any input binary additive utility functions, outputs a connected EFc allocation with probability at least 0.5, where c = ⌊ζ · min{m/n, log m}⌋.

Proof. Let ζ = 0.01, c be as in the theorem statement, and T = ⌊m/(4c + 4)⌋. We may assume that c ≥ 1, as otherwise the theorem holds trivially even without the privacy requirement. Consider the following utility functions.

- Let u* = (u*_1, . . . , u*_n) denote the binary additive utility functions defined as follows: u*_1 and u*_2 are all-zero utility functions. For all i ∈ {3, . . . , n} and j ∈ M, let u*_i(j) = 1 if j > m − (c + 1)(n − 2), and 0 otherwise.
- For every t ∈ [T], let u^t = (u^t_1, . . . , u^t_n) denote the binary additive utility functions defined as follows: u^t_1 and u^t_2 are given by u^t_1(j) = u^t_2(j) = 1 if ⌊(j − 1)/(2c + 1)⌋ = t − 1, and 0 otherwise, for all j ∈ M. For all i ∈ {3, . . . , n}, u^t_i is exactly the same as u*_i defined earlier.

Suppose for contradiction that there is an ε-DP algorithm M that, with probability at least 0.5, outputs a connected allocation that is EFc for its input utility functions. For each t ∈ [T], let A_t denote the set of allocations that are EFc for u^t. The assumption on M can be written as

Pr[M(u^t) ∈ A_t] ≥ 0.5.   (5)

Let ≈ denote the (agent × item)-level adjacency relation. One can check that u* ≈^{4c+2} u^t for all t ∈ [T]. Using this fact together with group DP (Lemma 2.6), we have

Pr[M(u*) ∈ A_t] ≥ e^{−ε(4c+2)} · Pr[M(u^t) ∈ A_t] ≥ 0.5 · e^{−ε(4c+2)},   (6)

where the last inequality uses (5).

Lemma 4.9. A_1, . . . , A_T are disjoint.

Due to space constraints, we defer the proof of Lemma 4.9 to the full version of our paper (Manurangsi and Suksompong 2022b).
The idea behind the proof is that, if there were an allocation A contained in both A_t and A_{t'} for some t < t', there would have to be at least n − 3 cuts between item m − (c + 1)(n − 2) and item m, a cut between item (2c + 1)(t − 1) + 1 and item (2c + 1)t, and a cut between item (2c + 1)(t' − 1) + 1 and item (2c + 1)t'. Since these intervals of items are pairwise disjoint, the n − 1 cuts must be all the cuts in A. Based on this, we can then show that no matter which agent the leftmost interval of A is assigned to, A cannot be EFc for both u^t and u^{t'} simultaneously.

Lemma 4.9 implies that

1 ≥ Σ_{t∈[T]} Pr[M(u*) ∈ A_t] ≥ 0.5 · e^{−ε(4c+2)} · T ≥ 0.5 · e^{−6cε} · ⌊m/(4c + 4)⌋ ≥ 0.5 · e^{−0.06 log m} · m/(10c) ≥ (0.5/√m) · (5√m) > 1,

where the second inequality uses (6), and the third and fourth inequalities follow from our choice of parameters c, T and the assumption that c ≥ 1. This is a contradiction, which establishes Theorem 4.8.

For proportionality, we obtain a slightly weaker bound where the log m term is replaced by log(m/n)/n. The proof is similar to that of Theorem 4.8 and can be found in the full version of our paper (Manurangsi and Suksompong 2022b).

Theorem 4.10. There exists a constant ζ > 0 such that, for any ε ∈ (0, 1], there is no ε-DP algorithm that, for any input binary additive utility functions, outputs a connected PROPc allocation with probability at least 0.5, where c = ⌊ζ · min{m/n, log(m/n)/n}⌋.

5 Conclusion and Future Work

In this paper, we have studied the fundamental task of fair division under differential privacy constraints, and provided algorithms and impossibility results for approximate envy-freeness and proportionality. There are several open questions left by our work. First, it would be useful to close the gaps in terms of n; for example, our envy-freeness upper bound for (agent × item)-level DP grows linearly in n (Theorem 4.1) but our lower bound (Theorem 4.8) does not exhibit this behavior. Another perhaps more interesting technical direction is to extend our lower bounds for (agent × item)-level DP to arbitrary (i.e., not necessarily connected) allocations. Specifically, we leave the following intriguing open question: Is there an (agent × item)-level ε-DP algorithm that, with probability at least 0.99, outputs an EFc allocation for c = O_ε(1) regardless of the values of n and m?

While we have considered the original notion of DP proposed by Dwork et al. (2006b), there are a number of modifications that could be investigated in future work. A commonly studied notion is approximate DP (also called (ε, δ)-DP), which has an additional parameter δ ≥ 0 that specifies the probability with which the condition Pr[M(X) = o] ≤ e^ε · Pr[M(X') = o] is allowed to fail (Dwork et al. 2006a). The notion of DP that we use in this paper corresponds to the case δ = 0 and is often referred to as pure DP. Several problems in the literature are known to admit approximate-DP algorithms with better guarantees compared to pure-DP algorithms (see, e.g., the work of Steinke and Ullman (2016)). In light of this, it would be interesting to explore whether a similar phenomenon occurs in fair division as well.

Acknowledgments

This work was partially supported by the Singapore Ministry of Education under grant number MOE-T2EP20221-0001 and by an NUS Start-up Grant.

References

Abowd, J. M. 2018. The US Census Bureau adopts differential privacy. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), 2867.

Amanatidis, G.; Birmpas, G.; Filos-Ratsikas, A.; and Voudouris, A. A. 2022. Fair division of indivisible goods: a survey. In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI), 5385–5393.
Apple Differential Privacy Team. 2017. Learning with privacy at scale. http://machinelearning.apple.com/research/learning-with-privacy-at-scale. Accessed 2022-11-21.

Aziz, H.; Li, B.; Moulin, H.; and Wu, X. 2022. Algorithmic fair allocation of indivisible items: a survey and new questions. ACM SIGecom Exchanges, 20(1): 24–40.

Bei, X.; Igarashi, A.; Lu, X.; and Suksompong, W. 2022. The price of connectivity in fair division. SIAM Journal on Discrete Mathematics, 36(2): 1156–1186.

Bilò, V.; Caragiannis, I.; Flammini, M.; Igarashi, A.; Monaco, G.; Peters, D.; Vinci, C.; and Zwicker, W. S. 2022. Almost envy-free allocations with connected bundles. Games and Economic Behavior, 131: 197–221.

Bouveret, S.; Cechlárová, K.; Elkind, E.; Igarashi, A.; and Peters, D. 2017. Fair division of a graph. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), 135–141.

Bouveret, S.; Chevaleyre, Y.; and Maudet, N. 2016. Fair allocation of indivisible goods. In Brandt, F.; Conitzer, V.; Endriss, U.; Lang, J.; and Procaccia, A. D., eds., Handbook of Computational Social Choice, chapter 12, 284–310. Cambridge University Press.

Brams, S. J.; and Taylor, A. D. 1996. Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press.

Ding, B.; Kulkarni, J.; and Yekhanin, S. 2017. Collecting telemetry data privately. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS), 3571–3580.

Dubins, L. E.; and Spanier, E. H. 1961. How to cut a cake fairly. American Mathematical Monthly, 68(1): 1–17.

Dwork, C. 2008. Differential privacy: a survey of results. In Proceedings of the 5th International Conference on Theory and Applications of Models of Computation (TAMC), 1–19.

Dwork, C.; Kenthapadi, K.; McSherry, F.; Mironov, I.; and Naor, M. 2006a. Our data, ourselves: Privacy via distributed noise generation. In Proceedings of the 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), 486–503.

Dwork, C.; McSherry, F.; Nissim, K.; and Smith, A. D. 2006b. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference (TCC), 265–284.

Dwork, C.; Naor, M.; Reingold, O.; Rothblum, G. N.; and Vadhan, S. P. 2009. On the complexity of differentially private data release: Efficient algorithms and hardness results. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 381–390.

Dwork, C.; and Roth, A. 2014. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4): 211–407.

Erlingsson, Ú.; Pihur, V.; and Korolova, A. 2014. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), 1054–1067.

Even, S.; and Paz, A. 1984. A note on cake cutting. Discrete Applied Mathematics, 7(3): 285–296.

Greenberg, A. 2016. Apple's differential privacy is about collecting your data but not your data. http://www.wired.com/2016/06/apples-differential-privacy-collecting-data. Accessed 2022-08-08.

Hardt, M.; and Talwar, K. 2010. On the geometry of differential privacy. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC), 705–714.

Igarashi, A. 2023. How to cut a discrete cake fairly. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI). Forthcoming.
Lipton, R. J.; Markakis, E.; Mossel, E.; and Saberi, A. 2004. On approximately fair allocations of indivisible goods. In Proceedings of the 5th ACM Conference on Economics and Computation (EC), 125–131.

Manurangsi, P.; and Suksompong, W. 2022a. Almost envy-freeness for groups: Improved bounds via discrepancy theory. Theoretical Computer Science, 930: 179–195.

Manurangsi, P.; and Suksompong, W. 2022b. Differentially private fair division. CoRR, abs/2211.12738.

Markakis, E. 2017. Approximation algorithms and hardness results for fair division with indivisible goods. In Endriss, U., ed., Trends in Computational Social Choice, chapter 12, 231–247. AI Access.

McSherry, F.; and Talwar, K. 2007. Mechanism design via differential privacy. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 94–103.

Moulin, H. 2003. Fair Division and Collective Welfare. MIT Press.

Moulin, H. 2019. Fair division in the Internet age. Annual Review of Economics, 11: 407–441.

Shankland, S. 2014. How Google tricks itself to protect Chrome user privacy. http://www.cnet.com/tech/services-and-software/how-google-tricks-itself-to-protect-chrome-user-privacy. Accessed 2022-11-21.

Steinke, T.; and Ullman, J. R. 2016. Between pure and approximate differential privacy. Journal of Privacy and Confidentiality, 7(2): 3–22.

Suksompong, W. 2019. Fairly allocating contiguous blocks of indivisible items. Discrete Applied Mathematics, 260: 227–236.

Suksompong, W. 2021. Constraints in fair division. ACM SIGecom Exchanges, 19(2): 46–61.

Sun, N.; and Yang, Z. 2009. Strategy-proof and privacy preserving fair allocation mechanism. Japanese Economic Review, 60(2): 143–151.

Vadhan, S. 2017. The complexity of differential privacy. In Lindell, Y., ed., Tutorials on the Foundations of Cryptography, chapter 7, 347–450. Springer International Publishing.

Walsh, T. 2020. Fair division: the computer scientist's perspective. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI), 4966–4972.