Allocating Indivisible Items in Categorized Domains

Erika Mackin, Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY, USA. mackie2@rpi.edu
Lirong Xia, Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY, USA. xial@cs.rpi.edu

Abstract

We initiate a research agenda of mechanism design for categorized domain allocation problems (CDAPs), where indivisible items from multiple categories are allocated to agents without monetary transfer and each agent gets at least one item per category. We focus on basic CDAPs, where each agent gets exactly one item per category. We first characterize serial dictatorships by a minimal set of three axiomatic properties: strategy-proofness, non-bossiness, and category-wise neutrality. Then, we propose a natural extension of serial dictatorships called categorial sequential allocation mechanisms (CSAMs), which allocate the items in multiple rounds: in each round, the designated agent chooses an item from a designated category. We fully characterize the worst-case rank efficiency of CSAMs for optimistic and pessimistic agents.

1 Introduction

Suppose we are organizing a seminar and must allocate 10 discussion topics and 10 dates to 10 students. Students have heterogeneous and combinatorial preferences over (topic, date) bundles. For example, a student may say "I would prefer an early date if I get an easy topic, but I would prefer a late date if I get a hard topic". How should we allocate the topics and dates to students based on their preferences? This example illustrates a common situation of allocating multiple indivisible items, which we formulate as a categorized domain. A categorized domain contains multiple indivisible items, each of which belongs to one of p ≥ 1 categories.
In categorized domain allocation problems (CDAPs), we want to design a mechanism to allocate the categorized items to agents without monetary transfer, such that each agent gets at least one item per category. In the seminar-organization example there are two categories, topics and dates, and each agent (student) must get one topic and one date. Many other allocation problems are CDAPs. For example, in cloud computing, agents have heterogeneous preferences over multiple types of items including CPU, memory, and storage [Ghodsi et al., 2011; 2012; Bhattacharya et al., 2013]; patients must be allocated multiple types of resources including surgeons, nurses, rooms, and equipment [Huh et al., 2013]; agents may want to reallocate houses and cars [Moulin, 1995; Konishi et al., 2001]; and college students compete for course slots from multiple categories, e.g. computer science courses, math courses, social science courses, etc.

The design and analysis of allocation mechanisms for classical non-categorized domains have been an active research area at the interface of computer science and economics. In computer science, allocation problems have been studied as multi-agent resource allocation [Chevaleyre et al., 2006]. In economics, allocation problems have been studied as one-sided matching, also known as assignment problems [Sönmez and Ünver, 2011]. Previous research faces three main barriers.

Preference bottleneck: When the number of items is not small, it is impractical for the agents to fully express their preferences over all (exponentially many) bundles of items.

Computational bottleneck: Even if the agents can express their preferences compactly using some preference language, computing an optimal allocation is often hard.

Threats of agents' strategic behavior: An agent may have an incentive to report untruthfully to obtain a more preferred bundle. This may lead to a socially inefficient allocation.

Our Contributions.
We initiate a research agenda of mechanism design for CDAPs towards breaking the three aforementioned barriers. CDAPs naturally generalize classical non-categorized allocation problems (which are CDAPs with one category). CDAPs are our main conceptual contribution. As a first step, we focus on basic categorized domain allocation problems (basic CDAPs), where the number of items in each category is exactly the same as the number of agents, so that each agent gets exactly one item from each category. The seminar-organization example is a basic CDAP.

Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)

Our technical contributions are three-fold. First, we characterize serial dictatorships for any basic CDAP with at least two categories by a minimal set of three axiomatic properties: strategy-proofness, which states that no agent has an incentive to misreport her preferences; non-bossiness, which states that no agent can change the allocation of any other agent without changing her own by reporting differently; and category-wise neutrality, which states that after permuting the names of the items in any category in all agents' preferences, the outcome allocation is permuted in the same way. This characterization helps us understand the possibility of designing strategy-proof mechanisms to overcome the third barrier, i.e. threats of agents' strategic behavior. Second, to overcome the preference bottleneck and the computational bottleneck, and to go beyond serial dictatorships, we propose categorial sequential allocation mechanisms (CSAMs), which are a large class of indirect mechanisms that naturally extend serial dictatorships [Svensson, 1999], sequential allocation protocols [Bouveret and Lang, 2011; Cechlárová et al., 2014; 2015], and the draft mechanism [Budish and Cantillon, 2012].
For n agents and p categories, a CSAM is characterized by an ordering over all (agent, category) pairs: in each round, the designated agent picks an available item from the designated category. Third, we completely characterize the worst-case rank efficiency of CSAMs for any combination of two types of myopic agents: optimistic agents, who always choose the item in their top-ranked bundle that is still available, and pessimistic agents, who always choose the item with the best worst-case guarantee. This characterization naturally leads to useful corollaries that help us choose the optimal CSAMs w.r.t. their worst-case rank efficiency. For example, we show that while serial dictatorships with all-optimistic agents have the best worst-case utilitarian rank, they have the worst worst-case egalitarian rank (Proposition 1). A previous version of this paper was presented at the ISAIM-14 special session on Computational Social Choice [Xia, 2014]. The full version with all missing proofs can be found on arXiv.

Related Work and Discussions. We are not aware of previous work that explicitly formulates CDAPs. Previous work on multi-type resource allocation often assumes that items of the same type are interchangeable and that agents have a specific type of preferences, e.g. Leontief preferences [Ghodsi et al., 2011] or threshold preferences [Huh et al., 2013]. CDAPs are different because agents' preferences are only required to be rankings but are not otherwise restricted. From the modeling perspective, if we ignore the categorial information then CDAPs become standard multi-agent resource allocation problems. However, the categorial information can be very useful in designing natural allocation mechanisms. For example, CSAMs are not even well-defined without the categorial information. As another example, agents can naturally use graphical languages such as CP-nets [Boutilier et al., 2004] to represent their preferences.
Then natural solutions for combinatorial voting [Brandt et al., 2013], such as sequential voting, can be modified to allocate items for CDAPs. Technically, one-sided matching problems are basic CDAPs with one category. Our characterization of serial dictatorships for basic CDAPs is different from characterizations of serial dictatorships and similar mechanisms for one-sided matching [Svensson, 1999; Pápai, 2000a; 2000b; 2001; Ehlers and Klaus, 2003; Hatfield, 2009]. This is because the category-wise neutrality used in our characterization is different from (and arguably weaker than) the neutrality used in previous work. Our analysis of the worst-case rank efficiency of categorial sequential allocation mechanisms resembles the price of anarchy [Koutsoupias and Papadimitriou, 1999], which is defined for strategic agents together with a social welfare function that numerically evaluates the quality of outcomes. Our theorem is also related to distortion in the voting setting [Procaccia and Rosenschein, 2006; Boutilier et al., 2012], which concerns the social welfare loss caused by agents reporting a ranking instead of a utility function. Nevertheless, our approach is significantly different because we focus on allocation problems for myopic and strategic agents, and we assume neither the existence of agents' cardinal preferences nor a social welfare function, even though our theorem can easily be extended to study worst-case social welfare loss given a social welfare function, as in Proposition 1. Finally, we are certainly not the first to study (optimistically or pessimistically) myopic agents in social choice [Brams et al., 1998; Lacy and Niou, 2000; Brams et al., 2006; Meir et al., 2014]. However, previous work focused on voting or cake cutting, while we focus on the allocation of indivisible items. Analyzing outcomes for other types of agents, especially strategic agents, is a promising direction for future research.
2 Categorized Domain Allocation Problems

Definition 1 A categorized domain is composed of p ≥ 1 categories of indivisible items, denoted by {D_1, . . . , D_p}. In a categorized domain allocation problem (CDAP), the items are allocated to n agents without monetary transfer, such that each agent gets at least one item from each category. In a basic categorized domain for n agents, for each i ≤ p we have |D_i| = n; D = D_1 × · · · × D_p; and each agent's preferences are represented by a linear order over D. In a basic categorized domain allocation problem (basic CDAP), each agent gets exactly one item from each category.

In this paper, we focus on basic categorized domains and basic CDAPs for non-sharable items [Chevaleyre et al., 2006], that is, each item can only be allocated to one agent. For any i ≤ p, we denote D_i = {1, . . . , n}. Each element in D is called a bundle. For any j ≤ n, let R_j denote a linear order over D and let P = (R_1, . . . , R_n) denote the agents' (preference) profile. For any pair of bundles d, e ∈ D, we use d R_j e to denote that d is ranked higher than e by agent j. An allocation A is a mapping from {1, . . . , n} to D such that for every i ≤ p, {[A(1)]_i, . . . , [A(n)]_i} = D_i, where for any j ≤ n and any i ≤ p, A(j) is the bundle allocated to agent j and [A(j)]_i is the item in category i allocated to agent j. An allocation mechanism f is a mapping that takes a profile as input and outputs an allocation. We use f_j(P) to denote the bundle allocated to agent j by f for profile P.

We now define three axioms for allocation mechanisms. The first two are common in the literature [Svensson, 1999]. A direct mechanism f satisfies strategy-proofness if no agent benefits from misreporting her preferences. That is, for any profile P, any agent j, and any linear order R'_j over D, either f_j(P) = f_j(R'_j, R_{-j}) or f_j(P) R_j f_j(R'_j, R_{-j}), where R_{-j} is composed of the preferences of all agents except agent j. f satisfies non-bossiness if no agent is bossy.
An agent is bossy if she can misreport her preferences to change the allocation of some other agent without changing her own. That is, f is non-bossy if for any profile P, any agent j, and any linear order R'_j over D, [f_j(P) = f_j(R'_j, R_{-j})] ⇒ [f(P) = f(R'_j, R_{-j})].

f satisfies category-wise neutrality if, after applying a permutation to the items in any given category, the allocation is permuted in the same way. That is, for any profile P, any category i, and any permutation M_i over D_i, we have f(M_i(P)) = M_i(f(P)), where for any bundle d ∈ D, M_i(d) = (M_i([d]_i), [d]_{-i}).

When there is only one category, category-wise neutrality degenerates to the traditional neutrality for one-sided matching [Svensson, 1999]. When p ≥ 2, category-wise neutrality is weaker than the traditional neutrality. This is because neutrality requires that the allocation be insensitive to all permutations over items (bundles), while category-wise neutrality only requires such insensitivity for a specific class of permutations (the permutations that can be decomposed into multiple category-wise permutations).

A serial dictatorship is characterized by a linear order K over {1, . . . , n} such that agents choose items in turns according to K. In each step, the designated agent chooses her top-ranked bundle that is still available.

Example 1 Let n = 3 and p = 2, so D = {1, 2, 3} × {1, 2, 3}. Writing a bundle (d_1, d_2) as d_1d_2, the agents' preferences are as follows.
R_1 = [12 ≻ 21 ≻ 32 ≻ 33 ≻ 31 ≻ 22 ≻ 23 ≻ 13 ≻ 11]
R_2 = [32 ≻ 12 ≻ 21 ≻ 13 ≻ 33 ≻ 11 ≻ 31 ≻ 23 ≻ 22]
R_3 = [13 ≻ 12 ≻ 11 ≻ 22 ≻ 32 ≻ 21 ≻ 33 ≻ 31 ≻ 23]
Let K = [1 ▷ 2 ▷ 3]. In the first round of the serial dictatorship, agent 1 chooses 12; in the second round, agent 2 cannot choose 32 or 12 because item 2 in D_2 is unavailable, so she chooses 21; in the final round, agent 3 chooses 33.

3 An Axiomatic Characterization of Serial Dictatorships

Theorem 1 For any basic CDAP with p ≥ 2 and n ≥ 2, an allocation mechanism is strategy-proof, non-bossy, and category-wise neutral if and only if it is a serial dictatorship.
Moreover, the three axioms are minimal for characterizing serial dictatorships.

Proof sketch: We first present four lemmas that will be used frequently in the proof. The first three lemmas are standard, and their proofs are omitted. The last one (Lemma 4) is new; its proof uses new techniques involving the categorial structure. The first lemma (roughly) says that for all strategy-proof and non-bossy mechanisms f and all profiles P, if every agent j reports a different ranking without enlarging the set of bundles ranked above f_j(P), then the allocation to all agents does not change in the new profile. This resembles (strong) monotonicity in voting.

Lemma 1 Let f be a strategy-proof and non-bossy allocation mechanism over a basic categorized domain with p ≥ 2. For any pair of profiles P and P' such that for all j ≤ n, {d ∈ D : d R'_j f_j(P)} ⊆ {d ∈ D : d R_j f_j(P)}, we have f(P') = f(P).

For any linear order R over D and any bundle d ∈ D, we say a linear order R' is a pushup of d from R if R' can be obtained from R by raising the position of d while keeping the relative positions of all other bundles unchanged. The next lemma states that for any strategy-proof and non-bossy mechanism f, if an agent reports her preferences differently by only pushing up a bundle d, then either the allocation to all agents does not change, or she gets d.

Lemma 2 Let f be a strategy-proof and non-bossy allocation mechanism over a basic categorized domain with p ≥ 2. For any profile P, any j ≤ n, any bundle d, and any R'_j that is a pushup of d from R_j, either (1) f(R'_j, R_{-j}) = f(P) or (2) f_j(R'_j, R_{-j}) = d.

The next lemma states that strategy-proofness, non-bossiness, and category-wise neutrality altogether imply Pareto-optimality, which states that for any profile P, there does not exist an allocation A where (1) all agents like their bundles in A at least as well as their bundles in f(P), and (2) some agents strictly prefer their bundles in A.
Lemma 3 For any basic categorized domain with p ≥ 2, any strategy-proof, non-bossy, and category-wise neutral allocation mechanism is Pareto optimal.

The fourth lemma states that for any strategy-proof and non-bossy allocation mechanism f, any profile P, and any pair of agents j_1, j_2, there is no bundle c composed of the items allocated to j_1 and j_2 by f such that both j_1 and j_2 prefer c to their respective bundles allocated by f.

Lemma 4 Let f be a strategy-proof and non-bossy allocation mechanism over a basic categorized domain with p ≥ 2 and n ≥ 2. For any profile P and any j_1 ≠ j_2 ≤ n, let a = f_{j_1}(P) and b = f_{j_2}(P); then there does not exist c ∈ {a_1, b_1} × {a_2, b_2} × · · · × {a_p, b_p} such that c R_{j_1} a and c R_{j_2} b, where a_i is the i-th component of a.

Proof: Suppose for the sake of contradiction that such a bundle c exists for some P, j_1, and j_2. Let d denote the bundle such that c ∪ d = a ∪ b. More precisely, for all i ≤ p, {c_i, d_i} = {a_i, b_i}. For example, if a = 1213, b = 2431, and c = 1211, then d = 2433. The rest of the proof derives a contradiction by proving a series of observations illustrated in Table 1. In each step, we prove that the boxed bundles are allocated to agent j_1 and agent j_2, respectively, and all other agents get their top-ranked bundles.

Step 1. Let R̂_{j_1} = [c ≻ a ≻ d ≻ b ≻ others] and R̂_{j_2} = [c ≻ b ≻ a ≻ d ≻ others], where "others" represents an arbitrary linear order over the remaining bundles, and for any j ≠ j_1, j_2, let R̂_j = [f_j(P) ≻ others]. By Lemma 1, f(P̂) = f(P).

Table 1: The 6 steps in the proof for Lemma 4.
In each step of Table 1, every agent j other than j_1, j_2 reports f_j(P) ≻ others; angle brackets mark the boxed bundles, i.e. the bundles allocated to j_1 and j_2 in that step.

Step 1: R̂_{j_1}: c ≻ ⟨a⟩ ≻ d ≻ b ≻ others; R̂_{j_2}: c ≻ ⟨b⟩ ≻ a ≻ d ≻ others.
Step 2: R̂_{j_1}: c ≻ ⟨a⟩ ≻ d ≻ b ≻ others; R̄_{j_2}: c ≻ a ≻ ⟨b⟩ ≻ d ≻ others.
Step 3: R̄_{j_1}: c ≻ ⟨b⟩ ≻ a ≻ d ≻ others; R̄_{j_2}: c ≻ ⟨a⟩ ≻ b ≻ d ≻ others.
Step 4: R̄_{j_1}: c ≻ ⟨b⟩ ≻ a ≻ d ≻ others; R̃_{j_2}: c ≻ ⟨a⟩ ≻ d ≻ b ≻ others.
Step 5: R̃_{j_1}: c ≻ a ≻ ⟨b⟩ ≻ d ≻ others; R̃_{j_2}: c ≻ ⟨a⟩ ≻ d ≻ b ≻ others.
Step 6: R̃_{j_1}: c ≻ ⟨a⟩ ≻ b ≻ d ≻ others; R̄_{j_2}: c ≻ a ≻ ⟨b⟩ ≻ d ≻ others.

Step 2. Let R̄_{j_2} = [c ≻ a ≻ b ≻ d ≻ others] be a pushup of a from R̂_{j_2}. We will prove that f(R̄_{j_2}, R̂_{-j_2}) = f(P̂) = f(P). Since R̄_{j_2} is a pushup of a from R̂_{j_2}, by Lemma 2, f_{j_2}(R̄_{j_2}, R̂_{-j_2}) is either a or b. We now show that the former case is impossible. Suppose for the sake of contradiction that f_{j_2}(R̄_{j_2}, R̂_{-j_2}) = a; then f_{j_1}(R̄_{j_2}, R̂_{-j_2}) cannot be c, a, or d, since otherwise some item would be allocated twice. This means that f(R̄_{j_2}, R̂_{-j_2}) is Pareto-dominated by the allocation where j_1 gets d, j_2 gets c, and all other agents get their top-ranked bundles. This contradicts the Pareto-optimality of f (Lemma 3). Hence f_{j_2}(R̄_{j_2}, R̂_{-j_2}) = b = f_{j_2}(P̂). By non-bossiness we have f(R̄_{j_2}, R̂_{-j_2}) = f(P̂) = f(P).

Step 3. Let R̄_{j_1} = [c ≻ b ≻ a ≻ d ≻ others] be a pushup of b from R̂_{j_1}. We will prove that in f(R̄_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}), j_1 gets b, j_2 gets a, and all other agents get the same bundles as in f(P). Since R̄_{j_1} is a pushup of b from R̂_{j_1}, by Lemma 2, f_{j_1}(R̄_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}) is either a or b. We now show that the former case is impossible. Suppose for the sake of contradiction that f_{j_1}(R̄_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}) = a. By non-bossiness, f_{j_2}(R̄_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}) = b. This means that f(R̄_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}) is Pareto-dominated by the allocation where j_1 gets b, j_2 gets a, and all other agents get their top-ranked bundles. This contradicts the Pareto-optimality of f (Lemma 3).

Step 4. Let R̃_{j_2} = [c ≻ a ≻ d ≻ b ≻ others] be a pushup of d from R̄_{j_2}. By Lemma 1, f(R̄_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) = f(R̄_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}).

Step 5.
Let R̃_{j_1} = [c ≻ a ≻ b ≻ d ≻ others] be a pushup of a from R̄_{j_1}. We will prove that f(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) = f(R̄_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}). Since R̃_{j_1} is a pushup of a from R̄_{j_1}, by Lemma 2, f_{j_1}(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) is either a or b. We now show that the former case is impossible. Suppose for the sake of contradiction that f_{j_1}(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) = a. Then in f(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}), agent j_2 cannot get c, a, or d, which means that f(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) is Pareto-dominated by the allocation where j_1 gets c, j_2 gets d, and all other agents get their top-ranked bundles. This contradicts the Pareto-optimality of f. Hence f_{j_1}(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) = b. By non-bossiness, f(R̃_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}) = f(R̄_{j_1}, R̃_{j_2}, R̂_{-{j_1,j_2}}).

Step 6. We note that R̃_{j_1} is a pushup of b from R̂_{j_1} (and b is still below a). By Lemma 1, f(R̃_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}) = f(R̂_{j_1}, R̄_{j_2}, R̂_{-{j_1,j_2}}). We note that the right-hand side is the profile in Step 2.

Finally, the observations in Step 5 and Step 6 imply that when the agents' preferences are R̃_{j_1} and R̄_{j_2} as in Step 6, agent j_2 has an incentive to report R̃_{j_2} as in Step 5 to improve the bundle allocated to her (from b to a). This contradicts the strategy-proofness of f and completes the proof of Lemma 4.

It is easy to check that any serial dictatorship satisfies strategy-proofness, non-bossiness, and category-wise neutrality. We now prove that any mechanism f satisfying the three axioms must be a serial dictatorship. Let R* be a linear order over D that satisfies the following conditions: (1, . . . , 1) ≻ (2, . . . , 2) ≻ · · · ≻ (n, . . . , n). For any j < n, the bundles ranked between (j, . . . , j) and (j + 1, . . . , j + 1) are those satisfying the following two conditions: (1) at least one component is j, and (2) all components are in {j, j + 1, . . . , n}. Let B_j denote the set of these bundles. Formally, B_j = {d : ∀l, d_l ≥ j and ∃l', d_{l'} = j}.
For any j and any d, e ∈ B_j, if the number of j's in d is strictly larger than the number of j's in e, then d ≻ e. The next claim states that f agrees with a serial dictatorship on the profile (R*, . . . , R*) in which all agents have the same preferences R* that we have just defined. We will later show that f agrees with the same serial dictatorship on all profiles.

Claim 1 Let P* = (R*, . . . , R*). For any l ≤ n, there exists j_l ≤ n such that f_{j_l}(P*) = (l, . . . , l).

Proof: The claim is proved by induction on l. Let l = 1. For the sake of contradiction, suppose there is no j_1 with f_{j_1}(P*) = (1, . . . , 1). Then there exists a pair of agents j and j' such that both a = f_j(P*) and b = f_{j'}(P*) contain 1 in at least one category. Let c be the bundle obtained from a by replacing the items in the categories where b takes 1 with 1. More precisely, we let c = (c_1, . . . , c_p), where c_i = 1 if a_i = 1 or b_i = 1, and c_i = a_i otherwise. It follows that c R* a and c R* b because the number of 1's in c is strictly larger than the number of 1's in a and in b. By Lemma 4, this contradicts the assumption that f is strategy-proof and non-bossy. Hence there exists j_1 ≤ n with f_{j_1}(P*) = (1, . . . , 1).

Suppose the claim is true for all l ≤ l'. We next prove that there exists j_{l'+1} such that f_{j_{l'+1}}(P*) = (l' + 1, . . . , l' + 1). This follows from reasoning similar to the l = 1 case. Formally, suppose for the sake of contradiction that no such j_{l'+1} exists. Then there exist two agents who get a and b in f(P*) such that both a and b contain l' + 1 in at least one category. By the induction hypothesis, items {1, . . . , l'} in all categories have already been allocated, which means that all components of a and b are at least as large as l' + 1. Let c be the bundle obtained from a by replacing the items in all categories where b takes l' + 1 with l' + 1. We have c R* a and c R* b, leading to a contradiction by Lemma 4. Therefore, the claim holds for l = l' + 1.
This completes the proof of Claim 1. W.l.o.g. let j_1 = 1, j_2 = 2, . . . , j_n = n denote the agents in Claim 1. We next show that for all profiles, f agrees with the serial dictatorship 1 ▷ 2 ▷ · · · ▷ n. For any profile P' = (R'_1, . . . , R'_n), we define n bundles as follows. Let d_1 denote the top-ranked bundle in R'_1. For any l ≥ 2, let d_l denote agent l's top-ranked available bundle, supposing that the items in d_1, . . . , d_{l−1} have already been allocated. That is, d_l is the most preferred bundle in {d : ∀l' < l, d ∩ d_{l'} = ∅} according to R'_l. In other words, d_1, . . . , d_n are the bundles allocated to agents 1 through n by the serial dictatorship 1 ▷ 2 ▷ · · · ▷ n. We next prove that this is exactly the allocation chosen by f. For any i ≤ p, we define a category-wise permutation M_i such that for all l ≤ n, M_i(l) = [d_l]_i, where we recall that [d_l]_i is the item in the i-th category of d_l. Let M = (M_1, . . . , M_p). It follows that for all l ≤ n, M(l, . . . , l) = d_l. By category-wise neutrality and Claim 1, in f(M(P*)) agent l gets M(f_l(P*)) = d_l. Comparing M(P*) to P', we notice that for all l ≤ n and all bundles e, if d_l M(R*) e then d_l R'_l e. This is because if there existed an e such that d_l M(R*) e but e R'_l d_l, then e would still be available after d_1, . . . , d_{l−1} have been allocated (any bundle sharing an item with some d_{l'}, l' < l, is ranked above d_l in M(R*)), and e would be ranked higher than d_l in R'_l; this contradicts the selection of d_l. By Lemma 1, f(P') = f(M(P*)) = M(f(P*)) = (d_1, . . . , d_n). We recall that by definition (d_1, . . . , d_n) is the output of the serial dictatorship 1 ▷ 2 ▷ · · · ▷ n. This proves that f is the serial dictatorship w.r.t. the order 1 ▷ 2 ▷ · · · ▷ n.

Remarks. The theorem is somewhat negative because it shows that we have to sacrifice one of strategy-proofness, category-wise neutrality, or non-bossiness. Among the three axiomatic properties, we feel that non-bossiness is the least natural one.

4 Categorial Sequential Allocation Mechanisms

Given a linear order O over {1, . . . , n} × {1, . . .
, p}, the categorial sequential allocation mechanism (CSAM) f_O allocates the items in np steps, as illustrated in Protocol 1. In each step t, suppose the t-th element in O is (j, i) (equivalently, t = O^{−1}(j, i)). Agent j is called the active agent in step t, and she chooses an item d_{j,i} that is still available from D_i. Then d_{j,i} is broadcast to all agents and we move on to the next step.

Protocol 1: Categorial sequential allocation mechanism (CSAM) f_O.
Input: An order O over {1, . . . , n} × {1, . . . , p}.
1 Broadcast O to all agents.¹
2 for t = 1 to np do
3   Let (j, i) be the t-th element in O.
4   Agent j chooses an available item d_{j,i} ∈ D_i.
5   Broadcast d_{j,i} to all agents.

CSAMs are different from sequential allocation protocols [Bouveret and Lang, 2011] and the draft mechanism [Budish and Cantillon, 2012], where in each step the active agent can choose any available item from any category.

Example 2 The serial dictatorship w.r.t. K = [j_1 ▷ · · · ▷ j_n] is the CSAM w.r.t. (j_1, 1) ▷ (j_1, 2) ▷ · · · ▷ (j_1, p) ▷ · · · ▷ (j_n, 1) ▷ (j_n, 2) ▷ · · · ▷ (j_n, p).

Similar to sequential allocation protocols [Bouveret and Lang, 2011], CSAMs can be implemented in a distributed way. The communication cost of CSAMs is much lower than that of direct mechanisms, where agents report their preferences in full to the center, which requires Θ(n^p · p log n) bits per agent, so that the total communication cost is Θ(n^{p+1} · p log n). For CSAMs, the total communication cost of Protocol 1 is Θ(n²p log(np) + np · (n log n)) = Θ(n²p log(np)), a multiplicative saving of Θ(n^{p−1} · log n / (log n + log p)). In light of this, CSAMs preserve more privacy as well.

To analyze the outcome and efficiency of CSAMs, we consider two types of myopic agents. For any 1 ≤ i ≤ p, we let D_{i,t} denote the set of items in D_i still available at the beginning of round t.

Optimistic agents. An optimistic agent chooses the item in her top-ranked bundle that is still available.

Pessimistic agents.
A pessimistic agent j in round t chooses an item d_{j,i} from D_{i,t} such that, for every d'_i ∈ D_{i,t} with d'_i ≠ d_{j,i}, agent j prefers the worst available bundle whose i-th component is d_{j,i} to the worst available bundle whose i-th component is d'_i. In this paper, we assume that whether an agent is optimistic or pessimistic is fixed before applying a CSAM.

Example 3 Let n = 3 and p = 2. Consider the same profile as in Example 1, which can be summarized as follows.
Agent 1 (optimistic): 12 ≻ 21 ≻ others ≻ 11
Agent 2 (optimistic): 32 ≻ others ≻ 22
Agent 3 (pessimistic): 13 ≻ others ≻ 33 ≻ 31 ≻ 23
Let O = [(1, 1) ▷ (2, 2) ▷ (3, 1) ▷ (3, 2) ▷ (2, 1) ▷ (1, 2)], and suppose agents 1 and 2 are optimistic and agent 3 is pessimistic. When t = 1, agent 1 (optimistic) chooses item 1 from D_1. When t = 2, 32 is the top-ranked still-available bundle for agent 2 (optimistic), so she chooses 2 from D_2. When t = 3, the available bundles are {2, 3} × {1, 3}. If agent 3 chooses 2 from D_1, then her worst-case available bundle is 23, and if she chooses 3 from D_1, then her worst-case available bundle is 31. Since agent 3 prefers 31 to 23, she chooses 3 from D_1. When t = 4, agent 3 chooses 3 from D_2. When t = 5, agent 2 chooses 2 from D_1, and when t = 6, agent 1 chooses 1 from D_2. Finally, agent 1 gets 11, agent 2 gets 22, and agent 3 gets 33.

¹As one of the anonymous reviewers suggested, the order O does not need to be broadcast in the beginning. This gives a better justification for myopic agents and may also reduce manipulation.

5 Rank Efficiency of CSAMs for Myopic Agents

For any linear order R over D and any bundle d, we let Rank(R, d) denote the rank of d in R, so that the highest position has rank 1 and the lowest position has rank n^p. Given a profile P = (R_1, . . . , R_n) and a mechanism f, the rank efficiency of f is the vector (Rank(R_1, f_1(P)), . . . , Rank(R_n, f_n(P))) composed of the ranks of the bundles the agents receive.
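The round-by-round dynamics of Example 3, and the resulting rank-efficiency vector, can be simulated directly. The following is a minimal sketch, not from the paper; all names (such as run_csam) are our own illustrative choices, bundles are encoded as tuples, and each agent's preferences as a ranked list of all bundles:

```python
def run_csam(order, prefs, kinds, n, p):
    """Sketch of a CSAM f_O with myopic agents.
    order: list of (agent, category) pairs, 0-indexed;
    prefs[j]: agent j's linear order over bundles (tuples), best first;
    kinds[j]: 'optimistic' or 'pessimistic'."""
    available = [set(range(1, n + 1)) for _ in range(p)]  # items 1..n per category
    chosen = [[None] * p for _ in range(n)]               # items fixed so far
    for (j, i) in order:
        # bundles consistent with agent j's earlier picks and current availability
        def feasible(b):
            return all(b[c] == chosen[j][c] if chosen[j][c] is not None
                       else b[c] in available[c] for c in range(p))
        ranked = [b for b in prefs[j] if feasible(b)]
        if kinds[j] == 'optimistic':
            pick = ranked[0][i]  # item of the top-ranked available bundle
        else:
            # pessimistic: item with the best worst case (ties broken arbitrarily)
            def worst(d):  # rank of the worst feasible bundle containing item d
                return max(r for r, b in enumerate(ranked) if b[i] == d)
            pick = min(available[i], key=worst)
        available[i].remove(pick)
        chosen[j][i] = pick
    return [tuple(b) for b in chosen]

# Example 3 (agents 0-indexed here): agents 1 and 2 optimistic, agent 3 pessimistic
R1 = [(1,2),(2,1),(3,2),(3,3),(3,1),(2,2),(2,3),(1,3),(1,1)]
R2 = [(3,2),(1,2),(2,1),(1,3),(3,3),(1,1),(3,1),(2,3),(2,2)]
R3 = [(1,3),(1,2),(1,1),(2,2),(3,2),(2,1),(3,3),(3,1),(2,3)]
O = [(0,0),(1,1),(2,0),(2,1),(1,0),(0,1)]
alloc = run_csam(O, [R1, R2, R3], ['optimistic', 'optimistic', 'pessimistic'], 3, 2)
print(alloc)  # [(1, 1), (2, 2), (3, 3)], as in Example 3
ranks = [[R1, R2, R3][j].index(alloc[j]) + 1 for j in range(3)]
print(ranks)  # rank-efficiency vector: [9, 9, 7]
```

The sketch reproduces the run in Example 3: the two optimistic agents end up with their bottom-ranked bundles (rank 9), while the pessimistic agent secures rank 7.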
We recall that a CSAM f_O is characterized by a linear order over all (agent, category) pairs. Given f_O, we introduce the following notation for all j ≤ n to characterize the worst-case rank efficiency of f_O.

Let O_j denote the linear order over the categories {1, . . . , p} according to which agent j chooses items. For any i ≤ p, let k_{j,i} denote the number of items in D_i that are still available right before agent j chooses from D_i. Formally, k_{j,i} = 1 + |{(j', i) : (j, i) ▷_O (j', i)}|.

Let K_j denote the smallest index in O_j such that no agent can interrupt agent j as she chooses all the items in her top-ranked bundle that is available in round (j, O_j(K_j)). Formally, K_j is the smallest number such that for any l with K_j < l ≤ p, between the round in which agent j chooses an item from category O_j(K_j) and the round in which agent j chooses an item from category O_j(l), no agent chooses an item from category O_j(l). We note that K_j is determined by O alone and is thus independent of the agents' preferences.

Example 4 Let O = [(1, 1) ▷ (1, 2) ▷ (1, 3) ▷ (2, 1) ▷ (2, 2) ▷ (2, 3) ▷ (3, 1) ▷ (3, 2) ▷ (3, 3)]; that is, f_O is a serial dictatorship. Then for every j ≤ 3, O_j = 1 ▷ 2 ▷ 3. K_1 = K_2 = K_3 = 1, and k_{1,1} = k_{1,2} = k_{1,3} = 3, k_{2,1} = k_{2,2} = k_{2,3} = 2, k_{3,1} = k_{3,2} = k_{3,3} = 1.

Let O be the order in Example 3, that is, O = [(1, 1) ▷ (2, 2) ▷ (3, 1) ▷ (3, 2) ▷ (2, 1) ▷ (1, 2)]. Then O_1 = 1 ▷ 2, and K_1 = 2 because (2, 2) is between (1, 1) and (1, 2) in O; k_{1,1} = 3, k_{1,2} = 1. O_2 = 2 ▷ 1, and K_2 = 2 because (3, 1) is between (2, 2) and (2, 1); k_{2,1} = 1, k_{2,2} = 3. O_3 = 1 ▷ 2, and K_3 = 1 because no agent chooses an item from D_2 between (3, 1) and (3, 2); k_{3,1} = k_{3,2} = 2.

Theorem 2 For any CSAM f_O, any combination of optimistic and pessimistic agents, any j ≤ n, and any profile:
Upper bound for optimistic agents: if j is optimistic, then the rank of the bundle allocated to her is at most n^p + 1 − ∏_{l=K_j}^{p} k_{j,O_j(l)}.
Upper bound for pessimistic agents: if j is pessimistic, then the rank of the bundle allocated to her is at most n^p − ∑_{l=1}^{p} (k_{j,O_j(l)} − 1).
Moreover, there exists a profile P where the bounds for all agents are simultaneously tight. For the same profile P, there exists an allocation in which at least n − 1 agents get their top-ranked bundles and the remaining agent gets her top-ranked or second-ranked bundle.

Proof sketch: For any optimistic agent, the upper bound is calculated by counting how many bundles must be ranked below the final allocation once no agent can interrupt her from choosing the items of her top available bundle. For a pessimistic agent j, in each of her steps l we know that at least k_{j,O_j(l)} − 1 other bundles must be ranked below her final allocation. The proof of the matching lower bounds is by construction and is quite involved. Intuitively, the construction ensures that, for all agents, the bundles mentioned in the proof of the upper bounds are the only bundles ranked below the final allocation.

We note that Theorem 2 holds for any combination of optimistic and pessimistic agents, which is much more general than the setting with all-optimistic agents or the setting with all-pessimistic agents. Theorem 2 can be used to compare various CSAMs with optimistic and pessimistic agents. Given a CSAM f_O, the worst-case utilitarian rank is the worst (largest) total rank of the allocated bundles (each w.r.t. the respective agent's preferences). The worst-case egalitarian rank is the worst (largest) rank obtained by the least-satisfied agent. The worst case is taken over all profiles of n agents. Due to the space limit, we present only one proposition, without proof.

Proposition 1 Among all CSAMs, serial dictatorships with all-optimistic agents have the best worst-case utilitarian rank and the worst worst-case egalitarian rank.

The proposition is proved by applying Theorem 2. For any serial dictatorship with all-optimistic agents, the worst-case utilitarian rank is n(n^p + 1) − ∑_{j=1}^{n} j^p, and the worst-case egalitarian rank is n^p (attained when all agents have the same preferences).
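The quantities O_j, k_{j,i}, and K_j, and hence the two upper bounds of Theorem 2, are purely combinatorial functions of the order O. The sketch below (our own code, not the authors'; the helper name csam_bounds is illustrative) computes them and reproduces the numbers in Example 4 together with the serial-dictatorship worst-case utilitarian rank used in Proposition 1:

```python
from math import prod

def csam_bounds(order, n, p):
    """Compute O_j, k_{j,i}, K_j and Theorem 2's upper bounds.
    Agents/categories are 0-indexed; K_j and the bounds use
    1-indexed positions within O_j, as in the paper."""
    pos = {pair: t for t, pair in enumerate(order)}
    # O_j: categories in the order in which agent j picks from them
    O = [sorted(range(p), key=lambda i: pos[(j, i)]) for j in range(n)]
    # k_{j,i} = 1 + number of picks from category i made after agent j's
    k = {(j, i): 1 + sum(1 for j2 in range(n) if pos[(j2, i)] > pos[(j, i)])
         for j in range(n) for i in range(p)}
    # K_j: smallest index such that no later category O_j(l) is touched by
    # another agent between round (j, O_j(K_j)) and round (j, O_j(l))
    K = []
    for j in range(n):
        for start in range(1, p + 1):
            if all(not any(pos[(j, O[j][start - 1])] < pos[(j2, O[j][l - 1])]
                           < pos[(j, O[j][l - 1])]
                           for j2 in range(n) if j2 != j)
                   for l in range(start + 1, p + 1)):
                K.append(start)
                break
    # Theorem 2 upper bounds on ranks (1 = top-ranked bundle)
    opt = [n**p + 1 - prod(k[(j, O[j][l - 1])] for l in range(K[j], p + 1))
           for j in range(n)]
    pes = [n**p - sum(k[(j, i)] - 1 for i in range(p)) for j in range(n)]
    return O, k, K, opt, pes

# Serial dictatorship of Example 4 (n = p = 3)
sd = [(j, i) for j in range(3) for i in range(3)]
_, _, K, opt, _ = csam_bounds(sd, 3, 3)
print(K, opt)    # [1, 1, 1] [1, 20, 27]
# Proposition 1: utilitarian worst case n(n^p + 1) - sum_j j^p = 84 - 36
print(sum(opt))  # 48

# The order from Example 3 (n = 3, p = 2)
O2 = [(0, 0), (1, 1), (2, 0), (2, 1), (1, 0), (0, 1)]
_, _, K2, opt2, pes2 = csam_bounds(O2, 3, 2)
print(K2)          # [2, 2, 1], matching Example 4
print(opt2, pes2)  # [9, 9, 6] [7, 7, 7]
```

For the order of Example 3 the bounds are tight at the profile of Example 1: the two optimistic agents hit the optimistic bound of 9 and the pessimistic agent hits the pessimistic bound of 7.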
6 Summary and Future Work
We have initiated a research agenda to study item allocation in categorized domains. There are many open questions and directions for future research, including strategic agents and minimax-regret agents. For general CDAPs, we are excited to explore generalizations of CP-nets [Boutilier et al., 2004], LP-trees [Booth et al., 2010], and soft constraints [Pozza et al., 2011] for preference representation. Based on these new languages we can analyze fairness and computational aspects of CSAMs and other mechanisms. Mechanism design for CDAPs with sharable, non-sharable, and divisible items is also an important and promising topic for future research.

Acknowledgments
This work is supported by the National Science Foundation under grant IIS-1453542. We thank Vincent Conitzer, Jérôme Lang, David Parkes, anonymous reviewers of this paper, and participants of ISAIM-14 and the Economics and Computation program at the Simons Institute for the Theory of Computing for helpful comments and suggestions.

References
[Bhattacharya et al., 2013] Arka A. Bhattacharya, David Culler, Eric Friedman, Ali Ghodsi, Scott Shenker, and Ion Stoica. Hierarchical scheduling for diverse datacenter workloads. In Proc. SoCC, pages 4:1–4:15, 2013.
[Booth et al., 2010] Richard Booth, Yann Chevaleyre, Jérôme Lang, Jérôme Mengin, and Chattrakul Sombattheera. Learning conditionally lexicographic preference relations. In Proc. ECAI, pages 269–274, 2010.
[Boutilier et al., 2004] Craig Boutilier, Ronen Brafman, Carmel Domshlak, Holger Hoos, and David Poole. CP-nets: A tool for representing and reasoning with conditional ceteris paribus statements. Journal of Artificial Intelligence Research, 21:135–191, 2004.
[Boutilier et al., 2012] Craig Boutilier, Ioannis Caragiannis, Simi Haber, Tyler Lu, Ariel D. Procaccia, and Or Sheffet. Optimal social choice functions: A utilitarian view. In Proc. ACM EC, pages 197–214, 2012.
[Bouveret and Lang, 2011] Sylvain Bouveret and Jérôme Lang.
A general elicitation-free protocol for allocating indivisible goods. In Proc. IJCAI, pages 73–78, 2011.
[Brams et al., 1998] Steven J. Brams, D. Marc Kilgour, and William S. Zwicker. The paradox of multiple elections. Social Choice and Welfare, 15(2):211–236, 1998.
[Brams et al., 2006] Steven J. Brams, Michael A. Jones, and Christian Klamler. Better Ways to Cut a Cake. Notices of the AMS, 53(11):1314–1321, 2006.
[Brandt et al., 2013] Felix Brandt, Vincent Conitzer, and Ulle Endriss. Computational social choice. In G. Weiss, editor, Multiagent Systems. MIT Press, 2013.
[Budish and Cantillon, 2012] Eric Budish and Estelle Cantillon. The Multi-Unit Assignment Problem: Theory and Evidence from Course Allocation at Harvard. American Economic Review, 102(5):2237–2271, 2012.
[Cechlárová et al., 2014] Katarína Cechlárová, Pavlos Eirinakis, Tamás Fleiner, Dimitrios Magos, Ioannis Mourtos, and Eva Potpinková. Pareto optimality in many-to-many matching problems. Discrete Optimization, 14:160–169, 2014.
[Cechlárová et al., 2015] Katarína Cechlárová, Pavlos Eirinakis, Tamás Fleiner, Dimitrios Magos, David F. Manlove, Ioannis Mourtos, Eva Oceľáková, and Baharak Rastegari. Pareto Optimal Matchings in Many-to-Many Markets with Ties. In Proc. SAGT, pages 27–39, 2015.
[Chevaleyre et al., 2006] Yann Chevaleyre, Paul E. Dunne, Ulle Endriss, Jérôme Lang, Michel Lemaitre, Nicolas Maudet, Julian Padget, Steve Phelps, Juan A. Rodríguez-Aguilar, and Paulo Sousa. Issues in multiagent resource allocation. Informatica, 30:3–31, 2006.
[Ehlers and Klaus, 2003] Lars Ehlers and Bettina Klaus. Coalitional strategy-proof and resource-monotonic solutions for multiple assignment problems. Social Choice and Welfare, 21:265–280, 2003.
[Ghodsi et al., 2011] Ali Ghodsi, Matei Zaharia, Benjamin Hindman, Andy Konwinski, Scott Shenker, and Ion Stoica. Dominant Resource Fairness: Fair Allocation of Multiple Resource Types. In Proc. NSDI, pages 323–336, 2011.
[Ghodsi et al., 2012] Ali Ghodsi, Vyas Sekar, Matei Zaharia, and Ion Stoica. Multi-resource Fair Queueing for Packet Processing. In Proc. SIGCOMM, pages 1–12, 2012.
[Hatfield, 2009] John William Hatfield. Strategy-proof, efficient, and nonbossy quota allocations. Social Choice and Welfare, 33(3):505–515, 2009.
[Huh et al., 2013] Woonghee Tim Huh, Nan Liu, and Van-Anh Truong. Multiresource Allocation Scheduling in Dynamic Environments. Manufacturing and Service Operations Management, 15(2):280–291, 2013.
[Konishi et al., 2001] Hideo Konishi, Thomas Quint, and Jun Wako. On the Shapley–Scarf economy: the case of multiple types of indivisible goods. Journal of Mathematical Economics, 35(1):1–15, 2001.
[Koutsoupias and Papadimitriou, 1999] Elias Koutsoupias and Christos Papadimitriou. Worst-case Equilibria. In Proc. STACS, pages 404–413, 1999.
[Lacy and Niou, 2000] Dean Lacy and Emerson M.S. Niou. A problem with referendums. Journal of Theoretical Politics, 12(1):5–31, 2000.
[Meir et al., 2014] Reshef Meir, Omer Lev, and Jeffrey S. Rosenschein. A Local-Dominance Theory of Voting Equilibria. In Proc. ACM EC, pages 313–330, 2014.
[Moulin, 1995] Hervé Moulin. Cooperative Microeconomics: A Game-Theoretic Introduction. Prentice Hall, 1995.
[Pápai, 2000a] Szilvia Pápai. Strategyproof assignment by hierarchical exchange. Econometrica, 68(6):1403–1433, 2000.
[Pápai, 2000b] Szilvia Pápai. Strategyproof multiple assignment using quotas. Review of Economic Design, 5:91–105, 2000.
[Pápai, 2001] Szilvia Pápai. Strategyproof and nonbossy multiple assignments. Journal of Public Economic Theory, 3(3):257–271, 2001.
[Pozza et al., 2011] Giorgio Dalla Pozza, Maria Silvia Pini, Francesca Rossi, and K. Brent Venable. Multi-agent soft constraint aggregation via sequential voting. In Proc. IJCAI, pages 172–177, 2011.
[Procaccia and Rosenschein, 2006] Ariel D. Procaccia and Jeffrey S. Rosenschein. The Distortion of Cardinal Preferences in Voting. In Proc. CIA, pages 317–331, 2006.
[Sönmez and Ünver, 2011] Tayfun Sönmez and M. Utku Ünver. Matching, Allocation, and Exchange of Discrete Resources. In Jess Benhabib, Alberto Bisin, and Matthew O. Jackson, editors, Handbook of Social Economics, chapter 17, pages 781–852. North-Holland, 2011.
[Svensson, 1999] Lars-Gunnar Svensson. Strategy-proof allocation of indivisible goods. Social Choice and Welfare, 16(4):557–567, 1999.
[Xia, 2014] Lirong Xia. Assigning indivisible and categorized items. ISAIM-14 special session on Computational Social Choice, 2014.