# Altruism in Facility Location Problems

Houyu Zhou¹, Hau Chan², Minming Li¹
¹Department of Computer Science, City University of Hong Kong
²School of Computing, University of Nebraska-Lincoln
houyuzhou2-c@my.cityu.edu.hk, hchan3@unl.edu, minming.li@cityu.edu.hk

We study facility location problems (FLPs) with altruistic agents who act to benefit others in their affiliated groups. Our aim is to design mechanisms that elicit true locations from agents in different overlapping groups and place a facility to serve the agents so as to approximately optimize a given objective based on the agents' costs to the facility. Existing studies of FLPs consider myopic agents who aim to minimize their own costs to the facility. We mainly consider altruistic agents with well-motivated group costs that are defined over the costs incurred by all agents in their groups. Accordingly, we define Pareto strategyproofness to account for altruistic agents and their multiple group memberships with incomparable group costs. We consider mechanisms satisfying this strategyproofness notion under various combinations of the planner's objectives and the agents' group costs. For each of these settings, we provide upper and lower bounds on the approximation ratios of mechanisms satisfying Pareto strategyproofness.

## Introduction

In recent decades, facility location problems (FLPs) have been widely studied within the context of mechanism design without money (Chan et al. 2021). In the most basic mechanism design version of FLPs, initiated by (Moulin 1980; Procaccia and Tennenholtz 2009), a planner seeks to place a facility (e.g., a school, library, or park) to best serve a set of agents on a real line, based on the ideal locations of the agents. Because the agents' ideal locations are unknown to the planner, the planner must elicit them to determine a facility location that best serves the agents. As agents can potentially misreport their private locations to manipulate the facility location, the main research agenda in mechanism design for FLPs is to design a strategyproof mechanism that elicits the agents' true private locations and places the facility so as to (approximately) optimize a given planner's objective (e.g., minimizing the total or maximum distance of the agents to the facility).

Beyond placing a physical facility geographically, FLPs can conceptually be used to model general settings for selecting an overall representative of the agents' preferences within some well-defined representative space, for example, selecting the temperature for a classroom given the preferences of the students (Bartholdi and Trick 1986) or selecting an overall committee view to represent individuals with different political views (Chan et al. 2021).

Our Research Agenda with Altruistic Agents. Previous mechanism design studies of FLPs have focused only on myopic (egoistic) agents, where each agent cares about their own cost to the facility (e.g., the distance between their ideal location and the facility location). However, in many real-world settings (e.g., group decision-making), agents exhibit group behavior (Hogg and Tindale 2001), altruistic behavior (Schroeder et al. 1995; Penner 2021), or prosocial behavior (Eisenberg 2014; Schroeder and Graziano 2015), where altruistic agents act to benefit others in their affiliated groups without expecting anything in return (Monroe 1996; Fehr and Fischbacher 2003).
For instance, in the social psychology literature (Hogg and Tindale 2001), altruistic agents within the same group exhibit group behavior that is consistent with the overall goal of the group. In group-selected altruism (Okasha 2005), altruistic agents make efforts to take actions that help their groups. Examples of altruistic behavior are well documented and include doing nice things for one's family and relatives (e.g., donating organs, caregiving for relatives, and fostering children), helping one's friends (e.g., loaning money and helping with work), and advocating for one's organized groups (e.g., donating money to an organization, joining the same protest, and fighting for a common cause).

Motivated by the altruistic behavior of agents in real-world situations, our focus is to study and model altruistic agents in FLPs. Conceptually, FLPs with altruistic agents model situations ranging from altruistic agents advocating for facility accessibility for their own groups to altruistic agents lobbying for a committee to represent their group views on some issue. For instance, when analyzing voter behavior, the altruism theory of voting has examined the social preferences of voters who consider the welfare of others (Edlin, Gelman, and Kaplan 2007; Jankowski 2002). We also consider the setting where agents belong to multiple groups, such as deciding an activity event location where the students can belong to different school clubs (e.g., math clubs, debate clubs, etc.) or deciding the location of a multipurpose recreation center where the citizens belong to multiple local communities (e.g., senior associations, family/relative/regional groups, etc.). The closer the location is to a group, the more convenient it is for that group.

To model these situations, we address the following key questions. (1) How should one model altruistic agents with multiple groups in mechanism design settings for FLPs? (2) How should one design desirable mechanisms to (approximately) optimize a given objective with altruistic agents?

## Our Contribution

We consider FLPs where $n$ altruistic agents are divided into $m$ groups, and each agent can belong to multiple groups. We study two well-studied classical objectives, the social cost and the maximum cost, proposed by (Procaccia and Tennenholtz 2009), and two well-motivated group-fair objectives, the maximum total group cost (mtgc) and the maximum average group cost (magc), considered in (Zhou, Li, and Chan 2022; Marsh and Schilling 1994) for FLPs. Our aim is to design mechanisms that satisfy the corresponding strategyproofness definitions. Our work makes two main contributions, one conceptual and one technical.

Conceptual Contribution. Different from previous work, which considers only myopic agents whose costs are their own distances from the facility, we study altruistic agents who care about their groups, and we define altruistic costs for them. Motivated by existing economic, decision theory, and computational studies on modeling altruistic behavior, we adopt two types of altruistic costs for the agents: the altruistic total cost and the altruistic maximum cost. We define these costs for each of an agent's groups separately.
Because an agent's altruistic cost depends on the private information of the other agents in their groups, our problems are situated in the challenging interdependent valuation mechanism design domain (Mezzetti 2004; Jehiel and Moldovanu 2001; Bergemann and Morris 2008). No prior studies have considered the proposed interdependent valuation settings for FLPs. As an agent can belong to multiple groups, the agent can have several separate altruistic costs. The agent may not be able to compare or combine them and instead treats them as multiple objectives (Roijers et al. 2013; Hayes et al. 2022). Thus, we propose the Pareto strategyproofness (PSP) concept to ensure that no agent can misreport their location to benefit their groups simultaneously. No prior studies have considered the proposed PSP concept, which can also be applied to other mechanism design domains.

Technical Contribution. We model the altruistic agents using the altruistic total cost and the altruistic maximum cost. Table 1 summarizes our upper and lower bound results.

| Objective | Altruistic total cost | Altruistic maximum cost |
|---|---|---|
| social cost | UB: m(4m-5)/(3m-4); LB: m | UB: n/2; LB: n/2 |
| maximum cost | UB: 2; LB: 2 | UB: 2; LB: 2 |
| mtgc | UB: 3; LB: 3 | UB: \|Gmax\|/2 + 1; LB: \|Gmax\|/2 + 1 |
| magc | UB: 3; LB: 3 | UB: \|Gmax\|/2 + 1; LB: \|Gmax\|/2 + 1 |

Table 1: Result summary. For each entry, UB is the upper bound and LB the lower bound on the approximation ratio of Pareto strategyproof mechanisms; $n$ is the number of agents, $m$ is the number of groups, and $|G_{\max}|$ is the size of the largest group.

For the altruistic total cost, we first present a profile preprocessing method that helps us design PSP mechanisms. For the social cost, we present a series of mechanisms extended, via the profile preprocessing method, from placing the facility at the largest group's median location (Majority-Med) or at a median-style location (Weighted-Med, Union-Med). However, all of them have large approximation ratios (at least $2m-1$). Finally, we leverage the advantages of both Majority-Med and Union-Med to design a new PSP mechanism (Union Trunc-Med) for minimizing the social cost, which achieves an approximation ratio of $\frac{m(4m-5)}{3m-4}$. We present a lower bound of $m$ for this setting. For the maximum cost, we show that all of the mechanisms we propose have an approximation ratio of 2, and we present a matching lower bound of 2. For the mtgc and magc, we use Majority-Med and prove approximation ratios of 3, together with tight lower bounds.

For the altruistic maximum cost, we observe that none of the mechanisms mentioned above is PSP. We design two new PSP mechanisms (Majority-Mid and Weighted-Mid) in a similar spirit to Majority-Med and Weighted-Med. We use Weighted-Mid for the social cost and the maximum cost, and Majority-Mid for the other two group-fair objectives. We show tight upper and lower bounds for all four objectives, completing the picture for this setting. Although the mechanisms are similar to those for the altruistic total cost, we emphasize that establishing the approximation ratios and lower bounds is challenging and requires different techniques, since the altruistic total cost behaves differently from the altruistic maximum cost. Due to the space limit, most of the proofs are omitted.

## Related Work

The classical (mechanism design variants of) facility location problems (FLPs) were first studied by (Moulin 1980), who characterized mechanisms that are strategyproof, Pareto efficient, and anonymous for single-peaked preferences on a line.
However, finding strategyproof mechanisms with good approximation ratios in FLPs remains a challenging problem, first studied by (Procaccia and Tennenholtz 2009). They proved that placing the facility at the median location and at the leftmost location achieves tight approximation ratios, while guaranteeing strategyproofness, for minimizing the social cost and the maximum cost, respectively. However, we observe that neither of these mechanisms guarantees Pareto strategyproofness when agents are altruistic. Other works and variations on FLPs can be found in a recent survey (Chan et al. 2021).

One of the important notions in our paper is group fairness. Recently, there has been increased interest in studying fairness in FLPs (Cai, Filos-Ratsikas, and Tang 2016; Chen et al. 2021; Ding et al. 2020; Liu et al. 2021; Lam 2021; Zhou, Li, and Chan 2022). The work of (Cai, Filos-Ratsikas, and Tang 2016; Chen et al. 2021) studied the minimax envy objective, which aims to minimize the (normalized) maximum difference between any two agents' costs. In addition, (Ding et al. 2020; Liu et al. 2021) studied the envy ratio objective, which aims to minimize the maximum ratio between any two agents' utilities, and (Lam 2021) considered the Nash welfare objective in FLPs. (Aziz et al. 2022a,b, 2023) considered proportional fairness in FLPs, where the distance of a facility from a group of agents should depend both on the size of the group and on how closely the agents are clustered (such a group is any set of agents, not a pre-defined group membership). All of these works considered fairness for individual agents, with no notion of group memberships. The work of (Filos-Ratsikas and Voudouris 2021) studied FLPs with agents partitioned into multiple districts of equal size, which can be regarded as each agent belonging to a single group; they focused on the social cost objective rather than group-fair objectives. The work of (Zhou, Li, and Chan 2022) studied group-fair FLPs with disjoint groups (i.e., each agent belongs to a single group) and group-fair objectives (including mtgc and magc) for myopic agents. In our paper, we also study the two group-fair objectives they proposed, but for altruistic agents, to enrich recent studies on group fairness in FLPs.

## Preliminaries

We consider facility location problems (FLPs) with groups of altruistic agents. Let $N = \{1, 2, \dots, n\}$ be a set of agents on the real line and $G = \{G_1, \dots, G_m\}$ be the set of groups of agents. Each agent $i \in N$ has a profile $r_i = (x_i, g_i)$, where $x_i \in \mathbb{R}$ is the location of agent $i$ and $g_i \subseteq G$ is the set of groups agent $i$ belongs to. We use $|G_j|$ to denote the number of agents in group $G_j$, where $j \in [m]$, and $|g_i|$ to denote the number of groups agent $i$ belongs to. Without loss of generality, we assume that $x_1 \le x_2 \le \dots \le x_n$. A profile set $r = \{r_1, r_2, \dots, r_n\}$ is a collection of the locations and group memberships of the agents. A deterministic mechanism is a function $f$ that maps a profile set $r$ to a facility location $y \in \mathbb{R}$. Let $d(a, b) = |a - b|$ be the distance between any $a, b \in \mathbb{R}$.

Optimization Objectives. We consider the two classical cost objectives, minimizing the social cost and the maximum cost (Procaccia and Tennenholtz 2009; Chan et al. 2021). Given a facility location $y$ and profile set $r$, the social cost and the maximum cost are defined as $sc(y, r) = \sum_{i \in N} d(y, x_i)$ and $mc(y, r) = \max_{i \in N} d(y, x_i)$.
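For concreteness, these two objectives can be computed directly from a location profile; the following is a minimal illustrative sketch (our own code and data representation, not part of the paper):

```python
def social_cost(y, xs):
    # sc(y, r): total distance from the facility y to all agent locations xs
    return sum(abs(y - x) for x in xs)

def maximum_cost(y, xs):
    # mc(y, r): largest distance from the facility y to any agent location
    return max(abs(y - x) for x in xs)

xs = [0.0, 0.4, 1.0]          # three agents on the line
print(social_cost(0.4, xs))   # 0.4 + 0.0 + 0.6 = 1.0
print(maximum_cost(0.4, xs))  # 0.6
```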
We also consider two group-fair cost objectives (Zhou, Li, and Chan 2022; Marsh and Schilling 1994). One is minimizing the maximum total group cost (mtgc), defined as $mtgc(y, r) = \max_{j \in [m]} \sum_{i \in G_j} d(y, x_i)$. The other is minimizing the maximum average group cost (magc), defined as $magc(y, r) = \max_{j \in [m]} \frac{1}{|G_j|} \sum_{i \in G_j} d(y, x_i)$.

Our goal is to design mechanisms that enforce some form of strategyproofness (discussed below) while approximately optimizing an objective function when the agents are altruistic. We measure the performance of a mechanism $f$ by comparing the objective value $f$ achieves with the objective value achieved by the optimal solution. If there exists a number $\alpha$ such that, for any profile set $r$, the objective value of the output of $f$ is within $\alpha$ times the objective value achieved by the optimal solution, then we say the approximation ratio of $f$ is $\alpha$.

Altruistic Agents. In previous studies of FLPs, agents are myopic. The myopic cost of agent $i$ is the distance between the facility location and her location, $c(f(r), x_i) = d(f(r), x_i)$, where $f$ is a mechanism and $r$ is a profile. For an altruistic agent, existing studies in economics, decision theory, and computation (see, e.g., (Simon 2016; Simon, Saari, and Keller 2020; Chen et al. 2014; Carvalho Rodrigues and Xavier 2017)) have proposed to model the altruistic agent's cost as some aggregation of the agent's own cost and the costs of the other agents in their groups. The simplest and most widely considered aggregation is the sum, which captures the altruistic total cost of the agents and the welfare of their groups. Specifically, for all agents in group $G_j$, the altruistic total cost is defined to be the total cost of the agents in $G_j$, $atc(y, G_j) = \sum_{i \in G_j} d(y, x_i)$, which coincides with the typical utilitarian objective for FLPs with a single group. Another commonly studied social welfare objective in FLPs is the egalitarian objective. In particular, the altruistic maximum cost is defined to be the maximum cost among the agents in $G_j$, $amc(y, G_j) = \max_{i \in G_j} d(y, x_i)$. In this paper, we consider the altruistic agent cost to be either the altruistic total cost or the altruistic maximum cost.
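To keep the group-level quantities straight, here is a minimal sketch of the two altruistic costs and the two group-fair objectives built from them (illustrative code under our own representation, with groups given as lists of agent indices):

```python
def atc(y, xs, group):
    # altruistic total cost of one group: total distance of its members to y
    return sum(abs(y - xs[i]) for i in group)

def amc(y, xs, group):
    # altruistic maximum cost of one group: largest member distance to y
    return max(abs(y - xs[i]) for i in group)

def mtgc(y, xs, groups):
    # maximum total group cost over all groups
    return max(atc(y, xs, G) for G in groups)

def magc(y, xs, groups):
    # maximum average group cost over all groups
    return max(atc(y, xs, G) / len(G) for G in groups)

xs = [0.0, 1.0, 2.0, 5.0]        # agent 2 belongs to both groups below
groups = [[0, 1, 2], [2, 3]]
print(mtgc(1.0, xs, groups))     # max(1+0+1, 1+4) = 5.0
print(magc(1.0, xs, groups))     # max(2/3, 5/2) = 2.5
```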
Strategyproofness Concepts. For the standard myopic costs, we usually consider the standard strategyproofness concept when designing mechanisms.

Definition 1. A mechanism $f$ is strategyproof (SP) if and only if an agent can never benefit by reporting a false location, regardless of the reports of the other agents. More formally, given any profile set $r$ and any reported profile set $r'$, let $r'_i = (x'_i, g_i)$ be a profile with a false location reported by agent $i$. We have $c(f(r_i, r'_{-i}), x_i) \le c(f(r'), x_i)$, where $r'_{-i}$ is the reported profile of all agents except agent $i$.

When an agent is altruistic or belongs to multiple groups, the standard SP concept does not apply directly. Moreover, when the agent naturally has multiple altruistic costs (one per group), the agent might not be able to combine the separate altruistic costs into a single cost objective, because these costs are incomparable or the methods to combine them are unobservable/unknown (e.g., the weights are not known to the designer or the agent) (Roijers et al. 2013; Hayes et al. 2022; Sawaragi, Nakayama, and Tanino 1985). Therefore, we introduce Pareto strategyproofness definitions for altruistic agents with multiple group memberships.

Definition 2. A mechanism $f$ is ex-ante Pareto strategyproof if and only if an agent cannot benefit at least one of their groups without hurting any other group they belong to by reporting a false location, regardless of the reports of the other agents. More formally, given any profile set $r$ and any reported profile set $r'$, let $r'_i = (x'_i, g_i)$ be a profile with a false location reported by agent $i$. For any agent $i$, we have $\exists j \in g_i$, $atc(f(r_i, r'_{-i}), G_j) < atc(f(r'), G_j)$, or $\forall j \in g_i$, $atc(f(r_i, r'_{-i}), G_j) = atc(f(r'), G_j)$, where $r'_{-i}$ is the reported profile of all agents except agent $i$. The analogous definition applies when we consider the altruistic maximum cost.

Unlike for strategyproofness, no ex-ante Pareto strategyproof mechanism has a bounded approximation ratio for either altruistic agent cost.

Proposition 1. There does not exist an ex-ante Pareto strategyproof mechanism with a bounded approximation ratio, for both the altruistic total cost and the altruistic maximum cost and all four objectives, even if there is only one group.

The intuitive explanation of Proposition 1 is that if another agent's misreport moves the facility farther away from agent $i$'s ideal location, then agent $i$ may misreport to move the facility back. Hence, an agent's dominant strategy cannot be determined before their group members make their decisions. We therefore consider a relaxed version of the Pareto strategyproofness definition.

Definition 3. A mechanism $f$ is ex-post Pareto strategyproof (PSP) if and only if an agent cannot benefit at least one of their groups without hurting any other group they belong to by reporting a false location, given the true profiles of the other agents. More formally, given any profile set $r$ and any reported profile set $r'$, let $r'_i = (x'_i, g_i)$ be a profile with a false location reported by agent $i$. For any agent $i$, we have $\exists j \in g_i$, $atc(f(r), G_j) < atc(f(r'_i, r_{-i}), G_j)$, or $\forall j \in g_i$, $atc(f(r), G_j) = atc(f(r'_i, r_{-i}), G_j)$, where $r_{-i}$ is the profile of all agents except agent $i$. The analogous definition applies when we consider the altruistic maximum cost.

The major difference is that ex-ante Pareto strategyproofness requires that reporting truthfully be a dominant strategy for every agent, while ex-post Pareto strategyproofness only requires that an agent report truthfully if all the other agents report truthfully. Our ex-post Pareto strategyproof definition extends the standard ex-post implementation concepts for settings where each agent has a single interdependent cost function (Bergemann and Morris 2008) and for FLPs (Aziz et al. 2020). For simplicity, we use Pareto strategyproofness (PSP) to denote ex-post Pareto strategyproofness in the remainder of this paper.

## Existing Mechanisms with Altruistic Agents

When considering a new setting, an immediate question is: do existing mechanisms still work? We first consider two well-known mechanisms proposed by (Procaccia and Tennenholtz 2009): placing the facility at a (left-)median agent location and placing it at the leftmost agent location, which achieve the best approximation ratios for the social cost and the maximum cost, respectively, in the setting of myopic agents.

Proposition 2. Placing the facility at the median (breaking ties by selecting the left-median) or at the leftmost agent location is not Pareto strategyproof under either the altruistic total cost or the altruistic maximum cost.

Indeed, for any constant $k$, the same analysis shows that placing the facility at the $k$th agent location is not Pareto strategyproof.
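To make Proposition 2 concrete for the altruistic total cost, consider the following profile (our own illustrative example, since the paper omits the proof). Agent 0 belongs only to $G_1$ and, by exaggerating her report, pulls the facility toward the rest of her group, strictly lowering $G_1$'s altruistic total cost:

```python
def left_median(reports):
    # the mechanism of Proposition 2: facility at the left-median report
    s = sorted(reports)
    return s[(len(s) - 1) // 2]

def atc(y, xs, group):
    # altruistic total cost of a group, evaluated at the TRUE locations xs
    return sum(abs(y - xs[i]) for i in group)

xs = [0.0, 3.0, 3.0, 1.0]   # true locations; G1 = agents 0,1,2; G2 = agent 3
G1 = [0, 1, 2]

y_truth = left_median(xs)                   # facility at 1.0
y_lie = left_median([10.0, 3.0, 3.0, 1.0])  # agent 0 misreports 10 -> facility 3.0
print(atc(y_truth, xs, G1))  # 5.0
print(atc(y_lie, xs, G1))    # 3.0 < 5.0: the misreport helps agent 0's only group
```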
Besides the two well-known classical mechanisms, we also consider a strategyproof mechanism proposed by Zhou, Li, and Chan (2022), which is designed for the mtgc and the magc.

Proposition 3. Placing the facility at the left-median agent of the largest group, breaking ties in favor of the smallest index, is not Pareto strategyproof under either the altruistic total cost or the altruistic maximum cost.

The major cause of non-PSP under the altruistic total cost is the tie-breaking rule for groups with an even number of agents. If the tie is instead broken by selecting the right-median agent, we can likewise construct a profile showing non-PSP. Moreover, if we do not place the facility at a group median location, the approximation ratio can be unbounded, regardless of the objective. For instance, consider an example with only one group: if a mechanism does not place the facility at the group median agent, all agents can report the group median location, forcing the mechanism to output the group median to avoid an unbounded approximation ratio. For the altruistic maximum cost, placing the facility at any median or group median cannot satisfy PSP, since the agent cost function is totally different. Therefore, a major challenge is to design PSP mechanisms for both altruistic costs.

## Altruistic Total Cost

As discussed above, placing the facility at a group median can violate PSP. Hence, we first propose a profile preprocessing method to avoid this situation. Let $lmed(G_j)$ be the left-median agent of $G_j$ and $rmed(G_j)$ the right-median agent of $G_j$. If the number of agents in $G_j$ is odd, then $lmed(G_j) = rmed(G_j)$.

Profile Preprocessing. Given any profile $r$, map every agent $i$ from $x_i$ to

$$P(x_i) = \begin{cases} \min_{j \in g_i} x_{rmed(G_j)} & \text{if } x_i \le \min_{j \in g_i} x_{rmed(G_j)}, \\ \max_{j \in g_i} x_{lmed(G_j)} & \text{else if } x_i \ge \max_{j \in g_i} x_{lmed(G_j)}, \\ x_i & \text{otherwise.} \end{cases}$$

Note that all mappings are based on the original group median locations; it is therefore possible for two agents to switch locations, e.g., $lmed(G_j)$ and $rmed(G_j)$ may switch their locations after the mapping. Below, we show how Profile Preprocessing helps us design PSP mechanisms.

Proposition 4 (Repellency). A mechanism $f$ is PSP if it satisfies repellency, i.e., for every profile $r$ and every $i \in N$, it holds that $P(x_i) \le f(r)$ implies $f(r'_i, r_{-i}) \ge f(r)$ for all $r'_i$, and $P(x_i) > f(r)$ implies $f(r'_i, r_{-i}) \le f(r)$ for all $r'_i$.

Moreover, if we want to place the facility at a certain group median, placing it at that group median after Profile Preprocessing achieves the same effect.

Proposition 5 (Consistency). For every profile $r$ and group $G_g$, it holds that $x_{lmed(G_g)} \le P(x_{lmed(G_g)}) \le x_{rmed(G_g)}$ and $x_{lmed(G_g)} \le P(x_{rmed(G_g)}) \le x_{rmed(G_g)}$ after Profile Preprocessing.

Therefore, we can design a PSP mechanism by first mapping all agents via Profile Preprocessing and then placing the facility at a certain group median, since this ensures that 1) any misreport moves the facility farther away from $P(x_i)$ (Proposition 4), and 2) the facility location stays within the left and right medians of the same group as in the original profile (Proposition 5). Although this gives us a way to design PSP mechanisms, it remains a challenge to design mechanisms with good approximation ratios. Next, we present different PSP mechanisms that approximately optimize the different objectives.
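Since every mechanism below begins with Profile Preprocessing, a minimal sketch may help. This is our reading of the definition above (non-strict comparisons, all medians computed from the original profile), not the authors' code:

```python
def preprocess(xs, memberships):
    # xs[i]: true location of agent i; memberships[i]: the groups of agent i,
    # each group being a list of agent indices
    def lmed(G):  # location of the left-median member of group G
        s = sorted(xs[i] for i in G)
        return s[(len(s) - 1) // 2]

    def rmed(G):  # location of the right-median member of group G
        s = sorted(xs[i] for i in G)
        return s[len(s) // 2]

    mapped = []
    for i, x in enumerate(xs):
        lo = min(rmed(G) for G in memberships[i])  # min_j x_rmed(G_j)
        hi = max(lmed(G) for G in memberships[i])  # max_j x_lmed(G_j)
        if x <= lo:
            mapped.append(lo)   # first case of P(x_i)
        elif x >= hi:
            mapped.append(hi)   # second case of P(x_i)
        else:
            mapped.append(x)    # otherwise, keep the original location
    return mapped

# One group of four agents: the outer agents are pulled across the medians.
G = [0, 1, 2, 3]
print(preprocess([0.0, 1.0, 2.0, 3.0], [[G]] * 4))  # [2.0, 2.0, 2.0, 1.0]
```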
### Social Cost

In this section, we first extend the mechanism that places the facility at the median agent of the largest group to our setting via Profile Preprocessing. We then consider two median mechanisms that consider not only the largest group but also all the other groups. Finally, we analyze the strengths and weaknesses of these mechanisms and present a mechanism with a smaller approximation ratio.

Majority Median Mechanism (Majority-Med). Given any profile $r$, map all agents by Profile Preprocessing. Place the facility at the left-median of the largest group $G_g$, breaking ties in favor of the smallest index.

Proposition 6. Majority-Med is Pareto strategyproof and has an approximation ratio of $2m-1$ for minimizing the social cost.

Since Majority-Med combines Profile Preprocessing with the existing mechanism for optimizing the group-fair objectives, it is not surprising that it has a large approximation ratio for the social cost. Recall that the best mechanism for the social cost in the myopic agent setting places the facility at the left-median agent location. A natural idea is to design a median-like mechanism that leverages group memberships.

Weighted Group Median Mechanism (Weighted-Med). Given any profile $r$, map all agents by Profile Preprocessing. Without loss of generality, assume that $P(x_{lmed(G_1)}) \le \dots \le P(x_{lmed(G_m)})$. Place the facility at $y = P(x_{lmed(G_g)})$, where $g$ is the smallest index $g'$ such that $\sum_{j=1}^{g'} |G_j| \ge \frac{1}{2} \sum_{j=1}^{m} |G_j|$.

Proposition 7. Weighted-Med is Pareto strategyproof and has an approximation ratio of $3m$ for minimizing the social cost.

The major reason that Weighted-Med has an even larger approximation ratio is that agents in multiple groups are over-counted in the weights. Consider an example where one agent in $\{G_1, \dots, G_{m-1}\}$ is at 0; $m-1$ agents, one in each of $G_1$ to $G_{m-1}$, are at 1; and $2m-1$ agents in $G_m$ are at 1. Weighted-Med places the facility at 0 with a social cost of $3m-1$, while the optimal location is 1 with a social cost of 1. Hence, we use a union operation instead of the addition operation.

Union Group Median Mechanism (Union-Med). Given any profile $r$, map all agents by Profile Preprocessing. Without loss of generality, assume that $P(x_{lmed(G_1)}) \le \dots \le P(x_{lmed(G_m)})$. Let $G_{[j]} = G_1 \cup G_2 \cup \dots \cup G_j$. Place the facility at $y = P(x_{lmed(G_g)})$, where $g$ is the smallest index $g'$ such that $|G_{[g']}| \ge \frac{1}{2} n$.

Proposition 8. Union-Med is Pareto strategyproof and has an approximation ratio of $2m-1$ for minimizing the social cost.

Although Union-Med has the same approximation ratio as Majority-Med, the scenarios causing those approximation ratios are totally different, since Majority-Med leverages only the largest group while Union-Med takes all the groups into consideration. What if we combine the spirit of these two mechanisms, i.e., consider only part of the groups? Doing so, we propose a new mechanism with a smaller approximation ratio.

Union Larger Group Median Mechanism (Union Trunc-Med). Given any profile $r$, map all agents by Profile Preprocessing. Let $|G_{\max}|$ be the number of agents in the largest group. Let $G^l = \{G_j : |G_j| \ge \frac{1}{\lambda} |G_{\max}|, j \in [m]\}$, where $\lambda = \frac{3m-4}{2m-2}$, and let $|G^l| = m_l$. We denote the $j$th element of $G \setminus G^l$ by $\bar{G}_j$ and the $j$th element of $G^l$ by $G^l_j$. Without loss of generality, assume that $P(x_{lmed(G^l_1)}) \le \dots \le P(x_{lmed(G^l_{m_l})})$ and $P(x_{lmed(\bar{G}_1)}) \le \dots \le P(x_{lmed(\bar{G}_{m-m_l})})$. Define $G^l_{[j]} = G^l_1 \cup G^l_2 \cup \dots \cup G^l_j$. Place the facility at $y = P(x_{lmed(G^l_{g_l})})$, where $g_l$ is the smallest index $g'_l$ such that $|G^l_{[g'_l]}| \ge \frac{1}{2} |G^l_{[m_l]}|$.

Intuitively, we place the facility at the union group median among all larger groups. When $\lambda$ equals 1, Union Trunc-Med is equivalent to Majority-Med; when $\lambda$ approaches $\infty$, Union Trunc-Med is equivalent to Union-Med.
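The selection step of Union Trunc-Med, as we reconstruct it from the definition above (illustrative code; the halving condition on $|G^l_{[g_l]}|$ follows our reading, and the preprocessed left-median locations are assumed to be given):

```python
def union_trunc_med(p_lmed, groups):
    # p_lmed[j]: preprocessed left-median location P(x_lmed(G_j)) of group j
    # groups[j]: set of agent indices in G_j (groups may overlap); assumes m >= 2
    m = len(groups)
    lam = (3 * m - 4) / (2 * m - 2)
    gmax = max(len(G) for G in groups)
    # keep the "larger" groups: |G_j| >= |Gmax| / lambda
    large = sorted((j for j in range(m) if len(groups[j]) >= gmax / lam),
                   key=lambda j: p_lmed[j])
    total = len(set().union(*(groups[j] for j in large)))  # |G^l_[m_l]|
    covered = set()
    for j in large:   # smallest g_l with |G^l_[g_l]| >= |G^l_[m_l]| / 2
        covered |= set(groups[j])
        if len(covered) >= total / 2:
            return p_lmed[j]   # facility at P(x_lmed(G^l_{g_l}))

# Example: the third group is too small to enter G^l, so the first large
# group's preprocessed median already covers half of the union.
print(union_trunc_med([0.0, 1.0, 4.0], [{0, 1, 2}, {2, 3, 4}, {5}]))  # 0.0
```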
Theorem 1. Union Trunc-Med is Pareto strategyproof and has an approximation ratio of $\frac{m(4m-5)}{3m-4}$ (which approaches $\frac{4m}{3} + \frac{1}{9}$ as $m$ approaches $\infty$) for minimizing the social cost.

Proof. Union Trunc-Med satisfies repellency, since any misreport by agent $i$ moves the facility only farther away from $P(x_i)$; hence it is PSP by Proposition 4. We then consider the approximation ratio. Given any profile $r$, let $y$ be the facility location output by Union Trunc-Med and $y^*$ be the optimal facility location. Without loss of generality, we assume that $y \le y^*$.

If $g_l = m_l$, then by the consistency of Profile Preprocessing, for any group $j$, there are at least $\frac{|G_j|}{2}$ group members at or on the left of $y$ and at most $\frac{|G_j|}{2}$ group members at or on the right of $y^*$. Hence, there are at least $\max_{j_l \in [g_l]} \frac{|G^l_{j_l}|}{2} = \frac{|G_{\max}|}{2}$ agents at or on the left of $y$, at most $\frac{m_l}{2} |G_{\max}|$ agents from groups in $G^l$ at or on the right of $y^*$, and at most $(m - m_l) \frac{|G_{\max}|}{2}$ agents from the remaining groups at or on the right of $y^*$. Hence, the approximation ratio is at most $m$.

If $g_l \le m_l - 1$, then by the consistency of Profile Preprocessing, for each $G^l_{j_l}$ with $j_l \le g_l$, at least half of its members are at or on the left of $y$, implying that the number of such agents is at least $L = \max_{j_l \in [g_l]} \frac{|G^l_{j_l}|}{2}$. In addition, at most half of the members of each such group are at or on the right of $y^*$, implying that the number of such agents is at most $R_1 \le \frac{g_l}{2} |G_{\max}|$. The number of the remaining agents in the groups of $G^l$ is at most $R_2 \le L + R_1$, since $|G^l_{[g_l]}| \ge \frac{1}{2} |G^l_{[m_l]}|$. There are at most $R_3 = \sum_{j \in [m - m_l]} |\bar{G}_j| \le (m - m_l) \frac{|G_{\max}|}{\lambda}$ agents in the groups belonging to $G \setminus G^l$. Therefore, there are at most $R_1 + R_2 + R_3 - L$ more agents in $[y^*, +\infty)$ than in $(-\infty, y]$, implying $sc(y, r) \le sc(y^*, r) + (R_1 + R_2 + R_3 - L)(y^* - y)$. Moreover, $sc(y^*, r) \ge L(y^* - y)$, since there are at least $L$ agents at or on the left of $y$. Therefore, the approximation ratio satisfies $\rho = \frac{sc(y, r)}{sc(y^*, r)} \le \frac{R_1 + R_2 + R_3}{L}$.

Let $R_2 = x|G_x|$, which can be seen as $x$ groups, each with $|G_x|$ agents. Then the approximation ratio is at most
$$\max_{x, |G_x|, g_l, L, R_1, R_3} \frac{R_1 + R_2 + R_3}{L}$$
subject to $2L \ge |G_{\max}|$, $R_1 \le \frac{g_l}{2} |G_{\max}|$, $R_2 = x|G_x| \le L + R_1$, $R_3 \le (m - g_l - x) \frac{|G_{\max}|}{\lambda}$, $1 \le x$, $|G_x| \le |G_{\max}|$, and $1 \le g_l \le m - 1$, which can be simplified to
$$\max_{x, |G_x|, g_l} \frac{\frac{g_l}{2}|G_{\max}| + x|G_x| + (m - g_l - x)\frac{|G_{\max}|}{\lambda}}{\frac{|G_{\max}|}{2}}$$
subject to $x|G_x| \le \frac{g_l}{2}|G_{\max}| + \frac{|G_{\max}|}{2}$, $1 \le x$, $|G_x| \le |G_{\max}|$, and $1 \le g_l \le m - 1$, where $\lambda = \frac{3m-4}{2m-2}$.

If $|G_x| \ge \frac{|G_{\max}|}{\lambda}$, the objective function is monotonically increasing in $x$, so it reaches its maximum at $x = \frac{(g_l + 1)|G_{\max}|}{2|G_x|}$, since $x|G_x| \le \frac{g_l}{2}|G_{\max}| + \frac{|G_{\max}|}{2}$. Plugging this $x$ into the objective yields
$$\rho \le g_l + (g_l + 1) + \frac{2}{\lambda}\left(m - g_l - \frac{(g_l + 1)|G_{\max}|}{2|G_x|}\right) \le \frac{m(4m-5)}{3m-4},$$
where equality holds at $|G_x| = |G_{\max}|$ and $g_l = m_l - 1 = m - 1$.

If $|G_x| \le \frac{|G_{\max}|}{\lambda}$, the objective function is monotonically decreasing in $x$, so it reaches its maximum at $x = 1$. Plugging $x = 1$ into the objective yields
$$\rho \le g_l + \frac{2|G_x|}{|G_{\max}|} + \frac{2}{\lambda}(m - g_l - 1) \le \frac{m(4m-5)}{3m-4},$$
where equality holds at $|G_x| = \frac{|G_{\max}|}{\lambda}$ and $g_l = 1$.

Then we show a lower bound in this setting.

Theorem 2. Any deterministic Pareto strategyproof mechanism has an approximation ratio of at least $m$ for minimizing the social cost.

### Maximum Cost

For the maximum cost, one can see that placing the facility at any agent location achieves a 2-approximation (though such a rule by itself may not be PSP), and all of our Pareto strategyproof mechanisms place the facility at an agent location.

Theorem 3. All of the mechanisms above (Majority-Med, Weighted-Med, Union-Med, Union Trunc-Med) have an approximation ratio of 2 for minimizing the maximum cost.
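The 2-approximation behind Theorem 3 is the standard triangle-inequality argument, spelled out here for completeness: if the facility is placed at any agent location $x_k$ and $y^*$ is the optimal location, then

$$mc(x_k, r) = \max_{i \in N} d(x_k, x_i) \le \max_{i \in N} \big( d(x_k, y^*) + d(y^*, x_i) \big) = d(x_k, y^*) + mc(y^*, r) \le 2\, mc(y^*, r),$$

using $d(x_k, y^*) \le mc(y^*, r)$, which holds because $x_k$ is itself an agent location.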
Theorem 4. Any deterministic Pareto strategyproof mechanism has an approximation ratio of at least 2 for minimizing the maximum cost.

### Maximum Total & Average Group Cost

In this subsection, we use Majority-Med for both the mtgc and the magc.

Theorem 5. Majority-Med has an approximation ratio of 3 for both the mtgc and magc objectives.

Theorem 5 also implies that Majority-Med achieves an approximation ratio of 3 in the setting of myopic agents with multiple group memberships, which generalizes the corresponding result of Zhou, Li, and Chan (2022). We then present a matching lower bound.

Theorem 6. Any deterministic Pareto strategyproof mechanism has an approximation ratio of at least 3 for both the mtgc and magc objectives.

## Altruistic Maximum Cost

When we consider the altruistic maximum cost, none of the mechanisms mentioned above is Pareto strategyproof. Hence, we need to design new mechanisms for this setting. We first introduce the following PSP mechanism, which is similar to Weighted-Med.

Weighted Group Middle Mechanism (Weighted-Mid). Let $mid(G_j)$ be the middle of group $G_j$, and without loss of generality assume that $x_{mid(G_1)} \le \dots \le x_{mid(G_m)}$. Place the facility at $y = x_{mid(G_k)}$, where $k$ is the smallest index $k'$ such that $\sum_{j=1}^{k'} |G_j| \ge \frac{1}{2} \sum_{j=1}^{m} |G_j|$.

Proposition 9. Weighted-Mid is Pareto strategyproof.

Next, we use Weighted-Mid to approximately minimize the social cost and the maximum cost.

### Social Cost & Maximum Cost

Theorem 7. Weighted-Mid has an approximation ratio of $\frac{n}{2}$ for minimizing the social cost.

Theorem 8. Any deterministic Pareto strategyproof mechanism has an approximation ratio of at least $\frac{n}{2}$ for minimizing the social cost.

Theorem 9. Weighted-Mid has an approximation ratio of 2 for minimizing the maximum cost.

The lower bound can be inferred from Theorem 4.

Corollary 1. Any deterministic Pareto strategyproof mechanism has an approximation ratio of at least 2 for minimizing the maximum cost.

Therefore, Weighted-Mid is the best possible mechanism for both the social cost and the maximum cost.

### Maximum Total & Average Group Cost

For the magc objective, we first show that Weighted-Mid has an approximation ratio of at least $|G_{\max}|$. The case where $|G_{\max}| = 1$ is equivalent to the myopic setting, so we only consider $|G_{\max}| \ge 2$.

Proposition 10. Weighted-Mid has an approximation ratio of at least $|G_{\max}|$ for minimizing the magc.

Because Weighted-Mid is designed for the classical objectives, it is natural that it cannot guarantee a smaller approximation ratio under a totally different objective. Hence, we propose a new mechanism for the group-fair objectives, which leverages the idea behind Majority-Med.

Majority Group Middle Mechanism (Majority-Mid). Place the facility at the middle of the largest group $G_g$, breaking ties in favor of the smallest index.
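A minimal sketch of Majority-Mid (our own code; we read "the middle of a group" as the midpoint of its leftmost and rightmost members, which is how the proof below uses it):

```python
def majority_mid(xs, groups):
    # groups: list of lists of agent indices; max() keeps the first maximal
    # group, so ties are broken in favor of the smallest group index
    g = max(range(len(groups)), key=lambda j: len(groups[j]))
    locs = [xs[i] for i in groups[g]]
    return (min(locs) + max(locs)) / 2   # midpoint of the largest group

xs = [0.0, 2.0, 6.0, 5.0]
print(majority_mid(xs, [[0, 1, 2], [2, 3]]))  # middle of G1: (0+6)/2 = 3.0
```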
Next, we provide its approximation ratio for the two group-fair objectives.

Theorem 10. Majority-Mid is Pareto strategyproof and has an approximation ratio of $\frac{|G_{\max}|}{2} + 1$ for both the mtgc and the magc.

Proof. For Pareto strategyproofness, we can show that any misreport by an agent can only move the facility farther away from the middle point of a group they belong to. We then prove the approximation ratio. Let $y$ be the mechanism's output and $y^*$ be the optimal output. Without loss of generality, we assume that $y < y^*$. Given any profile $r$, we observe that $mtgc(y, r) \le mtgc(y^*, r) + |G_{\max}|(y^* - y)$, since there are at most $|G_{\max}|$ agents of the same group at or on the right of $y^*$. Further, we have the approximation ratio
$$\rho = \frac{mtgc(y, r)}{mtgc(y^*, r)} \le 1 + \frac{|G_{\max}|(y^* - y)}{mtgc(y^*, r)}.$$
We observe that the smaller $mtgc(y^*, r)$ is, the larger the approximation ratio is. Hence, we need to calculate the minimum value of $mtgc(y^*, r)$. Let the location of the leftmost agent in the largest group be $x_l$, the location of the rightmost agent in the largest group be $x_r$, and let $L = x_r - x_l$.

If $|G_{\max}| = 1$, then $L$ is 0 and $mtgc(y^*, r) \ge y^* - y$, since there exists an agent at $y$. Then we have the approximation ratio $\rho \le 1 + \frac{|G_{\max}|(y^* - y)}{y^* - y} = 2$.

If $|G_{\max}| \ge 2$, we discuss two cases: $y^* - y \le \frac{L}{2}$ and $y^* - y > \frac{L}{2}$. If $y^* - y \le \frac{L}{2}$, then $y < y^* \le x_r$. We have $mtgc(y^*, r) \ge L$, since there is at least one agent at $x_l$ and at least one agent at $x_r$. Therefore, we have the approximation ratio $\rho \le 1 + \frac{|G_{\max}|(y^* - y)}{L}$, which reaches its maximum $1 + \frac{|G_{\max}|}{2}$ when $y^* - y = \frac{L}{2}$. If $y^* - y > \frac{L}{2}$, then $y^*$ is on the right of $x_r$, and we have $mtgc(y^*, r) \ge (y^* - y + \frac{L}{2}) + (|G_{\max}| - 1)(y^* - y - \frac{L}{2}) = |G_{\max}|(y^* - y) - (|G_{\max}| - 2)\frac{L}{2}$, since there is at least one agent at $x_l$ and at most $|G_{\max}| - 1$ agents at $x_r$. We further have the approximation ratio
$$\rho \le 1 + \frac{|G_{\max}|(y^* - y)}{|G_{\max}|(y^* - y) - (|G_{\max}| - 2)\frac{L}{2}},$$
which reaches its maximum $1 + \frac{|G_{\max}|}{2}$ when $y^* - y = \frac{L}{2}$.

Here we briefly discuss the approximation ratio of Majority-Mid for minimizing the magc, since the analysis is similar to the proof for the mtgc. Specifically, the condition for equality in all of the inequalities in the proof is that both $mtgc(y, r)$ and $mtgc(y^*, r)$ are achieved by groups of size $|G_{\max}|$. Therefore, we have $mtgc(y, r) = |G_{\max}| \cdot magc(y, r)$ and $mtgc(y^*, r) = |G_{\max}| \cdot magc(y^*, r)$, implying the same upper bound.

Then we present a tight lower bound.

Theorem 11. Any deterministic Pareto strategyproof mechanism has an approximation ratio of at least $\frac{|G_{\max}|}{2} + 1$ for both the mtgc and the magc.

## Conclusion

We consider facility location problems (FLPs) with altruistic agents, where the agents belong to a set of overlapping groups. We model the altruistic agents using the altruistic total cost and the altruistic maximum cost. For both agent cost functions, we examine existing mechanisms and design new mechanisms that satisfy Pareto strategyproofness and approximately optimize several standard and group-fair objectives (i.e., the total cost, maximum cost, maximum total group cost, and maximum average group cost). We also provide lower bounds for each setting.

There are many directions that can be further explored. The first is whether the current results can be improved, i.e., proposing mechanisms with better approximation ratios or providing higher lower bounds. Moreover, studying altruistic agents in other facility location problems, such as obnoxious facility location problems, is an interesting direction. Besides Pareto strategyproofness, other methods of modeling agents with multiple objectives could be explored, such as the sum of the objectives or the maximum among the objectives.

## Acknowledgments

The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 11213620]. Hau Chan is supported by the National Institute of General Medical Sciences of the National Institutes of Health [P20GM130461], the Rural Drug Addiction Research Center at the University of Nebraska-Lincoln, and the National Science Foundation under grant IIS:RI #2302999. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
We would like to thank the anonymous reviewers for their feedback.

## References

Aziz, H.; Chan, H.; Lee, B. E.; and Parkes, D. C. 2020. The capacity constrained facility location problem. Games and Economic Behavior, 124: 478-490.

Aziz, H.; Lam, A.; Lee, B. E.; and Walsh, T. 2022a. Strategyproof and Proportionally Fair Facility Location. In Hansen, K. A.; Liu, T. X.; and Malekian, A., eds., Web and Internet Economics - 18th International Conference, WINE 2022, Troy, NY, USA, December 12-15, 2022, Proceedings, volume 13778 of Lecture Notes in Computer Science, 351. Springer.

Aziz, H.; Lam, A.; Li, B.; Ramezani, F.; and Walsh, T. 2023. Proportional Fairness in Obnoxious Facility Location. In Agmon, N.; An, B.; Ricci, A.; and Yeoh, W., eds., Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, London, United Kingdom, 29 May 2023 - 2 June 2023, 2646-2648. ACM.

Aziz, H.; Lam, A.; Suzuki, M.; and Walsh, T. 2022b. Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism. In NeurIPS.

Bartholdi, J.; and Trick, M. A. 1986. Stable matching with preferences derived from a psychological model. Operations Research Letters, 5(4): 165-169.

Bergemann, D.; and Morris, S. 2008. Ex post implementation. Games and Economic Behavior, 63(2): 527-566.

Cai, Q.; Filos-Ratsikas, A.; and Tang, P. 2016. Facility Location with Minimax Envy. In Kambhampati, S., ed., Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, 137-143. IJCAI/AAAI Press.

Carvalho Rodrigues, F.; and Xavier, E. 2017. Non Cooperative Facility Location Games: a Survey. Revista de Informática Teórica e Aplicada, 24(1): 59-90.

Chan, H.; Filos-Ratsikas, A.; Li, B.; Li, M.; and Wang, C. 2021. Mechanism Design for Facility Location Problems: A Survey. In Zhou, Z., ed., Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, 4356-4365. ijcai.org.

Chen, P.-A.; Keijzer, B. D.; Kempe, D.; and Schäfer, G. 2014. Altruism and Its Impact on the Price of Anarchy. ACM Transactions on Economics and Computation, 2(4).

Chen, X.; Fang, Q.; Liu, W.; Ding, Y.; and Nong, Q. 2021. Strategyproof mechanisms for 2-facility location games with minimax envy. Journal of Combinatorial Optimization, 1-17.

Ding, Y.; Liu, W.; Chen, X.; Fang, Q.; and Nong, Q. 2020. Facility location game with envy ratio. Computers & Industrial Engineering, 148: 106710.

Edlin, A.; Gelman, A.; and Kaplan, N. 2007. Voting as a Rational Choice: Why and How People Vote To Improve the Well-Being of Others. Rationality and Society, 19(3): 293-314.

Eisenberg, N. 2014. Altruistic emotion, cognition, and behavior (PLE: Emotion). Psychology Press.

Fehr, E.; and Fischbacher, U. 2003. The nature of human altruism. Nature, 425(6960): 785-791.

Filos-Ratsikas, A.; and Voudouris, A. A. 2021. Approximate Mechanism Design for Distributed Facility Location. In Caragiannis, I.; and Hansen, K. A., eds., Algorithmic Game Theory - 14th International Symposium, SAGT 2021, Aarhus, Denmark, September 21-24, 2021, Proceedings, volume 12885 of Lecture Notes in Computer Science, 49-63. Springer.
Hayes, C. F.; Rădulescu, R.; Bargiacchi, E.; Källström, J.; Macfarlane, M.; Reymond, M.; Verstraeten, T.; Zintgraf, L. M.; Dazeley, R.; Heintz, F.; Howley, E.; Irissappane, A. A.; Mannion, P.; Nowé, A.; Ramos, G.; Restelli, M.; Vamplew, P.; and Roijers, D. M. 2022. A practical guide to multi-objective reinforcement learning and planning. Autonomous Agents and Multi-Agent Systems, 36(1): 26.

Hogg, M. A.; and Tindale, R. S. 2001. Blackwell Handbook of Social Psychology: Group Processes. Blackwell Publishers Ltd.

Jankowski, R. 2002. Buying a Lottery Ticket to Help the Poor: Altruism, Civic Duty, and Self-interest in the Decision to Vote. Rationality and Society, 14(1): 55-77.

Jehiel, P.; and Moldovanu, B. 2001. Efficient Design with Interdependent Valuations. Econometrica, 69(5): 1237-1259.

Lam, A. 2021. Balancing Fairness, Efficiency and Strategy-Proofness in Voting and Facility Location Problems. In Dignum, F.; Lomuscio, A.; Endriss, U.; and Nowé, A., eds., AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021, 1818-1819. ACM.

Liu, W.; Ding, Y.; Chen, X.; Fang, Q.; and Nong, Q. 2021. Multiple facility location games with envy ratio. Theoretical Computer Science, 864: 1-9.

Marsh, M. T.; and Schilling, D. A. 1994. Equity measurement in facility location analysis: A review and framework. European Journal of Operational Research, 74(1): 1-17.

Mezzetti, C. 2004. Mechanism Design with Interdependent Valuations: Efficiency. Econometrica, 72(5): 1617-1626.

Monroe, K. R. 1996. The Heart of Altruism: Perceptions of a Common Humanity. Princeton University Press.

Moulin, H. 1980. On strategy-proofness and single peakedness. Public Choice, 35(4): 437-455.

Okasha, S. 2005. Altruism, Group Selection and Correlated Interaction. The British Journal for the Philosophy of Science, 56(4): 703-725.

Penner, P. S. 2021. Altruistic Behavior: An Inquiry into Motivation. Brill.

Procaccia, A. D.; and Tennenholtz, M. 2009. Approximate mechanism design without money. In Chuang, J.; Fortnow, L.; and Pu, P., eds., Proceedings of the 10th ACM Conference on Electronic Commerce (EC-2009), Stanford, California, USA, July 6-10, 2009, 177-186. ACM.

Roijers, D. M.; Vamplew, P.; Whiteson, S.; and Dazeley, R. 2013. A Survey of Multi-Objective Sequential Decision Making. Journal of Artificial Intelligence Research, 48(1): 67-113.

Sawaragi, Y.; Nakayama, H.; and Tanino, T. 1985. Theory of Multiobjective Optimization. Elsevier.

Schroeder, D. A.; and Graziano, W. G. 2015. The Oxford Handbook of Prosocial Behavior. Oxford University Press.

Schroeder, D. A.; Penner, L. A.; Dovidio, J. F.; and Piliavin, J. A. 1995. The Psychology of Helping and Altruism: Problems and Puzzles. McGraw-Hill.

Simon, J. 2016. On the existence of altruistic value and utility functions. Theory and Decision, 81(3): 371-391.

Simon, J.; Saari, D.; and Keller, L. R. 2020. Interdependent Altruistic Preference Models. Decision Analysis, 17(3): 189-207.

Zhou, H.; Li, M.; and Chan, H. 2022. Strategyproof Mechanisms for Group-Fair Facility Location Problems. In Raedt, L. D., ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, 613-619. ijcai.org.