# The Egocentric Logic of Preferences

Junli Jiang¹ and Pavel Naumov²
¹Institute of Logic and Intelligence, Southwest University, China
²University of Southampton, the United Kingdom
walk08@swu.edu.cn, p.naumov@soton.ac.uk

Abstract. The paper studies preferences of agents about other agents in a social network. It proposes a logical system that captures the properties of such preferences, called "likes". The system can express nested constructions: "agent likes humbled people", "agent likes those who like humbled people", etc. The main technical results are a model checking algorithm and a sound, complete, and decidable axiomatization of the proposed system.

1 Introduction

INVESTIGATOR: Has your student ever advocated the overthrow of the United States Government?

As a young faculty member who had just started to teach in the United States, the second author of this paper did not expect this question. He instantly understood that answering "yes" would jeopardize the student's chances to get the security clearance required for the student's job with the US Government. However, he did not know that answering "yes" to this question and expressing sympathy for the student could disqualify the author from ever working for the US Department of Defense. Indeed, under US law, "... sympathy with persons or organizations that advocate the overthrow of the United States Government" is a disqualifying factor for a security clearance at the US Department of Defense [Congress, 2022, Subtitle A, Chapter I, Subchapter D, Part 147, Subpart A, 147.3].

In this paper we propose a logical system for reasoning about preferences (or "likes") of agents about other agents, which is based on egocentric logic [Prior, 1968]. The egocentric approach has also been used in [Grove and Halpern, 1991; Grove and Halpern, 1993; Grove, 1995; Seligman et al., 2013; Epstein and Naumov, 2021]. In an egocentric logic, the statements capture properties of the agents. As a result, satisfaction is a relation between an agent and a formula: the student ⊨ "advocated the overthrow". In general, we write a ⊨ φ if property φ is true about agent a.

In this paper, we propose modality L ("likes those who") that captures agents' preferences about other agents. For example, it is clear from the opening example that the investigator preferred job applicants who have not advocated the overthrow of the US Government: the investigator ⊨ L¬("advocated the overthrow"). In general, we write a ⊨ Lφ if agent a likes (prefers) any agent with property φ to an agent without property φ.

At this point, the placement of an agent on the left-hand side of the relation ⊨ might seem not very justified. The reason for doing this becomes clear only when one starts nesting modalities. Indeed, because the agent is on the left-hand side of ⊨, formula Lφ captures a property of an agent, just like formula φ does. As a result, it can be nested. Statement LLφ means "likes those who like φ" and statement L¬Lφ means "likes those who do not like φ". For example,

the investigator ⊨ L¬L("advocated the overthrow")   (1)

means that the investigator likes those who do not sympathise with the people advocating the overthrow of the US Government. Note that statements like the one above cannot be expressed in a traditional multiagent modal language where the agent is placed in a subscript.
This is because in such a language, the operator L_a in statement L_a("advocated the overthrow") takes a statement about agents as an argument and returns a statement about the world ("agent a likes those who advocate the overthrow") as its value. Such operators cannot be nested.

Statement (1) illustrates the capability of an egocentric logic, but it does not correctly capture the security clearance requirements in the United States. Indeed, imagine the somewhat unusual situation when a job candidate does not sympathise with the people advocating the overthrow of the US Government, but advocates for it herself. The above-cited Title 32 (National Defense) of the US Code of Federal Regulations disqualifies such a person from being employed by the Department of Defense as well. Thus,

the investigator ⊨ L(¬("advocated the overthrow") ∧ ¬L("advocated the overthrow")).

It is interesting to note that the above regulation does not disqualify those who sympathise with the sympathisers:

the investigator ⊭ L(¬("advocated the overthrow") ∧ ¬L("advocated the overthrow") ∧ ¬LL("advocated the overthrow")).

In this paper, we propose a formal semantics for modality L in the egocentric setting, describe a model checking algorithm for formulae that use this modality, and give a sound, complete, and decidable axiomatization of the properties of this modality. To illustrate the capability of our logical system, we add to it the "for all friends" modality F as well as the constants "humbled" η and "egotistic" ε. We assume that friends do not have opposite preferences, that humbled agents do not think that they are better than their friends, and that egotistic people do not think that they are worse than their friends.

The first logical system for reasoning about preferences, the logic of "better", was proposed by Halldén [1957]. It captures the properties of the binary modality "statement p is better than statement q". Similar logical systems have been proposed in [Von Wright, 1963; Doyle et al., 1991]. In these systems one can define the "likes" modality Lφ to mean that statement φ is better than statement ¬φ. Lang, van der Torre, and Weydert considered a conditional version L(φ | ψ) of this modality, which stands for "statement φ is preferred over ¬φ assuming that ψ is true" [2002]. Van Benthem, Girard, and Roy proposed a logical system for a related modality "statement φ holds in all worlds better than the current one" [2009]. The same modality is also investigated in [Liu, 2011; Christoff et al., 2021]. Lorini and Schwarzentruber proposed to define desirability without using a preference relation. They say that a statement is desirable if it is true in all worlds indistinguishable from the current world that are labeled as good [2011]. Unlike us, none of these works use the egocentric approach. As a result, they cannot express statements like "likes those who like", "likes those who do not like", etc.

The paper is structured as follows. First, in Sections 2 through 4 we propose the formal semantics of modalities L and F as well as constants η and ε and illustrate them with two examples. Second, in Section 5, we describe a model checking algorithm for our logical system and analyse its complexity. Third, in Section 6, we give a sound, complete, and decidable logical system that captures the interplay between modalities L, F and constants η, ε. The proofs of soundness and completeness are omitted due to the space constraint.
2 Social Network

In this section, we introduce the notion of a social network that serves as a foundation for the semantics of our logic. Throughout the paper, we assume a fixed set of propositional variables P.

Definition 1. A tuple (A, {≺_a}_{a∈A}, F, H, E, π) is called a social network if
1. A is a (possibly empty) finite set of "agents",
2. ≺_a is a strict partial order ("preference") relation on A, for each agent a ∈ A,
3. F(a) ⊆ A is a set, for each agent a ∈ A, called a "circle of friends", such that
   (i) a ∈ F(a) for all agents a ∈ A,
   (ii) if b ≺_a c, then c ⊀_f b, for all agents a, b, c ∈ A and each agent f ∈ F(a),
4. H ⊆ A is a set of "humbled" agents; it will be assumed that f ⊀_h h for any humbled agent h ∈ H and any agent f ∈ F(h),
5. E ⊆ A is a set of "egotistic" agents; it will be assumed that e ⊀_e f for any egotistic agent e ∈ E and any agent f ∈ F(e),
6. π(p) is a subset of A for each variable p ∈ P.

Item 3(i) of the above definition states that each agent is her own friend. We make this assumption to simplify one of the axioms of our logical system. Our results in this paper can be easily adjusted if this assumption is removed. Item 3(ii) states that friends cannot have opposite preferences. The similarity of preferences between friends has been observed in psychology [Selfhout et al., 2009] and it is the underlying assumption of balance theory [Harary, 1953]. Note that we do not assume that friendship is symmetric: if a ∈ F(b), then it is not necessarily the case that b ∈ F(a). Items 4 and 5 capture our assumptions that humbled agents do not think that they are better than their friends and that egotistic agents do not think that they are worse than their friends. In this paper, we primarily use the terms "friend", "humbled", and "egotistic" to show off the expressive power of our language. We do not imply that items 3, 4, and 5 capture the full depth of human friendship or the notions of humbleness and egotism.

Note that propositional variables, just like all formulae in our language, represent noun-free fragments of propositions such as "is humbled" or "is rich". Prior calls them "subjectless" predicates [1968]. Grove and Halpern use the term "relative" propositions because they are relative to an agent [Grove and Halpern, 1991; Grove and Halpern, 1993; Grove, 1995]. To reflect this, item 6 of Definition 1 specifies the valuation π(p) of a propositional variable p as a set of agents. Informally, these are the agents for which propositional variable p is true.
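To make Definition 1 concrete, here is a minimal sketch of one possible way to represent a social network and to verify the conditions of items 3(i), 3(ii), 4, and 5. The class and field names are ours; each preference relation ≺_a is stored as an explicit, already transitively closed set of pairs (b, c) meaning b ≺_a c, and checking that each ≺_a is indeed irreflexive and transitive is omitted.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

Agent = str

@dataclass
class SocialNetwork:
    agents: Set[Agent]
    pref: Dict[Agent, Set[Tuple[Agent, Agent]]]  # pref[a] = ≺_a, pairs (b, c) meaning b ≺_a c
    friends: Dict[Agent, Set[Agent]]             # F(a); must contain a itself (item 3(i))
    humbled: Set[Agent]                          # H
    egotistic: Set[Agent]                        # E
    val: Dict[str, Set[Agent]]                   # π(p): the agents for which variable p is true

def check_conditions(net: SocialNetwork) -> bool:
    """Verify items 3(i), 3(ii), 4, and 5 of Definition 1."""
    for a in net.agents:
        if a not in net.friends[a]:                               # item 3(i): a ∈ F(a)
            return False
        for (b, c) in net.pref[a]:                                # item 3(ii): b ≺_a c implies c ⊀_f b
            if any((c, b) in net.pref[f] for f in net.friends[a]):
                return False
    for h in net.humbled:                                         # item 4: f ⊀_h h for every friend f of h
        if any((f, h) in net.pref[h] for f in net.friends[h]):
            return False
    for e in net.egotistic:                                       # item 5: e ⊀_e f for every friend f of e
        if any((e, f) in net.pref[e] for f in net.friends[e]):
            return False
    return True
```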
Let us now turn to an example of a social network. Recently, a lot has been said about the polarisation of opinions in society. From the Brexit debates [Del Vicario et al., 2017] and ethnic integration in the Netherlands [Oosterwaal and Torenvlied, 2010] to politics in the US [Sides and Hopkins, 2015], it has been observed that people split into two groups that do not interact and do not value the opinions of each other. In reality, there is nothing new about such a split. The same situation existed in 18th-century England, as highlighted in the political satire Gulliver's Travels [Swift, 1995]. Following Swift, we capture such polarisation in Lilliput's Social Network depicted in Figure 1. All agents in the network are divided into big endians and little endians. We assume that there is at least one agent in each group. Directed edges labeled with a group of agents represent the preferences of all agents in the group. Thus, each big endian prefers each big endian over each little endian and each little endian prefers each little endian over each big endian.

Figure 1: Lilliput's Social Network.

We also assume that each agent a in this setting has a set of friends F(a), not depicted in the figure.

Proposition 1. A little endian cannot be a friend of a big endian. A big endian cannot be a friend of a little endian.

Proof. Suppose that there is a little endian l and a big endian b such that l ∈ F(b). Thus, l ≺_b b and

b ≺_l l,   (2)

see Figure 1. Statement l ≺_b b implies that b ⊀_l l by item 3(ii) of Definition 1 and the assumption l ∈ F(b), which contradicts statement (2). The proof of the other statement is similar.

Although Lilliput's Social Network is based on a fictional book, castes define a similar social structure in India. There, similarly to our claim in Proposition 1, caste was and is a defining factor in friendships [Dhanaraj, 2016]. In the United States, about forty percent of registered voters do not have a single close friend who supports the other major party's candidate [Dunn, 2020].

3 Syntax and Semantics

The language Φ of our system is defined by the grammar:

φ := η | ε | p | ¬φ | φ → φ | Fφ | Lφ.

We read η as "is humbled", ε as "is egotistic", F as "for all friends", and L as "likes those who". We assume that ⟨F⟩ is an abbreviation for ¬F¬. We read ⟨F⟩ as "has a friend who". We assume that disjunction ∨, conjunction ∧, and biconditional ↔ are defined in our language in the standard way.

Next, we define the semantics of our logical system.

Definition 2. For any agent a ∈ A of a social network (A, {≺_a}_{a∈A}, F, H, E, π) and any formula φ ∈ Φ, the satisfaction relation a ⊨ φ is defined recursively as follows:
1. a ⊨ η, if a ∈ H,
2. a ⊨ ε, if a ∈ E,
3. a ⊨ p, if a ∈ π(p),
4. a ⊨ ¬φ, if a ⊭ φ,
5. a ⊨ φ → ψ, if a ⊭ φ or a ⊨ ψ,
6. a ⊨ Fφ, if f ⊨ φ for all agents f ∈ F(a),
7. a ⊨ Lφ, when for all agents b, c ∈ A, if b ⊭ φ and c ⊨ φ, then b ≺_a c.

Some of the existing works on preferences are based on the ceteris paribus principle [Von Wright, 1963]. If this principle is applied to the modality "likes", then one would say that an agent a likes φ if she prefers agents for whom φ is true to those for whom φ is not true, everything else being equal. Capturing the "everything else being equal" assumption semantically is a non-trivial task. One possible way to do it is proposed in [Van Benthem et al., 2009]. In this paper, we do not use the ceteris paribus principle. Item 7 of the above definition states that an agent a likes φ if she prefers each agent for whom φ is true to each agent for whom φ is not true.
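The satisfaction relation of Definition 2 can be evaluated by direct recursion on the formula. The sketch below does exactly that, reusing the SocialNetwork representation from Section 2 and encoding formulae as nested tuples; the tag names are ours, and this is a naive reading of the definition rather than the efficient algorithm of Section 5 (in particular, pref[a] must here contain the full relation ≺_a).

```python
Formula = tuple  # ('eta',), ('eps',), ('var', 'p'), ('not', φ), ('imp', φ, ψ), ('F', φ), ('L', φ)

def sat(net: SocialNetwork, a: Agent, phi: Formula) -> bool:
    """Decide a ⊨ φ by direct recursion on Definition 2."""
    tag = phi[0]
    if tag == 'eta':                                            # item 1: a ∈ H
        return a in net.humbled
    if tag == 'eps':                                            # item 2: a ∈ E
        return a in net.egotistic
    if tag == 'var':                                            # item 3: a ∈ π(p)
        return a in net.val.get(phi[1], set())
    if tag == 'not':                                            # item 4
        return not sat(net, a, phi[1])
    if tag == 'imp':                                            # item 5
        return (not sat(net, a, phi[1])) or sat(net, a, phi[2])
    if tag == 'F':                                              # item 6: true for all friends
        return all(sat(net, f, phi[1]) for f in net.friends[a])
    if tag == 'L':                                              # item 7: every ¬φ-agent is ≺_a-below every φ-agent
        good = {b for b in net.agents if sat(net, b, phi[1])}
        bad = net.agents - good
        return all((c, b) in net.pref[a] for b in good for c in bad)
    raise ValueError(f'unknown connective: {tag}')
```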
We use Aφ as an abbreviation for the formula φ ∧ Lφ ∧ L¬φ. Because of the next lemma, we read A as "for all agents".

Lemma 1. a ⊨ Aφ iff b ⊨ φ for all agents b ∈ A.

Proof. (⇒): Suppose that there is an agent b ∈ A such that b ⊭ φ. Note that the assumption a ⊨ Aφ of the lemma implies that a ⊨ φ and a ⊨ Lφ. Then, by item 7 of Definition 2, the assumption b ⊭ φ implies that b ≺_a a. At the same time, the assumption b ⊭ φ implies b ⊨ ¬φ by item 4 of Definition 2. Note that the assumption a ⊨ Aφ of the lemma also implies that a ⊨ φ (and thus a ⊭ ¬φ) and a ⊨ L¬φ. Then, a ≺_a b, again by item 7 of Definition 2. Finally, note that statements b ≺_a a and a ≺_a b are inconsistent because relation ≺_a is a strict partial order by item 2 of Definition 1.

(⇐): Suppose that a ⊭ φ ∧ Lφ ∧ L¬φ. Thus, one of the following cases takes place. Case I: a ⊭ φ. Then, there is an agent b such that b ⊭ φ. Case II: a ⊭ Lφ. Hence, by item 7 of Definition 2, there are agents b, c ∈ A such that b ⊭ φ, c ⊨ φ, and b ⊀_a c. So, there is an agent b such that b ⊭ φ. Case III: a ⊭ L¬φ. Thus, again by item 7 of Definition 2, there are agents b, c ∈ A such that b ⊭ ¬φ, c ⊨ ¬φ, and b ⊀_a c. Therefore, c ⊭ φ by item 4 of Definition 2.

The lemma below follows from item 7 of Definition 2.

Lemma 2. For any social network, if a ⊨ φ ↔ ψ for each agent a ∈ A, then a ⊨ Lφ ↔ Lψ for each agent a ∈ A.

4 Examples

In this section, we illustrate the formal semantics of modalities L and F given in Definition 2.

4.1 Lilliput's Social Network

First, let us return to Lilliput's Social Network from Figure 1 and consider several statements with modality L. The proposition below shows that an agent likes little endians if and only if she herself is a little endian.

Proposition 2. a ⊨ L("is a little endian") iff agent a is a little endian.

The assumption made in Section 2 that the set of little endians is nonempty is important for Proposition 2. Indeed, by item 7 of Definition 2, the statement a ⊨ L("is a little endian") is vacuously true for any agent a if the set of little endians is empty.

The next proposition states that an agent likes big endians if and only if she herself is a big endian.

Proposition 3. a ⊨ L("is a big endian") iff agent a is a big endian.

The proposition below states that an agent likes those who like little endians if and only if she herself is a little endian.

Proposition 4. a ⊨ LL("is a little endian") iff agent a is a little endian.

Finally, the next proposition states that an agent likes those who do not like little endians if and only if she herself is a big endian.

Proposition 5. a ⊨ L¬L("is a little endian") iff agent a is a big endian.
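As an illustration, here is a small instance of Lilliput's Social Network built with the sketches above: two agents per group, every agent being her own only friend, and no humbled or egotistic agents. The agent names are ours, and the assertions restate Propositions 2 and 4 for this particular instance.

```python
little, big = {'l1', 'l2'}, {'b1', 'b2'}
agents = little | big

def below(lower, upper):
    # every agent in `lower` is strictly below every agent in `upper`
    return {(x, y) for x in lower for y in upper}

net = SocialNetwork(
    agents=agents,
    pref={a: below(big, little) if a in little else below(little, big) for a in agents},
    friends={a: {a} for a in agents},
    humbled=set(),
    egotistic=set(),
    val={'little': little},
)
assert check_conditions(net)

is_little = ('var', 'little')
assert sat(net, 'l1', ('L', is_little))          # a little endian likes little endians (Proposition 2)
assert not sat(net, 'b1', ('L', is_little))      # a big endian does not
assert sat(net, 'l2', ('L', ('L', is_little)))   # a little endian likes those who like little endians (Proposition 4)
```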
4.2 The World According to Steve Jobs

This example originates from the quote: "Steve Jobs has a saying that A players hire A players; B players hire C players; and C players hire D players." [Kawasaki, 2004, p. 101]. We capture this quote through the diagram depicted in Figure 2. Here all agents are partitioned into groups A, B, C, and D. As with our previous example, we assume that each group is not empty. The hiring preference relation of each group is depicted in the diagram using directed edges.

Figure 2: The World According to Steve Jobs.

The following propositions about A-, B-, C-, and D-players are true in the context of this example.

Proposition 6. A-players cannot be friends with B-players or C-players. B-players cannot be friends with C-players.

Proof. To prove that A-players cannot be friends with B-players, consider any A-player a and any B-player b. Recall that we assume that each group contains at least one player. Let the "third player" be any C-player c. Then, c ≺_a a and a ≺_b c, see Figure 2. Therefore, a ∉ F(b) and b ∉ F(a) by item 3(ii) of Definition 1. The proofs of the other parts of the proposition are similar, but one needs to consider a D-player as the "third player".

Proposition 7. Humbled A-players cannot have friends among D-players.

Proof. Suppose a and d are, respectively, an A-player and a D-player such that a ∈ H and d ∈ F(a). Then, d ⊀_a a by item 4 of Definition 1, which contradicts Figure 2.

Note that a ∈ F(b) is not equivalent to b ∈ F(a) per Definition 1. In particular, in spite of the previous proposition, a humbled A-player can be a friend of a D-player.

Proposition 8. a ⊨ L("is an A-player") iff agent a is an A-player.

Proof. (⇒): By our assumption, each of the groups of players is nonempty. Let agent a′ be any A-player and agent n be any non-A-player. The assumption a ⊨ L("is an A-player") implies that n ≺_a a′ by item 7 of Definition 2. Therefore, agent a is an A-player, see Figure 2.

(⇐): Suppose that agent a is an A-player. Consider any A-player a′ and non-A-player n. By item 7 of Definition 2, it suffices to show that n ≺_a a′. Note that n ≺_a a′ is true because agents a and a′ are A-players and agent n is not an A-player, see Figure 2.

The proofs of the next two propositions are similar.

Proposition 9. a ⊨ L("is a C-player") iff agent a is a B-player.

Proposition 10. a ⊨ L("is a D-player") iff agent a is a C-player.

Proposition 11. a ⊭ L("is a B-player") for any agent a.

Proof. By our assumption, each of the groups of players is nonempty. Let agent b be any B-player and agent n be any non-B-player. Thus, n ⊀_a b, see Figure 2. Therefore, we have a ⊭ L("is a B-player") by item 7 of Definition 2.

Proposition 12. a ⊨ LL("is an A-player") iff agent a is an A-player.

Proposition 13. a ⊨ LL("is a D-player") iff agent a is a B-player.

We conclude this example with the observation that, in our setting, D-players do not like any non-trivial group of people. In other words, they can only like the group of all agents and the empty group of agents.

Proposition 14. a ⊨ Lφ → Aφ ∨ A¬φ for each D-player a.

5 Model Checking

In this section, we describe a model checking algorithm for formulae in language Φ.

5.1 Model Representation

Before describing the algorithm, let us discuss how the preference relation ≺_a is represented in the input. One possible way to describe a transitive relation is to give the set of all pairs of agents in the relation. Another, more compact, way is to define the relation as the transitive closure of a given set of pairs [Aho et al., 1972].

Figure 3: A Social Network.

For example, to describe the relation depicted in Figure 3 the first way, one should list the pairs (a1, a2), (a1, a3), (a1, a4), (a1, a5), (a2, a3), (a2, a4), (a2, a5), (a3, a4), (a3, a5), and (a4, a5). To describe the same relation the second way, it is sufficient to list only the pairs (a1, a2), (a2, a3), (a3, a4), and (a4, a5). In general, the size of the first representation could be as large as the square of the size of the second one. Since the efficiency of an algorithm is measured as a function of the size of its input, the larger the input representation, the more efficient the algorithm appears to be. An honest algorithm analysis should use the most compact representation of the data. Following this principle, we assume that the preference relation in a social network is defined as the transitive closure of the set of pairs given in the description of the relation.

5.2 Multipath Problem

We describe the model checking algorithm in the next subsection. Here we discuss a graph problem whose solution is used by the model checking algorithm. Let us first review some terminology. A directed graph (V, E) is defined by a set of vertices V and a set of directed edges E. By a cut of the graph we mean a partitioning of the set V into two disjoint sets. We allow either of these two sets to be empty. An example of a cut is depicted in Figure 4, where the vertices are divided into white and black.

Figure 4: Graph Partitioning Problem.

Multipath problem: given a cut (X, Y) of a directed acyclic graph (DAG), decide whether there is a directed path from each vertex in set X to each vertex in set Y. For example, if X is the set of all white vertices in Figure 4 and Y is the set of black vertices, then the answer is negative because there is no directed path from vertex b to vertex d.
The brute-force approach to the multipath problem is to use the depth-first search algorithm to verify, for each vertex x ∈ X and each vertex y ∈ Y, that there is a directed path from x to y. Depth-first search takes O(|V| + |E|) time, where |V| is the number of vertices and |E| is the number of edges in the directed acyclic graph (V, E). Thus, the brute-force approach execution time is O(|V|³ + |V|²·|E|).

The multipath problem can be solved in O(|V|² + |V|·|E|) time using a different approach, consisting of two steps. In the first step, a topological sorting algorithm is used to generate a linear ordering of the vertices of the graph: v1, ..., vn. In the second step, dynamic programming is used to recursively compute, for each vertex vi, the set R(vi) ⊆ X of all vertices from which vertex vi is reachable by a directed path:

R(vi) = {vi} ∪ R′(vi), if vi ∈ X;   R(vi) = R′(vi), if vi ∉ X,   (3)

where

R′(vi) = ⋃_{(vj, vi) ∈ E} R(vj).   (4)

Figure 5: Topological Sort.

Figure 5 illustrates this algorithm on the example of the DAG depicted in Figure 4. We assume that the topological sorting algorithm returned the ordering c, e, b, d, a, h, f. In Figure 5, the sets R(v) are depicted in the rectangles under each vertex. Thus, for example, R(d) = {c, e}, because {c, e} is the set of all white vertices from which vertex d can be reached. The content of the rectangles is computed recursively using equation (3). For example, R(h) = R(b) ∪ R(d) because vertex h does not belong to set X and it has only two incoming edges: (b, h) and (d, h). After the sets R(v) are computed for each v ∈ V, to get the answer to the multipath problem, one only needs to check whether R(v) = X for each vertex v ∈ Y. In our example, the answer is negative because R(d) = {c, e} ≠ {b, c, e} = X.

It is well known that topological sorting can be done in O(|V| + |E|) time using, for example, the depth-first search algorithm [Cormen et al., 2009, pp. 612-614]. We represent sets of vertices as Boolean arrays. Namely, if V = {v1, ..., vn} and R ⊆ V, then set R is represented by a Boolean array a[ ] of length n such that a[i] = true iff vi ∈ R. Under such a representation, it takes O(|V|) time to compute the union of any two subsets of V. Note that the total number of unions computed during the second step of our algorithm is O(|V| + |E|) because each edge (vj, vi) in formula (4) is used exactly once during the entire computation. Thus, the total execution time of the second step is O(|V|·(|V| + |E|)). Therefore, the total execution time of the described solution of the multipath problem is O(|V| + |E|) + O(|V|·(|V| + |E|)) = O(|V|² + |V|·|E|).
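Below is a sketch of the two-step solution just described, under the assumption that the input graph is acyclic. Kahn's algorithm is used here for the topological sort, and sets of vertices are kept as Python sets rather than the Boolean arrays used in the complexity analysis above; the function and variable names are ours.

```python
from collections import deque

def multipath(vertices, edges, X, Y):
    """Decide whether the DAG (vertices, edges) has a directed path
    from every vertex of the set X to every vertex of the set Y."""
    succ = {v: set() for v in vertices}
    pred = {v: set() for v in vertices}
    indegree = {v: 0 for v in vertices}
    for (u, v) in edges:
        succ[u].add(v)
        pred[v].add(u)
        indegree[v] += 1
    # Step 1: topological order (Kahn's algorithm), O(|V| + |E|).
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # Step 2: dynamic programming along the topological order.
    # R[v] = the set of vertices of X from which v is reachable, equations (3) and (4).
    R = {}
    for v in order:
        r = set()
        for u in pred[v]:        # equation (4): union of R over all incoming edges
            r |= R[u]
        if v in X:               # equation (3): v itself is added when v ∈ X
            r.add(v)
        R[v] = r
    # The answer is positive iff every vertex of Y is reachable from all of X.
    return all(R[y] == X for y in Y)
```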
5.3 Model Checking Problem

By the model checking problem we mean deciding whether a ⊨ φ is true for a given formula φ and a given agent a of a given social network. In this subsection, we show how the multipath problem from the previous subsection can be used to solve the model checking problem for language Φ.

For an arbitrary social network (A, {≺_a}_{a∈A}, F, H, E, π), by |A| we mean the number of agents in this network. By |≺_a| we mean the size of relation ≺_a in the second (more compact) representation. For example, the size of the strict order depicted in Figure 3 is 4. By |{≺_a}_{a∈A}| we denote Σ_{a∈A} |≺_a|. Finally, by |φ| we mean the size (number of symbols) of a formula φ ∈ Φ.

Theorem 1. Model checking of formula φ for a given agent in a given social network can be accomplished in time O(|φ|·(|A|³ + |A|·|{≺_a}_{a∈A}|)) in the worst case.

Proof. To check a ⊨ φ, we consider a dynamic programming algorithm that, for each subformula ψ of φ, recursively checks for each agent b ∈ A whether b ⊨ ψ is true and stores the result for future use.

If formula ψ is η, ε, or a propositional variable, then checking whether b ⊨ ψ is true takes constant time. In this case, checking b ⊨ ψ for each agent b ∈ A takes time O(|A|). If formula ψ has either the form ¬ψ1 or the form ψ1 → ψ2, then b ⊨ ψ can be checked in constant time using the stored results for b ⊨ ψ1 and b ⊨ ψ2. Then, checking b ⊨ ψ for each agent b ∈ A takes time O(|A|). If formula ψ has the form Fψ1, then checking b ⊨ Fψ1 takes time O(|F(b)|) using the pre-computed answers for x ⊨ ψ1, where x ∈ F(b). Since F(b) ⊆ A, it takes O(|A|²) time to check b ⊨ Fψ1 for all agents b ∈ A.

If formula ψ has the form Lψ1, then to check whether b ⊨ ψ is true it suffices to check that there is a ≺_b-path from each element of set X = {x ∈ A | x ⊭ ψ1} to each element of set Y = {y ∈ A | y ⊨ ψ1}. This can be accomplished in time O(|A|² + |A|·|≺_b|) using the solution of the multipath problem described in the previous subsection. Then, checking b ⊨ ψ for all agents b ∈ A takes

Σ_{b∈A} O(|A|² + |A|·|≺_b|) = O(|A|³ + |A|·Σ_{b∈A} |≺_b|) = O(|A|³ + |A|·|{≺_b}_{b∈A}|).

Thus, in the worst case, it takes O(|A|³ + |A|·|{≺_b}_{b∈A}|) time to check for each agent b ∈ A whether b ⊨ ψ is true. Therefore, it takes O(|φ|·(|A|³ + |A|·|{≺_b}_{b∈A}|)) time to do the same for all subformulae ψ of formula φ.
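The following sketch combines the bottom-up evaluation just described with the multipath routine from Section 5.2. It reuses the SocialNetwork and Formula conventions of the earlier sketches, except that pref[a] is now read as the compact representation of Section 5.1, that is, a set of pairs whose transitive closure is ≺_a; the result is the truth value of φ for every agent at once.

```python
from typing import Dict

def model_check(net: SocialNetwork, phi: Formula) -> Dict[Agent, bool]:
    """Compute {b: (b ⊨ φ)} for all agents b, evaluating every subformula of φ once."""
    table: Dict[Formula, Dict[Agent, bool]] = {}   # stored results for subformulae

    def row_of(psi: Formula) -> Dict[Agent, bool]:
        if psi in table:
            return table[psi]
        tag = psi[0]
        if tag == 'eta':
            row = {b: b in net.humbled for b in net.agents}
        elif tag == 'eps':
            row = {b: b in net.egotistic for b in net.agents}
        elif tag == 'var':
            row = {b: b in net.val.get(psi[1], set()) for b in net.agents}
        elif tag == 'not':
            sub = row_of(psi[1])
            row = {b: not sub[b] for b in net.agents}
        elif tag == 'imp':
            left, right = row_of(psi[1]), row_of(psi[2])
            row = {b: (not left[b]) or right[b] for b in net.agents}
        elif tag == 'F':
            sub = row_of(psi[1])
            row = {b: all(sub[f] for f in net.friends[b]) for b in net.agents}
        elif tag == 'L':
            sub = row_of(psi[1])
            Y = {b for b in net.agents if sub[b]}    # agents satisfying ψ1
            X = net.agents - Y                       # agents falsifying ψ1
            row = {b: multipath(net.agents, net.pref[b], X, Y) for b in net.agents}
        else:
            raise ValueError(f'unknown connective: {tag}')
        table[psi] = row
        return row

    return row_of(phi)
```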
6 Axioms

In this section, we give a sound, complete, and decidable axiomatization of our logical system. In addition to propositional tautologies in language Φ, the system contains the following axioms:

1. Distributivity: A(φ → ψ) → (Aφ → Aψ) and F(φ → ψ) → (Fφ → Fψ),
2. Friends are Agents: Aφ → Fφ,
3. Self-Friendship: Fφ → φ,
4. Negative Introspection: ¬Aφ → A¬Aφ,
5. Substitution: A(φ ↔ ψ) → (Lφ → Lψ),
6. Friends Think Alike: Lφ ∧ ⟨F⟩Lψ → A(φ → ψ) ∨ A(ψ → φ),
7. Humbleness: η ∧ φ ∧ Lφ → Fφ,
8. Egotisticness: ε ∧ ¬φ ∧ Lφ → F¬φ.

The meaning of the first five axioms above is straightforward. The Friends Think Alike axiom captures the requirement of item 3(ii) of Definition 1 that the preferences of friends must be compatible. Specifically, it states that if an agent likes those for whom φ is true and a friend of the agent likes those for whom ψ is true, then either each φ-agent is a ψ-agent or each ψ-agent is a φ-agent. This is the most non-trivial among the axioms of our system.

The Humbleness axiom is better understood in the form η ∧ Lφ ∧ ⟨F⟩¬φ → ¬φ. In this form, it says that if a humbled agent likes those for whom φ is true and she has a friend for whom φ is false, then φ must be false for her as well. This property follows from item 4 of Definition 1. Similarly, the Egotisticness axiom is better understood if it is stated in the equivalent form ε ∧ Lφ ∧ ⟨F⟩φ → φ. In this form, it says that if an egotistic agent likes those for whom φ is true and she has a friend for whom φ is true, then φ must be true for her as well. This axiom captures the requirement of item 5 of Definition 1.

We write ⊢ φ and say that formula φ ∈ Φ is a theorem of our logical system if it can be derived from the above axioms using the Modus Ponens and the Necessitation inference rules: from φ and φ → ψ, derive ψ; from φ, derive Aφ.

The proofs of the next two theorems are omitted due to the space constraint.

Theorem 2 (soundness). If ⊢ φ, then a ⊨ φ for each agent a of each social network.

Theorem 3 (completeness). If a ⊨ φ for each agent a of each social network, then ⊢ φ.

We conclude this section with an observation that our logical system is decidable, which follows from the completeness theorem.

Theorem 4. The set {φ ∈ Φ | ⊢ φ} is decidable.

Proof. The set {φ ∈ Φ | ⊢ φ} is recursively enumerable because it is axiomatizable. The set {φ ∈ Φ | ⊬ φ} is also recursively enumerable because our logical system is sound and complete with respect to social networks with finite sets of agents: by Theorems 2 and 3, any non-theorem is refuted by some agent of some finite social network, and such networks can be enumerated and checked. Therefore, the set {φ ∈ Φ | ⊢ φ} is decidable.

7 Conclusion

We have proposed an egocentric logic of preferences. Unlike previous works on preference and desire logics, the egocentric approach allows us to express nested statements of the form "agent a likes those who like humbled people". Our main technical results are a model checking algorithm and a sound, complete, and decidable logical system describing the properties of this modality.

Acknowledgments

Junli Jiang acknowledges the support of the Key Research Funds for the Key Liberal Science Research Base of Chongqing (NO.16SKB036) and the Fundamental Research Funds for the Central Universities (SWU1409412).

References

[Aho et al., 1972] Alfred V. Aho, Michael R. Garey, and Jeffrey D. Ullman. The transitive reduction of a directed graph. SIAM Journal on Computing, 1(2):131-137, 1972.

[Christoff et al., 2021] Zoé Christoff, Norbert Gratzl, and Olivier Roy. Priority merge and intersection modalities. The Review of Symbolic Logic, pages 1-32, 2021.

[Congress, 2022] US Congress. Title 32 national defense. https://www.ecfr.gov/current/title-32, 2022. Accessed: 2022-05-01.

[Cormen et al., 2009] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, 2009.

[Del Vicario et al., 2017] Michela Del Vicario, Fabiana Zollo, Guido Caldarelli, Antonio Scala, and Walter Quattrociocchi. Mapping social dynamics on Facebook: The Brexit debate. Social Networks, 50:6-16, 2017.

[Dhanaraj, 2016] Christina Thomas Dhanaraj. Caste, friendship & solidarity. https://www.roundtableindia.co.in/caste-friendship-solidarity, 2016. Accessed: 2022-05-01.

[Doyle et al., 1991] Jon Doyle, Yoav Shoham, and Michael P. Wellman. A logic of relative desire. In International Symposium on Methodologies for Intelligent Systems, pages 16-31. Springer, 1991.

[Dunn, 2020] Amina Dunn. Few Trump or Biden supporters have close friends who back the opposing candidate. https://www.pewresearch.org/fact-tank/2020/09/18/few-trump-or-biden-supporters-have-close-friends-who-back-the-opposing-candidate, 2020. Accessed: 2022-05-01.

[Epstein and Naumov, 2021] Sophia Epstein and Pavel Naumov. Epistemic logic of know-who. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021.

[Grove and Halpern, 1991] Adam J. Grove and Joseph Y. Halpern. Naming and identity in a multi-agent epistemic logic. In James F. Allen, Richard Fikes, and Erik Sandewall, editors, Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR'91), Cambridge, MA, USA, April 22-25, 1991, pages 301-312. Morgan Kaufmann, 1991.

[Grove and Halpern, 1993] Adam J. Grove and Joseph Y. Halpern. Naming and identity in epistemic logics part I: the propositional case. Journal of Logic and Computation, 3(4):345-378, 1993.

[Grove, 1995] Adam J. Grove. Naming and identity in epistemic logic part II: a first-order logic for naming. Artificial Intelligence, 74(2):311-350, 1995.

[Halldén, 1957] Sören Halldén. On the Logic of "Better". Almqvist & Wiksells, 1957.

[Harary, 1953] Frank Harary. On the notion of balance of a signed graph. The Michigan Mathematical Journal, 2(2):143-146, 1953.
[Kawasaki, 2004] Guy Kawasaki. The Art of the Start: The Time-Tested, Battle-Hardened Guide for Anyone Starting Anything. Penguin, 2004.

[Lang et al., 2002] Jérôme Lang, Leendert van der Torre, and Emil Weydert. Utilitarian desires. Autonomous Agents and Multi-Agent Systems, 5(3):329-363, 2002.

[Liu, 2011] Fenrong Liu. Reasoning about Preference Dynamics, volume 354. Springer Science & Business Media, 2011.

[Lorini and Schwarzentruber, 2011] Emiliano Lorini and François Schwarzentruber. A logic for reasoning about counterfactual emotions. Artificial Intelligence, 175(3):814-847, 2011.

[Oosterwaal and Torenvlied, 2010] Annemarije Oosterwaal and René Torenvlied. Politics divided from society? Three explanations for trends in societal and political polarisation in the Netherlands. West European Politics, 33(2):258-279, 2010.

[Prior, 1968] Arthur N. Prior. Egocentric logic. Noûs, pages 191-207, 1968.

[Selfhout et al., 2009] Maarten H. W. Selfhout, Susan J. T. Branje, Tom F. M. ter Bogt, and Wim H. J. Meeus. The role of music preferences in early adolescents' friendship formation and stability. Journal of Adolescence, 32(1):95-107, 2009.

[Seligman et al., 2013] Jeremy Seligman, Fenrong Liu, and Patrick Girard. Facebook and the epistemic logic of friendship. In 14th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'13), January 2013, Chennai, India, pages 229-238, 2013.

[Sides and Hopkins, 2015] John Sides and Daniel J. Hopkins. Political Polarization in American Politics. Bloomsbury Publishing USA, 2015.

[Swift, 1995] Jonathan Swift. Gulliver's travels. In Gulliver's Travels. Springer, 1995.

[Van Benthem et al., 2009] Johan van Benthem, Patrick Girard, and Olivier Roy. Everything else being equal: A modal logic for ceteris paribus preferences. Journal of Philosophical Logic, 38(1):83-125, 2009.

[Von Wright, 1963] Georg Henrik von Wright. The Logic of Preference. Edinburgh University Press, 1963.