Multi-Agent Belief Base Revision

Emiliano Lorini¹ and François Schwarzentruber²
¹IRIT-CNRS, Toulouse University, France
²Univ Rennes, IRISA, CNRS, France
emiliano.lorini@irit.fr, francois.schwarzentruber@ens-rennes.fr

Abstract

We present a generalization of belief base revision to the multi-agent case. In our approach, agents have belief bases containing both propositional beliefs and higher-order beliefs about their own beliefs and other agents' beliefs. Moreover, their belief bases are split in two parts: the mutable part, whose elements may change under belief revision, and the core part, whose elements do not change. We study a belief revision operator inspired by the notion of screened revision. We provide complexity results for model checking in our approach as well as an optimal model checking algorithm. Moreover, we study the complexity of epistemic planning formulated in the context of our framework.

1 Introduction

Since the seminal work of Alchourrón, Gärdenfors and Makinson (AGM) [Alchourrón et al., 1985], belief revision has been one of the central concepts in knowledge representation (KR). An important distinction in KR is between belief set (or theory) change and belief base change. While belief sets are deductively closed, belief bases need not be. This implies that belief sets are necessarily infinite, since they contain all tautologies, whereas belief bases can be finite. Finiteness of belief bases offers an advantage for AI applications in which autonomous agents (e.g., robots, embodied conversational agents) are conveniently described by the finite body of information in their databases. Existing formal theories of belief base change [Hansson, 1993; 1999; Benferhat et al., 2002] are mainly focused on the single-agent case and describe beliefs by propositional logic formulas.

More recently, there has been growing interest in understanding and modelling belief change in the context of multi-agent systems. Epistemic logic (EL) [Fagin et al., 1995] and dynamic epistemic logic (DEL) [van Ditmarsch et al., 2007] have become the main tools for formalizing epistemic states (e.g., knowledge, belief) and their dynamics in a multi-agent setting. Several extensions of DEL by the notion of belief revision have been proposed [van Benthem, 2007; Aucher, 2005; van Ditmarsch, 2005; Baltag and Smets, 2008]. DEL-based approaches to belief revision rely on Kripke semantics which assign to each agent a plausibility ordering (also called epistemic entrenchment ordering) on possible worlds, or a qualitative plausibility measure in the style of rank-based theory [Spohn, 1988] and possibility theory [Dubois and Prade, 1988]. The latter allow one to capture minimal change in an agent's belief revision process. The problem with such semantics is that they are not compact, thereby making model checking hard to exploit in practice. Indeed, the epistemic model with its possible worlds and agents' plausibility orderings with respect to which a given property has to be checked is often huge, and its correspondence with the informal description of the initial situation is not always immediate.

To overcome these limitations of DEL, we present a generalization of belief base revision to the multi-agent case. Unlike existing single-agent belief base revision approaches in which agents' beliefs are propositional, in our approach agents can have both propositional beliefs and higher-order beliefs about their own beliefs and other agents' beliefs.
The idea of using belief bases as a semantics for multi-agent epistemic logic was put forth in [Lorini, 2018; 2020]. In this paper we enrich and expand this idea with the notion of belief revision. Our approach to multi-agent belief revision is more parsimonious than standard DEL-based approaches: it only uses belief bases, and no plausibility ordering or plausibility measure is needed. This provides an advantage for formal verification, since the formulation of model checking in our approach is more compact than in existing DEL-based approaches using Kripke semantics. Compactness also has an impact on the representation of epistemic planning, namely, the generalization of classical planning in which the goal to be achieved is not necessarily a state of the world but some belief states of one or more agents. The initial proposal for epistemic planning was to use a standard logic of knowledge or belief together with a representation of actions in terms of event models of DEL [Bolander and Andersen, 2011; Löwe et al., 2011]. Given the high expressivity of DEL, it turned out that checking the existence of a solution plan quickly becomes undecidable [Bolander and Andersen, 2011; Aucher and Bolander, 2013; Lê Cong et al., 2018]. Epistemic planning with simple event models leading to a decidable fragment was studied by [Kominis and Geffner, 2015]. Moreover, decidable epistemic planning in a fragment of standard epistemic logic in which states are only described by conjunctions of epistemic literals (i.e., formulas that do not contain any conjunction or disjunction) and in a variant of epistemic logic based on the notion of observability was studied, respectively, by [Muise et al., 2015] and [Cooper et al., 2016]. The standard formulation of epistemic planning in DEL is based on model checking [Bolander et al., 2015]. This requires a full description of the initial epistemic model which, as emphasized above, is not well-suited for implementation and practical applications. On the contrary, our approach only requires describing the agents' initial belief bases, leading to a compact representation of epistemic planning in which epistemic actions are private belief revision operations.

Outline. Section 2 presents a modal language for modeling both explicit and implicit beliefs of multiple agents. While an agent's explicit belief is a piece of information in the agent's belief base, we call implicit belief a statement that can be inferred from the agent's explicit beliefs. The language has special operators for representing the consequences of an agent's private belief base revision operation. Section 3 presents the semantics exploiting belief bases with respect to which our language is interpreted. Taking inspiration from the notion of screened revision [Makinson, 1997] and, more generally, from non-prioritized belief base revision [Hansson et al., 2001], we assume that an agent's belief base is split in two parts: the mutable part, whose elements may change under belief revision, and the core part, whose elements do not change. Sections 4 and 5 are devoted, respectively, to the formulation of model checking and epistemic planning in our setting and to the study of their complexities. In Section 6, we illustrate epistemic planning with the help of a concrete example.
2 Language

This section presents a language for representing agents' individual beliefs of both explicit and implicit type, as well as the consequences of a private belief base revision operation on an agent's beliefs. An agent's explicit beliefs are of two types: core beliefs and mutable beliefs. Core beliefs are stable pieces of information in an agent's belief base that do not change under belief revision. On the contrary, mutable beliefs are volatile: they may change over time if they turn out to be inconsistent with what the agent perceives.

Assume a countably infinite set of atomic propositions Atm = {p, q, ...} and a finite set of agents Agt = {1, ..., n}. We define the language in two steps. First, we define the language L0 by the following grammar:

  α ::= p | ¬α | α1 ∧ α2 | △^c_i α | △^m_i α,

where p ranges over Atm and i ranges over Agt. L0 is the language for representing explicit beliefs. The formula △^c_i α is read "agent i has the core belief that α", whereas △^m_i α is read "agent i has the mutable (or volatile) belief that α". We define the explicit belief operator △_i α, read "agent i explicitly believes that α", as follows:

  △_i α ≝ △^c_i α ∨ △^m_i α.

The language L extends L0 and is defined by:

  φ ::= α | ¬φ | φ1 ∧ φ2 | □^c_i φ | □_i φ | [*_i α]φ,

where α ranges over L0 and i ranges over Agt. The other Boolean constructions ⊤, ⊥, ∨, → and ↔ are defined from α, ¬ and ∧ in the standard way. The formula □^c_i φ is read "φ is derivable from agent i's core beliefs", while □_i φ is read "φ is derivable from agent i's explicit beliefs". For simplicity, □_i φ is also read "agent i implicitly believes that φ". We define the dual operators ◇^c_i and ◇_i as follows: ◇^c_i φ ≝ ¬□^c_i ¬φ, and ◇_i φ ≝ ¬□_i ¬φ. Formula ◇^c_i φ has to be read "φ is compatible (or consistent) with agent i's core beliefs", whereas ◇_i φ has to be read "φ is compatible (or consistent) with agent i's explicit beliefs". The construction [*_i α]φ is read "φ holds after agent i has privately revised her belief base with α".

3 Semantics

In this section, we present a formal semantics for the language L exploiting belief bases. It refines the belief base semantics from [Lorini, 2020] by splitting an agent's belief base in two parts: the core and the mutable belief base. Unlike the standard Kripke semantics, in which possible worlds and epistemic alternatives are primitive, here they are defined from the primitive concept of belief base.

Definition 1 (State). A state is a tuple B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V) where C_i ⊆ L0 is agent i's core belief base, M_i ⊆ L0 is agent i's mutable belief base and V ⊆ Atm is the actual environment. The set of all states is denoted by S.

The following definition specifies truth conditions for formulas in the sublanguage L0.

Definition 2 (Satisfaction relation). Let B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V) ∈ S. Then:

  B ⊨ p iff p ∈ V,
  B ⊨ ¬α iff B ⊭ α,
  B ⊨ α1 ∧ α2 iff B ⊨ α1 and B ⊨ α2,
  B ⊨ △^c_i α iff α ∈ C_i,
  B ⊨ △^m_i α iff α ∈ M_i.

Observe in particular the set-theoretic interpretation of the explicit belief operators in the previous definition: agent i has the core (resp. mutable) belief that α if and only if α is included in her core (resp. mutable) belief base.

Given β ∈ L0, we define S(β) = {B ∈ S : B ⊨ β}. S(β) is the set of states induced by the integrity constraint β. This constraint limits the set of states with respect to which formulas are evaluated. Conceptually, it captures the information in the agents' common ground, that is, the body of information that the agents commonly believe to be the case [Stalnaker, 2002].
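To make Definitions 1 and 2 concrete, here is a minimal executable sketch. The encoding is ours, not the paper's: an L0 formula is a nested tuple ('atom', p), ('not', a), ('and', a, b), ('core', i, a) for △^c_i, or ('mut', i, a) for △^m_i, and a state is a triple (core, mut, V) with two dicts from agents to sets of formulas plus the set of true atoms.

```python
def holds0(state, a):
    """Satisfaction B |= a for L0 formulas (Definition 2 of the paper)."""
    core, mut, V = state
    kind = a[0]
    if kind == 'atom':
        return a[1] in V
    if kind == 'not':
        return not holds0(state, a[1])
    if kind == 'and':
        return holds0(state, a[1]) and holds0(state, a[2])
    if kind == 'core':                  # set-theoretic reading: a in C_i
        return a[2] in core[a[1]]
    if kind == 'mut':                   # set-theoretic reading: a in M_i
        return a[2] in mut[a[1]]
    raise ValueError(f'unknown connective {kind}')

# agent 1 has core belief p and mutable belief not-q; p is true, q false
p, q = ('atom', 'p'), ('atom', 'q')
B = ({1: {p}}, {1: {('not', q)}}, {'p'})
print(holds0(B, ('core', 1, p)))             # True
print(holds0(B, ('and', p, ('not', q))))     # True
print(holds0(B, ('mut', 1, q)))              # False
```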
We are interested in a specific subclass of states in which agents' belief bases are consistent.

Definition 3 (Belief consistent state). Let B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V) ∈ S and β ∈ L0. The state B is said to be β-belief consistent (BC_β) if and only if ||B_i||_{S(β)} ≠ ∅ for every i ∈ Agt, where ||X||_{S(β)} = {B′ ∈ S(β) : ∀α ∈ X, B′ ⊨ α} for each X ⊆ L0 and B_i = C_i ∪ M_i. The set of all β-belief consistent states is denoted by S^{BC_β}.

The set ||X||_{S(β)} in the previous definition is called X's truth set under the integrity constraint β. Note that Definition 3 can be formulated in terms of propositional consistency. To see this, let LPROP be the propositional language built from the set of atomic formulas Atm⁺ = Atm ∪ {p^c_{i,α}, p^m_{i,α} : i ∈ Agt and α ∈ L0} and let tr be the following translation from L0 to LPROP:

  tr(p) = p,
  tr(¬α) = ¬tr(α),
  tr(α1 ∧ α2) = tr(α1) ∧ tr(α2),
  tr(△^c_i α) = p^c_{i,α},
  tr(△^m_i α) = p^m_{i,α}.

For each X ⊆ L0, we define tr(X) = {tr(α) : α ∈ X}. Moreover, we say that X is propositionally consistent if and only if ⊥ ∉ Cn(tr(X)), where Cn is the classical deductive closure operator over the propositional language LPROP. The following holds.

Proposition 1. Let X ⊆ L0 and β ∈ L0. Then, ||X||_{S(β)} ≠ ∅ if and only if X ∪ {β} is propositionally consistent.

Proof. Let W be the set of all valuations of the propositional language LPROP. There exists a bijection f : W → S such that f(w) ⊨ α iff w ⊨ tr(α), for all w ∈ W and α ∈ L0. Propositional consistency of X ∪ {β} means that we can find a valuation w ∈ W at which all formulas in tr(X ∪ {β}) hold. Thus, there exists B′ ∈ S(β) such that f(w) = B′ and B′ ⊨ α for all α ∈ X. The left-to-right direction can be proved in an analogous way.

By the previous proposition, the notion of belief consistent state can be defined in an equivalent way by replacing ||B_i||_{S(β)} ≠ ∅ with ⊥ ∉ Cn(tr(B_i ∪ {β})) in Definition 3.

We denote by MCS(α, X, Y, β) the set of maximally Y-consistent subsets of the set X of L0-formulas including α under the integrity constraint β. This definition will be used at a later stage to precisely define the interpretation of the belief revision operator.

Definition 4. Let X, Y ⊆ L0 and α ∈ L0. Then, X′ ∈ MCS(α, X, Y, β) if and only if: X′ ⊆ X ∪ {α}, α ∈ X′, ||X′ ∪ Y||_{S(β)} ≠ ∅, and there is no X″ with X′ ⊂ X″ ⊆ X ∪ {α} and ||X″ ∪ Y||_{S(β)} ≠ ∅.

By Proposition 1, the set MCS(α, X, Y, β) can be defined in an equivalent way by replacing ||X′ ∪ Y||_{S(β)} ≠ ∅ with ⊥ ∉ Cn(tr(X′ ∪ Y ∪ {β})) in the third item and ||X″ ∪ Y||_{S(β)} ≠ ∅ with ⊥ ∉ Cn(tr(X″ ∪ Y ∪ {β})) in the fourth item of Definition 4.

The following definition introduces the concept of epistemic alternative.

Definition 5 (Epistemic alternatives). Let i ∈ Agt. Then, R_i and R^c_i are the binary relations on the set of states S such that, for all B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V), B′ = ((C′_i)_{i∈Agt}, (M′_i)_{i∈Agt}, V′) ∈ S:

  B R^c_i B′ if and only if ∀α ∈ C_i : B′ ⊨ α,
  B R_i B′ if and only if ∀α ∈ B_i : B′ ⊨ α,

where we recall that B_i = C_i ∪ M_i is agent i's belief base. B R^c_i B′ means that B′ is an epistemic alternative relative to agent i's core beliefs at B, while B R_i B′ means that B′ is an epistemic alternative relative to agent i's whole belief base at B. The idea of the previous definitions is that an agent's epistemic alternatives are computed from the information in her belief base. Clearly, we have R_i ⊆ R^c_i, since the core belief base is contained in the whole belief base.
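Continuing the sketch above, here is an illustrative rendering of the translation tr behind Proposition 1, a brute-force propositional consistency test, and the MCS sets of Definition 4. Function names and the exhaustive enumeration are ours; the paper only needs that propositional consistency is decidable (it is in NP), so a SAT solver would replace the brute force in practice.

```python
from itertools import combinations, product

def tr(f):
    """Replace every explicit-belief subformula by a fresh atom."""
    kind = f[0]
    if kind == 'atom':
        return f
    if kind == 'not':
        return ('not', tr(f[1]))
    if kind == 'and':
        return ('and', tr(f[1]), tr(f[2]))
    if kind in ('core', 'mut'):         # tr(core_i a) = fresh atom p^c_{i,a}
        return ('atom', (kind, f[1], repr(f[2])))
    raise ValueError(kind)

def atoms_of(f):
    return {f[1]} if f[0] == 'atom' else set().union(*map(atoms_of, f[1:]))

def consistent(formulas):
    """Is tr(formulas) satisfiable? Brute force over the occurring atoms.
    After tr there are no belief operators, so holds0 never touches the
    (empty) belief bases of the dummy state."""
    fs = [tr(f) for f in formulas]
    atoms = sorted(set().union(set(), *map(atoms_of, fs)), key=repr)
    for bits in product([False, True], repeat=len(atoms)):
        V = {a for a, b in zip(atoms, bits) if b}
        if all(holds0(({}, {}, V), f) for f in fs):
            return True
    return False

def mcs(alpha, X, Y, beta):
    """MCS(alpha, X, Y, beta): maximal subsets of X + {alpha} containing
    alpha that are consistent together with Y and beta (Definition 4)."""
    rest = sorted(set(X) - {alpha}, key=repr)
    out = []
    for k in range(len(rest), -1, -1):              # largest candidates first
        for combo in combinations(rest, k):
            cand = set(combo) | {alpha}
            if consistent(list(cand) + list(Y) + [beta]) \
               and not any(cand < o for o in out):  # keep only maximal ones
                out.append(cand)
    return out

# maximal subsets of {p, not-q} + {q} containing q, consistent with core {p}:
top = ('not', ('and', p, ('not', p)))               # a tautology as beta
print(mcs(q, {p, ('not', q)}, {p}, top))            # [{p, q}]: not-q is out
```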
The following definition specifies the interpretation of formulas in L. They are interpreted with respect to pairs (B, β), with B ∈ S the actual state and β ∈ L0 the integrity constraint. (We omit the Boolean cases, defined in the usual way.)

Definition 6 (Satisfaction relation). Let B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V) ∈ S and β ∈ L0. Then:

  (B, β) ⊨ α iff B ⊨ α,
  (B, β) ⊨ □^c_i φ iff for all B′ ∈ S(β): if B R^c_i B′ then (B′, β) ⊨ φ,
  (B, β) ⊨ □_i φ iff for all B′ ∈ S(β): if B R_i B′ then (B′, β) ⊨ φ,
  (B, β) ⊨ [*_i α]φ iff (rev(B, i, α, β), β) ⊨ φ,

where B ⊨ α is defined as in Definition 2 and rev(B, i, α, β) = ((C′_j)_{j∈Agt}, (M′_j)_{j∈Agt}, V′) such that V′ = V, ∀j ∈ Agt : C′_j = C_j, and ∀j ∈ Agt:

  M′_j = M_j                              if i ≠ j or ||C_i ∪ {α}||_{S(β)} = ∅,
  M′_j = ⋂_{X ∈ MCS(α, M_i, C_i, β)} X    otherwise.

As the previous definition highlights, agent i implicitly believes that φ if and only if φ holds in all of agent i's epistemic alternatives that are compatible with the integrity constraint β. Since the latter are computed from agent i's belief base, the implicit belief operator □_i captures the information that is deducible from the information in agent i's belief base under the integrity constraint β. Similarly, the operator □^c_i captures the information that is derivable from agent i's core beliefs under the integrity constraint β.

The interpretation of the belief revision operator [*_i α] is based on state update. Specifically, the result of the private belief revision operation *_i α occurring in the state B under the integrity constraint β is the updated state rev(B, i, α, β) in which (i) the core belief bases of all agents and the mutable belief bases of the agents different from i are not modified, and (ii) the input α is added to agent i's mutable belief base only if it is consistent with agent i's core beliefs and the integrity constraint. If the latter is the case, then agent i's updated mutable belief base is equal to the intersection of the subsets of agent i's mutable belief base which are maximally consistent with respect to agent i's core belief base under the integrity constraint and which include the input α. This belief revision operation can be conceived as a generalization of the notion of screened revision [Makinson, 1997] to the multi-agent case. Note that instead of taking the intersection of all maximally consistent subsets (MCSs) of the initial belief base, we could have taken a selection of them, e.g., only MCSs whose cardinality is maximal. This parallels the distinction between full meet and partial meet revision in the belief revision literature [Alchourrón et al., 1985]. It would have had no impact on the complexity results presented below in Sections 4 and 5.
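Continuing the sketch, the update rev(B, i, α, β) of Definition 6 can be rendered as follows. Only agent i's mutable base changes, and only when α is consistent with i's core base and β (the screened-revision test); the new base is the full-meet intersection of the MCSs computed above. Names are ours.

```python
def rev(state, i, alpha, beta):
    """rev(B, i, alpha, beta) from Definition 6, on the toy encoding."""
    core, mut, V = state
    if not consistent(list(core[i]) + [alpha, beta]):
        return state                         # input screened out: no change
    new_mut = dict(mut)
    # full meet: intersect all maximal alpha-containing consistent subsets
    new_mut[i] = set.intersection(*mcs(alpha, mut[i], core[i], beta))
    return (core, new_mut, V)

# agent 1 revises by q: not-q is retracted, q adopted, p kept
B1 = ({1: {p}}, {1: {p, ('not', q)}}, {'p'})
print(rev(B1, 1, q, top)[1][1])              # {p, q}
# revising by not-p instead is screened out by the core belief p
print(rev(B1, 1, ('not', p), top)[1][1])     # {p, not-q}, unchanged
```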
As the following proposition indicates, the belief revision operation *_i α is well-defined, as it preserves belief consistency.

Proposition 2. Let i ∈ Agt, α, β ∈ L0 and B ∈ S^{BC_β}. Then, rev(B, i, α, β) ∈ S^{BC_β}.

We conclude this section by defining the notion of β-validity (viz. validity under the integrity constraint β).

Definition 7 (β-Validity). Let φ ∈ L and β ∈ L0. We say that φ is β-valid, noted ⊨_β φ, if and only if, for every B ∈ S^{BC_β}, we have (B, β) ⊨ φ. We say that φ is β-satisfiable if and only if ¬φ is not β-valid.

Some interesting validities are listed in the following two propositions.

Proposition 3. Let i, j ∈ Agt and β ∈ L0. Then,

  if ⊨_β φ then ⊨_β Θφ, with Θ ∈ {□^c_i, □_i, [*_i α]},   (1)
  ⊨_β □^c_i φ → □_i φ,   (2)
  ⊨_β ◇_i ⊤,   (3)
  ⊨_β [*_j α] ◇_i ⊤.   (4)

As item (1) indicates, the implicit belief and belief revision operators are closed under necessitation. According to validity (2), if a formula is derivable from an agent's core beliefs, then it is also derivable from the agent's whole belief base. According to validity (3), an agent's belief base must be consistent. This is guaranteed by the fact that the notion of validity of Definition 7 is defined relative to β-belief consistent states. Validity (4) is a consequence of Proposition 2: an agent's belief base remains consistent after the occurrence of a private belief revision operation.

Proposition 4. Let i, j ∈ Agt, x ∈ {c, m} and β ∈ L0. Then,

  ⊨_β ◇^c_i α ↔ [*_i α] △^m_i α,   (5)
  ⊨_β △^c_i α′ ↔ [*_i α] △^c_i α′,   (6)
  ⊨_β △^x_j α′ ↔ [*_i α] △^x_j α′ if i ≠ j,   (7)
  ⊨_β (△^m_i α′ ∧ ◇_i α) → [*_i α] △^m_i α′.   (8)

Validity (5) expresses a form of success postulate: α is consistent with an agent's core belief base if and only if, after revising her beliefs with α, α is included in the agent's mutable belief base. Validities (6) and (7) just mean that core beliefs do not change as a consequence of belief revision and, moreover, since a belief revision operation is private, that it only modifies the mutable beliefs of the agent who performs it. Finally, according to validity (8), if the input of belief revision is already consistent with an agent's belief base, then no mutable belief is removed from the agent's belief base. This means that the agent merely expands her belief base without contracting it.

4 Model Checking

The following is a compact formulation of the model checking problem for the language L. In the formulation of model checking we have to suppose B ∈ S^{BC_β}. This guarantees that agents' beliefs are consistent in the actual state.

Model checking with integrity constraint
Given: φ ∈ L, β ∈ L0 and a finite B ∈ S^{BC_β}.
Question: Do we have (B, β) ⊨ φ?

where the state B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V) is said to be finite if V, every C_i and every M_i are finite.

Following [Lorini, 2019], we can prove the following lower bound for our model checking problem.

Theorem 1. Model checking with integrity constraint is PSPACE-hard.

In the rest of this section, we show that the model checking problem is in PSPACE. Algorithm 1 sums up the procedure for model checking a formula φ in a given finite state B ∈ S^{BC_β} (we check that B ∈ S^{BC_β} with propositional consistency checking; if B ∉ S^{BC_β} then the input is not correct).

Algorithm 1 Generic algorithm for model checking.
procedure mc(B, φ)
  case p: return B ⊨ p
  case △^c_i α: return α ∈ C_i
  case △^m_i α: return α ∈ M_i
  case ¬ψ: return not mc(B, ψ)
  case ψ1 ∧ ψ2: return mc(B, ψ1) and mc(B, ψ2)
  case □^c_i ψ:
    for all B′ ∈ S(β) such that B R^c_i B′ do
      if not mc(B′, ψ) return false
    return true
  case □_i ψ:
    for all B′ ∈ S(β) such that B R_i B′ do
      if not mc(B′, ψ) return false
    return true
  case [*_i α]ψ: return mc(rev(B, i, α, β), ψ)

The first three cases are base cases. The next two cases handle the Boolean connectives. For the □^c_i ψ and □_i ψ cases, we enumerate all possible states B′ built from subformulas of the formulas in the initial B and in the initial formula φ. Remark that the current B′ only takes a polynomial amount of space. Checking B R^c_i B′ or B R_i B′ can be performed in polynomial time. Finally, the last case is the revision case. It remains to show that ⋂_{X ∈ MCS(α, M_i, C_i, β)} X can be computed in polynomial space. To this end, we consider all formulas and, for each of them, check that it is in all subsets in MCS(α, M_i, C_i, β). To this aim, we explain how MCS(α, M_i, C_i, β) can be enumerated in polynomial space. We enumerate all X′ ⊆ X ∪ {α} containing α. Then, we check that X′ ∪ Y ∪ {β} is propositionally consistent. We also check that any superset X″ of X′ containing just one extra formula is such that X″ ∪ Y ∪ {β} is inconsistent. As propositional consistency is in NP, the overall computation is in polynomial space. As the depth of nested calls in mc(B, φ) is bounded by the size of φ, we obtain PSPACE membership.

Theorem 2. Model checking with integrity constraint is in PSPACE.
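To illustrate the recursion of Algorithm 1, here is a toy version continuing the sketches above. Instead of enumerating all states built from subformulas, as the PSPACE argument does, it takes a finite candidate state space `states` explicitly; L formulas extend our L0 encoding with ('box_c', i, ψ), ('box', i, ψ) and ('rev', i, α, ψ). All names are ours.

```python
def alt(other, base):
    """B R B' iff B' satisfies every formula of the base (Definition 5)."""
    return all(holds0(other, a) for a in base)

def mc(state, phi, states, beta):
    kind = phi[0]
    if kind in ('atom', 'core', 'mut'):
        return holds0(state, phi)
    if kind == 'not':
        return not mc(state, phi[1], states, beta)
    if kind == 'and':
        return mc(state, phi[1], states, beta) and \
               mc(state, phi[2], states, beta)
    if kind == 'box_c':                  # box^c_i psi: quantify over R^c_i
        i, psi = phi[1], phi[2]
        return all(mc(s, psi, states, beta) for s in states
                   if holds0(s, beta) and alt(s, state[0][i]))
    if kind == 'box':                    # box_i psi: whole base C_i + M_i
        i, psi = phi[1], phi[2]
        base = set(state[0][i]) | set(state[1][i])
        return all(mc(s, psi, states, beta) for s in states
                   if holds0(s, beta) and alt(s, base))
    if kind == 'rev':                    # [*_i alpha] psi: update, then check
        i, a, psi = phi[1], phi[2], phi[3]
        return mc(rev(state, i, a, beta), psi, states, beta)
    raise ValueError(kind)

# candidate states sharing agent 1's bases with B1, varying the environment
S = [({1: {p}}, {1: {p, ('not', q)}}, V) for V in [set(), {'p'}, {'p', 'q'}]]
print(mc(B1, ('box', 1, ('not', q)), S, top))    # True: not-q is in the base
print(mc(B1, ('box_c', 1, ('not', q)), S, top))  # False: core alone allows q
```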
5 Epistemic Planning

In this section, we first consider one of the simplest epistemic planning problems, where actions are revision operators equipped with preconditions: pairs (φ, [*_i α]) where φ ∈ L is a precondition, i is an agent and α ∈ L0. The reader may easily imagine natural extensions in which actions could also have ontic effects. To convey the main idea of our approach while keeping the presentation as simple as possible, we stick to the following version of epistemic planning. A plan is a linear sequence of elements (ψ, [*_i α]) with ψ ∈ L, i ∈ Agt and α ∈ L0. We define the formula ⟨π⟩φ by induction: ⟨ε⟩φ = φ, and ⟨(ψ, [*_i α])π⟩φ = ψ ∧ [*_i α]⟨π⟩φ.

Epistemic planning
Given: φ ∈ L, β ∈ L0, a finite B ∈ S^{BC_β}, and a finite subset A of elements (ψ, [*_i α]) with ψ ∈ L, i ∈ Agt and α ∈ L0.
Question: Does there exist a finite sequence π of elements of A such that (B, β) ⊨ ⟨π⟩φ?

Given how our framework is set up, epistemic planning is not more difficult than classical planning (STRIPS, for instance).

Theorem 3. Epistemic planning is in PSPACE.

Proof. Consider the following graph. Nodes are states. Edges are transitions from a state B′ to rev(B′, i, α, β) for some action (ψ, [*_i α]) in A such that ψ holds in B′. That graph is of exponential size, since we restrict formulas in bases to be subformulas of φ, β, subformulas of formulas appearing in B and formulas occurring in the repertoire A. Solving an epistemic planning instance reduces to reachability in that graph from the initial state B to a final state satisfying φ. We solve that reachability problem as follows. We store the current state, starting from B. At each step, we non-deterministically choose the next action (ψ, [*_i α]) to execute. We check that ψ holds in the current state and compute the next state. The algorithm accepts when the goal φ holds in the current state. As model checking is in PSPACE, our algorithm runs in nondeterministic polynomial space. We conclude by Savitch's theorem [Savitch, 1970].

Our actions modify the current state and thereby modify possible worlds implicitly (because possible worlds are computed from the current state). On the contrary, in epistemic planning based on Dynamic Epistemic Logic (DEL) [Bolander et al., 2020], actions explicitly modify possible worlds. DEL allows for very complex actions, and the DEL planning problem is undecidable. Our framework has the advantage of being very close to classical planning.
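The reachability argument of Theorem 3 can be sketched as follows, continuing the toy code above, with a deterministic breadth-first search in place of the proof's nondeterministic choice (so this explores the exponential graph explicitly instead of running in polynomial space). An action is rendered as a triple (ψ, i, α); `freeze` is our helper for hashing states.

```python
from collections import deque

def freeze(state):
    core, mut, V = state
    return (tuple(sorted((i, frozenset(b)) for i, b in core.items())),
            tuple(sorted((i, frozenset(b)) for i, b in mut.items())),
            frozenset(V))

def plan(start, goal, actions, states, beta):
    seen, queue = {freeze(start)}, deque([(start, [])])
    while queue:
        s, pi = queue.popleft()
        if mc(s, goal, states, beta):
            return pi                       # plan found
        for (psi, i, alpha) in actions:
            if not mc(s, psi, states, beta):
                continue                    # precondition fails here
            t = rev(s, i, alpha, beta)
            if freeze(t) not in seen:
                seen.add(freeze(t))
                queue.append((t, pi + [(psi, i, alpha)]))
    return None                             # no plan exists

# a one-step plan making agent 1 explicitly believe q: tell 1 that q, if p
acts = [(('atom', 'p'), 1, q)]
print(plan(B1, ('mut', 1, q), acts, [B1], top))
```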
We conclude this section by considering the following variant of epistemic planning, called uniform epistemic planning, in which we require an agent i to know that the plan is successful. In particular, uniform epistemic planning consists in checking whether agent i knows how to achieve a certain goal φ through the execution of a plan.

Uniform epistemic planning
Given: φ ∈ L, β ∈ L0, a finite B ∈ S^{BC_β}, an agent i ∈ Agt, and a finite subset A of elements (ψ, [*_i α]) with ψ ∈ L, i ∈ Agt and α ∈ L0.
Question: Does there exist a finite sequence π of elements of A such that (B, β) ⊨ □_i ⟨π⟩φ?

Theorem 4. Uniform epistemic planning is in EXPSPACE.

Proof. We now consider the graph whose nodes are information sets (sets of states) for agent i. There is an edge from I to I′ if there is an action (ψ, [*_i α]) in A such that ψ holds in all states of I and I′ is the set of states rev(B′, i, α, β) with B′ ∈ I. Now, we ask for a path in that graph from the set of possible states for agent i in B to a set of states all satisfying φ. As the graph is of doubly exponential size, uniform epistemic planning is in NEXPSPACE = EXPSPACE.

6 Example

We illustrate the planning problem defined in the previous section with the help of a concrete example. Let Agt = {i_A, i_B, i_C, i_M}. The four agents Ann (i_A), Bob (i_B), Charles (i_C) and Mary (i_M) are mobile robots meeting at a roundabout. Each robot has to decide at which moment it will pass. In order to avoid a collision, the robots have to pass sequentially, one after the other. We assume an external planning agent is connected to each robot in the system. Its aim is to assign to each robot a priority level from the set of priority levels Lev = {1, ..., 4} in such a way that a collision is avoided. Each priority level specifies a time in the sequence at which a robot has to pass.

We assume that the set Atm includes three types of atomic formulas pr_{i,k}, pa_{i,k} and ought_{i,k}, for every i ∈ Agt and k ∈ Lev. They have to be read, respectively, "priority level k is assigned to robot i", "robot i will pass at time k" and "robot i ought to pass at time k". Following reductionist theories of intention [Sellars, 1968], according to which an intention is reducible to a first-person normative judgment about what one shall/ought to do, we introduce the following abbreviation:

  intend_{i,k} ≝ □_i ought_{i,k},

where intend_{i,k} is read "agent i intends to pass at time k".

We assume the following three pieces of information are part of the robots' common ground, that is, they constitute the integrity constraint of the planning problem under consideration:

  α1 = ⋀_{i∈Agt} ⋀_{k∈Lev} (pa_{i,k} → △_i pr_{i,k}),
  α2 = ⋀_{i∈Agt} ⋀_{k,k′∈Lev: k≠k′} ¬(pr_{i,k} ∧ pr_{i,k′}),
  α3 = ⋀_{i∈Agt} ⋀_{k,k′∈Lev: k≠k′} ¬(△_i pr_{i,k} ∧ △_i pr_{i,k′}).

According to α1, a robot will pass at a certain time k only if it explicitly believes that it has priority level k. According to α2, two different priority levels cannot be assigned to the same robot. According to α3, a robot cannot believe that two different priority levels were assigned to it. Constraints α2 and α3 presuppose that the external planning agent can assign at most one priority level to each robot.

We suppose each robot i has the following piece of information α_i in its core belief base:

  α_i = ⋀_{k∈Lev} ((pr_{i,k} ∧ ⋀_{j≠i} ¬pa_{j,k}) → ought_{i,k}).

Formula α_i specifies what a robot should do depending on its priority level and the other robots' priority levels. Specifically, if k is the only priority level which was assigned to robot i and no other robot has the same priority level, then robot i ought to pass at time k.

To conclude, we suppose that each robot egocentrically believes that it has priority level 1, thereby being authorized to pass before the others. Specifically, for every robot i we assume pr_{i,1} is the only piece of information in its mutable belief base. Moreover, the priority levels have been assigned in the following way, but have not yet been communicated to the robots: Mary should pass first, followed by Bob, then Ann, and Charles last. To sum up, we define the actual state B = ((C_i)_{i∈Agt}, (M_i)_{i∈Agt}, V) as follows: (i) for every i ∈ Agt, C_i = {α_i} and M_i = {pr_{i,1}}, and (ii) V = {pr_{i_M,1}, pr_{i_B,2}, pr_{i_A,3}, pr_{i_C,4}, ought_{i_M,1}, ought_{i_B,2}, ought_{i_A,3}, ought_{i_C,4}}. The definition of V presupposes that in the initial situation the robots have not yet decided at which time to pass, since the priority levels have not been communicated to them.
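As an illustration, the integrity constraint β = α1 ∧ α2 ∧ α3 and the initial environment of the example can be built programmatically in the encoding used above. The helpers `conj`, `implies`, `pr`, `pa` and `tri` are ours, the robots are named by their initials, and the α_i core formulas are omitted for brevity; note that the brute-force consistency checker sketched earlier would not scale to a constraint of this size, so a SAT solver would be needed in practice.

```python
from functools import reduce

agents, levels = ['A', 'B', 'C', 'M'], [1, 2, 3, 4]

def conj(fs):
    return reduce(lambda a, b: ('and', a, b), fs)

def implies(a, b):
    return ('not', ('and', a, ('not', b)))   # a -> b as not(a and not b)

def pr(i, k): return ('atom', f'pr_{i}_{k}')
def pa(i, k): return ('atom', f'pa_{i}_{k}')

def tri(i, a):                               # explicit belief: core or mutable
    return ('not', ('and', ('not', ('core', i, a)),
                    ('not', ('mut', i, a))))

alpha1 = conj([implies(pa(i, k), tri(i, pr(i, k)))
               for i in agents for k in levels])
alpha2 = conj([('not', ('and', pr(i, k), pr(i, kk)))
               for i in agents for k in levels for kk in levels if k != kk])
alpha3 = conj([('not', ('and', tri(i, pr(i, k)), tri(i, pr(i, kk))))
               for i in agents for k in levels for kk in levels if k != kk])
beta = conj([alpha1, alpha2, alpha3])

# initial mutable bases and environment: everyone believes it has priority 1,
# while the actual assignment is M:1, B:2, A:3, C:4
M0 = {i: {pr(i, 1)} for i in agents}
V0 = {f'pr_{i}_{k}' for i, k in zip(['M', 'B', 'A', 'C'], levels)} \
   | {f'ought_{i}_{k}' for i, k in zip(['M', 'B', 'A', 'C'], levels)}
```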
The goal φ_G is to ensure that (i) each robot forms an intention to pass at a certain time, and (ii) no two different robots have the same intention (i.e., the intention to pass at the same time). The latter condition is necessary to prevent the robots from colliding. Formally:

  φ_G = ⋀_{i∈Agt} ⋁_{k∈Lev} intend_{i,k} ∧ ⋀_{i,j∈Agt: i≠j} ⋀_{k∈Lev} ¬(intend_{i,k} ∧ intend_{j,k}).

In order to achieve this goal, the external planning agent can (i) inform a robot that a certain priority level is assigned to it, and (ii) inform a robot that another robot has been informed that a certain priority level is assigned to it. We assume that the informative actions of the planning agent are truthful, i.e., a robot can be informed that a certain fact is true only if the fact is indeed true. Thus, the repertoire of actions of the planning agent is defined as follows:

  A = {(pr_{i,k}, [*_i pr_{i,k}]) : i ∈ Agt and k ∈ Lev} ∪ {(△_i pr_{i,k}, [*_j △_i pr_{i,k}]) : i, j ∈ Agt, i ≠ j and k ∈ Lev}.

It is a routine exercise to verify that the following plan

  π = (pr_{i_M,1}, [*_{i_M} pr_{i_M,1}]) (pr_{i_B,2}, [*_{i_B} pr_{i_B,2}]) (pr_{i_A,3}, [*_{i_A} pr_{i_A,3}]) (pr_{i_C,4}, [*_{i_C} pr_{i_C,4}])
      (△_{i_B} pr_{i_B,2}, [*_{i_M} △_{i_B} pr_{i_B,2}]) (△_{i_A} pr_{i_A,3}, [*_{i_M} △_{i_A} pr_{i_A,3}]) (△_{i_C} pr_{i_C,4}, [*_{i_M} △_{i_C} pr_{i_C,4}])
      (△_{i_M} pr_{i_M,1}, [*_{i_B} △_{i_M} pr_{i_M,1}]) (△_{i_A} pr_{i_A,3}, [*_{i_B} △_{i_A} pr_{i_A,3}]) (△_{i_C} pr_{i_C,4}, [*_{i_B} △_{i_C} pr_{i_C,4}])
      (△_{i_M} pr_{i_M,1}, [*_{i_A} △_{i_M} pr_{i_M,1}]) (△_{i_B} pr_{i_B,2}, [*_{i_A} △_{i_B} pr_{i_B,2}]) (△_{i_C} pr_{i_C,4}, [*_{i_A} △_{i_C} pr_{i_C,4}])
      (△_{i_M} pr_{i_M,1}, [*_{i_C} △_{i_M} pr_{i_M,1}]) (△_{i_B} pr_{i_B,2}, [*_{i_C} △_{i_B} pr_{i_B,2}]) (△_{i_A} pr_{i_A,3}, [*_{i_C} △_{i_A} pr_{i_A,3}])

is a solution of our epistemic planning problem. Indeed, we have (B, α1 ∧ α2 ∧ α3) ⊨ ⟨π⟩φ_G. The plan simply consists in sequentially informing the robots about their priority levels and then sequentially informing each robot that the others have been informed about their priority levels.

7 Conclusion

We have proposed a multi-agent framework that encompasses explicit/implicit beliefs and private belief revision. Our syntactic approach has strong advantages compared to the traditional semantic approaches based on Kripke structures. First, whereas Kripke structures are typically exponential in the number of atomic propositions, our syntactic representation with belief bases scales up in general. Secondly, the syntactic approach enables us to define revision operators elegantly, compared to existing approaches which use cumbersome plausibility orderings or plausibility measures on possible worlds [van Benthem, 2007; Aucher, 2005; van Ditmarsch, 2005; Baltag and Smets, 2008]. Thirdly, the distinction between explicit and implicit beliefs comes for free in our approach. Fourthly, our model checking problem is in PSPACE, which gives hope of reasonably efficient implementations of both model checking and epistemic planning, whereas epistemic planning in DEL is undecidable.

Perspectives for future work are manifold. Our framework can easily be adapted to capture introspective agents who have perfect knowledge of their own core and mutable belief bases. To this aim, Definition 5 of epistemic alternatives should be tuned as follows:

  B R^c_i B′ iff C_i = C′_i, M_i = M′_i and ∀α ∈ C_i : B′ ⊨ α,
  B R_i B′ iff C_i = C′_i, M_i = M′_i and ∀α ∈ B_i : B′ ⊨ α.

The binary relations R^c_i and R_i so defined are transitive and Euclidean. Interestingly, unlike DEL, in which introspection over beliefs can be lost due to private belief change, in our approach it is necessarily preserved. Indeed, the agents' epistemic accessibility relations are recomputed from belief bases after the occurrence of a private belief revision operation.
For this introspective variant, model checking and epistemic planning remain in PSPACE. Future work will be devoted to an in-depth analysis of the logical properties of belief base revision in the case of introspective agents. Another direction of future work is to move beyond private belief base revision and to explore semi-private forms (e.g., an agent i sees that α while another agent j sees that i sees that α, but i is not aware of this). Last but not least, we plan to compare the expressivity of existing DEL-based approaches to belief revision with the expressivity of our belief base approach.

Acknowledgments

This work is partially supported by the EU ICT-48 2020 project TAILOR (No. 952215) and the ANR project CoPains ("Cognitive Planning in Persuasive Multimodal Communication"). Support from the ANR-3IA Artificial and Natural Intelligence Toulouse Institute is also acknowledged.

References

[Alchourrón et al., 1985] Carlos E. Alchourrón, Peter Gärdenfors, and David Makinson. On the logic of theory change: Partial meet contraction and revision functions. The Journal of Symbolic Logic, 50:510–530, 1985.

[Aucher and Bolander, 2013] Guillaume Aucher and Thomas Bolander. Undecidability in epistemic planning. In IJCAI 2013, pages 27–33. AAAI Press, 2013.

[Aucher, 2005] Guillaume Aucher. A combined system for update logic and belief revision. In Proceedings of PRIMA 2004, volume 3371 of LNAI, pages 1–18. Springer-Verlag, 2005.

[Baltag and Smets, 2008] Alexandru Baltag and Sonja Smets. A qualitative theory of dynamic interactive belief revision. In Proceedings of LOFT 7, volume 3 of Texts in Logic and Games, pages 13–60. Amsterdam University Press, 2008.

[Benferhat et al., 2002] Salem Benferhat, Didier Dubois, Henri Prade, and Mary-Anne Williams. A practical approach to revising prioritized knowledge bases. Studia Logica, 70(1):105–130, 2002.

[Bolander and Andersen, 2011] Thomas Bolander and Mikkel B. Andersen. Epistemic planning for single- and multi-agent systems. Journal of Applied Non-Classical Logics, 21(1):9–34, 2011.

[Bolander et al., 2015] Thomas Bolander, Martin Holm Jensen, and François Schwarzentruber. Complexity results in epistemic planning. In IJCAI 2015, pages 2791–2797. AAAI Press, 2015.

[Bolander et al., 2020] Thomas Bolander, Tristan Charrier, Sophie Pinchinat, and François Schwarzentruber. DEL-based epistemic planning: Decidability and complexity. Artificial Intelligence, 287, 2020.

[Cooper et al., 2016] Martin C. Cooper, Andreas Herzig, Faustine Maffre, Frédéric Maris, and Pierre Régnier. A simple account of multi-agent epistemic planning. In Proceedings of the 22nd European Conference on Artificial Intelligence (ECAI 2016), pages 193–201, 2016.

[Dubois and Prade, 1988] Didier Dubois and Henri Prade. Possibility theory: an approach to computerized processing of uncertainty. Plenum Press, 1988.

[Fagin et al., 1995] R. Fagin, J. Halpern, Y. Moses, and M. Vardi. Reasoning about Knowledge. MIT Press, Cambridge, 1995.

[Hansson et al., 2001] Sven O. Hansson, Eduardo Fermé, John Cantwell, and Marcelo Falappa. Credibility limited revision. Journal of Symbolic Logic, 66:1581–1596, 2001.

[Hansson, 1993] Sven O. Hansson. Theory contraction and base contraction unified. Journal of Symbolic Logic, 58(2):602–625, 1993.

[Hansson, 1999] Sven O. Hansson. A survey of non-prioritized belief revision. Erkenntnis, 50:413–427, 1999.
[Kominis and Geffner, 2015] Filippos Kominis and Hector Geffner. Beliefs in multiagent planning: from one agent to many. In ICAPS 2015, pages 147–155. AAAI Press, 2015.

[Lê Cong et al., 2018] Sébastien Lê Cong, Sophie Pinchinat, and François Schwarzentruber. Small undecidable problems in epistemic planning. In IJCAI 2018, pages 4780–4786, 2018.

[Lorini, 2018] Emiliano Lorini. In praise of belief bases: Doing epistemic logic without possible worlds. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 1915–1922. AAAI Press, 2018.

[Lorini, 2019] Emiliano Lorini. Exploiting belief bases for building rich epistemic structures. In L. S. Moss, editor, Proceedings of the Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2019), volume 297 of EPTCS, pages 332–353, 2019.

[Lorini, 2020] Emiliano Lorini. Rethinking epistemic logic with belief bases. Artificial Intelligence, 282, 2020.

[Löwe et al., 2011] Benedikt Löwe, Eric Pacuit, and Andreas Witzel. DEL planning and some tractable cases. In Proceedings of the 3rd International Workshop on Logic, Rationality and Interaction (LORI 2011), pages 179–192. Springer Berlin Heidelberg, 2011.

[Makinson, 1997] David Makinson. Screened revision. Theoria, 63:14–23, 1997.

[Muise et al., 2015] Christian Muise, Vaishak Belle, Paolo Felli, Sheila A. McIlraith, Tim Miller, Adrian R. Pearce, and Liz Sonenberg. Planning over multi-agent epistemic states: A classical planning approach. In AAAI 2015, pages 3327–3334. AAAI Press, 2015.

[Savitch, 1970] Walter J. Savitch. Relationships between nondeterministic and deterministic tape complexities. Journal of Computer and System Sciences, 4(2):177–192, 1970.

[Sellars, 1968] Wilfrid Sellars. Science and Metaphysics: Variations on Kantian Themes. Routledge & Kegan Paul Ltd, London, 1968.

[Spohn, 1988] Wolfgang Spohn. Ordinal conditional functions: a dynamic theory of epistemic states. In W. L. Harper and B. Skyrms, editors, Causation in Decision, Belief Change and Statistics, pages 105–134. Kluwer, 1988.

[Stalnaker, 2002] Robert Stalnaker. Common ground. Linguistics and Philosophy, 25(5/6):701–721, 2002.

[van Benthem, 2007] Johan van Benthem. Dynamic logic for belief revision. Journal of Applied Non-Classical Logics, 17(2):129–155, 2007.

[van Ditmarsch et al., 2007] Hans P. van Ditmarsch, Wiebe van der Hoek, and Barteld Kooi. Dynamic Epistemic Logic. Kluwer Academic Publishers, 2007.

[van Ditmarsch, 2005] Hans P. van Ditmarsch. Prolegomena to dynamic logic for belief revision. Synthese, 147(2):229–275, 2005.