# The Logic of Doxastic Strategies

Junli Jiang,¹ Pavel Naumov²
¹ Institute of Logic and Intelligence, Southwest University, China
² University of Southampton, United Kingdom
walk08@swu.edu.cn, p.naumov@soton.ac.uk

**Abstract.** In many real-world situations, there is often not enough information to know that a certain strategy will succeed in achieving the goal, but there is a good reason to believe that it will. The paper introduces the term *doxastic* for such strategies. The main technical contribution is a sound and complete logical system that describes the interplay between the doxastic strategy and belief modalities.

## 1 Introduction

During the 1991 Gulf War, American troops successfully used surface-to-air Patriot missiles to intercept Iraqi-modified Scud tactical ballistic missiles originally designed by the Soviet Union in the 1950s. In fact, on 15 February 1991, President George H. W. Bush travelled to the Patriot manufacturing plant in Andover, Massachusetts to praise the missile's capability:

> You see, what has taken place here is a triumph of American technology ... we are witnessing a revolution in modern warfare, a revolution that will shape the way that we defend ourselves for decades to come. For years, we've heard that antimissile defenses won't work, that shooting down a ballistic missile is impossible, like trying to hit a bullet with a bullet. Some people called it impossible; you called it your job. They were wrong, and you were right ... Patriot is 41 for 42: 42 Scuds engaged, 41 intercepted. (Bush 1991)

Just 10 days after this speech, an American radar detected an Iraqi Scud approaching Dhahran, Saudi Arabia.

Traditionally, knowledge is modelled through an indistinguishability relation that captures an agent's own observations. However, modern autonomous agents often obtain their knowledge from the data that they receive from other systems. This was the case on 25 February 1991. By analysing the position, $x$, and the velocity, $v$, of the Scud, the Patriot concluded that intercepting the Scud was within its capabilities. We write this as

$$\mathsf{K}_{\{x,v\}}(\text{``The Patriot has a strategy to destroy the Scud''}) \tag{1}$$

and say that the dataset $\{x, v\}$ *informs the knowledge* of the statement "The Patriot has a strategy to destroy the Scud". In general, we write $\mathsf{K}_X\varphi$ if knowing the values of all variables in set $X$ is enough to conclude that statement $\varphi$ is true. We refer to sets of data variables, like $X$, as *datasets*. Modality $\mathsf{K}_X$ for a set $X$ of Boolean variables was introduced by Grossi, Lorini, and Schwarzentruber (2015). For arbitrary datasets, it was proposed by Baltag and van Benthem (2021). The name "data-informed knowledge" was suggested by us in (Jiang and Naumov 2022a). We also proposed a dynamic logic for this modality (Deuser et al. 2024).

Of course, for the Patriot to know *that* it has a strategy is not the same as to know *what* the strategy is. The distinction between knowing that a strategy exists and knowing what the strategy is, in the case of traditional (indistinguishability-relation-based) knowledge, has been studied in various logics of know-how strategies (Ågotnes and Alechina 2019; Naumov and Tao 2017; Fervari et al. 2017; Naumov and Tao 2018c,b,a; Fervari, Velázquez-Quesada, and Wang 2021; Li and Wang 2021).
In our data-informed setting, in order for the Patriot to know the strategy (the direction in which to launch a surface-to-air missile), it must also know the exact time, $t$, since the position, $x$, and the velocity, $v$, of the Scud were measured. We write this as

$$[\text{Patriot}]_{\{x,v,t\}}(\text{``Scud is destroyed''}) \tag{2}$$

and say that the dataset $\{x, v, t\}$ *informs the strategy* of the Patriot to achieve the condition "Scud is destroyed". In general, we write $[C]_X\varphi$ if knowing the values of all variables in dataset $X$ is enough to know a strategy for coalition $C$ to achieve condition $\varphi$. We introduced this modality in (Jiang and Naumov 2022a).

The values in the dataset $\{x, v, t\}$, obtained from the radar, gave the Patriot enough information to compute the strategy (the direction of the missile launch) that was guaranteed to destroy the Scud. Upon computing the strategy, the Patriot sent it to the launcher... Seconds later, the Scud hit the barracks of the 14th Quartermaster Detachment of the US Army's 99th Infantry Division.

To understand what went wrong that day in the skies above Dhahran, Saudi Arabia, one needs to acknowledge the fact that data does not always correctly reflect the state of affairs of the real world. Even if the Patriot knows the true values of $x$ and $v$ as measured by the radar, these values might not reflect the correct position and velocity of the Scud due to, for example, a failure in the radar operation. In the case of our statement (1), it is perhaps more sensible to say that by analysing the radar-provided values of $x$ and $v$ *and trusting those values*, the Patriot formed a belief that intercepting the Scud is within its capabilities:

$$\mathsf{B}^{\{x,v\}}_{\{x,v\}}(\text{``The Patriot has a strategy to destroy the Scud''}). \tag{3}$$

In general, we write $\mathsf{B}^T_X\varphi$ if, under the assumption of trust in dataset $T$, the values of the variables in dataset $X$ inform belief $\varphi$. We proposed this trust-based belief modality in (Jiang and Naumov 2022b). As it turns out, the statement $\mathsf{B}^\varnothing_X\varphi$ is equivalent to the statement $\mathsf{K}_X\varphi$. In other words, any belief that is not based on trust is knowledge. Any such belief, of course, is true. In general, just like other types of beliefs, trust-based beliefs are not always true. They might be false if they are based on trust in non-trustworthy data. This, however, was not the case on 25 February 1991. The position and the velocity of the Scud communicated by the radar correctly reflected the physical characteristics of the approaching ballistic missile. Thus, the Patriot indeed had the capability to intercept the Scud.

As we have just observed, when trust in data is taken into account, the knowledge statement (1) becomes the belief statement (3). Of course, a similar adjustment should be made to statement (2). The Patriot did not actually know the strategy to destroy the Scud; it only had a belief that a particular strategy would work. This belief was based on trust in the dataset $\{x, v, t\}$. We write this as:

$$[\text{Patriot}]^{\{x,v,t\}}_{\{x,v,t\}}(\text{``Scud is destroyed''}). \tag{4}$$

In general, we write $[C]^T_X\varphi$ if, under the assumption of trust in dataset $T$, dataset $X$ informs a belief about some strategy of coalition $C$ that it will achieve condition $\varphi$. We call such strategies *doxastic* (related to beliefs). Doxastic strategies are guaranteed to work if dataset $T$ is not only trusted but also trustworthy.

The Patriot system clock stored the time (since boot) measured in tenths of a second. The actual time was computed by multiplying the stored value by 0.1 using a 24-bit fixed-point register. This introduced an error of about 0.000000095 seconds per tick of the clock. By the time the Scud appeared over Dhahran, the system had been running for over 100 hours, creating an accumulated time difference between the radar's and the launcher's clocks of about 0.34 seconds. A Scud travels over 500 meters in this time (Mignotte 2010).
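These drift figures can be reproduced with a few lines of arithmetic. The sketch below is our illustration, not from the paper; it assumes the commonly cited register layout in which roughly 23 fractional bits of the binary expansion of 1/10 were kept, and an approximate Scud speed of 1676 m/s.

```python
from fractions import Fraction

# 1/10 has an infinite binary expansion; the Patriot's 24-bit fixed-point
# register truncated it.  Keeping 23 fractional bits reproduces the
# documented per-tick error of ~0.000000095 s (register layout assumed).
BITS = 23
stored = Fraction(int(Fraction(1, 10) * 2**BITS), 2**BITS)
err_per_tick = float(Fraction(1, 10) - stored)    # ~9.54e-08 seconds

ticks = 100 * 60 * 60 * 10         # clock ticks in 100 hours of uptime
drift = err_per_tick * ticks       # ~0.34 seconds of accumulated drift

scud_speed = 1676                  # m/s, approximate Scud speed (assumption)
print(drift, drift * scud_speed)   # ~0.34 s and ~575 m: "over 500 meters"
```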
The strategy that the Patriot believed would destroy the Scud was based on trusted but non-trustworthy time data. The "triumph of American technology" pointed the missile launcher in a completely wrong direction, killing 28 American soldiers (Mignotte 2010) and wounding close to 100 (Apple 1991). A year later, on 11 November 1992, in Greensburg, Pennsylvania, a memorial was dedicated to the soldiers killed by the Scud attack. The monument faces in the direction of Dhahran, Saudi Arabia (Legion 2017).

In this paper, we introduce a formal semantics for doxastic strategies and a sound and complete logical system that describes the interplay between the modalities $\mathsf{B}^T_X$ and $[C]^T_X$. The paper is structured as follows. In the next section, we define the class of games that we use to model multiparty interactions. Next, we define the syntax and the formal semantics of our logical system. In Section 4, we list its axioms and inference rules. Section 5 contains the proof of completeness. Section 6 discusses ex ante and ex post trust. Section 7 concludes. The proof of soundness and one of the lemmas from Section 5 can be found in the full version of this paper (Jiang and Naumov 2023).

## 2 Games

In this section, we introduce the class of games that is used later to give a formal semantics to our logical system. Throughout the paper, we assume a fixed set of atomic propositions $P$ and a fixed set of data variables $V$. In addition, we assume a fixed set of actors $\mathcal{A}$. We use the term "actor" instead of the more traditional "agent" to emphasise the fact that, in our setting, knowledge and beliefs come from data; they are not related to actors, who are only endowed with an ability to act. By a *dataset* we mean any subset of $V$.

Informally, given a game, we think about each data variable as having a value in each state of the game. However, formally, it is only important to know, for any two given states, whether a variable has the same or different values in those states. Thus, we only need to associate an indistinguishability relation $\sim_x$ with each data variable $x \in V$. Informally, $w \sim_x u$ if data variable $x$ has the same value in states $w$ and $u$.

To model the trustworthiness of data variables, for each state $w$ of a game we specify a dataset $T_w \subseteq V$ consisting of the variables that are trustworthy (but not necessarily trusted) in the state $w$. We introduced this way to model the trustworthiness of data in (Jiang and Naumov 2022b). Finally, in the definition below and throughout the paper, by $Y^X$ we mean the set of all functions from set $X$ to set $Y$.

**Definition 1.** A tuple $(W, \{\sim_x\}_{x\in V}, \{T_w\}_{w\in W}, \Delta, M, \pi)$ is called a *game* if

1. $W$ is a (possibly empty) set of states,
2. $\sim_x$ is an indistinguishability equivalence relation on set $W$ for each data variable $x \in V$,
3. $T_w \subseteq V$ is the set of data variables that are trustworthy in state $w \in W$,
4. $\Delta$ is a nonempty set of actions,
5. $M \subseteq W \times \Delta^{\mathcal{A}} \times W$ is a mechanism of the game,
6. $\pi(p) \subseteq W$ for each atomic proposition $p \in P$.

By a *complete action profile* we mean an arbitrary element of the set $\Delta^{\mathcal{A}}$. By a *coalition* we mean an arbitrary subset $C \subseteq \mathcal{A}$ of actors. By an *action profile of a coalition* $C$ we mean an arbitrary element of the set $\Delta^C$.
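To make Definition 1 concrete, here is a minimal Python rendering of a finite game. It is our illustration rather than anything from the paper; states, variables, actors, and actions are assumed to be strings, the field names are ours, and each relation $\sim_x$ is stored extensionally as a set of pairs.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Set, Tuple

State, Var, Actor, Action = str, str, str, str
# A complete action profile (an element of Delta^A), stored as hashable pairs.
Profile = FrozenSet[Tuple[Actor, Action]]

@dataclass(frozen=True)
class Game:
    states: Set[State]                            # W
    indist: Dict[Var, Set[Tuple[State, State]]]   # ~x for each variable x
    trustworthy: Dict[State, FrozenSet[Var]]      # T_w for each state w
    actions: Set[Action]                          # Delta (nonempty)
    mechanism: Set[Tuple[State, Profile, State]]  # M, subset of W x Delta^A x W
    valuation: Dict[str, Set[State]]              # pi: atomic props -> states
```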
The mechanism $M$ of the game specifies the possible transitions of the game from one state to another. If $(w, \delta, u) \in M$, then under complete action profile $\delta$, the game can transition from state $w$ to state $u$. Note that we do not require the mechanism to be deterministic. We also do not require that for each state $w \in W$ and each complete action profile $\delta$ there is at least one state $u$ such that $(w, \delta, u) \in M$. If such a state $u$ does not exist, then we interpret this as a termination of the game upon the execution of profile $\delta$ in state $w$.

## 3 Syntax and Semantics

The language $\Phi$ of our logical system is defined by the grammar
$$\varphi ::= p \mid \neg\varphi \mid (\varphi \to \varphi) \mid \mathsf{B}^T_X\varphi \mid [C]^T_X\varphi,$$
where $p$ is an atomic proposition, $C \subseteq \mathcal{A}$ is a coalition, and $T, X \subseteq V$ are datasets. We read the expression $\mathsf{B}^T_X\varphi$ as "under the assumption of trust in dataset $T$, dataset $X$ informs belief $\varphi$". And we read the expression $[C]^T_X\varphi$ as "under the assumption of trust in dataset $T$, dataset $X$ informs a doxastic strategy of coalition $C$ to achieve $\varphi$".

We write $f =_C g$ if $f(x) = g(x)$ for each element $x \in C$, and $w \sim_X u$ if $w \sim_x u$ for each data variable $x \in X$.

**Definition 2.** For any formula $\varphi \in \Phi$ and any state $w \in W$ of a game $(W, \{\sim_x\}_{x\in V}, \{T_w\}_{w\in W}, \Delta, M, \pi)$, the satisfaction relation $w \Vdash \varphi$ is defined as follows:

1. $w \Vdash p$ if $w \in \pi(p)$,
2. $w \Vdash \neg\varphi$ if $w \nVdash \varphi$,
3. $w \Vdash \varphi \to \psi$ if $w \nVdash \varphi$ or $w \Vdash \psi$,
4. $w \Vdash \mathsf{B}^T_X\varphi$ if $u \Vdash \varphi$ for each state $u \in W$ such that $w \sim_X u$ and $T \subseteq T_u$,
5. $w \Vdash [C]^T_X\varphi$ when there is an action profile $s \in \Delta^C$ of coalition $C$ such that for all states $u, v \in W$ and each complete action profile $\delta \in \Delta^{\mathcal{A}}$, if $w \sim_X u$, $T \subseteq T_u$, $s =_C \delta$, $(u, \delta, v) \in M$, and $T \subseteq T_v$, then $v \Vdash \varphi$.

Note that, by item 4 of the above definition, $w \Vdash \mathsf{B}^\varnothing_X\varphi$ states that condition $\varphi$ is satisfied in each state $u \in W$ indistinguishable from state $w$ by dataset $X$. Thus, $\mathsf{B}^\varnothing_X$ is equivalent to the data-informed knowledge modality $\mathsf{K}_X$ discussed in the introduction.

Item 5 above captures our informal intuition that $[C]^T_X\varphi$ means that dataset $X$ informs the knowledge of a strategy that guarantees the achievement of $\varphi$ in states where dataset $T$ is trustworthy. Note that there are multiple ways to formalise this: we can require $T$ to be trustworthy in state $u$ (before the transition), in state $v$ (after the transition), or in both of these states. By including the conditions $T \subseteq T_u$ and $T \subseteq T_v$ in item 5, we require $T$ to be trustworthy *ex ante* (before the transition) and *ex post* (after the transition). We discuss an alternative approach in Section 6.

*Figure 1: A game. Data variables $x$ and $v$ are trustworthy in all states. Data variable $t$ is trustworthy in states $w_2$, $w_3$, $w_4$.*

Figure 1 depicts a (very simplistic) game capturing our introductory example. This game has a single actor, the Patriot. The actual state is $w_1$, in which data variable $t$ is not trustworthy. The only state which is $\{x, v, t\}$-indistinguishable from the current state and in which all variables in dataset $\{x, v, t\}$ are trustworthy is state $w_2$. Note that, in state $w_2$, strategy $s_1$ can be used to destroy the Scud. Thus, in the current state $w_1$, the dataset $\{x, v, t\}$ informs the belief that the Patriot can use strategy $s_1$ to destroy the Scud. Hence, in this game, formula (4) is satisfied in state $w_1$.
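The two modal clauses of Definition 2 translate directly into a brute-force model checker over the finite `Game` sketched in Section 2. The code below is our illustration, not the authors'; `phi` is a predicate on states, and the coalition `C` is passed as a tuple so that the action profiles in $\Delta^C$ can be enumerated.

```python
from itertools import product
from typing import Callable

def indist_X(game: Game, w: State, u: State, X: FrozenSet[Var]) -> bool:
    # w ~_X u: the two states agree on every data variable in dataset X
    return all((w, u) in game.indist[x] for x in X)

def sat_B(game, w, T, X, phi: Callable[[State], bool]) -> bool:
    """Item 4 of Definition 2: B^T_X phi holds at w iff phi holds at every
    state u with w ~_X u in which all variables of T are trustworthy."""
    return all(phi(u) for u in game.states
               if indist_X(game, w, u, X) and T <= game.trustworthy[u])

def sat_S(game, w, C, T, X, phi) -> bool:
    """Item 5 of Definition 2: [C]^T_X phi holds at w iff some action profile
    s of coalition C forces phi along every transition (u, delta, v) with
    w ~_X u, delta =_C s, and T trustworthy both ex ante and ex post."""
    for votes in product(game.actions, repeat=len(C)):  # profiles in Delta^C
        s = dict(zip(C, votes))
        if all(phi(v)
               for (u, delta, v) in game.mechanism
               if indist_X(game, w, u, X)
               and T <= game.trustworthy[u]                 # ex ante
               and all(dict(delta)[a] == s[a] for a in C)   # s =_C delta
               and T <= game.trustworthy[v]):               # ex post
            return True
    return False
```

Encoding the game of Figure 1 in this format and evaluating `sat_S` at state $w_1$ with `C = ("Patriot",)` and $T = X = \{x, v, t\}$ would check formula (4) mechanically.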
## 4 Axioms

In addition to the propositional tautologies in language $\Phi$, our logical system contains the following axioms:

1. Truth: $\mathsf{B}^\varnothing_X\varphi \to \varphi$,
2. Negative Introspection: $\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi$,
3. Distributivity: $\mathsf{B}^T_X(\varphi \to \psi) \to (\mathsf{B}^T_X\varphi \to \mathsf{B}^T_X\psi)$,
4. Trust: $\mathsf{B}^T_X(\mathsf{B}^T_Y\varphi \to \varphi)$,
5. Monotonicity: $\mathsf{B}^T_X\varphi \to \mathsf{B}^{T'}_{X'}\varphi$ and $[C]^T_X\varphi \to [C']^{T'}_{X'}\varphi$, where $T \subseteq T'$, $X \subseteq X'$, and $C \subseteq C'$,
6. Cooperation: $[C]^T_X(\varphi \to \psi) \to ([D]^T_X\varphi \to [C \cup D]^T_X\psi)$, where $C \cap D = \varnothing$,
7. Strategic Introspection: $[C]^T_X\varphi \leftrightarrow \mathsf{B}^T_X[C]^T_X\varphi$,
8. Belief in Unavoidability: $\mathsf{B}^T_X[\varnothing]^T_Y\varphi \to [\varnothing]^T_X\varphi$,
9. Public Belief: $\mathsf{B}^T_\varnothing\varphi \to [\varnothing]^T_\varnothing\varphi$.

The Truth axiom, the Negative Introspection axiom, the Distributivity axiom, the Trust axiom, and the belief part of the Monotonicity axiom are the axioms for trust-based beliefs as stated in (Jiang and Naumov 2022b). The most non-trivial among them is the Trust axiom. To understand the meaning of this axiom, note that although the principle $\mathsf{B}^T_X\varphi \to \varphi$ is not universally true, it is true in all states where dataset $T$ is trustworthy. This observation is captured in the Trust axiom.

The Cooperation axiom is a variation of the axiom from the Logic of Coalition Power (Pauly 2002). The Strategic Introspection axiom states that dataset $X$ informs a doxastic strategy iff $X$ informs a belief in having a doxastic strategy. The Strategic Introspection axiom and the Belief in Unavoidability axiom (which we discuss below) can both be stated in alternative forms where the superscript of modality $\mathsf{B}$ is the empty set $\varnothing$.

To understand the meaning of the Belief in Unavoidability axiom, note that the statement $[\varnothing]^\varnothing_X\varphi$ means that dataset $X$ informs the knowledge that condition $\varphi$ is unavoidably true in the next state. Similarly, $[\varnothing]^T_X\varphi$ means that, under the assumption of trust in dataset $T$, dataset $X$ informs the belief that condition $\varphi$ is unavoidably true in the next state. The Belief in Unavoidability axiom states that if $X$ informs the belief that $Y$ informs the belief in the unavoidability of $\varphi$, then $X$ itself informs this belief in the unavoidability of $\varphi$.

Finally, observe that $\mathsf{B}^\varnothing_\varnothing\varphi$ means that statement $\varphi$ is true in all states of the game. In other words, $\varphi$ is public knowledge in the game. The Public Belief axiom states that if condition $\varphi$ holds in all worlds where $T$ is trustworthy, then it is believed to be unavoidably true in the next state.

We write $\vdash \varphi$ and say that formula $\varphi \in \Phi$ is a *theorem* of our system if it is derivable from the above axioms using the Modus Ponens and the Necessitation inference rules:
$$\frac{\varphi, \quad \varphi \to \psi}{\psi}, \qquad\qquad \frac{\varphi}{\mathsf{B}^\varnothing_\varnothing\varphi}.$$

In addition to the unary relation $\vdash \varphi$, we also consider a binary relation $F \vdash \varphi$. We write $F \vdash \varphi$ if formula $\varphi \in \Phi$ is provable from the set of formulae $F \subseteq \Phi$ and the theorems of our logical system using only the Modus Ponens inference rule. It is easy to see that the statement $\vdash \varphi$ is equivalent to $\varnothing \vdash \varphi$. We say that a set $F$ is *inconsistent* if there is a formula $\varphi \in \Phi$ such that $F \vdash \varphi$ and $F \vdash \neg\varphi$.

The proof of the following theorem is in the full version of this paper (Jiang and Naumov 2023).

**Theorem 1 (soundness).** If $\vdash \varphi$, then $w \Vdash \varphi$ for each state $w$ of an arbitrary game.
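Soundness can be spot-checked mechanically: every axiom, instantiated on any finite game, must evaluate to true at every state. The snippet below is ours, reusing `sat_B` from Section 3, and does this for the Trust axiom.

```python
def trust_axiom_holds(game: Game, T, X, Y, phi) -> bool:
    """Checks the Trust axiom B^T_X (B^T_Y phi -> phi) at every state of a
    finite game; by Theorem 1 this returns True for all instantiations."""
    implication = lambda u: (not sat_B(game, u, T, Y, phi)) or phi(u)
    return all(sat_B(game, w, T, X, implication) for w in game.states)
```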
We conclude this section with two auxiliary results that will be used later in the proof of the completeness. The first of them is a form of the Positive Introspection principle for our belief modality $\mathsf{B}$.

**Lemma 1.** $\vdash \mathsf{B}^T_X\varphi \to \mathsf{B}^\varnothing_X\mathsf{B}^T_X\varphi$.

*Proof.* Formula $\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi \to \neg\mathsf{B}^T_X\varphi$ is an instance of the Truth axiom. Thus, $\mathsf{B}^T_X\varphi \to \neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi$ by contraposition. Hence, taking into account the following instance
$$\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^\varnothing_X\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi$$
of the Negative Introspection axiom, we have
$$\mathsf{B}^T_X\varphi \to \mathsf{B}^\varnothing_X\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi. \tag{5}$$
Also, the formula $\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi$ is an instance of the Negative Introspection axiom. Thus, by contraposition, $\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^T_X\varphi$. Hence, by the Necessitation inference rule, $\mathsf{B}^\varnothing_\varnothing(\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^T_X\varphi)$. Thus, $\mathsf{B}^\varnothing_X(\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^T_X\varphi)$ by the Monotonicity axiom and the Modus Ponens inference rule. Thus, by the Distributivity axiom and the Modus Ponens inference rule, it follows that $\mathsf{B}^\varnothing_X\neg\mathsf{B}^\varnothing_X\neg\mathsf{B}^T_X\varphi \to \mathsf{B}^\varnothing_X\mathsf{B}^T_X\varphi$. The latter, together with statement (5), implies the statement of the lemma by propositional reasoning. ∎

The proof of the following lemma is in the full version of this paper (Jiang and Naumov 2023).

**Lemma 2.** If $\varphi_1, \ldots, \varphi_n \vdash \psi$, then $\mathsf{B}^T_X\varphi_1, \ldots, \mathsf{B}^T_X\varphi_n \vdash \mathsf{B}^T_X\psi$.

## 5 Strong Completeness

The proof of the completeness theorem is split into four subsections. In Subsection 5.1, we define the canonical game. The truth lemma for the canonical game and the final step of the proof are given in Subsection 5.4. In the two subsections in between, we state and prove the auxiliary lemmas used in the proof of the truth lemma. A highly non-trivial proof of one of these auxiliary lemmas is located in (Jiang and Naumov 2023).

*Figure 2: Fragment of a canonical tree.*

### 5.1 Canonical Game

Throughout the rest of the paper, we assume that $B$ is any set of cardinality larger than that of $\mathcal{A}$, such as, for example, the powerset $\mathcal{P}(\mathcal{A})$. We explain the need for $B$ after Definition 3 below. In this section, towards the proof of completeness, for an arbitrary maximal consistent set of formulae $F_0 \subseteq \Phi$ and an arbitrary dataset $T_0 \subseteq V$, we define a canonical game $G(T_0, F_0) = (W, \{\sim_x\}_{x\in V}, \{T_w\}_{w\in W}, \Delta, M, \pi)$.

The set of states of the canonical game is defined using the tree construction, which goes back to the proof of the completeness theorem for the epistemic logic of distributed knowledge (Fagin, Halpern, and Vardi 1992). Informally, in the tree construction, the states are the nodes of a tree. Formally, the states are defined first as finite sequences; the tree structure on these sequences is specified later.

**Definition 3.** $W$ is the set of all sequences $\langle T_0, F_0, X_1, b_1, T_1, F_1, \ldots, X_n, b_n, T_n, F_n\rangle$ such that $n \ge 0$ and

1. $F_i \subseteq \Phi$ is a maximal consistent set of formulae for each integer $i \ge 1$,
2. $T_i, X_i \subseteq V$ are datasets for each integer $i \ge 1$,
3. $b_i \in B$ for each integer $i \ge 1$,
4. $\{\varphi \mid \mathsf{B}^\varnothing_{X_i}\varphi \in F_{i-1}\} \subseteq F_i$ for each integer $i \ge 1$,
5. $\mathsf{B}^{T_i}_Y\psi \to \psi \in F_i$ for each dataset $Y \subseteq V$, each formula $\psi \in \Phi$, and each integer $i \ge 0$.

For any two states $u, w \in W$ of the form
$$u = \langle T_0, F_0, \ldots, X_{n-1}, b_{n-1}, T_{n-1}, F_{n-1}\rangle,$$
$$w = \langle T_0, F_0, \ldots, X_{n-1}, b_{n-1}, T_{n-1}, F_{n-1}, X_n, b_n, T_n, F_n\rangle,$$
we say that the states are *adjacent*. Note that the adjacency relation forms a tree structure (an undirected graph without cycles) on set $W$. We say that the edge $(u, w)$ is *labelled* with each data variable in set $X_n$. By $F(w)$ and $T(w)$ we denote the sets $F_n$ and $T_n$, respectively.

Informally, it is convenient to visualise this tree with the edge marked by the pair $X_n, b_n$ and node $w$ marked by the pair $T_n, F_n$. Figure 2 depicts a fragment of such a visual representation. In this figure, the node $\langle T_0, F_0, X_1, b_1, T_1, F_1\rangle$ is adjacent to the node $\langle T_0, F_0, X_1, b_1, T_1, F_1, X_3, b_3, T_3, F_3\rangle$. The edge between them is labelled with each variable in set $X_3$.

We call any two nodes
$$w_1 = \langle T_0, F_0, \ldots, X_{n-1}, b_{n-1}, T_{n-1}, F_{n-1}, X_n, b_n, T_n, F_n\rangle,$$
$$w_2 = \langle T_0, F_0, \ldots, X_{n-1}, b_{n-1}, T_{n-1}, F_{n-1}, X_n, b'_n, T_n, F_n\rangle$$
that differ only in the last $b$-value *clones*. For example, in Figure 2, the nodes $\langle T_0, F_0, X_1, b_1, T_1, F_1, X_3, b_3, T_3, F_3\rangle$ and $\langle T_0, F_0, X_1, b_1, T_1, F_1, X_3, b_6, T_3, F_3\rangle$ are clones. The purpose of set $B$ in our construction is to guarantee that each node (except for the root) has more clones than the cardinality of set $\mathcal{A}$.

Recall that a simple path in a graph is a path without self-intersections and that in any tree there is a unique simple path between any two nodes.

**Definition 4.** For any states $w, w' \in W$ and any variable $x \in V$, let $w \sim_x w'$ if every edge along the unique simple path between nodes $w$ and $w'$ is labelled with variable $x$.

**Lemma 3.** Relation $\sim_x$ is an equivalence relation on set $W$ for each data variable $x \in V$.
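Definition 4 can be read as a concrete computation on the canonical tree. In the sketch below (ours; canonical states are modelled as tuples whose root is the pair $(T_0, F_0)$ and which grow by four entries per tree level), the path between two nodes is recovered through their lowest common ancestor, and $\sim_x$ checks the label of every edge on it.

```python
def ancestors(node: tuple) -> list:
    """All prefixes of a canonical state, from the node itself up to the
    root (T0, F0); the parent drops the last four entries (X, b, T, F)."""
    out = [node]
    while len(node) > 2:
        node = node[:-4]
        out.append(node)
    return out

def indist(u: tuple, w: tuple, x: str) -> bool:
    """Definition 4: u ~_x w iff x labels every edge on the unique simple
    path between u and w.  The edge leading into a node is labelled with
    the node's X-component, stored at position -4 of the tuple."""
    au, aw = ancestors(u), ancestors(w)
    lca = next(n for n in au if n in aw)   # lowest common ancestor
    edges = [n for n in au if len(n) > len(lca)] + \
            [n for n in aw if len(n) > len(lca)]
    return all(x in n[-4] for n in edges)
```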
As we prove later in Lemma 9, for any state $w \in W$, set $F(w)$ is the set of all formulae that are satisfied in state $w$ of our canonical model. At the same time, set $T(w)$ is the set of all data variables that are trustworthy in state $w$:

**Definition 5.** $T_w = T(w)$ for any state $w \in W$.

The above definition explains the intuition behind item 5 of Definition 3. Indeed, the item states that $\mathsf{B}^{T(w)}_Y\varphi \to \varphi \in F(w)$. Intuitively, this means that if $\mathsf{B}^{T(w)}_Y\varphi$ is true in a state $w$, where set $T(w)$ is trustworthy by Definition 5, then statement $\varphi$ must be true in state $w$.

In this paper, as in several other works that extend Marc Pauly's logic of coalitional power (Goranko and van Drimmelen 2006; Naumov and Tao 2017; Goranko and Enqvist 2018), the mechanism of the canonical game uses the voting construction. In the standard version of this construction, if the goal of the coalition is to achieve $\varphi$, then all members of the coalition use action $\varphi$. Informally, this could be interpreted as "voting for $\varphi$". In our case, a strategy is informed by a dataset $X$. Thus, for the strategy to work, the coalition should somehow prove that it has access to the values of dataset $X$ in the current state. To achieve this, we require each vote to be *signed* by naming a state that belongs to the same $X$-equivalence class as the current state. Thus, each action consists of a formula and a state ("signature"). Similar constructions are used in (Naumov and Tao 2018c; Jiang and Naumov 2022a).

**Definition 6.** $\Delta = \Phi \times W$.

If $d$ is a pair $(x, y)$, then by $pr_1(d)$ and $pr_2(d)$ we mean the elements $x$ and $y$, respectively.

**Definition 7.** Mechanism $M$ consists of all triples $(w, \delta, v) \in W \times \Delta^{\mathcal{A}} \times W$ such that for each formula $[C]^T_X\varphi \in F(w)$, if

1. $pr_1(\delta(a)) = \varphi$ for each actor $a \in C$,
2. $w \sim_X pr_2(\delta(a))$ for each actor $a \in C$,
3. $T \subseteq T_w$, and
4. $T \subseteq T_v$,

then $\varphi \in F(v)$.

**Definition 8.** $\pi(p) = \{w \in W \mid p \in F(w)\}$ for each atomic proposition $p \in P$.

This concludes the definition of the canonical game. As usual, the key step in the proof of completeness is a truth lemma proven by induction on the structural complexity of a formula. In our case, this is Lemma 9. To improve the readability of the proof of Lemma 9, we separate the non-trivial induction steps into separate lemmas stated in the next two subsections.
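The "signed voting" reading of Definition 7 can be made explicit in code. The sketch below is ours: `SFormula` is a hypothetical stand-in for a formula $[C]^T_X\varphi$, `delta` maps actors to (formula, signature-state) pairs as in Definition 6, `F` and `T_of` return $F(\cdot)$ and $T(\cdot)$, and `indist` is the function from the previous sketch.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class SFormula:
    """Hypothetical stand-in for a formula [C]^T_X phi."""
    C: FrozenSet[str]      # coalition
    T: FrozenSet[str]      # trusted dataset
    X: FrozenSet[str]      # informing dataset
    phi: object            # the goal formula

def in_mechanism(w, delta, v, F, T_of, indist) -> bool:
    """Definition 7: (w, delta, v) is in M iff for every [C]^T_X phi in F(w),
    whenever (1) each member of C votes for phi, (2) each vote is signed by
    a state X-indistinguishable from w, (3) T is trustworthy in w, and
    (4) T is trustworthy in v, the goal phi belongs to F(v)."""
    for f in (g for g in F(w) if isinstance(g, SFormula)):
        if (all(delta[a][0] == f.phi for a in f.C)                    # (1)
                and all(all(indist(w, delta[a][1], x) for x in f.X)
                        for a in f.C)                                 # (2)
                and f.T <= T_of(w)                                    # (3)
                and f.T <= T_of(v)                                    # (4)
                and f.phi not in F(v)):
            return False
    return True
```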
### 5.2 Properties of the Belief Modality

In this subsection, we state and prove the lemmas for modality $\mathsf{B}$ used in the induction step of the proof of Lemma 9.

**Lemma 4.** For any formula $\mathsf{B}^T_Y\varphi \in \Phi$ and any adjacent states
$$w' = \langle T_0, F_0, \ldots, X_{n-1}, b_{n-1}, T_{n-1}, F_{n-1}\rangle,$$
$$w = \langle T_0, F_0, \ldots, X_{n-1}, b_{n-1}, T_{n-1}, F_{n-1}, X_n, b_n, T_n, F_n\rangle,$$
if $Y \subseteq X_n$, then $\mathsf{B}^T_Y\varphi \in F(w')$ iff $\mathsf{B}^T_Y\varphi \in F(w)$.

*Proof.* ($\Rightarrow$): Let $\mathsf{B}^T_Y\varphi \in F(w')$. Then, $\mathsf{B}^T_Y\varphi \in F_{n-1}$. Thus, by Lemma 1 and the Modus Ponens inference rule, $F_{n-1} \vdash \mathsf{B}^\varnothing_Y\mathsf{B}^T_Y\varphi$. Hence, $F_{n-1} \vdash \mathsf{B}^\varnothing_{X_n}\mathsf{B}^T_Y\varphi$ by the assumption $Y \subseteq X_n$ of the lemma, the Monotonicity axiom, and the Modus Ponens inference rule. Thus, $\mathsf{B}^\varnothing_{X_n}\mathsf{B}^T_Y\varphi \in F_{n-1}$ because $F_{n-1}$ is a maximal consistent set. Then, $\mathsf{B}^T_Y\varphi \in F_n$ by item 4 of Definition 3. Therefore, $\mathsf{B}^T_Y\varphi \in F(w)$.

($\Leftarrow$): Let $\mathsf{B}^T_Y\varphi \notin F(w')$. Thus, $\mathsf{B}^T_Y\varphi \notin F_{n-1}$. Hence, $\neg\mathsf{B}^T_Y\varphi \in F_{n-1}$ because $F_{n-1}$ is a maximal consistent set of formulae. Then, $F_{n-1} \vdash \mathsf{B}^\varnothing_Y\neg\mathsf{B}^T_Y\varphi$ by the Negative Introspection axiom and the Modus Ponens inference rule. Thus, $F_{n-1} \vdash \mathsf{B}^\varnothing_{X_n}\neg\mathsf{B}^T_Y\varphi$ by the assumption $Y \subseteq X_n$ of the lemma, the Monotonicity axiom, and the Modus Ponens inference rule. Hence, because set $F_{n-1}$ is maximal, $\mathsf{B}^\varnothing_{X_n}\neg\mathsf{B}^T_Y\varphi \in F_{n-1}$. Then, $\neg\mathsf{B}^T_Y\varphi \in F_n$ by item 4 of Definition 3. Thus, $\mathsf{B}^T_Y\varphi \notin F_n$ because set $F_n$ is consistent. Therefore, $\mathsf{B}^T_Y\varphi \notin F(w)$. ∎

**Lemma 5.** For any states $w, u \in W$ and any formula $\mathsf{B}^T_X\varphi \in F(w)$, if $w \sim_X u$ and $T \subseteq T_u$, then $\varphi \in F(u)$.

*Proof.* By Definition 4, the assumption $w \sim_X u$ implies that each edge along the unique path between nodes $w$ and $u$ is labelled with each variable in dataset $X$. Then, the assumption $\mathsf{B}^T_X\varphi \in F(w)$ implies $\mathsf{B}^T_X\varphi \in F(u)$ by applying Lemma 4 to each edge along this path. Note that the assumption $T \subseteq T_u$ of the lemma implies that $T \subseteq T(u)$ by Definition 5. Hence, $F(u) \vdash \mathsf{B}^{T(u)}_X\varphi$ by the Monotonicity axiom and the Modus Ponens inference rule. Thus, $F(u) \vdash \varphi$ by item 5 of Definition 3 and the Modus Ponens inference rule. Therefore, $\varphi \in F(u)$ because the set $F(u)$ is maximal. ∎

**Lemma 6.** For any $w \in W$ and any formula $\mathsf{B}^T_X\varphi \notin F(w)$, there exists a state $u \in W$ such that $w \sim_X u$, $T = T_u$, and $\varphi \notin F(u)$.

*Proof.* Consider the set of formulae
$$H = \{\neg\varphi\} \cup \{\psi \mid \mathsf{B}^\varnothing_X\psi \in F(w)\} \cup \{\mathsf{B}^T_Y\chi \to \chi \mid Y \subseteq V, \chi \in \Phi\}. \tag{6}$$

**Claim 1.** Set $H$ is consistent.

*Proof of Claim.* Assume the opposite. Hence, there are formulae $\chi_1, \ldots, \chi_n \in \Phi$, datasets $Y_1, \ldots, Y_n \subseteq V$, and formulae
$$\mathsf{B}^\varnothing_X\psi_1, \ldots, \mathsf{B}^\varnothing_X\psi_m \in F(w) \tag{7}$$
such that
$$\mathsf{B}^T_{Y_1}\chi_1 \to \chi_1, \ \ldots, \ \mathsf{B}^T_{Y_n}\chi_n \to \chi_n, \ \psi_1, \ldots, \psi_m \vdash \varphi.$$
Thus, by Lemma 2,
$$\mathsf{B}^T_X(\mathsf{B}^T_{Y_1}\chi_1 \to \chi_1), \ldots, \mathsf{B}^T_X(\mathsf{B}^T_{Y_n}\chi_n \to \chi_n), \mathsf{B}^T_X\psi_1, \ldots, \mathsf{B}^T_X\psi_m \vdash \mathsf{B}^T_X\varphi.$$
Hence, $\mathsf{B}^T_X\psi_1, \ldots, \mathsf{B}^T_X\psi_m \vdash \mathsf{B}^T_X\varphi$ by the Trust axiom applied $n$ times. Then, $\mathsf{B}^\varnothing_X\psi_1, \ldots, \mathsf{B}^\varnothing_X\psi_m \vdash \mathsf{B}^T_X\varphi$ by the Monotonicity axiom and the Modus Ponens inference rule applied $m$ times. Thus, $F(w) \vdash \mathsf{B}^T_X\varphi$ due to statement (7). Hence, $\mathsf{B}^T_X\varphi \in F(w)$ because the set $F(w)$ is maximal, which contradicts the assumption $\mathsf{B}^T_X\varphi \notin F(w)$ of the lemma. ⊠

Let $H'$ be any maximal consistent extension of set $H$ and $b$ be any element of set $B$. Suppose that $w = \langle T_0, F_0, \ldots, X_n, b_n, T_n, F_n\rangle$. Consider the sequence
$$u = \langle T_0, F_0, \ldots, X_n, b_n, T_n, F_n, X, b, T, H'\rangle. \tag{8}$$
Note that $u \in W$ by Definition 3, equation (6), and the choice of set $H'$ as an extension of set $H$. Also, observe that $w \sim_X u$ by Definition 4 and equation (8). In addition, $T = T(u) = T_u$ by equation (8) and Definition 5. Finally, $\neg\varphi \in H \subseteq H' = F(u)$ by equation (6), the choice of $H'$ as an extension of $H$, and equation (8). Therefore, $\varphi \notin F(u)$ because the set $F(u)$ is consistent. This concludes the proof of the lemma. ∎

### 5.3 Properties of the Doxastic Strategy Modality

We now state the lemmas for modality $[C]$ that will be used in the induction step of the proof of Lemma 9.

**Lemma 7.** For an arbitrary state $w \in W$ and any formula $[C]^T_X\varphi \in F(w)$, there is an action profile $s \in \Delta^C$ such that for all states $w', v \in W$ and each complete action profile $\delta \in \Delta^{\mathcal{A}}$, if $w \sim_X w'$, $s =_C \delta$, $T \subseteq T_{w'}$, $T \subseteq T_v$, and $(w', \delta, v) \in M$, then $\varphi \in F(v)$.

*Proof.* Let the action profile $s \in \Delta^C$ be such that
$$s(a) = (\varphi, w) \tag{9}$$
for each actor $a \in C$. Consider any states $w', v \in W$ and any complete action profile $\delta \in \Delta^{\mathcal{A}}$ such that
$$w \sim_X w', \tag{10}$$
$$s =_C \delta, \quad T \subseteq T_{w'}, \quad T \subseteq T_v, \quad\text{and}\quad (w', \delta, v) \in M. \tag{11}$$
It suffices to show that $\varphi \in F(v)$.

The assumption $[C]^T_X\varphi \in F(w)$ of the lemma implies $F(w) \vdash \mathsf{B}^T_X[C]^T_X\varphi$ by the Strategic Introspection axiom and propositional reasoning. Hence, $\mathsf{B}^T_X[C]^T_X\varphi \in F(w)$ because set $F(w)$ is maximal. Thus, $[C]^T_X\varphi \in F(w')$ by Lemma 5, assumption (10), and the part $T \subseteq T_{w'}$ of assumption (11). Therefore, $\varphi \in F(v)$ by Definition 7 and assumptions (9), (10), and (11). ∎

The proof of the next lemma can be found in (Jiang and Naumov 2023).
**Lemma 8.** For an arbitrary state $w \in W$, any formula $\neg[C]^T_X\varphi \in F(w)$, and any action profile $s \in \Delta^C$, there are states $w', v \in W$ and a complete action profile $\delta \in \Delta^{\mathcal{A}}$ such that $w \sim_X w'$, $s =_C \delta$, $T \subseteq T_{w'}$, $T \subseteq T_v$, $(w', \delta, v) \in M$, and $\varphi \notin F(v)$.

### 5.4 Final Steps

We are now ready to prove the truth lemma for our logical system and the strong completeness of the system.

**Lemma 9.** $w \Vdash \varphi$ iff $\varphi \in F(w)$ for each state $w \in W$ and each formula $\varphi \in \Phi$.

*Proof.* We prove the statement by induction on the complexity of formula $\varphi$. Suppose that formula $\varphi$ is an atomic proposition $p$. Note that $w \Vdash p$ iff $w \in \pi(p)$ by item 1 of Definition 2. At the same time, $w \in \pi(p)$ iff $p \in F(w)$ by Definition 8. Therefore, $w \Vdash p$ iff $p \in F(w)$.

If formula $\varphi$ is a negation or an implication, then the statement of the lemma follows from the maximality and the consistency of the set $F(w)$, items 2 and 3 of Definition 2, and the induction hypothesis in the standard way.

Suppose that formula $\varphi$ has the form $\mathsf{B}^T_X\psi$.

($\Rightarrow$): Assume that $\mathsf{B}^T_X\psi \notin F(w)$. Then, by Lemma 6, there is a state $w' \in W$ such that $w \sim_X w'$, $T = T_{w'}$, and $\psi \notin F(w')$. Then, $w' \nVdash \psi$ by the induction hypothesis. Therefore, $w \nVdash \mathsf{B}^T_X\psi$ by item 4 of Definition 2 and the statements $w \sim_X w'$ and $T \subseteq T_{w'}$.

($\Leftarrow$): Assume that $\mathsf{B}^T_X\psi \in F(w)$. Consider any state $w'$ such that $w \sim_X w'$ and $T \subseteq T_{w'}$. By item 4 of Definition 2, it suffices to show that $w' \Vdash \psi$, which is true by Lemma 5 and the induction hypothesis.

Finally, suppose that formula $\varphi$ has the form $[C]^T_X\psi$.

($\Rightarrow$): Assume that $[C]^T_X\psi \notin F(w)$. Thus, $\neg[C]^T_X\psi \in F(w)$ because set $F(w)$ is maximal. Hence, by Lemma 8, for any action profile $s \in \Delta^C$, there are states $w', v \in W$ and a complete action profile $\delta \in \Delta^{\mathcal{A}}$ such that $w \sim_X w'$, $s =_C \delta$, $T \subseteq T_{w'}$, $T \subseteq T_v$, $(w', \delta, v) \in M$, and $\psi \notin F(v)$. Then, by the induction hypothesis, for any action profile $s \in \Delta^C$, there are states $w', v \in W$ and a complete action profile $\delta \in \Delta^{\mathcal{A}}$ such that $w \sim_X w'$, $s =_C \delta$, $T \subseteq T_{w'}$, $T \subseteq T_v$, $(w', \delta, v) \in M$, and $v \nVdash \psi$. Therefore, $w \nVdash [C]^T_X\psi$ by item 5 of Definition 2.

($\Leftarrow$): Assume $[C]^T_X\psi \in F(w)$. Thus, by Lemma 7, there is an action profile $s \in \Delta^C$ such that for all states $w', v \in W$ and each complete action profile $\delta \in \Delta^{\mathcal{A}}$, if $w \sim_X w'$, $s =_C \delta$, $T \subseteq T_{w'}$, $T \subseteq T_v$, and $(w', \delta, v) \in M$, then $\psi \in F(v)$. Hence, by the induction hypothesis, there is an action profile $s \in \Delta^C$ such that for all states $w', v \in W$ and each complete action profile $\delta \in \Delta^{\mathcal{A}}$, if $w \sim_X w'$, $s =_C \delta$, $T \subseteq T_{w'}$, $T \subseteq T_v$, and $(w', \delta, v) \in M$, then $v \Vdash \psi$. Therefore, $w \Vdash [C]^T_X\psi$ by item 5 of Definition 2. ∎

**Theorem 2 (strong completeness).** For any set of formulae $F \subseteq \Phi$ and any formula $\varphi \in \Phi$, if $F \nvdash \varphi$, then there is a state $w$ of a game such that $w \Vdash f$ for each formula $f \in F$ and $w \nVdash \varphi$.

*Proof.* The assumption $F \nvdash \varphi$ implies that the set $F \cup \{\neg\varphi\}$ is consistent. Let $F_0$ be any maximal consistent extension of this set. Consider the canonical game $G(\varnothing, F_0)$. First, we show that the sequence $\langle\varnothing, F_0\rangle$ is a state of this canonical game. By Definition 3, it suffices to show that $\mathsf{B}^\varnothing_Y\psi \to \psi \in F_0$ for each dataset $Y \subseteq V$ and each formula $\psi \in \Phi$. The last statement is true by the Truth axiom and because set $F_0$ is maximal. Finally, note that $\varphi \notin F_0$ because set $F_0$ is consistent and $\neg\varphi \in F_0$. Therefore, by Lemma 9 and because $F \subseteq F_0$, it follows that $\langle\varnothing, F_0\rangle \Vdash f$ for each formula $f \in F$ and also $\langle\varnothing, F_0\rangle \nVdash \varphi$. ∎

## 6 Ex Ante and Ex Post Trust

In item 5 of Definition 2, we require that $T \subseteq T_u$ and $T \subseteq T_v$. In other words, we apply the assumption of trustworthiness of dataset $T$ *ex ante* (before the action) and *ex post* (after the action). In general, one can consider that two different datasets, $A$ and $P$, are required to be trustworthy ex ante and ex post, respectively. This leads to a more general modality $[C]^{A,P}_X\varphi$, defined below: $w \Vdash [C]^{A,P}_X\varphi$ when there is an action profile $s \in \Delta^C$ of coalition $C$ such that for all states $u, v \in W$ and each complete action profile $\delta \in \Delta^{\mathcal{A}}$, if $w \sim_X u$, $A \subseteq T_u$, $s =_C \delta$, $(u, \delta, v) \in M$, and $P \subseteq T_v$, then $v \Vdash \varphi$.
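As a sketch (ours, continuing the model-checking code from Section 3), the generalised modality only requires splitting the single trust test of `sat_S` into an ex ante test for $A$ and an ex post test for $P$:

```python
def sat_S2(game: Game, w, C, A_set, P_set, X, phi) -> bool:
    """w |= [C]^{A,P}_X phi: like sat_S, but dataset A_set must be
    trustworthy ex ante (in u) and dataset P_set ex post (in v).
    With A_set == P_set this coincides with sat_S."""
    for votes in product(game.actions, repeat=len(C)):
        s = dict(zip(C, votes))
        if all(phi(v)
               for (u, delta, v) in game.mechanism
               if indist_X(game, w, u, X)
               and A_set <= game.trustworthy[u]            # ex ante trust in A
               and all(dict(delta)[a] == s[a] for a in C)
               and P_set <= game.trustworthy[v]):          # ex post trust in P
            return True
    return False
```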
One might wonder which of the data variables among $x$, $v$, and $t$ should be trusted ex ante and which ex post for statement (4) to be true in our introductory example. Unfortunately, we do not have access to the Patriot code to answer this question, but we suspect that the Patriot missile constantly adjusts its trajectory based on the current speed and position of the target. To model such behaviour, one would need to use multi-step games instead of the one-shot (strategic) games that we consider in this paper.

To illustrate ex ante and ex post trust, let us consider a different example, where a governing body consisting of 25 members is about to vote on passing a certain regulation. Suppose that each of them votes "yes" or "no" by a paper ballot. After the vote, the ballots are counted by a tallyman and the number of "yes" votes, denoted by $n$, is announced. If $n$ is more than 12, then the regulation is approved; see Figure 3.

*Figure 3: Voting game.*

Let us now further assume that a newspaper contacts all 25 members and asks them how they plan to vote. Suppose that 10 members state that they plan to support the proposal, 10 members say that they plan to vote against the proposal, and 5 members are undecided. Alice, Bob, and Cathy are among those 5 who are undecided. It appears that the coalition consisting of the three of them has a strategy to sway the outcome of the vote either way. If all three of them vote "yes", then the regulation is approved. If they all vote "no", then it is rejected. Note, however, that both of these strategies are doxastic: they rely on the ex ante trust in the newspaper data and on the ex post trust in the tallyman computing $n$ correctly.

Let data variables $yea$ and $nay$ represent the number of members who, according to the newspaper, plan to vote for and against, respectively. Note that, in the current world $w$, $yea = nay = 10$. Thus,

$$w \Vdash [\text{Alice},\text{Bob},\text{Cathy}]^{\{yea\},\{n\}}_{\{yea\}}(\text{``Approved''}), \tag{12}$$
$$w \Vdash [\text{Alice},\text{Bob},\text{Cathy}]^{\{nay\},\{n\}}_{\{nay\}}(\text{``Rejected''}).$$

All subscripts and superscripts are important in both formulae above. For example, if the subscript $yea$ is dropped in statement (12), then the statement is no longer true:
$$w \nVdash [\text{Alice},\text{Bob},\text{Cathy}]^{\{yea\},\{n\}}_{\varnothing}(\text{``Approved''}).$$
This is because, although the doxastic strategy exists, the empty dataset does not inform its existence. If $yea$ is removed from the superscript, then the statement is also not true:
$$w \nVdash [\text{Alice},\text{Bob},\text{Cathy}]^{\varnothing,\{n\}}_{\{yea\}}(\text{``Approved''}),$$
because the newspaper's information is no longer assumed to be trusted. Without the ex ante trust in the existence of at least 10 members who are ready to support the regulation, the coalition consisting of Alice, Bob, and Cathy does not have a strategy to pass the regulation. Finally, without the ex post trust that the votes will be counted correctly, such a strategy does not exist either:
$$w \nVdash [\text{Alice},\text{Bob},\text{Cathy}]^{\{yea\},\varnothing}_{\{yea\}}(\text{``Approved''}).$$

Note that formula $[C]^{T,T}_X\varphi$ is equivalent to our original modality $[C]^T_X\varphi$. The axioms of our logical system could be modified in a straightforward way for this more general modality. The most significant change is for the Public Belief axiom: $\mathsf{B}^T_\varnothing\varphi \to [\varnothing]^{\varnothing,T}_\varnothing\varphi$.
Although we did not check the details, we believe that our soundness and completeness results can be adjusted to this more general modality.

## 7 Conclusion

In this paper, we proposed the notion of doxastic strategies. We gave a sound and complete logical system for reasoning about such strategies and outlined a possible extension of this work that separates ex ante and ex post trust.

## Acknowledgments

Junli Jiang acknowledges the support of the Innovation Research 2035 Pilot Plan of Southwest University (No. SWUPilot Plan018).

## References

Ågotnes, T.; and Alechina, N. 2019. Coalition Logic with Individual, Distributed and Common Knowledge. *Journal of Logic and Computation*, 29: 1041–1069.

Apple, R. W., Jr. 1991. War in the Gulf: Scud Attack; Scud Missile Hits a U.S. Barracks, Killing 27. *New York Times*, February 26. https://www.nytimes.com/1991/02/26/world/war-in-the-gulf-scud-attack-scud-missile-hits-a-us-barracks-killing-27.html.

Baltag, A.; and van Benthem, J. 2021. A Simple Logic of Functional Dependence. *Journal of Philosophical Logic*, 50: 1–67.

Bush, G. H. 1991. Remarks to Raytheon Missile Systems Plant Employees in Andover, Massachusetts. https://bush41library.tamu.edu/archives/public-papers/2711. Accessed: 2022-08-20.

Deuser, K.; Jiang, J.; Naumov, P.; and Zhang, W. 2024. A Dynamic Logic of Data-Informed Knowledge. *Journal of Philosophical Logic*. To appear.

Fagin, R.; Halpern, J. Y.; and Vardi, M. Y. 1992. What Can Machines Know? On the Properties of Knowledge in Distributed Systems. *Journal of the ACM*, 39(2): 328–376.

Fervari, R.; Herzig, A.; Li, Y.; and Wang, Y. 2017. Strategically Knowing How. In *Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)*, 1031–1038.

Fervari, R.; Velázquez-Quesada, F. R.; and Wang, Y. 2021. Bisimulations for Knowing How Logics. *The Review of Symbolic Logic*, 1–37.

Goranko, V.; and Enqvist, S. 2018. Socially Friendly and Group Protecting Coalition Logics. In *Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, 372–380. International Foundation for Autonomous Agents and Multiagent Systems.

Goranko, V.; and van Drimmelen, G. 2006. Complete Axiomatization and Decidability of Alternating-Time Temporal Logic. *Theoretical Computer Science*, 353(1): 93–117.

Grossi, D.; Lorini, E.; and Schwarzentruber, F. 2015. The Ceteris Paribus Structure of Logics of Game Forms. *Journal of Artificial Intelligence Research*, 53: 91–126.

Jiang, J.; and Naumov, P. 2022a. Data-Informed Knowledge and Strategies. *Artificial Intelligence*, 309: 103727.

Jiang, J.; and Naumov, P. 2022b. In Data We Trust: The Logic of Trust-Based Beliefs. In *Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI-22)*.

Jiang, J.; and Naumov, P. 2023. The Logic of Doxastic Strategies. arXiv:2312.07107.

Legion, T. A. 2017. 14th Quartermaster Detachment Memorial. https://www.legion.org/memorials/238236/14th-quartermaster-detachment-memorial. Accessed: 2022-08-20.

Li, Y.; and Wang, Y. 2021. Planning-Based Knowing How: A Unified Approach. *Artificial Intelligence*, 103487.

Mignotte, M. 2010. The Patriot Missile Failure. https://www.iro.umontreal.ca/~mignotte/IFT2425/Disasters.html. Accessed: 2022-08-20.

Naumov, P.; and Tao, J. 2017. Coalition Power in Epistemic Transition Systems. In *Proceedings of the 2017 International Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, 723–731.
Naumov, P.; and Tao, J. 2018a. Second-Order Know-How Strategies. In *Proceedings of the 2018 International Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, 390–398.

Naumov, P.; and Tao, J. 2018b. Strategic Coalitions with Perfect Recall. In *Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence*.

Naumov, P.; and Tao, J. 2018c. Together We Know How to Achieve: An Epistemic Logic of Know-How. *Artificial Intelligence*, 262: 279–300.

Pauly, M. 2002. A Modal Logic for Coalitional Power in Games. *Journal of Logic and Computation*, 12(1): 149–166.