Shhh! The Logic of Clandestine Operations

Pavel Naumov¹, Oliver Orejola²
¹University of Southampton, United Kingdom
²Tulane University, United States
p.naumov@soton.ac.uk, oorejola@tulane.edu

An operation is called covert if it conceals the identity of the actor; it is called clandestine if the very fact that the operation is conducted is concealed. The paper proposes a formal semantics of clandestine operations and introduces a sound and complete logical system that describes the interplay between the distributed knowledge modality and a modality capturing coalition power to conduct clandestine operations.

1 Clandestine Games

In this paper, we study games in which coalitions can engage in concealed operations. The US Department of Defense Dictionary of Military and Associated Terms distinguishes between covert and clandestine operations. Covert operations are planned and executed to conceal the identity of the actor. An operation is clandestine when the very fact that the operation is conducted is concealed [Office of the Chairman of the Joint Chiefs of Staff, 2020]. Thus, every clandestine operation is covert, but not every covert operation is clandestine. The focus of the current work is on clandestine operations.

Figure 1: Cuban Missile Crisis Game.

An example of a clandestine operation is the 1962 Operation Anadyr conducted by the Soviet armed forces as a prelude to the Cuban Missile Crisis [Hansen, 2007]. The operation consisted of the delivery and deployment of ballistic missiles with nuclear warheads in Cuba to prevent an invasion of the island by the United States. Figure 1 depicts our representation of the Cuban Missile Crisis as a clandestine game between three players: the Americans (a), the Cubans (c), and the Russians (r). Operation Anadyr was executed by the Cubans and the Russians and consisted in transitioning the world from state w to state w′. Propositional variable m denotes the statement "Missiles are deployed in Cuba". It is false in state w and true in state w′. Operation Anadyr was concealed in the sense that the Americans were not able to detect the transition of the world from state w to state w′. In the diagram, the indistinguishability of these two states to the Americans is shown using a dashed line.

Although states w and w′ are indistinguishable to the Americans, this does not prevent them from discovering the transition from state w to state w′ by executing an operation of their own. In fact, they did just that on October 14th, 1962, by conducting a clandestine operation, Mission 3101 [McAuliffe, 1992]. Mission 3101 consisted of a U-2 spy plane secretly flying over Cuban territory to collect military intelligence. Mission 3101 also was concealed in the sense that, as shown in the diagram, the Cubans and the Russians were not able to detect its execution, which transitioned the world from state w′ to state v. If the same Mission 3101 were to be executed in state w, it would hypothetically transition the world from state w to state u. The Americans can distinguish state v from state u based on the reconnaissance photos taken by the spy plane. This explains how the Americans were able to detect the execution of Operation Anadyr through operation Mission 3101.
Coalition power in games with imperfect information has been studied before in synchronous settings where all agents act at once and, thus, everyone is aware that something happened [van der Hoek and Wooldridge, 2003; Ågotnes and Alechina, 2012; Naumov and Tao, 2017; Naumov and Tao, 2018b; Naumov and Tao, 2018a; Ågotnes and Alechina, 2019]. To capture clandestine operations, it is crucial to use semantics in which an agent might be unaware of the game transitioning from one state to another as a result of the actions of other agents. Such a behaviour could be modelled, for example, by extending the semantics of the above logical systems with a single "sleep" action. Additionally, it should be required that any agent executing action "sleep" should not be able to distinguish the initial and the final state of any transition during which the agent used "sleep". This approach would also need to settle who learns what if two or more disjoint coalitions execute clandestine operations synchronously. For the sake of the clarity of presentation, in this paper, we define the semantics of clandestine operations in terms of a class of asynchronous games that we call clandestine games, described in the definition below.

In this paper, we will assume a fixed set of agents A. By a coalition we mean any (possibly empty) subset of A. For any coalition C, by C̄ we denote the complement of set C with respect to set A.

Definition 1. A clandestine game is any tuple (W, {∼_a}_{a∈A}, Δ, M, π) such that

1. W is a set of states.
2. ∼_a is an indistinguishability equivalence relation on set W for each agent a ∈ A. We write w ∼_C u if w ∼_a u for each agent a ∈ C.
3. Δ is a nonempty set of operations.
4. M is a set of tuples (w, C, δ, u), where w, u ∈ W are states, C ⊆ A is a coalition, and δ ∈ Δ is an operation. It is assumed that set M, called a mechanism, satisfies the following two conditions:
(a) concealment: for any two states w, u ∈ W, any coalition of agents C ⊆ A, and any operation δ ∈ Δ, if (w, C, δ, u) ∈ M, then w ∼_{C̄} u,
(b) nontermination: for any state w ∈ W, any coalition of agents C ⊆ A, and any operation δ ∈ Δ, there is at least one state u ∈ W such that (w, C, δ, u) ∈ M.
5. π(p) is a subset of W for each propositional variable p.

The diagram in Figure 1 depicts an example of a clandestine game with four states (w, w′, u, and v) and two operations (Anadyr and 3101). The indistinguishability relations are shown by dashed lines and the mechanism is depicted by directed lines. The diagram omits loop operations. This means, for example, that if the Americans execute Operation Anadyr in any of the states, then the game transitions back to the same state. The nontermination condition 4(b) guarantees that no operation can terminate a game without reaching some state.

In a real-world setting, a variety of operations might be performed by any coalition. Some of them satisfy the concealment condition 4(a) of Definition 1; the others might not. We excluded non-concealed operations from our games to keep the presentation simple. If such operations are added to the models and the quantifier over operations δ in item 5 of Definition 2 below is simultaneously restricted to concealed operations only, then the soundness and the completeness results of this paper will remain true and no changes to their proofs will be necessary.

In this paper, we propose a sound and complete logical system for reasoning about coalition power to conduct clandestine operations.
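To make Definition 1 concrete, the following sketch encodes the Cuban Missile Crisis game of Figure 1 in Python and checks the concealment and nontermination conditions. Figure 1 only shows part of the mechanism; following the text, we assume that every (state, coalition, operation) triple that is not drawn is a self-loop, so the state names, transitions, and helper functions below are illustrative assumptions rather than part of the paper's formalism.

```python
from itertools import chain, combinations

AGENTS = frozenset({"a", "c", "r"})
STATES = {"w", "w1", "u", "v"}           # "w1" stands for state w' of Figure 1
OPS = {"Anadyr", "3101"}

# Indistinguishability: non-trivial pairs only; each relation is reflexive.
INDIST = {
    "a": {("w", "w1")},                  # Americans cannot detect Operation Anadyr
    "c": {("w1", "v"), ("w", "u")},      # Cubans cannot detect Mission 3101
    "r": {("w1", "v"), ("w", "u")},      # Russians cannot detect Mission 3101
}

def indist(agent, s, t):
    return s == t or (s, t) in INDIST[agent] or (t, s) in INDIST[agent]

def indist_coalition(coalition, s, t):
    return all(indist(a, s, t) for a in coalition)

def coalitions():
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(AGENTS), k) for k in range(len(AGENTS) + 1))]

# The transitions drawn in Figure 1; everything else is assumed to be a self-loop.
DRAWN = {
    ("w", frozenset({"c", "r"}), "Anadyr", "w1"),
    ("w1", frozenset({"a"}), "3101", "v"),
    ("w", frozenset({"a"}), "3101", "u"),
}
MECHANISM = set(DRAWN) | {
    (s, c, op, s)
    for s in STATES for c in coalitions() for op in OPS
    if all((s0, c0, op0) != (s, c, op) for (s0, c0, op0, _t) in DRAWN)
}

# Concealment, condition 4(a): every transition is invisible to the complement.
concealment = all(
    indist_coalition(AGENTS - c, s, t) for (s, c, op, t) in MECHANISM)

# Nontermination, condition 4(b): every triple has at least one outcome.
nontermination = all(
    any((s, c, op, t) in MECHANISM for t in STATES)
    for s in STATES for c in coalitions() for op in OPS)

print(concealment, nontermination)       # expected output: True True
```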
The rest of the paper is organized as follows. In the next section, we discuss the interplay between knowledge and actions and explain why existing coalition power modalities do not capture the properties of clandestine operations. Then, we define the syntax and semantics of our logical system. In the section Coalition-Informant-Adversary Principle, we introduce and discuss the most non-trivial axiom of our system. In the section that follows, we list the remainder of the axioms. After that, we sketch the completeness of our logical system. The proof of soundness and some details of the completeness are in the full version of this paper [Naumov and Orejola, 2023].

2 Knowledge and Actions

In this section, we discuss how different forms of knowledge can be captured in the existing modal logics for reasoning about coalition power and explain why the power to perform a clandestine operation is not expressible in these logics.

When discussing the interplay between knowledge and actions, it is common to distinguish ex-ante, interim, and ex-post knowledge. They refer to an agent's (or a coalition's) knowledge before the action, at the moment of the action, and after the action, respectively. One of the first logical systems describing the interplay between distributed knowledge modality K_C and coalition power modality S_C was introduced in [Ågotnes and Alechina, 2012]. Using their language, one can write K_C S_C ϕ to state that coalition C knows ex-ante (before the action) that it has a strategy (joint action) to achieve ϕ. Using the same language, one can write S_C K_C ϕ to state that coalition C has a strategy that would result in ϕ being known ex-post to the coalition. The language of [Ågotnes and Alechina, 2012] cannot be used to express interim knowledge. However, this could be done using the "seeing to it" modality [Belnap and Perloff, 1990; Horty, 2001; Horty and Belnap, 1995; Horty and Pacuit, 2017; Olkhovikov and Wansing, 2019].

Knowing that a strategy exists, as in K_C S_C ϕ, is different from actually knowing the strategy. If a coalition C knows ex-ante what strategy it can use to achieve ϕ, then we say that the coalition has a know-how strategy to achieve ϕ and denote this by H_C ϕ. Unless the coalition has perfect recall, knowing ex-ante a strategy to achieve ϕ does not imply knowing ex-ante a strategy that results in knowing ex-post that ϕ is achieved. The latter, however, could be expressed as H_C K_C ϕ. The interplay between coalitional know-how modality H_C and distributed knowledge modality K_C has been studied in [Naumov and Tao, 2017; Fervari et al., 2017; Naumov and Tao, 2018b; Naumov and Tao, 2018a; Ågotnes and Alechina, 2019; Cao and Naumov, 2020].

Figure 2: Knowledge and Actions.

In epistemic models, knowledge is usually captured by an indistinguishability relation between states. For example, in Figure 2 (left), w1 ⊩ K_C S_C p. In other words, coalition C knows ex-ante that it has a strategy to achieve p. This is true because the coalition has such a strategy not only in state w1, but also in state w2, indistinguishable to the coalition from w1. Note that this is not a know-how strategy because the required strategy in state w1 (strategy δ1) is different from the required strategy in state w2 (strategy δ2). Thus, w1 ⊮ H_C p. Note also that in state w1 coalition C does not have a strategy to achieve ex-post knowledge of p. We write this as w1 ⊮ S_C K_C p.
This is true because state u1 is indistinguishable from state u0 where p is not satisfied. The situation is different in Figure 2 (centre). Here, coalition C has a strategy in state w1 to achieve p, but the coalition does not know this ex-ante because it cannot distinguish state w1 from state w2 where such a strategy does not exist. Using our notations, w1 ⊩ S_C p and w1 ⊮ K_C S_C p. Note, however, that in this setting the coalition also has a strategy to achieve ex-post knowledge of p because p is satisfied not only in state u1 but also in state u0, indistinguishable to C from state u1. We write this as w1 ⊩ S_C K_C p.

The clandestine operations that we consider in this paper are know-how strategies. Furthermore, for the reason we discuss in the next section, they are know-how strategies to achieve ex-post knowledge. This alone would not require a new modality because it can be captured in existing know-how logics as H_C K_C ϕ. However, the last formula does not account for the concealed nature of clandestine operations. We capture the latter by requiring the initial and the final state of the operation to be indistinguishable to the complement C̄ of coalition C. Strategy δ depicted in Figure 2 (right) is a clandestine operation of coalition C to achieve p. In this paper, we introduce a new modality ⟨C⟩ϕ to denote the existence of a clandestine operation of coalition C to achieve ϕ. This modality is not definable through the existing modalities of coalition power, know-how, and seeing-to-it, because these existing modalities cannot capture the indistinguishability (by the complement of coalition C) of the initial and the final state of the operation.

3 Syntax and Semantics

Language Φ of our logical system is defined by the grammar

ϕ := p | ¬ϕ | ϕ → ϕ | K_C ϕ | ⟨C⟩ϕ,

where p is a propositional variable and C is a coalition. We read formula K_C ϕ as "coalition C knows ϕ", and formula ⟨C⟩ϕ as "coalition C knows which clandestine operation it can execute to achieve ϕ". In both cases, the knowledge is distributed. We assume that Boolean constants ⊤ and ⊥ as well as disjunction are defined in the standard way. We use K_{C,D}ϕ and ⟨C,D⟩ϕ as shorthand for K_{C∪D}ϕ and ⟨C∪D⟩ϕ respectively. In the definition below, item 5 gives the formal semantics of modality ⟨C⟩ϕ, see Figure 3.

Definition 2. For any state w ∈ W of a clandestine game (W, {∼_a}_{a∈A}, Δ, M, π) and any formula ϕ ∈ Φ, satisfiability relation w ⊩ ϕ is defined recursively as

1. w ⊩ p if w ∈ π(p),
2. w ⊩ ¬ϕ if w ⊮ ϕ,
3. w ⊩ ϕ → ψ if w ⊮ ϕ or w ⊩ ψ,
4. w ⊩ K_C ϕ if u ⊩ ϕ for any u ∈ W such that w ∼_C u,
5. w ⊩ ⟨C⟩ϕ if there is a nonempty coalition C′ ⊆ C and an operation δ ∈ Δ such that for any states w′, u, u′ ∈ W, if w ∼_C w′, (w′, C′, δ, u) ∈ M, and u ∼_C u′, then u′ ⊩ ϕ.

Figure 3: Towards item 5 of Definition 2.

In item 5 of the above definition, we introduce coalition C′ to capture the fact that in order for a coalition C to know a clandestine operation to achieve a certain goal, not all members of the coalition C have to take an active part in it. Recall that Definition 1 allows for some operations to be conducted by the empty coalition. Such operations can change the state of the game. However, according to the concealment condition of Definition 1, such a change is not noticeable to any agent in the game. Informally, these operations could be thought of as nondeterministic transitions of the game that occur independently from the actions of the agents and are not noticeable to them. The presence of such transitions is not significant for the results in this paper. We do not exclude them for the sake of generality.
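The following sketch implements the satisfiability relation of Definition 2 for a finite game, including the quantification over the auxiliary coalition C′ and the operation δ in item 5. The two-state game, the agent names, and the formula encoding used here are our own illustrative assumptions, not part of the paper.

```python
from itertools import chain, combinations

# A tiny clandestine game: agent 1 can secretly move from s0 to s1,
# while agent 2 cannot distinguish s0 from s1 (so the move is concealed).
AGENTS = frozenset({1, 2})
STATES = {"s0", "s1"}
OPS = {"go"}
INDIST = {1: set(), 2: {("s0", "s1")}}   # non-trivial indistinguishability pairs
PI = {"p": {"s1"}}                        # valuation: p holds only in s1

def indist(coalition, s, t):
    return s == t or all((s, t) in INDIST[a] or (t, s) in INDIST[a]
                         for a in coalition)

def subsets(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(s), k) for k in range(len(s) + 1))]

# Mechanism: the secret move plus self-loops for every other triple.
MECH = {("s0", frozenset({1}), "go", "s1")} | {
    (s, c, "go", s) for s in STATES for c in subsets(AGENTS)
    if (s, c) != ("s0", frozenset({1}))}

# Formulas: ("atom", p), ("not", f), ("imp", f, g), ("K", C, f), ("clan", C, f).
def holds(w, f):
    tag = f[0]
    if tag == "atom":
        return w in PI[f[1]]
    if tag == "not":
        return not holds(w, f[1])
    if tag == "imp":
        return (not holds(w, f[1])) or holds(w, f[2])
    if tag == "K":                         # item 4 of Definition 2
        _, C, g = f
        return all(holds(u, g) for u in STATES if indist(C, w, u))
    if tag == "clan":                      # item 5 of Definition 2
        _, C, g = f
        return any(
            all(holds(u1, g)
                for w1 in STATES if indist(C, w, w1)
                for (x, c, op, u) in MECH if (x, c, op) == (w1, Cp, d)
                for u1 in STATES if indist(C, u, u1))
            for Cp in subsets(C) if Cp     # nonempty C' included in C
            for d in OPS)
    raise ValueError(f)

p = ("atom", "p")
print(holds("s0", ("clan", frozenset({1}), p)))   # True: {1} can clandestinely achieve p
print(holds("s0", ("clan", frozenset({2}), p)))   # False: {2} has no such operation
print(holds("s0", ("clan", frozenset(), p)))      # False: the empty coalition never does
```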
At the same time, in Definition 2, we require coalition C′ to be nonempty. Intuitively, a coalition can ask some of its members to conduct an operation, but the coalition cannot ask the empty coalition, because operations of the empty coalition are system transitions not controlled by the agents. The restriction of C′ to nonempty coalitions is significant for our results.

Item 5 of Definition 2 is using state w′ to express that the clandestine operation δ not only exists but is also known to coalition C. Note that this knowledge, captured through statement w ∼_C w′, is the knowledge of the whole coalition C, not just its part C′ that executes the operation. In other words, we assume that some members of the coalition C could be passive informants. We explore this in the Coalition-Informant-Adversary axiom of our logical system.

Formula ⟨C⟩ϕ states that coalition C knows a clandestine operation to achieve ϕ. Because clandestine games are asynchronous, an important question is for how long ϕ will remain true after the operation. If another coalition can undo the operation without C even noticing, then coalition C could only be sure that ϕ holds at the very moment the operation is completed. To avoid this, in item 5 of Definition 2, we require ϕ to be satisfied not only in the completion state u of the operation δ, but also in all states u′ indistinguishable from state u by coalition C. In other words, statement ϕ remains true until at least one of the members of coalition C takes part in another clandestine operation¹.

¹If non-concealed operations are added to Definition 1 as described in the previous section, then ϕ will remain true until at least one of the members of coalition C becomes aware that another operation took place.

4 Coalition-Informant-Adversary Principle

The most interesting axiom of our logical system is a principle that captures strategic information dynamics between three sets of agents: a coalition that conducts a clandestine operation, a group of informants who passively cooperate with the coalition by sharing knowledge but do not participate in the operation itself, and a group of adversaries who do not cooperate with the coalition at all. To understand this principle, let us first consider its simplified form without the adversaries: for any disjoint coalitions C and I,

K_I(K_C ϕ → K_C ψ) → (⟨C⟩ϕ → ⟨C,I⟩ψ).    (1)

The assumption ⟨C⟩ϕ of this principle states that before the operation (ex-ante) the coalition knows which clandestine operation it should conduct in order to know after the operation (ex-post) that ϕ is true. The other assumption K_I(K_C ϕ → K_C ψ) of this principle refers to ex-ante knowledge of a group of informants I. Because the operation is clandestine and C ∩ I = ∅, the ex-ante and ex-post knowledge of I is the same. Thus, statement K_C ϕ → K_C ψ will have to be true after the operation. In other words, after the operation, coalition C will know that not only condition ϕ, but also condition ψ is true. Coalition C alone, however, does not know this ex-ante and thus, it alone does not know an operation to achieve condition ψ. Nevertheless, recall that coalition I knows K_C ϕ → K_C ψ ex-ante. Thus, it knows ex-ante that K_C ϕ → K_C ψ will have to be true after any clandestine operation that does not involve I. Therefore, the union of the coalitions C and I knows ex-ante the operation that C can conduct to achieve ψ. That is, ⟨C,I⟩ψ.
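The informal argument just given can be summarized by the following chain, where the ex-ante/ex-post annotations are only our reading aid and are not part of the language Φ:

```latex
\begin{align*}
& K_I(K_C\varphi \to K_C\psi)          && \text{ex-ante, by assumption}\\
& K_I(K_C\varphi \to K_C\psi)          && \text{ex-post, the operation is concealed from } I\\
& K_C\varphi \to K_C\psi               && \text{ex-post, knowledge is veridical}\\
& K_C\varphi                           && \text{ex-post, guaranteed by } \langle C\rangle\varphi\\
& K_C\psi,\ \text{hence}\ K_{C,I}\psi  && \text{ex-post, since } C \subseteq C \cup I
\end{align*}
```

Since coalition I has the first line ex-ante and coalition C knows the operation ex-ante, the union C ∪ I can carry out this reasoning before the operation, which is what ⟨C,I⟩ψ requires.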
Note that the purpose of modality K_I in the assumption K_I(K_C ϕ → K_C ψ) of principle (1) is to make sure that statement K_C ϕ → K_C ψ is preserved during the clandestine operation of coalition C. If one were to consider an additional coalition, which we refer to as an adversary coalition A, then replacing modality K_I with K_{A,I} still guarantees that statement K_C ϕ → K_C ψ is preserved during the operation (as long as A is also disjoint with C). Thus, one might think that the following form of principle (1) is also valid:

K_{A,I}(K_C ϕ → K_C ψ) → (⟨C⟩ϕ → ⟨C,I⟩ψ).

This statement, however, is not true. Assumptions K_{A,I}(K_C ϕ → K_C ψ) and ⟨C⟩ϕ can only guarantee that coalition C knows ex-ante an operation to achieve ϕ. If this operation is executed, then coalition C ∪ I will know ex-post that ψ is true, but they might not know ex-ante that they will know this ex-post. To make sure that they indeed have such ex-ante knowledge, one more knowledge modality should be added to the formula:

K_{C,I}K_{A,I}(K_C ϕ → K_C ψ) → (⟨C⟩ϕ → ⟨C,I⟩ψ).

Finally, note that instead of preserving K_C ϕ → K_C ψ, it is enough to be able to preserve the statement K_C ϕ → K_{C,I}ψ:

K_{C,I}K_{A,I}(K_C ϕ → K_{C,I}ψ) → (⟨C⟩ϕ → ⟨C,I⟩ψ).

As it turns out, the above formula is the final and the most general form of the Coalition-Informant-Adversary principle. In this paper, we show that this principle, in combination with several much more straightforward other axioms, forms a logical system that can derive all universally valid properties of clandestine operations.

5 Axioms

In addition to propositional tautologies in the language Φ, our logical system contains the following axioms:

1. Truth: K_C ϕ → ϕ,
2. Negative Introspection: ¬K_C ϕ → K_C ¬K_C ϕ,
3. Distributivity: K_C(ϕ → ψ) → (K_C ϕ → K_C ψ),
4. Monotonicity: K_{C′} ϕ → K_C ϕ, where C′ ⊆ C,
5. Strategic Introspection: ⟨C⟩ϕ → K_C ⟨C⟩ϕ,
6. Coalition-Informant-Adversary: if C ∩ (I ∪ A) = ∅, then K_{C,I}K_{A,I}(K_C ϕ → K_{C,I}ψ) → (⟨C⟩ϕ → ⟨C,I⟩ψ),
7. Nontermination: ¬⟨C⟩⊥,
8. Empty Coalition: ¬⟨∅⟩ϕ.

We write ⊢ ϕ if a formula ϕ is provable from the above axioms using the Modus Ponens and the two Necessitation inference rules:

ϕ → ψ, ϕ / ψ,    ϕ / K_C ϕ,    ϕ / ⟨C⟩ϕ, where C ≠ ∅.

We write X ⊢ ϕ if the formula ϕ is provable from the theorems of our logical system and the set of additional axioms X using only the Modus Ponens inference rule.

The next two lemmas state well-known properties of the S5 modality K. They, together with the three lemmas that follow them, are used in the proof of the completeness of our logical system. We give their proofs in [Naumov and Orejola, 2023].

Lemma 1. If ϕ_1, ..., ϕ_n ⊢ ψ, then K_C ϕ_1, ..., K_C ϕ_n ⊢ K_C ψ.

Lemma 2. ⊢ K_C ϕ → K_C K_C ϕ.

Lemma 3. K_F K_E K_F ϕ ⊢ ⟨F⟩ϕ, where F ⊈ E.

Lemma 4. K_E ¬K_F ϕ ⊢ ¬⟨F⟩ϕ, where E ∩ F = ∅.

Lemma 5. K_F(K_E ϕ ∨ ψ) ⊢ K_E ϕ ∨ K_F ψ, where E ⊆ F.

We show the following soundness theorem in [Naumov and Orejola, 2023].

Theorem 1. For any state w of a clandestine game, if w ⊩ χ for each formula χ ∈ X and X ⊢ ϕ, then w ⊩ ϕ.

In the rest of this paper, we prove the completeness of our logical system.
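To illustrate how these axioms and inference rules work together, here is a sample derivation, our own and not taken from the paper, of a monotonicity property of the clandestine modality: ⟨C⟩ϕ → ⟨C,D⟩ϕ for any disjoint coalitions C and D.

```latex
\begin{align*}
1.\ & K_C\varphi \to K_{C,D}\varphi
        && \text{Monotonicity axiom, since } C \subseteq C \cup D\\
2.\ & K_D(K_C\varphi \to K_{C,D}\varphi)
        && \text{Necessitation, from 1}\\
3.\ & K_{C,D}K_D(K_C\varphi \to K_{C,D}\varphi)
        && \text{Necessitation, from 2}\\
4.\ & K_{C,D}K_{\varnothing,D}(K_C\varphi \to K_{C,D}\varphi)
        \to (\langle C\rangle\varphi \to \langle C,D\rangle\varphi)
        && \text{Coalition-Informant-Adversary, } I = D,\ A = \varnothing\\
5.\ & \langle C\rangle\varphi \to \langle C,D\rangle\varphi
        && \text{Modus Ponens, from 3 and 4}
\end{align*}
```

Line 4 is an instance of the Coalition-Informant-Adversary axiom with ψ taken to be ϕ; its side condition C ∩ (D ∪ ∅) = ∅ holds because C and D are disjoint, and K_{∅,D} is just K_D by our shorthand, so line 3 is exactly its antecedent.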
6 Canonical Model

As usual, the key step in the proof of completeness is the construction of a canonical model. The standard canonical model for epistemic logic of individual knowledge S5 defines states as maximal consistent sets of formulae. Two such sets are indistinguishable to an agent a if they contain the same K_a-formulae. Unfortunately, this approach does not work for distributed knowledge because any two sets that are indistinguishable to agents a and b would then only share K_a- and K_b-formulae. They might have different K_{a,b}-formulae.

To address this issue, we define the canonical model using a tree whose nodes are labelled by maximal consistent sets and whose edges are labelled by coalitions, see Figure 4.

Figure 4: Tree Construction.

Informally, states are the nodes of this tree. Formally, any state is a sequence of labels along the path leading from the root of the tree to a node of the tree. Let us now define the canonical clandestine game M(X_0) = (W, {∼_a}_{a∈A}, Δ, M, π) for an arbitrary maximal consistent set of formulae X_0.

Definition 3. Set W consists of all finite sequences X_0, C_1, X_1, ..., C_n, X_n such that n ≥ 0 and

1. X_i is a maximal consistent set of formulae for all i ≥ 1,
2. C_i ⊆ A is a coalition for all i ≤ n,
3. {ϕ ∈ Φ | K_{C_i}ϕ ∈ X_{i−1}} ⊆ X_i for all i ≤ n.

We define a tree structure on the set of states W by saying that state w = X_0, C_1, X_1, C_2, ..., C_n, X_n and state w :: C_{n+1} :: X_{n+1} are connected by an undirected edge labelled with all agents in coalition C_{n+1}. For example, for the tree depicted in Figure 4, state X_0, C_2, X_2 is adjacent to state X_0, C_2, X_2, C_8, X_8 and the edge between them is labelled with all agents in coalition C_8.

Definition 4. For any two states w, w′ ∈ W and any agent a ∈ A, let w ∼_a w′ if all edges along the simple path between w and w′ are labelled with agent a.

Note that, in the above definition, the path might consist of a single node.

Lemma 6. Relation ∼_a is an equivalence relation on set W.

Definition 5. The set of operations Δ is the set of all formulae in language Φ.

Informally, operation ϕ is a clandestine operation in the canonical game that achieves ϕ unnoticeably to the agents outside of the coalition that performed the operation and makes the result known to the coalition. This intuition is captured in the definition below. Throughout the paper, by hd(w) we denote the last element of the sequence w.

Definition 6. Canonical mechanism M is the set of all tuples (w, C, ϕ, u) where w, u ∈ W are states, C ⊆ A is a coalition, and ϕ ∈ Φ is a formula, such that w ∼_{C̄} u and if ⟨C⟩ϕ ∈ hd(w), then K_C ϕ ∈ hd(u).

Note that the requirement w ∼_{C̄} u in the above definition implies that mechanism M satisfies the concealment condition from Definition 1. Next, we show that M also satisfies the nontermination condition.

Lemma 7. For any state w, any coalition C ⊆ A, and any formula ϕ ∈ Φ, there is a state u ∈ W such that (w, C, ϕ, u) ∈ M.

Proof. We consider the following two cases separately.

Case I: ⟨C⟩ϕ ∈ hd(w). Let X = {K_C ϕ} ∪ {ψ | K_{C̄} ψ ∈ hd(w)}.

Claim. Set X is consistent.

Proof of Claim. Assume the opposite. Thus, there are formulae K_{C̄}ψ_1, ..., K_{C̄}ψ_n ∈ hd(w) such that ψ_1, ..., ψ_n ⊢ ¬K_C ϕ. Hence, K_{C̄}ψ_1, ..., K_{C̄}ψ_n ⊢ K_{C̄}¬K_C ϕ by Lemma 1. Then, hd(w) ⊢ K_{C̄}¬K_C ϕ by the assumption K_{C̄}ψ_1, ..., K_{C̄}ψ_n ∈ hd(w). Thus, hd(w) ⊢ ¬⟨C⟩ϕ by Lemma 4 and the Modus Ponens inference rule. Then, ⟨C⟩ϕ ∉ hd(w) because set hd(w) is consistent, which contradicts the assumption of the case.

Let X′ be any maximal consistent extension of set X and let u be the sequence w :: C̄ :: X′. Then, u ∈ W by Definition 3 as well as the choice of sets X and X′. Finally, note that w ∼_{C̄} u by Definition 4 because u = w :: C̄ :: X′. Also, K_C ϕ ∈ X ⊆ X′ = hd(u) by the choice of sets X and X′ and the choice of sequence u. Therefore, (w, C, ϕ, u) ∈ M by Definition 6.

Case II: ⟨C⟩ϕ ∉ hd(w). Take u to be state w. Therefore, (w, C, ϕ, u) ∈ M by Definition 6. This concludes the proof of the lemma.

Definition 7. π(p) = {w ∈ W | p ∈ hd(w)}.

This concludes the definition of the canonical model M(X_0) = (W, {∼_a}_{a∈A}, Δ, M, π).
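To illustrate Definitions 3 and 4, the following sketch represents canonical states as sequences and decides whether w ∼_a w′ by checking the coalition labels on the simple path between two nodes of the tree, as in Figure 4. The concrete labels and helper functions below are our own illustrative assumptions.

```python
# States of the canonical tree are sequences X0, C1, X1, ..., Cn, Xn.
# Here set labels are strings and coalition labels are frozensets of agents.

def common_prefix_len(w, v):
    n = 0
    while n < len(w) and n < len(v) and w[n] == v[n]:
        n += 1
    # The common prefix of two states is itself a state, so it has odd length.
    return n if n % 2 == 1 else n - 1

def edge_labels_on_path(w, v):
    """Coalitions labelling the edges of the simple path between w and v."""
    k = common_prefix_len(w, v)
    # Going up from w to the common ancestor and then down to v crosses the
    # edges labelled by the coalitions at the odd positions beyond the prefix.
    return [w[i] for i in range(k, len(w), 2)] + [v[i] for i in range(k, len(v), 2)]

def indist(agent, w, v):
    """Definition 4: w ~_a v iff every edge on the path is labelled with a."""
    return all(agent in coalition for coalition in edge_labels_on_path(w, v))

# A fragment of a tree in the shape of Figure 4 (labels are assumptions).
root = ("X0",)
x2 = root + (frozenset({"a", "b"}), "X2")
x8 = x2 + (frozenset({"a"}), "X8")
x9 = x2 + (frozenset({"b"}), "X9")

print(indist("a", root, x8))   # True: both edges on the path are labelled with a
print(indist("b", root, x8))   # False: the edge to X8 is not labelled with b
print(indist("a", x8, x9))     # False: the edge to X9 is not labelled with a
print(indist("b", x9, x9))     # True: the path consists of a single node
```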
7 Completeness

As usual, the proof of completeness is using an induction (or "truth") lemma to connect the syntax of our system with the semantics of the canonical model. In our case, this is Lemma 13. The next five lemmas are auxiliary statements that will be used in different cases of the induction step in the proof of Lemma 13.

Lemma 8. K_D ϕ ∈ X_n iff K_D ϕ ∈ X_{n+1} for any formula ϕ ∈ Φ, any n ≥ 0, any state X_0, C_1, X_1, C_2, ..., X_n, C_{n+1}, X_{n+1} ∈ W, and any coalition D ⊆ C_{n+1}.

Proof. If K_D ϕ ∈ X_n, then X_n ⊢ K_D K_D ϕ by Lemma 2. Hence, X_n ⊢ K_{C_{n+1}} K_D ϕ by the Monotonicity axiom, the assumption D ⊆ C_{n+1}, and the Modus Ponens inference rule. Thus, K_{C_{n+1}} K_D ϕ ∈ X_n by the maximality of set X_n. Therefore, K_D ϕ ∈ X_{n+1} by Definition 3.

Suppose that K_D ϕ ∉ X_n. Hence, ¬K_D ϕ ∈ X_n by the maximality of set X_n. Thus, X_n ⊢ K_D ¬K_D ϕ by the Negative Introspection axiom and the Modus Ponens inference rule. Hence, X_n ⊢ K_{C_{n+1}} ¬K_D ϕ by the Monotonicity axiom, the assumption D ⊆ C_{n+1}, and the Modus Ponens inference rule. Then, K_{C_{n+1}} ¬K_D ϕ ∈ X_n by the maximality of set X_n. Thus, ¬K_D ϕ ∈ X_{n+1} by Definition 3. Therefore, K_D ϕ ∉ X_{n+1} because set X_{n+1} is consistent.

Lemma 9. If K_C ϕ ∈ hd(w) and w ∼_C u, then ϕ ∈ hd(u).

Proof. Assumption w ∼_C u implies that all edges along the unique simple path between nodes w and u are labelled with all agents in coalition C. Thus, K_C ϕ ∈ hd(u) by Lemma 8. Hence, hd(u) ⊢ ϕ by the Truth axiom and the Modus Ponens inference rule. Therefore, ϕ ∈ hd(u) because set hd(u) is maximal.

Lemma 10. If K_C ϕ ∉ hd(w), then there is u ∈ W such that w ∼_C u and ϕ ∉ hd(u).

Proof. Consider set X = {¬ϕ} ∪ {ψ | K_C ψ ∈ hd(w)}.

Claim. Set X is consistent.

Proof of Claim. Suppose the opposite. Thus, there are formulae K_C ψ_1, ..., K_C ψ_n ∈ hd(w) such that ψ_1, ..., ψ_n ⊢ ϕ. Hence, K_C ψ_1, ..., K_C ψ_n ⊢ K_C ϕ by Lemma 1. Then, hd(w) ⊢ K_C ϕ by the assumption K_C ψ_1, ..., K_C ψ_n ∈ hd(w). Thus, K_C ϕ ∈ hd(w) because set hd(w) is maximal, which contradicts the assumption of the lemma.

Let X′ be any maximal consistent extension of set X and let u be the sequence w :: C :: X′. Then, u ∈ W by Definition 3 as well as the choice of sets X and X′, and w ∼_C u by Definition 4. Finally, ¬ϕ ∈ X ⊆ X′ = hd(u) by the choice of sets X and X′ and the choice of sequence u. Therefore, ϕ ∉ hd(u) because set hd(u) is consistent.

Lemma 11. For any formula ⟨C⟩ϕ ∈ hd(w) and any three states w′, u, u′ ∈ W, if w ∼_C w′, (w′, C, ϕ, u) ∈ M, and u ∼_C u′, then ϕ ∈ hd(u′).

Proof. Assumption ⟨C⟩ϕ ∈ hd(w) implies that hd(w) ⊢ K_C ⟨C⟩ϕ by the Strategic Introspection axiom and the Modus Ponens inference rule. Hence, K_C ⟨C⟩ϕ ∈ hd(w) because set hd(w) is maximal. Thus, ⟨C⟩ϕ ∈ hd(w′) by Lemma 9 and the assumption w ∼_C w′. Then, K_C ϕ ∈ hd(u) by Definition 6 and the assumption (w′, C, ϕ, u) ∈ M. Therefore, ϕ ∈ hd(u′) by Lemma 9 and the assumption u ∼_C u′.

Lemma 12. If ⟨F⟩ϕ ∉ hd(w), then for any nonempty coalition E ⊆ F and any operation δ ∈ Δ, there are states w′, u, u′ such that w ∼_F w′, (w′, E, δ, u) ∈ M, u ∼_F u′, and ϕ ∉ hd(u′).

In the proof of this lemma, located below, we consecutively construct states w′, u, and u′. To guarantee that state u′ could be constructed after state u, we construct a state u such that set hd(u) contains formula ¬K_F ϕ. In this case, by Lemma 10, there must exist a state u′ such that u ∼_F u′ and ϕ ∉ hd(u′). One might think that state u could be constructed from state w′ in a similar fashion by guaranteeing first that set hd(w′) contains formula ¬K_{Ē}K_F ϕ.
However, there is a problem because Definition 6 states that if set hd(w′) contains formula ⟨E⟩δ, then set hd(u), in addition to formula ¬K_F ϕ, must also contain formula K_E δ. Thus, there are two possible ways sets hd(w′) and hd(u) could be constructed:

I. Set hd(u) contains ¬K_F ϕ and K_E δ. In this case, set hd(w′) must contain the formula ¬K_{Ē}¬(¬K_F ϕ ∧ K_E δ). The last formula is equivalent to ¬K_{Ē}(K_E δ → K_F ϕ).

II. Set hd(u) contains only formula ¬K_F ϕ. In this case, set hd(w′) must contain formulae ¬K_{Ē}K_F ϕ and ¬⟨E⟩δ.

We visualise these two cases on the diagram in Figure 5.

Figure 5: Towards the Proof of Lemma 12.

Unfortunately, there is no way to decide upfront which of these two ways could be used to construct a consistent set hd(w′). Thus, in the proof below we attempt to concurrently construct both versions of the set hd(w′) and prove that one of the two attempts succeeds by resulting in a consistent set hd(w′). Finally, note that in both cases we must also guarantee that w ∼_F w′. To achieve this, we include in set hd(w′) all such formulae ψ that K_F ψ ∈ hd(w). In the proof below, the two different attempts to create a set hd(w′) are carried out by defining sets X and Y and proving that at least one of them is consistent. Set hd(w′) is later defined as a maximal consistent extension of either set X or set Y, depending on which one is consistent.

Proof. Consider the following two sets of formulae:

X = {¬K_{Ē}(K_E δ → K_F ϕ)} ∪ {ψ | K_F ψ ∈ hd(w)},
Y = {¬⟨E⟩δ, ¬K_{Ē}K_F ϕ} ∪ {ψ | K_F ψ ∈ hd(w)}.

Claim. Either set X or set Y is consistent.

Proof of Claim. Suppose the opposite. Thus, there are

K_F ψ_1, ..., K_F ψ_m, K_F ψ′_1, ..., K_F ψ′_n ∈ hd(w)    (2)

such that

ψ_1, ..., ψ_m ⊢ K_{Ē}(K_E δ → K_F ϕ),
ψ′_1, ..., ψ′_n ⊢ ⟨E⟩δ ∨ K_{Ē}K_F ϕ.

Then, by the Strategic Introspection axiom,

ψ_1, ..., ψ_m ⊢ K_{Ē}(K_E δ → K_F ϕ),
ψ′_1, ..., ψ′_n ⊢ K_E⟨E⟩δ ∨ K_{Ē}K_F ϕ.

Hence, by Lemma 1,

K_F ψ_1, ..., K_F ψ_m ⊢ K_F K_{Ē}(K_E δ → K_F ϕ),
K_F ψ′_1, ..., K_F ψ′_n ⊢ K_F(K_E⟨E⟩δ ∨ K_{Ē}K_F ϕ).

Thus, by assumption (2),

hd(w) ⊢ K_F K_{Ē}(K_E δ → K_F ϕ),    (3)
hd(w) ⊢ K_F(K_E⟨E⟩δ ∨ K_{Ē}K_F ϕ).

The last statement, by Lemma 5, the assumption E ⊆ F of the lemma, and the Modus Ponens inference rule, implies that hd(w) ⊢ K_E⟨E⟩δ ∨ K_F K_{Ē}K_F ϕ. Then, by the Truth axiom and propositional reasoning,

hd(w) ⊢ ⟨E⟩δ ∨ K_F K_{Ē}K_F ϕ.    (4)

Recall that set E is nonempty by the assumption of the lemma. Thus, there is at least one agent e ∈ E. Then, e ∈ F by the assumption E ⊆ F of the lemma. Hence, e ∈ F \ Ē. Thus, F ⊈ Ē. Then, K_F K_{Ē}K_F ϕ ⊢ ⟨F⟩ϕ by Lemma 3. At the same time, hd(w) ⊢ ¬⟨F⟩ϕ by the assumption ⟨F⟩ϕ ∉ hd(w) of the lemma and the maximality of the set hd(w). Then, hd(w) ⊢ ¬K_F K_{Ē}K_F ϕ by contraposition. Hence, ¬K_F K_{Ē}K_F ϕ ∈ hd(w) because set hd(w) is maximal. Thus, by propositional reasoning using statement (4),

hd(w) ⊢ ⟨E⟩δ.    (5)

At the same time, the assumption E ⊆ F of the lemma implies that (F \ E) ∪ F̄ = Ē. Then, E ∩ ((F \ E) ∪ F̄) = E ∩ Ē = ∅. Hence, the following formula

K_{E,F\E}K_{F̄,F\E}(K_E δ → K_{E,F\E}ϕ) → (⟨E⟩δ → ⟨E,F\E⟩ϕ)

is an instance of the Coalition-Informant-Adversary axiom where C = E, I = F \ E, and A = F̄. Thus, using statement (5) and propositional reasoning,

hd(w) ⊢ K_{E,F\E}K_{F̄,F\E}(K_E δ → K_{E,F\E}ϕ) → ⟨E,F\E⟩ϕ.

Note that E ∪ (F \ E) = F and F̄ ∪ (F \ E) = Ē by the assumption E ⊆ F of the lemma. In other words,

hd(w) ⊢ K_F K_{Ē}(K_E δ → K_F ϕ) → ⟨F⟩ϕ.

Then, hd(w) ⊢ ⟨F⟩ϕ by statement (3) and the Modus Ponens inference rule. Therefore, ⟨F⟩ϕ ∈ hd(w) because set hd(w) is maximal, which contradicts the assumption ⟨F⟩ϕ ∉ hd(w) of the lemma.

The claim that we just proved states that either set X or set Y is consistent.
We consider these two cases separately.

Case I: set X is consistent. Let X′ be any maximal consistent extension of the set X and let state w′ be the sequence w :: F :: X′. Note that w′ ∈ W by Definition 3 and the choice of set X, set X′, and sequence w′. Also, w ∼_F w′ by Definition 4 and the choice of sequence w′. Note that ¬K_{Ē}(K_E δ → K_F ϕ) ∈ X ⊆ X′ = hd(w′) by the choice of set X, set X′, and sequence w′. Thus, K_{Ē}(K_E δ → K_F ϕ) ∉ hd(w′) because set hd(w′) is consistent. Hence, by Lemma 10, there is a state u ∈ W such that w′ ∼_{Ē} u and K_E δ → K_F ϕ ∉ hd(u). Then, K_E δ ∈ hd(u) and K_F ϕ ∉ hd(u) because hd(u) is a maximal consistent set. Statements w′ ∼_{Ē} u and K_E δ ∈ hd(u) imply that (w′, E, δ, u) ∈ M by Definition 6. Statement K_F ϕ ∉ hd(u) implies that there is a state u′ ∈ W such that u ∼_F u′ and ϕ ∉ hd(u′) by Lemma 10.

Case II: set Y is consistent. Let Y′ be any maximal consistent extension of the set Y and let state w′ be the sequence w :: F :: Y′. As in the previous case, w′ ∈ W by Definition 3 and the choice of set Y, set Y′, and sequence w′. Also, w ∼_F w′ by Definition 4 and the choice of w′. Note that ¬K_{Ē}K_F ϕ ∈ Y ⊆ Y′ = hd(w′) by the choice of set Y, set Y′, and sequence w′. Thus, K_{Ē}K_F ϕ ∉ hd(w′) as set hd(w′) is maximal consistent. Hence, by Lemma 10, there is a state u ∈ W such that

w′ ∼_{Ē} u and K_F ϕ ∉ hd(u).    (6)

At the same time, ¬⟨E⟩δ ∈ Y ⊆ Y′ = hd(w′) by the choice of set Y, set Y′, and sequence w′. Thus, ⟨E⟩δ ∉ hd(w′) because set hd(w′) is consistent. Then, (w′, E, δ, u) ∈ M by Definition 6 and because w′ ∼_{Ē} u by statement (6). Finally, K_F ϕ ∉ hd(u) by statement (6). Therefore, by Lemma 10, there exists a state u′ ∈ W such that u ∼_F u′ and ϕ ∉ hd(u′). This concludes the proof of the lemma.

The next "truth" lemma follows from the four previous lemmas in the standard way. Due to the space constraint, we give its proof in [Naumov and Orejola, 2023].

Lemma 13. w ⊩ ϕ iff ϕ ∈ hd(w).

Theorem 2. If X ⊬ ϕ, then there is a state w of a clandestine game such that w ⊩ χ for each formula χ ∈ X and w ⊮ ϕ.

Proof. If X ⊬ ϕ, then set X ∪ {¬ϕ} is consistent. Let X_0 be any maximal consistent extension of this set and let w be the single-element sequence X_0, which is a state of the canonical game M(X_0). Then, w ⊩ χ for each formula χ ∈ X and w ⊩ ¬ϕ by Lemma 13. Therefore, w ⊮ ϕ by item 2 of Definition 2.

8 Conclusion

In this paper, we proposed a sound and complete logical system that describes the properties of the clandestine power modality ⟨C⟩ϕ. A natural generalization of our work could be a study of a partially-clandestine modality ⟨C⟩_F ϕ, which stands for "coalition C knows an operation that it can use to achieve ϕ unnoticeably to anyone outside (friendly) coalition F". It is also possible to consider a broader class of clandestine operations that achieve a goal through several consecutive clandestine actions of the given coalition. This type of multi-step operation is similar to multi-step strategies studied in know-how logics [Fervari et al., 2017; Li and Wang, 2017; Wang, 2018; Wang, 2015].

References

[Ågotnes and Alechina, 2012] Thomas Ågotnes and Natasha Alechina. Epistemic coalition logic: completeness and complexity. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 2 (AAMAS), pages 1099–1106, 2012.

[Ågotnes and Alechina, 2019] Thomas Ågotnes and Natasha Alechina. Coalition logic with individual, distributed and common knowledge. Journal of Logic and Computation, 29:1041–1069, 2019.
[Belnap and Perloff, 1990] Nuel Belnap and Michael Perloff. Seeing to it that: A canonical form for agentives. In Knowledge Representation and Defeasible Reasoning, pages 167–190. Springer, 1990.

[Cao and Naumov, 2020] Rui Cao and Pavel Naumov. Knowing the price of success. Artificial Intelligence, 284:103287, 2020.

[Fervari et al., 2017] Raul Fervari, Andreas Herzig, Yanjun Li, and Yanjing Wang. Strategically knowing how. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pages 1031–1038, 2017.

[Hansen, 2007] James H. Hansen. Soviet deception in the Cuban missile crisis. Studies in Intelligence, 46(1), 2007.

[Horty and Belnap, 1995] John F. Horty and Nuel Belnap. The deliberative STIT: A study of action, omission, ability, and obligation. Journal of Philosophical Logic, 24(6):583–644, 1995.

[Horty and Pacuit, 2017] John Horty and Eric Pacuit. Action types in STIT semantics. The Review of Symbolic Logic, 10(4):617–637, 2017.

[Horty, 2001] John F. Horty. Agency and Deontic Logic. Oxford University Press, 2001.

[Li and Wang, 2017] Yanjun Li and Yanjing Wang. Achieving while maintaining: A logic of knowing how with intermediate constraints. In Logic and Its Applications: Proceedings of the 7th Indian Conference, ICLA 2017, Kanpur, India, January 5-7, 2017, pages 154–167. Springer, 2017.

[McAuliffe, 1992] Mary S. McAuliffe. CIA documents on the Cuban missile crisis 1962, October 1992. https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Cuban%20Missile%20Crisis1962.pdf.

[Naumov and Orejola, 2023] Pavel Naumov and Oliver Orejola. Shhh! The logic of clandestine operations. arXiv:2305.07035, 2023.

[Naumov and Tao, 2017] Pavel Naumov and Jia Tao. Coalition power in epistemic transition systems. In Proceedings of the 2017 International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 723–731, 2017.

[Naumov and Tao, 2018a] Pavel Naumov and Jia Tao. Strategic coalitions with perfect recall. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

[Naumov and Tao, 2018b] Pavel Naumov and Jia Tao. Together we know how to achieve: An epistemic logic of know-how. Artificial Intelligence, 262:279–300, 2018.

[Office of the Chairman of the Joint Chiefs of Staff, 2020] Office of the Chairman of the Joint Chiefs of Staff. DOD dictionary of military and associated terms, January 2020.

[Olkhovikov and Wansing, 2019] Grigory K. Olkhovikov and Heinrich Wansing. Inference as doxastic agency. Part I: The basics of justification STIT logic. Studia Logica, 107(1):167–194, 2019.

[van der Hoek and Wooldridge, 2003] Wiebe van der Hoek and Michael Wooldridge. Cooperation, knowledge, and time: Alternating-time temporal epistemic logic and its applications. Studia Logica, 75(1):125–157, 2003.

[Wang, 2015] Yanjing Wang. A logic of knowing how. In Logic, Rationality, and Interaction, pages 392–405. Springer, 2015.

[Wang, 2018] Yanjing Wang. A logic of goal-directed knowing how. Synthese, 195(10):4419–4439, 2018.