A Game-Theoretic Perspective on Inconsistency Handling

Yakoub Salhi
Univ. Artois, CNRS, CRIL, F-62300 Lens, France
salhi@cril.fr

Abstract

This paper introduces a game-theoretic framework for restoring consistency in propositional bases. The process is modeled as an interactive dialogue between two agents: a Proponent, who seeks to isolate a unique, consistent subset by posing strategic questions, and an Opponent, who aims to obstruct that goal through adversarial responses. We show that this framework provides a foundation for quantifying the effort involved in restoring consistency, revealing a connection between this effort and entropy in information theory. Focusing on the case where consistency is achieved by isolating a single maximal consistent subset, we establish links between the structure and number of such subsets and the existence of winning strategies. Finally, we demonstrate how the quantified restoration effort can serve as a basis for measuring inconsistency.

1 Introduction

Inconsistencies in knowledge-based applications stem from multiple factors, such as conflicting information sources, noisy data, context dependency, and dynamic changes. These inconsistencies pose significant challenges for reasoning processes in artificial intelligence, especially in the field of knowledge representation and reasoning. To address them, a variety of concepts, approaches, and tools have been developed to analyze and handle inconsistency, including argumentation frameworks (e.g. [Besnard and Hunter, 2008]), inconsistency measures (e.g. [Hunter and Konieczny, 2010; Thimm, 2018]), and non-explosive inference relations (e.g. [Rescher and Manor, 1970; Priest, 1991]). A common approach to dealing with inconsistencies is to modify the initial knowledge base to obtain a consistent one that can be used with classical reasoning, a process we call consistency restoration.
This modification can involve forgetting variables [Lin and Reiter, 1994; Lang and Marquis, 2002], ignoring pieces of knowledge [Rescher and Manor, 1970; Benferhat et al., 1995], or applying belief revision and update operators [Alchourrón et al., 1985; Gärdenfors, 1992; Fermé and Hansson, 2011].

In this work, we model the process of restoring consistency in propositional logic as a dialogue between two players: the Proponent and the Opponent. The Proponent's goal is to achieve consistency by identifying a unique, consistent propositional base. To do this, she asks questions to gather the necessary information. Conversely, the Opponent, who controls the responses to these questions, aims to prevent her from achieving her goal.

Our game-theoretic framework is inspired by two key sources in the literature. The first is the work of Lorenzen [Lorenzen, 1960] on dialogue games in logic, which provides a semantics grounded in interactive dialogues (see also [Blass, 1992; Pitts and Dybjer, 1997; Fermüller and Ciabattoni, 2003]). In many formulations of game semantics, a game involves two players: the Proponent, whose goal is to prove statements, and the Opponent, who challenges the application of proof rules. The second source of inspiration is the question-based interpretation of entropy in information theory [Shannon, 1948a; Shannon, 1948b]. This interpretation links the number of questions needed to solve a search problem with the amount of information. For instance, the entropy of a system reflects the minimal number of binary (yes/no) questions required to determine an outcome [van der Lubbe and Hoeve, 1997; Cover and Thomas, 2001].

Dealing with inconsistency through our game-theoretic approach brings several advantages. By considering the problem as a game, we can analyze and develop optimal strategies for agents to resolve conflicts, such as decisions about which pieces of information to question and how to allocate limited resources.
Furthermore, introducing the Opponent adds an adversarial dimension that captures uncertainty. This allows us to study how agents can overcome this obstacle to restore consistency.

Our main contributions are as follows. We first introduce a general game-theoretic framework in which consistency restoration is modeled as a series of questions posed by the Proponent and answered by the Opponent. We then define a measure that quantifies the expected cost associated with a Proponent's strategy for achieving consistency. This provides a quantitative perspective on the process of consistency restoration. Next, we focus on a specific scenario where restoring consistency involves identifying a single maximal consistent subset of the base. In this context, we establish results that link winning strategies to the number and structure of these subsets. Finally, we show how the quantified restoration effort can serve as the basis for defining an inconsistency measure, a function that assesses the severity of conflicts in propositional bases. We study the properties of this measure and examine several related computational aspects.

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-25)

2 Preliminaries

Let us start by defining the language of classical propositional logic. We begin with a constant ⊥, denoting false, and a countably infinite set of propositional variables, denoted by Prop. The set of well-formed formulas (wffs), denoted WFF, is the smallest set that satisfies: ⊥ is a wff; every propositional variable p ∈ Prop is a wff; and if φ and ψ are wffs, then (¬φ) and (φ ∧ ψ) are wffs. The additional connectives ∨, →, and ↔ are defined, as usual, using ¬ and ∧. We use p, q, r (with possible subscripts) to denote propositional variables, and we use Greek letters such as φ, ψ, χ to denote formulas. For any syntactic entity X (e.g., a formula or a set of formulas), the set of propositional variables occurring in X is denoted by var(X).
Given a set of formulas Q, we use Q̂ to denote the set Q ∪ {¬φ : φ ∈ Q}. A propositional base (PB) is a finite subset of WFF.

A Boolean interpretation, or simply interpretation, is a function ω that assigns a truth value in {0, 1} to each formula in WFF such that ω(⊥) = 0, ω(¬φ) = 1 − ω(φ), and ω(φ ∧ ψ) = min(ω(φ), ω(ψ)). An interpretation ω is called a model of a formula φ, written ω ⊨ φ, if and only if ω(φ) = 1. A formula is consistent if it has at least one model; otherwise, it is inconsistent. Given a PB K and a formula φ, we say that K entails φ, written K ⊨ φ, if, for every interpretation ω, whenever ω ⊨ ⋀K (the conjunction of all formulas in K), it follows that ω ⊨ φ. Note that ω ⊨ ⋀∅ holds for every interpretation ω. When K = {ψ} contains a single formula, we may write ψ ⊨ φ instead of {ψ} ⊨ φ. Two formulas φ and ψ are said to be logically equivalent, written φ ≡ ψ, if both φ ⊨ ψ and ψ ⊨ φ hold. Using the compactness theorem, we obtain the following properties: a countable set of formulas S is consistent iff, for every finite subset S′ of S, ⋀S′ is consistent; and S ⊨ φ if and only if there exists a finite subset S′ of S such that S′ ⊨ φ. To indicate that a set of formulas S is consistent (resp. inconsistent), we sometimes write S ⊭ ⊥ (resp. S ⊨ ⊥).

A minimal inconsistent subset (MIS) of a PB K is a subset M ⊆ K that is inconsistent and such that, for every φ ∈ M, the set M \ {φ} is consistent. A maximal consistent subset (MCS) of K is a subset M ⊆ K that is consistent and such that, for every φ ∈ K \ M, the set M ∪ {φ} is inconsistent. We use MCS(K) to denote the set of all MCSes of K. A formula in a PB K is called free if it does not belong to any MIS of K. We denote the set of all free formulas in K by free(K). Conversely, a formula that is not free is referred to as problematic. In particular, the inconsistent formulas in a PB are always problematic. We use Inc(K) to denote the set of inconsistent formulas in K. We define the equivalence relation ≡̂ on formulas as follows: φ ≡̂ ψ iff φ ≡ ψ or φ ≡ ¬ψ.
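Since MCSes drive much of what follows, a tiny executable illustration may help. The sketch below (the helper names are illustrative, not from the paper) computes the MCSes of a small base by brute force, encoding formulas as predicates over truth assignments; the example base mirrors the one used later in Example 1.

```python
from itertools import combinations, product

# Brute-force sketch of MCS computation for a tiny propositional base.
# Real tools would use a SAT solver instead of enumerating assignments.

def consistent(formulas, variables):
    # A set of formulas is consistent iff some interpretation satisfies them all.
    return any(
        all(f(dict(zip(variables, values))) for f in formulas)
        for values in product([False, True], repeat=len(variables))
    )

def mcses(base, variables):
    # Scan subsets from largest to smallest; a consistent subset not contained
    # in an already-found (larger) consistent subset is maximal.
    found = []
    for k in range(len(base), -1, -1):
        for idxs in combinations(range(len(base)), k):
            s = frozenset(idxs)
            if any(s < m for m in found):
                continue
            if consistent([base[i] for i in s], variables):
                found.append(s)
    return found

# The base K = {p, q, ¬p ∨ ¬q, r}, encoded over the variables p, q, r:
base = [
    lambda w: w["p"],                      # p
    lambda w: w["q"],                      # q
    lambda w: not w["p"] or not w["q"],    # ¬p ∨ ¬q
    lambda w: w["r"],                      # r
]
found = mcses(base, ["p", "q", "r"])
print(sorted(sorted(m) for m in found))    # → [[0, 1, 3], [0, 2, 3], [1, 2, 3]]
```

The three index sets correspond to the MCSes {p, q, r}, {p, ¬p ∨ ¬q, r}, and {q, ¬p ∨ ¬q, r}.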
We use R+ to denote the set of positive real numbers equipped with the usual order and extended by an additional greatest element ∞.

A binary tree is a rooted, ordered tree in which each node has at most two children, referred to as the left child and the right child. A binary tree is full if every node has either 0 or 2 children. The set of leaf nodes of a tree T is denoted Lv(T). The depth of a node v is the length (i.e., the number of edges) of the path from the root of the tree to v. We denote the depth of the node v in the binary tree T by d(v, T). Further, we use d(T) to denote the depth of T.

3 Consistency-Restoration Game

3.1 Informal Description

In this work, we represent the process of restoring consistency in a PB as a dialogue between two players: the Proponent and the Opponent. The game is as follows: the Proponent aims to restore consistency by identifying a unique, consistent PB; she does this by asking questions to gather information. The Opponent controls the responses to the Proponent's questions and tries to prevent the Proponent from achieving her goal. The Proponent should choose questions that will most effectively eliminate uncertainties. Additionally, with a limited effort budget, the Proponent needs to prioritize the most informative questions. The Opponent's responses shape the Proponent's deductions; indeed, we consider that the Opponent cannot contradict herself.

To illustrate our framework, consider a medical researcher (the Proponent) investigating a patient's condition. The researcher's PB initially contains conflicting information about the patient's symptoms, possible diseases, and test results. This inconsistency arises from contradictory patient data. The researcher conducts medical tests (questions) to resolve these conflicts. The Opponent provides consistent test results grounded in the true state of the patient's condition.
We interpret the test answers as originating from the Opponent to emphasize that these responses may present difficulties for the Proponent. However, what happens if the Proponent cannot access the results of all these tests due to constraints such as a limited laboratory budget? In our framework, we refer to this constraint as the effort budget, where each question is assigned a specific weight representing the cost or effort required to answer it. In addition, what are the potential consistent PBs associated with the responses provided by the Opponent? To address this question, we introduce the concept of a possibility function, denoted Λ. This function captures the range of possible outcomes and their corresponding consistent PBs derived from the Opponent's answers. A winning strategy for the Proponent is a systematic plan that outlines how to handle every possible response from the Opponent, ensuring the identification of a single consistent PB without exceeding the allocated effort budget.

3.2 Formal Description

Game Input. We define a game input (GI) as a tuple (K, Q, E, b), where: K is the considered PB; Q is a countable set of possible questions, each being a wff; E : Q → R+ is a weight function assigning a number in R+ to each question in Q (E(q) corresponds to the effort required to answer q); and b ∈ R+ is the effort budget: an upper bound on the total weight of the questions that the Proponent can ask.

Game. A consistency-restoration game, or simply a game, associated with the GI (K, Q, E, b) involves two players: the Proponent (P) and the Opponent (O). The player P tries to identify a unique consistent PB by asking questions from Q, while O tries to prevent P from succeeding. At each step: (1) P selects a formula φ from Q, and (2) O responds by choosing either φ or its negation ¬φ.
More precisely, a game G of (K, Q, E, b) is a possibly infinite alternating sequence of moves of the form P: φ1, O: ψ1, P: φ2, O: ψ2, . . . , P: φk, O: ψk, . . . , where, for each index i, φi ∈ Q and ψi ∈ {φi, ¬φi}. In the sequel, we use the following notational conventions for a game G: α(G) denotes the set of O's answers ψi; κ(G) denotes the set of P's questions φi; and Ξ(G) denotes the total effort Σi E(φi). We impose the following condition on all games:

Opponent's Consistency (OC): For any game G, the set of responses from O is consistent; that is, α(G) ⊭ ⊥.

Possibility Function. A player's success in the game is determined by a mechanism called the Q-possibility function Λ. This function associates each PB K and each consistent set of formulas A ⊆ Q̂ with a set of consistent PBs. In essence, Λ(K, A) identifies all possible consistent PBs that P can have and that are compatible with O's responses A. This function is crucial because it helps P determine which beliefs she can hold without conflicts in light of O's responses. It guides her toward strategies that lead to a successful outcome. We require the following rationality postulates on Q-possibility functions:

Consistency With Responses (CWR): For every PB K and every consistent set of responses A ⊆ Q̂, S ∪ A is consistent for any S ∈ Λ(K, A).

Identity on Consistent Propositional Bases (ICPB): For every consistent PB K, Λ(K, ∅) = {K}.

Dominance Between Answers (DBA): For every PB K and every pair of consistent sets A, A′ ⊆ Q̂, if ⋀A ⊨ ⋀A′, then Λ(K, A) ⊆ Λ(K, A′).

CWR ensures that O's answers do not contradict any consistent subset that P considers possible. ICPB says that if K itself is already consistent, asking no questions must leave K unchanged as the unique possibility. DBA states that having more conclusive answers (⋀A ⊨ ⋀A′) cannot enlarge the set of possibilities. In other words, stronger sets of answers reduce or maintain, but never expand, the set of possible consistent bases.
[Figure 1: A P-Strategy of a Consistency-Restoration Game]

Proponent Λ-Wins. The player P Λ-wins the game G of (K, Q, E, b) if the game is finite, adheres to the effort budget, and enables P to reduce the possible consistent PBs that are compatible with O's responses to exactly one, according to the Q-possibility function Λ. Formally, P Λ-wins G when G is finite, Ξ(G) ≤ b, and |Λ(K, α(G))| = 1.

Example 1. Consider the PB K = {p, q, ¬p ∨ ¬q, r} and the set of questions Q = K. The effort function E is simply defined by E(φ) = 1 for every φ ∈ Q, and the effort budget is b = 2. The following sequence is a game for (K, Q, E, b): P: ¬p ∨ ¬q, O: ¬p ∨ ¬q, P: p, O: ¬p. If we define a Q-possibility function Λ for any set of O's responses by Λ(K, A) = {M ∈ MCS(K) : M ∪ A ⊭ ⊥}, we find that the only MCS of K that is consistent with the responses {¬p ∨ ¬q, ¬p} is {q, ¬p ∨ ¬q, r}. This means that P Λ-wins this game.

3.3 Winning Strategies

To develop a strategy for player P, we must consider how P reacts to each possible response from O. More precisely, for each question φ that P selects, we need to consider both scenarios: when O affirms φ and when O negates φ. This explains why we represent P's strategy as a full binary tree.

Definition 1 (P-Strategy). A P-strategy of a tuple (K, Q, E, b) is a pair σ = ⟨T, Φ⟩, where: T is a full binary tree in which every branch is finite; Φ is a function that assigns to each internal node of T a formula from Q; and, for each complete branch (i.e., path from the root to a leaf) (v1, . . . , vk) in T, it holds that Σ_{i=1}^{k−1} E(Φ(vi)) ≤ b.

We then define a context function that captures O's responses accumulated along each branch of the tree.

Definition 2 (Context Function Cσ).
Given a P-strategy σ = ⟨T, Φ⟩, the function Cσ(v) is defined recursively for each node v in T as follows: if v is the root node, then Cσ(v) = ∅; if v is the left child of a node v′, then Cσ(v) = Cσ(v′) ∪ {¬Φ(v′)}; and if v is the right child of a node v′, then Cσ(v) = Cσ(v′) ∪ {Φ(v′)}.

Definition 3 (Λ-Winning P-Strategy). Given a Q-possibility function Λ, a Λ-winning P-strategy is a P-strategy σ where, for any leaf node l, |Λ(K, Cσ(l))| = 1 or Cσ(l) is inconsistent. A Λ-winning P-strategy σ is said to be pure if, for any leaf node l, |Λ(K, Cσ(l))| = 1.

Note that a Λ-winning P-strategy cannot consist of a single node: if no questions are needed, the corresponding strategy tree must be empty. Given a P-strategy σ, we use questions(σ) to denote the set of questions occurring in σ.

In Figure 1, Subfigure (b) on the right illustrates a tree representing a P-strategy, while Subfigure (a) on the left displays the corresponding player moves. To visually represent a P-strategy σ = ⟨T, Φ⟩, we depict each internal node of the tree T with its associated formula; each leaf node l is labeled with the number |Λ(K, Cσ(l))| or with ⊥, depending on whether Cσ(l) is consistent or not.

Example 2. Consider again the GI g = (K, Q, E, b) and the Q-possibility function Λ described in Example 1. [Three P-strategy trees of g, labeled (a), (b), and (c), are depicted here.] The trees (b) and (c) correspond to Λ-winning P-strategies, with tree (c) being the unique pure strategy.

Whether the tree in a P-strategy necessarily contains a finite number of nodes is a natural consideration. This is, in fact, ensured by König's Lemma, which states that every finitely branching infinite tree contains at least one infinite branch [Rival, 2012]: since every branch of a P-strategy is finite, its tree must be finite.

The following theorem establishes that reasoning about the effort required for consistency restoration can be limited to pure Λ-winning P-strategies.

Theorem 1.
For every GI g and every Q-possibility function Λ, if g admits a Λ-winning P-strategy σ = (T, Φ), then g also admits a pure Λ-winning P-strategy σ′ = (T′, Φ′) such that there exists an injective function f from Lv(T′) to Lv(T) satisfying the following condition: for every l ∈ Lv(T′), Cσ′(l) ⊆ Cσ(f(l)).

The next proposition shows that in a pure Λ-winning P-strategy, internal nodes on the same branch have distinct, nonequivalent labels (w.r.t. ≡̂), and each label is both consistent and not a tautology.

Proposition 1. Let g = (K, Q, E, b) be a game input, Λ a Q-possibility function, and σ = (T, Φ) a pure Λ-winning P-strategy for g. Then, the following properties hold: 1. For any two distinct internal nodes v and v′ on the same branch in T, we have Φ(v) ≢̂ Φ(v′). 2. For every internal node v in T, Φ(v) is both consistent and not a tautology.

In the sequel, Λ-PWS will refer to a pure Λ-winning P-strategy.

4 Quantifying Consistency-Restoration Effort

In this section, we show how to quantify the consistency-restoration effort associated with a given strategy by interpreting it as the Proponent's expected cost to secure a win. We focus on GIs with an unbounded effort budget, i.e., b = ∞. Consider a GI g = (K, Q, E, ∞) and a Λ-PWS σ = (T, Φ) for g. For each node v in the strategy tree T, we define E(v) ∈ R+ inductively as follows: if v is the only node in T, then E(v) = 0; if v is the root node of T, then E(v) = E(Φ(v)); otherwise, if v is a child of v′, then E(v) = E(Φ(v)) + E(v′). In particular, for any leaf node l in T, the value E(l) corresponds to Ξ(G), where G is the game defined by the complete path from the root to l. Intuitively, E(l) is the total effort expended along the branch leading to l. To compute the expected effort of the strategy σ, assume each question is equally likely to receive an affirmative or a negative response from O.
Thus, each binary choice leads to two equally probable outcomes, and the probability of following the branch ending in a leaf l of depth d(l, T) is 1/2^d(l,T). The expected effort value of σ is then given by:

EV(σ) = Σ_{l ∈ Lv(T)} (1/2^d(l,T)) · E(l).

This expected effort EV(σ) can be interpreted as the average cost of ensuring P's victory, taking into account all possible response patterns from O. Quantifying the effort required to restore consistency offers several practical benefits. For example, it enables a direct comparison between different sets of questions, which helps to identify and select those that are most effective in achieving consistency at minimal cost. It also facilitates resource management: the effort budget can be adjusted based on the expected effort value. Furthermore, this approach provides a foundation for defining inconsistency measures, as we will illustrate later.

It is noteworthy that the effort value (EV) can be linked to the concept of entropy in information theory [Shannon, 1948a; Shannon, 1948b]. Recall that, if p1, p2, . . . , pn are probabilities such that Σ_{i=1}^n pi = 1, the entropy H(p1, p2, . . . , pn) is defined as H(p1, p2, . . . , pn) = Σ_{i=1}^n pi log(1/pi), where the base of the logarithm is chosen based on the application; here, we adopt base 2. Now, if E is defined such that E(φ) = 1 for each question φ, we can express the effort value as:

EV(σ) = H(1/2^d(l1,T), . . . , 1/2^d(ln,T)), where Lv(T) = {l1, . . . , ln}.

The connection between EV and entropy arises from viewing each question as analogous to a bit of information. This perspective aligns the notion of effort required for consistency restoration with the quantification of information: each question reduces uncertainty about the consistent knowledge base that P should consider.
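The definitions above can be checked numerically. In the following sketch (the tree encoding and helper names are illustrative, not the paper's), a strategy tree is a nested tuple carrying the cost E(Φ(v)) of each question; with unit costs, EV(σ) coincides with the entropy of the leaf-probability distribution.

```python
from math import log2

# A strategy tree is encoded as None (a leaf) or (question_cost, left, right);
# question_cost stands in for E(Phi(v)). Encoding and names are illustrative.

def leaves(tree, effort=0.0, depth=0):
    # Yield (E(l), d(l, T)) for every leaf l: accumulated effort and depth.
    if tree is None:
        yield effort, depth
    else:
        cost, left, right = tree
        yield from leaves(left, effort + cost, depth + 1)
        yield from leaves(right, effort + cost, depth + 1)

def expected_effort(tree):
    # EV(sigma) = sum over leaves l of E(l) / 2^d(l,T).
    return sum(e / 2 ** d for e, d in leaves(tree))

def entropy(probs):
    # H(p1, ..., pn) = sum_i p_i * log2(1 / p_i).
    return sum(p * log2(1 / p) for p in probs)

# Unit-cost strategy asking one question, then (on one answer) a second one:
# leaves sit at depths 1, 2, 2, so EV = 1/2 + 2/4 + 2/4 = 1.5.
sigma = (1, None, (1, None, None))
ev = expected_effort(sigma)
h = entropy([1 / 2 ** d for _, d in leaves(sigma)])
print(ev, h)  # with E = 1, EV(sigma) equals the entropy of the leaf distribution
```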
Notably, this view resonates with existing interpretations of entropy in the literature, where entropy is seen as the minimal number of binary questions needed to identify an outcome (e.g., see [van der Lubbe and Hoeve, 1997; Cover and Thomas, 2001]).

5 Identifying the Appropriate MCS

In this section, we further simplify our setting by imposing two restrictions: 1. We restrict P's questions to the elements initially present in the given propositional base K, i.e., Q = K. 2. We adopt the uniform effort function E1, defined as E1(φ) = 1 for every φ ∈ K. Assigning a uniform cost of 1 means that the effort in a game corresponds to the number of questions asked. Under these restrictions, we write (K, b) for a GI instead of (K, K, E1, b). Additionally, we focus on a specific K-possibility function Λmc defined as:

Λmc(K, A) = {M ∈ MCS(K) : M ∪ A ⊭ ⊥}.

It is straightforward to see that Λmc satisfies the properties CWR, ICPB, and DBA. By focusing on these simplified conditions, we obtain a baseline scenario that is conceptually clear. Since all questions come from K and have the same cost, we remove extraneous variables that could complicate the analysis. Moreover, using the K-possibility function Λmc links our game-theoretic framework to well-known inconsistency-handling approaches based on MCSes [Rescher and Manor, 1970; Brewka, 1989; Konieczny et al., 2019]. Several results developed in this section will serve in the analysis of our proposed inconsistency measure.

Theorem 2. For every Λmc-PWS σ = (T, Φ) of a GI (K, b), the number of leaf nodes of T is equal to |MCS(K)|.

Proof. We first show that for every M ∈ MCS(K), there exists a leaf node l in T such that M is the unique MCS satisfying M ∪ Cσ(l) ⊭ ⊥. This implies that the number of leaf nodes in T is at least |MCS(K)|. Let M be an MCS of K and ω a model of M. Define R = {φ ∈ K̂ : ω ⊨ φ}. Since ω is a model of both M and R, we have M ∪ R ⊭ ⊥. Furthermore, we know that there exists a leaf node l such that Cσ(l) ⊆ R.
Because |S| = 1, where S = {M′ ∈ MCS(K) : M′ ∪ Cσ(l) ⊭ ⊥}, it follows that S = {M}. Next, assume, for the sake of contradiction, that there exists an MCS M of K and two distinct leaf nodes l and l′ such that both M ∪ Cσ(l) ⊭ ⊥ and M ∪ Cσ(l′) ⊭ ⊥. By definition, there must exist a formula φ ∈ K such that φ ∈ Cσ(l) \ Cσ(l′) and ¬φ ∈ Cσ(l′) \ Cσ(l). Consequently, we have φ ∈ M and φ ∉ M, leading to a contradiction.

Since T is a full binary tree in every Λmc-PWS σ = (T, Φ), we derive the next corollary.

Corollary 1. For every Λmc-PWS σ = (T, Φ) of a GI (K, b), the following properties hold: 1. The number of nodes of T is equal to 2·|MCS(K)| − 1. 2. d(T) ≥ ⌈log2(|MCS(K)|)⌉.

Using the foregoing corollary, we obtain the following result.

Theorem 3. For every GI g = (K, b) with b < ⌈log2(|MCS(K)|)⌉, g does not admit any Λmc-PWS.

The next proposition states that in any Λmc-PWS, no question asked during the strategy corresponds to a free formula.

Proposition 2. Let g = (K, b) be a GI. For every Λmc-PWS σ = (T, Φ) of g, it holds that questions(σ) ∩ free(K) = ∅.

Proof. Let σ = (T, Φ) be a Λmc-PWS of g. Assume, for the sake of contradiction, that there exists an internal node v in T such that Φ(v) = φ is free in K. Since φ is free, φ ∈ M holds for every M ∈ MCS(K). Thus, no MCS M satisfies M ∪ {¬φ} ⊭ ⊥. However, as v is an internal node associated with φ, there exists a leaf l such that ¬φ ∈ Cσ(l). Consequently, |{M ∈ MCS(K) : M ∪ Cσ(l) ⊭ ⊥}| = 0 holds. This contradicts the fact that σ is pure.

Now, we present a theorem that characterizes types of game inputs that always admit a Λmc-PWS.

Theorem 4. For every GI g = (K, b) with b ≥ |(K \ S)/≡̂|, where S = free(K) ∪ Inc(K), g admits a pure Λmc-winning P-strategy. In particular, a Λmc-PWS always exists when there is no budget limit, i.e., b = ∞.

The theorem below describes the common structure of Λmc-PWSes for MISes.

Theorem 5. Let K be an MIS. Then, (K, b) admits a pure Λmc-PWS σ = (T, Φ) iff b ≥ |K| − 1 and σ satisfies the following properties: 1.
The left child of every internal node in T is a leaf node; 2. the depth of T is |K| − 1; 3. for every two distinct internal nodes v and v′ in T, Φ(v) ≠ Φ(v′).

6 Inconsistency Measurement

Inconsistency measures are designed to quantify the amount of contradiction present in a knowledge base. Widely studied in the literature, these measures often use postulate-based frameworks to capture different aspects of inconsistency (e.g., see [Hunter and Konieczny, 2010; Ammoura et al., 2017; Thimm, 2018; Bona et al., 2019; Besnard and Grant, 2020; Corea et al., 2021]).

Definition 4 (Inconsistency Measure). An inconsistency measure is a mapping I from PBs to R+ that satisfies the following property for every PB K:

CONSISTENCY: I(K) = 0 if and only if K is consistent.

Beyond CONSISTENCY, the literature proposes various other postulates (e.g., [Besnard, 2014; Thimm, 2018]) to further characterize inconsistency measures. Among these, we focus on:

FREE FORMULA: If φ ∈ free(K), then I(K \ {φ}) = I(K).

MONOTONICITY: If K ⊆ K′, then I(K) ≤ I(K′).

EQUIVALENCE: If φ ≡ ψ, then I(K ∪ {φ}) = I(K ∪ {ψ}).

EQUAL CONFLICT: If M and M′ are MISes of K with |M| = |M′|, then I(M) = I(M′).

ATTENUATION: If M and M′ are MISes such that |M| > |M′|, then I(M) < I(M′).

6.1 A Game-based Measure

Building on the restrictions described in Section 5, we define the inconsistency measure I^w_mc as follows:

I^w_mc(K) = min{EV(σ) : σ ∈ PWinS(g, Λmc)} + |Inc(K)|,

where g = (K, ∞) and PWinS(g, Λmc) denotes the set of all Λmc-PWSes for g. This function mainly represents the minimal expected effort required to restore consistency. Additionally, assigning a cost of 1 to each inconsistent formula aligns with the approach used in the MCS-based measure introduced in [Grant and Hunter, 2011].
By observing that every inconsistent PB either contains at least one inconsistent formula or has at least two MCSes, we obtain the following result.

Proposition 3. The function I^w_mc is an inconsistency measure.

In the proposition below, we focus on the specific case where the PB is an MIS. This result primarily follows from the structure of Λmc-PWSes described in Theorem 5.

Proposition 4. If K is an MIS, then I^w_mc(K) = 2 − 1/2^(n−2), where n = |K|.

In addition to satisfying CONSISTENCY, the measure I^w_mc inherits various properties from our game framework:

FREE FORMULA: It follows from the fact that no winning strategy needs to query free formulas (Proposition 2).

EQUIVALENCE: It follows from the fact that logically equivalent formulas can be substituted without changing the set of consistent subsets.

EQUAL CONFLICT: If two MISes M and M′ have the same cardinality, then I^w_mc(M) = I^w_mc(M′) (Proposition 4). This reflects the equal cost of resolving conflicts of the same size.

Note that MONOTONICITY does not hold in general, because adding formulas to K changes both the base itself and the question set, which can reduce the cost of restoring consistency. For example, let K = {p, ¬p, q, ¬q}. Any Λmc-PWS for (K, ∞) must ask at least two questions in each branch, leading to I^w_mc(K) = 2. However, if we extend K by adding S = {p ∨ q, p ∨ ¬q, ¬p ∨ q, ¬p ∨ ¬q}, then I^w_mc can actually decrease. Indeed, a strategy that exclusively queries these newly added formulas achieves I^w_mc(K ∪ S) = 1/2 + 2/4 + 3/8 + 3/8 = 1.75. Thus, extending K has reduced the minimal expected effort from 2 to 1.75, showing the failure of MONOTONICITY.

Moreover, by applying Proposition 4, we observe that I^w_mc exhibits a property opposite to ATTENUATION: if M and M′ are two MISes such that |M| > |M′|, then I^w_mc(M) > I^w_mc(M′). Intuitively, resolving larger MISes requires identifying the correct MCS among a greater number of conflicting formulas, which increases the expected effort.
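The numbers in this counterexample, and the closed form of Proposition 4, can be verified mechanically. In the sketch below (an illustrative encoding: a unit-cost strategy is represented only by the list of its leaf depths), the strategy shapes are those described above.

```python
# EV of a unit-cost strategy, given only its leaf depths d(l, T):
# EV = sum over leaves of d(l, T) / 2^d(l, T).  (Illustrative helper name.)

def ev(leaf_depths):
    return sum(d / 2 ** d for d in leaf_depths)

# K = {p, ¬p, q, ¬q} has four MCSes; every strategy asks two questions on
# every branch, i.e., a balanced tree with all four leaves at depth 2:
assert ev([2, 2, 2, 2]) == 2.0

# Over K ∪ {p∨q, p∨¬q, ¬p∨q, ¬p∨¬q}, querying only the added clauses gives a
# caterpillar tree with leaf depths 1, 2, 3, 3 (a negated clause immediately
# pins down the assignment of p and q, and hence the MCS):
assert ev([1, 2, 3, 3]) == 1.75

# Caterpillar trees with n leaves (depths 1, 2, ..., n-1, n-1) give the
# closed form 2 - 1/2^(n-2) appearing in Proposition 4 and Corollary 2:
for n in range(2, 12):
    depths = list(range(1, n - 1)) + [n - 1, n - 1]
    assert abs(ev(depths) - (2 - 1 / 2 ** (n - 2))) < 1e-12
```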
6.2 Computational Aspects

To compute I^w_mc, the restoration effort value of a strategy can be determined from the shape of its associated tree and the number of inconsistent formulas. In what follows, for a given full binary tree T, we use EV(T) to denote the value Σ_{l ∈ Lv(T)} d(l,T)/2^d(l,T). Now, we identify the tree structure that minimizes the effort value.

Theorem 6. Let T be a finite full binary tree such that each internal node has a child that is a leaf node. Then, for every full binary tree T′ with |Lv(T′)| = |Lv(T)|, it holds that EV(T) ≤ EV(T′).

Proof (Sketch). The proof proceeds by induction on k = |Lv(T)|.

Base Case (k = 1): Straightforward, as there is only one possible binary tree shape, consisting of a single node.

Inductive Step: Assume the property holds for all k ≤ m, and consider k = m + 1. Let T′ be a full binary tree such that |Lv(T′)| = |Lv(T)|. Define Tr as the tree obtained by removing the two leaves of depth m from T. Similarly, define T′r as the tree obtained from T′ by removing any two leaves having the same parent. By the induction hypothesis, EV(Tr) ≤ EV(T′r). Now, note that EV(T) = EV(Tr) + 1/2^(m−1) and EV(T′) = EV(T′r) + 1/2^(d−1), where d is the depth of the removed leaves in T′. Since d ≤ m and EV(Tr) ≤ EV(T′r), it follows that EV(T) ≤ EV(T′).

The following corollary is mainly a consequence of the foregoing theorem and Theorem 2.

Corollary 2. Let K be a PB. Then, I^w_mc(K) ≥ 2 − 1/2^(n−2) + |Inc(K)|, where n = |MCS(K)|.

The proposition below outlines a scenario where the previously established bound is attained.

Proposition 5. Let K be a PB such that for every M ∈ MCS(K), M \ ⋃(MCS(K) \ {M}) ≠ ∅. Then, I^w_mc(K) = 2 − 1/2^(n−2) + |Inc(K)|, where n = |MCS(K)|.
Algorithm 1: Greedy Algorithm for Approximating I^w_mc
Require: A PB K and the set of its MCSes S = MCS(K)
1: function BUILDTREE(S)
2:   if |S| = 1 then
3:     return a leaf node identifying the single MCS in S
4:   end if
5:   Let φ = argmin_{φ ∈ ⋃S \ ⋂S} G(φ, S)
6:   Recursively build the left and right subtrees:
       Tl ← BUILDTREE(S¬φ); Tr ← BUILDTREE(Sφ)
7:   return the tree (φ, Tl, Tr)
8: end function
9: BUILDTREE(S)

Proof. To prove the result, we construct a Λmc-PWS σ = (T, Φ) such that each internal node has a child that is a leaf node. Let MCS(K) = {M1, . . . , Mn}. For each Mi, pick φi ∈ Mi \ ⋃(MCS(K) \ {Mi}). We define T as a full binary tree of depth n − 1 in which each right child of an internal node is a leaf. Then, we define Φ as follows: for every internal node v of depth k, Φ(v) = φ_{k+1}. We can easily check that σ = (T, Φ) is a Λmc-PWS. In particular, for every internal node v, there exists exactly one MCS containing Φ(v), which explains why its right child is a leaf.

Since we use the effort function E1, and due to the connection between the consistency-restoration effort and entropy, the following relationship holds for every Λmc-PWS σ = (T, Φ): EV(σ) = H(1/2^d(l1,T), . . . , 1/2^d(ln,T)), where Lv(T) = {l1, . . . , ln}. Recall that the maximum value of H(p1, p2, . . . , pn) is achieved when p1 = p2 = · · · = pn (e.g., see [Cover and Thomas, 2001]). Consequently, the maximum value of EV occurs when d(l, T) = ⌈log2(n)⌉ for each leaf node l ∈ Lv(T). Thus, the tree shape that maximizes the effort value corresponds to a balanced tree. Based on this observation, we derive the following proposition.

Proposition 6. Let K be a PB. Then, the inequality I^w_mc(K) ≤ ⌈log2(|MCS(K)|)⌉ + |Inc(K)| holds.

Finding the optimal strategy for computing I^w_mc is a challenging problem.
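To make the difficulty concrete, the following sketch computes the minimal expected effort exactly by exhaustive search (the names are illustrative; it assumes the Section 5 setting, Q = K with unit costs, where a question φ ∈ K splits the candidate MCSes into those containing φ and those not). It uses the recursion EV(T) = 1 + (EV(Tl) + EV(Tr))/2, which follows from the definition of EV for unit costs, and its memoized search is exponential in the number of candidate splits.

```python
from functools import lru_cache

# Exhaustive search for the minimal expected effort (unit costs, Q = K).
# MCSes are encoded as frozensets of formula indices; for an MCS M and a
# question i, M ∪ {φ_i} is consistent iff i ∈ M, so a question partitions
# the candidate set by membership. Illustrative sketch, not from the paper.

def min_ev(candidate_mcses, questions):
    @lru_cache(maxsize=None)
    def best(S):
        if len(S) <= 1:
            return 0.0            # a single candidate MCS remains: P wins
        options = []
        for i in questions:
            yes = frozenset(m for m in S if i in m)
            no = S - yes
            if yes and no:        # the question must actually discriminate
                # EV(T) = 1 + (EV(T_yes) + EV(T_no)) / 2 under unit costs
                options.append(1 + (best(yes) + best(no)) / 2)
        return min(options)
    return best(frozenset(candidate_mcses))

# K = {p, ¬p, q, ¬q}: the MCSes are the four literal pairs (indices 0..3).
mcses = [frozenset({0, 2}), frozenset({0, 3}),
         frozenset({1, 2}), frozenset({1, 3})]
assert min_ev(mcses, range(4)) == 2.0   # matches the value used for I^w_mc(K)
```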
By leveraging the connection between this problem, decision trees, and information theory, we propose a greedy approach to approximating the inconsistency value. This approach is inspired by the decision tree learning algorithm ID3 [Quinlan, 1986; Mitchell, 1997]. Intuitively, each MCS can be viewed as a distinct class, while the formulas of the propositional base serve as the features. To define our algorithm, we first introduce the concept of information gain, which quantifies the reduction in entropy when the set of MCSes is split using a given formula φ. The entropy associated with a set of MCSes S is defined as:

H(S) = −∑_{i=1}^{|S|} (1/|S|) log2(1/|S|) = log2(|S|),

where we assume that all elements of S are equally probable; note that H(S) is only defined for non-empty S. Given a formula φ and a set of MCSes S, the information gain G(φ, S) is defined as:

G(φ, S) = H(S) − ( (|S_φ|/|S|) H(S_φ) + (|S_¬φ|/|S|) H(S_¬φ) ),

where S_φ = {M ∈ S : φ ∈ M} and S_¬φ = {M ∈ S : φ ∉ M}. Consider the function f(i) = (n − i) log2(n − i) + i log2(i) for 1 < i < n − 1, which is strictly convex and attains its global minimum at i = n/2. Writing n = |S| and i = |S_φ|, the weighted entropy after the split equals f(i)/n, so maximizing G(φ, S) rewards balanced splits (exactly the goal of the classical ID3 algorithm). In our setting, however, balanced partitions are not desirable; we instead seek highly unbalanced splits. Consequently, we choose to minimize G(φ, S) when selecting the next formula.

The greedy approach used in Algorithm 1 can be summarized as follows: at each step, select the formula φ that minimizes G(φ, S); partition S into the two subsets S_φ and S_¬φ; then recursively repeat this process on S_φ and S_¬φ until each leaf of the tree corresponds to a single MCS. Observe that, since the selected formula φ at each step belongs to ⋃S \ ⋂S, it holds that S_φ ≠ ∅ and S_¬φ ≠ ∅.

In the next proposition, we identify a scenario in which Algorithm 1 allows us to compute the exact value of I^w_mc.

Proposition 7.
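The entropy, the gain, and the greedy formula choice can be sketched as follows (the helper names and the example MCSes are assumptions for illustration, not from the paper):

```python
import math

def entropy(s):
    # H(S) = log2 |S| under a uniform distribution over the MCSes in S.
    return math.log2(len(s))

def gain(phi, mcses):
    # G(phi, S) = H(S) - (|S_phi|/|S| H(S_phi) + |S_notphi|/|S| H(S_notphi))
    s_phi = [m for m in mcses if phi in m]
    s_not = [m for m in mcses if phi not in m]
    n = len(mcses)
    h_after = 0.0
    for part in (s_phi, s_not):
        if part:  # H is only defined on non-empty sets
            h_after += len(part) / n * entropy(part)
    return entropy(mcses) - h_after

def pick_formula(mcses):
    # Greedy choice of Algorithm 1: among formulas in (union of S) minus
    # (intersection of S), pick one minimizing the gain, i.e. the most
    # unbalanced split; both sides of the split are guaranteed non-empty.
    union = set().union(*mcses)
    inter = set.intersection(*mcses)
    return min(union - inter, key=lambda phi: gain(phi, mcses))

# Hypothetical example with four MCSes: "b" yields a balanced 2-vs-2 split
# (gain 1.0), while the other formulas yield 1-vs-3 splits (lower gain),
# so the greedy choice avoids "b".
mcses = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"c", "e"}]
assert pick_formula(mcses) != "b"
```

Here gain("b") = log2(4) − 1 = 1, whereas a 1-vs-3 split gives 2 − (3/4)·log2(3) ≈ 0.81, so minimizing the gain selects an unbalanced split, as the text prescribes.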
Suppose K admits a Λmc-PWS σ whose tree T ensures that each internal node has at least one child that is a leaf. Then Algorithm 1 computes one of the optimal Λmc-PWSes.

A notable drawback of the greedy algorithm is its reliance on having all MCSes available, and computing them is generally recognized as a highly challenging task.

7 Conclusion and Perspectives

We developed a game-theoretic framework for restoring consistency in propositional bases. It represents the process as a strategic interaction between a Proponent, who aims to isolate a unique consistent base, and an Opponent, who controls the responses. By assigning a cost to each question, the framework reflects the notion of limited resources, which facilitates the definition of measures that evaluate the expected effort required to restore consistency. Drawing on this quantification of consistency restoration effort, we showed how to derive a new inconsistency measure.

This framework provides numerous opportunities for future work. First, analyzing the complexity of identifying optimal strategies and quantifying restoration effort represents a critical direction for future research. Second, the foundational concepts could be adapted to more expressive logics, such as description logics and modal logics. Third, exploring variants that incorporate non-binary questions presents interesting possibilities for further study.

References

[Alchourrón et al., 1985] Carlos E. Alchourrón, Peter Gärdenfors, and David Makinson. On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. Journal of Symbolic Logic, 50(2):510–530, 1985.
[Ammoura et al., 2017] Meriem Ammoura, Yakoub Salhi, Brahim Oukacha, and Badran Raddaoui. On an MCS-based inconsistency measure. International Journal of Approximate Reasoning, 80:443–459, 2017.
[Benferhat et al., 1995] Salem Benferhat, Didier Dubois, and Henri Prade.
How to Infer from Inconsistent Beliefs without Revising? In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, IJCAI 95, Montréal, Québec, Canada, 2 Volumes, pages 1449–1457. Morgan Kaufmann, 1995.
[Besnard and Grant, 2020] Philippe Besnard and John Grant. Relative Inconsistency Measures. Artificial Intelligence, 280:103231, 2020.
[Besnard and Hunter, 2008] Philippe Besnard and Anthony Hunter. Elements of Argumentation. MIT Press, 2008.
[Besnard, 2014] Philippe Besnard. Revisiting Postulates for Inconsistency Measures. In Eduardo Fermé and João Leite, editors, Logics in Artificial Intelligence - 14th European Conference, JELIA 2014, Funchal, Madeira, Portugal, volume 8761 of Lecture Notes in Computer Science, pages 383–396. Springer, 2014.
[Blass, 1992] Andreas Blass. A Game Semantics for Linear Logic. Annals of Pure and Applied Logic, 56(1-3):183–220, 1992.
[Bona et al., 2019] Glauber De Bona, John Grant, Anthony Hunter, and Sébastien Konieczny. Classifying Inconsistency Measures Using Graphs. Journal of Artificial Intelligence Research, 66:937–987, 2019.
[Brewka, 1989] Gerhard Brewka. Preferred Subtheories: An Extended Logical Framework for Default Reasoning. In N. S. Sridharan, editor, Proceedings of the 11th International Joint Conference on Artificial Intelligence, IJCAI 89, Detroit, MI, USA, pages 1043–1048. Morgan Kaufmann, 1989.
[Corea et al., 2021] Carl Corea, Matthias Thimm, and Patrick Delfmann. Measuring Inconsistency over Sequences of Business Rule Cases. In Meghyn Bienvenu, Gerhard Lakemeyer, and Esra Erdem, editors, Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning, KR 2021, Online event, pages 655–660, 2021.
[Cover and Thomas, 2001] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 2001.
[Fermé and Hansson, 2011] Eduardo L. Fermé and Sven Ove Hansson. AGM 25 Years - Twenty-Five Years of Research in Belief Change.
Journal of Philosophical Logic, 40(2):295–331, 2011.
[Fermüller and Ciabattoni, 2003] Christian G. Fermüller and Agata Ciabattoni. From Intuitionistic Logic to Gödel-Dummett Logic via Parallel Dialogue Games. In 33rd IEEE International Symposium on Multiple-Valued Logic (ISMVL 2003), Tokyo, Japan, pages 188–196. IEEE Computer Society, 2003.
[Gärdenfors, 1992] Peter Gärdenfors, editor. Belief Revision. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1992.
[Grant and Hunter, 2011] John Grant and Anthony Hunter. Measuring Consistency Gain and Information Loss in Stepwise Inconsistency Resolution. In Weiru Liu, editor, Symbolic and Quantitative Approaches to Reasoning with Uncertainty - 11th European Conference, ECSQARU 2011, Belfast, UK, volume 6717 of Lecture Notes in Computer Science, pages 362–373. Springer, 2011.
[Hunter and Konieczny, 2010] Anthony Hunter and Sébastien Konieczny. On the measure of conflicts: Shapley Inconsistency Values. Artificial Intelligence, 174:1007–1026, 2010.
[Konieczny et al., 2019] Sébastien Konieczny, Pierre Marquis, and Srdjan Vesic. Rational Inference Relations from Maximal Consistent Subsets Selection. In Sarit Kraus, editor, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, pages 1749–1755. ijcai.org, 2019.
[Lang and Marquis, 2002] Jérôme Lang and Pierre Marquis. Resolving Inconsistencies by Variable Forgetting. In Dieter Fensel, Fausto Giunchiglia, Deborah L. McGuinness, and Mary-Anne Williams, editors, Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning, KR 02, Toulouse, France, pages 239–250. Morgan Kaufmann, 2002.
[Lin and Reiter, 1994] Fangzhen Lin and Raymond Reiter. Forget It! In Proceedings of the AAAI Fall Symposium on Relevance, pages 154–159, 1994.
[Lorenzen, 1960] Paul Lorenzen. Logik und Agon.
Atti del XII Congresso Internazionale di Filosofia, 4:187–194, 1960.
[Mitchell, 1997] Tom M. Mitchell. Machine Learning, International Edition. McGraw-Hill Series in Computer Science. McGraw-Hill, 1997.
[Pitts and Dybjer, 1997] Andrew M. Pitts and P. Dybjer, editors. Semantics of Interaction: an Introduction to Game Semantics, pages 1–32. Publications of the Newton Institute. Cambridge University Press, 1997.
[Priest, 1991] G. Priest. Minimally inconsistent LP. Studia Logica, 50:321–331, 1991.
[Quinlan, 1986] J. Ross Quinlan. Induction of Decision Trees. Machine Learning, 1(1):81–106, 1986.
[Rescher and Manor, 1970] Nicholas Rescher and Ruth Manor. On Inference from Inconsistent Premisses. Theory and Decision, 1:179–217, 1970.
[Rival, 2012] I. Rival. Graphs and Order: The Role of Graphs in the Theory of Ordered Sets and Its Applications. NATO Science Series C. Springer Netherlands, 2012.
[Shannon, 1948a] Claude E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423, 1948.
[Shannon, 1948b] Claude E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27(4):623–656, 1948.
[Thimm, 2018] Matthias Thimm. On the Evaluation of Inconsistency Measures. In John Grant and Maria Vanina Martinez, editors, Measuring Inconsistency in Information, volume 73 of Studies in Logic. College Publications, February 2018.
[van der Lubbe and Hoeve, 1997] J.C.A. van der Lubbe and H.J. Hoeve. Information Theory. Cambridge University Press, 1997.