Epistemic Logic of Likelihood and Belief

James P. Delgrande¹, Joshua Sack², Gerhard Lakemeyer³ and Maurice Pagnucco⁴
¹School of Computing Science, Simon Fraser University, Burnaby, B.C., V5A 1S6, Canada
²Dept. of Mathematics and Statistics, California State University Long Beach, CA 90840, USA
³Dept. of Computer Science, RWTH Aachen University, D-52056 Aachen, Germany
⁴School of Computer Science and Engineering, UNSW, Sydney, NSW 2052, Australia
jim@cs.sfu.ca, joshua.sack@csulb.edu, gerhard@kbsg.rwth-aachen.de, morri@cse.unsw.edu.au

Abstract

A major challenge in AI is dealing with uncertain information. While probabilistic approaches have been employed to address this issue, in many situations probabilities may not be available or may be unsuitable. As an alternative, qualitative approaches have been introduced to express that one event is no more probable than another. We provide an approach in which an agent may reason deductively about notions of likelihood, and may hold beliefs whose subjective probability is less than 1. Thus, an agent can believe that p holds (with probability < 1); and if the agent believes that q is more likely than p, then the agent will also believe q. Our language allows for arbitrary nesting of beliefs and qualitative likelihoods. We provide a sound and complete proof system for the logic with respect to an underlying probabilistic semantics, and show that the language is equivalent to a sublanguage with no nested modalities.

1 Introduction

Dealing with uncertainty and vagueness is a pervasive problem in Artificial Intelligence (AI). Traditional probabilistic approaches utilising a numeric assessment of likelihood have been employed extensively to address this issue. An issue with these approaches is that, not infrequently, it may be difficult or even impossible to determine such numeric values. On the other hand, a probabilistic approach may be too fine-grained for a particular application. As a result, various non-numeric techniques have been developed including, notably, nonmonotonic approaches. In a different vein, several logical approaches to qualitative probability have been introduced to express that one event is no more probable than another. In such approaches, one may assert that one proposition is more probable than another, but without giving specific numeric values to the propositions.

Similarly, people will often hold a proposition to be contingently true, even though they would readily acknowledge that the probability of the proposition is less than 1. For example, suppose I test negative for some illness; for a highly reliable test, I might contingently believe I do not have the illness, and act based on that supposition. People will also reason with such beliefs and likelihoods: if I believe I do not have the illness, but I believe it is more likely that my condition will improve than it is that I do not have the illness, then I also believe my condition will improve.

This paper takes as a starting point the Logic of Qualitative Probability (LQP) [Delgrande et al., 2019], which allows one to express that a sequence of formulas Φ is no more likely than another sequence Ψ. Here φ ⌢ γ ⪯ ψ ⌢ χ asserts that the combined probability of φ and γ is not greater than the combined probability of ψ and χ, but without giving specific probabilities for the formulas.
Such an expression is given an intuitive interpretation by requiring, in the underlying semantics, that the sum of the probabilities of the sentences in Φ is less than or equal to the sum of the probabilities of the sentences in Ψ. In this paper we extend LQP by adding an explicit belief operator B, where Bφ can be read: φ is believed with probability at least c, for some threshold c > 0.5. The B operator subsumes the modal logic KD45 and so, for example, the formula (Bφ ∧ Bψ) → B(φ ∧ ψ) is valid. As well, one can reason with beliefs related by our notion of likelihood; specifically, if an agent believes that φ holds, and that φ is no more likely than ψ, then the agent will believe that ψ holds. The language allows arbitrary nesting and intermingling of the B and ⪯ operators. Besides a formal semantics of the new logic ELL, we provide a sound and complete axiomatisation, and discuss various properties, including the fact that every sentence is logically equivalent to a sentence without nested beliefs.

2 Background

With qualitative probability, the goal is to specify conditions on a binary operator, expressed φ ⪯ ψ, with the intended interpretation that ψ is at least as probable as φ. Formally, the goal is to ensure that, for a set of formulas involving ⪯ assertions, there is a realising probability assignment; i.e., an assignment of probabilities to formulas that is consistent with ⪯. The best known early work on qualitative probability is that of de Finetti [1937], who proposed a set of basic principles. Subsequently, Kraft et al. [1959] extended these to a set that was necessary and sufficient. Segerberg [1971] put these notions in the context of a modal logic and provided a sound and complete axiomatisation. Gärdenfors [1975] developed a simpler framework for ⪯. Delgrande et al. [2019] extended the arguments of ⪯ to sequences of formulas, and provided a simpler system that avoids the exponential blowup of earlier approaches. We adopt this approach here as the basis of a logic of qualitative probability, and we embed it in an approach that includes contingent belief based on an epistemic logic of reasonably likely possibilities. We also note work by Fagin et al. [1990; 1994]; while this work is quantitative in nature, it allows for reasoning about probability and provides for multiple agents and a modal operator for knowledge.

There has also been work that develops modal accounts of belief based on notions of probability. Most such approaches are very weak, and the approaches described below all violate the principle of Conjunction, (Bφ ∧ Bψ) → B(φ ∧ ψ), and so cannot be given a Kripke-style semantics. Burgess [1969] develops a modal account Pφ intended to capture the notion that φ is probably true. Kyburg and Teng [2012] introduce □ε φ, which holds whenever the probability of ¬φ is no more than ε. Herzig and Longin [2003] consider a modal account of Pφ, where Pφ expresses that φ is more probable than ¬φ. Halpern and Rabin [1987] introduce the modal operator Lφ with the intended interpretation that φ is reasonably likely to be a consistent hypothesis; see also [Halpern and McAllester, 1989]. In that approach, Lφ ∧ L¬φ is satisfiable when both φ and ¬φ are satisfiable, and Lφ ∨ L¬φ is a theorem. van der Hoek [1996] provides a semantics for ⪯ using Kripke structures and defines modal necessitation in terms of ⪯. However, he shows that this approach shares some of the shortcomings encountered by Segerberg and Gärdenfors.
An account that satisfies Conjunction is given by Leitgeb [2013]. Leitgeb develops a possible-worlds semantics, where probabilities are associated with possible worlds, but the worlds characterising an agent's beliefs have combined probability at least r, for a given r with 0.5 < r ≤ 1. He is also interested in maintaining Conjunction following a conditioning of the agent's beliefs; i.e., if φ is consistent with an agent's beliefs K, then the probability of the K worlds satisfying φ, relative to the set of φ worlds, also exceeds r. For this, Leitgeb shows that a condition of P-stability is required: for a set of worlds K ⊆ W characterising an agent's beliefs, the probability of any w ∈ K is greater than the probability of W \ K. P-stability is of particular interest here, since we recover this property in our approach; however, for us it arises from the interaction of relative likelihood (⪯) with our belief modality B. Thus we arrive at the constraint of P-stability, but from a different direction. Leitgeb arrived at it because he wanted conditioning on a formula to yield coherent beliefs (i.e., satisfying Conjunction); we wanted the principle (BM), "modus ponens for likelihood", which led us to the same principle.

Lastly, we note more general work providing insight into combining logic and probability: Kyburg [1994], Russell [2015], van Benthem [2017] and Belle [2017].

3 The Logic ELL

We now introduce our language and the semantics for the Epistemic Logic of Likelihood (ELL). As remarked earlier, this builds on the logic LQP [Delgrande et al., 2019]. Two points can be made here. First, any sufficiently expressive approach to qualitative probability (e.g., Segerberg's [1971] or Gärdenfors's [1975]) could have been used. Second, although our language allows sequences of formulas as arguments to ⪯, this is inessential to our approach; the presence of (non-trivial) sequences is used in [Delgrande et al., 2019] to guarantee that a set of consistent ⪯ formulas has a realising probability assignment.

3.1 Syntax

Let P be a finite set of atomic propositions, and let L_PL be the propositional language over P. The language L of the Epistemic Logic of Likelihood (ELL) is given as follows:

φ ::= p (p ∈ P) | ¬φ | φ ∨ φ | Bφ | Φ ⪯ Φ
Φ ::= φ | Φ ⌢ φ

The language provides for the usual propositional connectives, a modal operator for belief (Bφ), and qualitative preferences over sequences of sentences (Φ₁ ⪯ Φ₂). Note that ⌢ is not an operator, but rather punctuation in a formula; thus p ⌢ q ⪯ r can be read as "the combined probability of p and q is not greater than that of r". We use lower case Greek letters φ, ψ, and χ, possibly with subscripts or superscripts, as metavariables for formulas. The upper case Greek letters Φ, Ψ, and Δ are similarly used for sequences. Sequences may be written using indexed prefix notation so that, for example, ⌢_{i=1}^{3} φ_i denotes φ₁ ⌢ φ₂ ⌢ φ₃. A modal formula is a formula of the form Bφ, ¬Bφ, φ ⪯ ψ, or ¬(φ ⪯ ψ), where φ, ψ ∈ L. The lower case Greek letters µ and ν are reserved to denote modal formulas.

The propositional connectives ∧, →, and ↔ are defined in the usual way. We also adopt the following abbreviations: 1 for some fixed tautology; 0 for ¬1; Φ ≈ Ψ for Φ ⪯ Ψ ∧ Ψ ⪯ Φ; Φ ≺ Ψ for ¬(Ψ ⪯ Φ); and □φ for 1 ⪯ φ.

3.2 Semantics

In this section we introduce the semantics for ELL. A model is M = ⟨W, P, r, V⟩, where: W is a finite set of states or worlds; P : W → (0, 1] is a probability assignment, with Σ_{w∈W} P(w) = 1; r ∈ (0.5, 1.0] is a belief threshold; and V : W → 2^P is a valuation, with the condition that V(w₁) = V(w₂) implies w₁ = w₂.
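To make the model definition concrete, the following minimal sketch (our own encoding, not from the paper) represents each world by the set of atoms true at it, so that the valuation V(w) = w is injective by construction.

```python
# A minimal encoding of an ELL model (an illustrative sketch, not the paper's):
# each world is a frozenset of the atoms true at it, so V(w) = w is injective.
def make_model(P, r):
    """P: dict mapping worlds (frozensets of atoms) to probabilities in (0, 1];
    r: belief threshold in (0.5, 1.0]."""
    assert abs(sum(P.values()) - 1.0) < 1e-9, "probabilities must sum to 1"
    assert all(0 < x <= 1 for x in P.values())
    assert 0.5 < r <= 1.0
    return {"W": set(P), "P": P, "r": r}

# The probabilities used in Example 1 below: atoms c ("condition improves")
# and d ("disease").
M = make_model({frozenset({"c", "d"}): 0.25, frozenset({"c"}): 0.6,
                frozenset({"d"}): 0.1, frozenset(): 0.05}, r=0.55)
```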
We define a possible-world belief set (pw-belief set) to be the smallest subset K ⊆ W such that: P(K) ≥ r; w ∈ K and w′ ∉ K implies P(w) > P(w′); and if w ∈ K, then P(w) > P(W \ K). The pw-belief set is the set of reasonably likely worlds and helps determine which formulas are believed.

We explain why a unique pw-belief set always exists as follows. Consider an ordered partition of the finitely many worlds, where worlds in the same cell have the same probability, and where cells are arranged from greatest to least probability. The set K₀ = W of all worlds has probability 1.0, which is above or equal to any threshold r, and it vacuously satisfies the other desired properties. In case it is not the smallest such set, iteratively throw away the cell of least-probable worlds: for each i > 0, Kᵢ is the result of removing the least-probable worlds from Kᵢ₋₁. There are finitely many non-empty Kᵢ; the smallest one satisfying the three conditions above is K.

Let M = ⟨W, P, r, V⟩ be a model with pw-belief set K ⊆ W. The satisfaction relation is given by:

M, w ⊨ p iff p ∈ V(w), where p ∈ P
M, w ⊨ ¬φ iff M, w ⊭ φ
M, w ⊨ φ ∨ ψ iff M, w ⊨ φ or M, w ⊨ ψ
M, w ⊨ Φ ⪯ Ψ iff Σ_{φ∈Φ} Σ_{w′⊨φ} P(w′) ≤ Σ_{ψ∈Ψ} Σ_{w′⊨ψ} P(w′)
M, w ⊨ Bφ iff M, w′ ⊨ φ for all w′ ∈ K

If M, w ⊨ φ, then φ is true (or satisfied) at world w in model M, and we say that φ is valid just if M, w ⊨ φ for every M and every world w in M.

Our semantics of belief generalises the approach in which belief is defined as having probability 1; that approach is recovered by setting K to be the set of all possible worlds. Other established treatments, such as the Lockean thesis and Lenzen's weak belief, allow for belief without certainty but determine whether an event is believed solely on the basis of the probability of the event. For better compatibility with modal logic, our notion of belief takes into account the whole probability structure.

From a pw-belief set K, we can define a binary relation R on W, where (w, w′) ∈ R iff w′ ∈ K. Our definition of the truth of Bφ coincides with B being the standard modal box operator for the relation R, i.e., M, w ⊨ Bφ if and only if M, w′ ⊨ φ for all w′ ∈ {w′ | wRw′}, since K = {w′ | wRw′}. R is serial, transitive, and Euclidean, and hence a standard epistemic relation for belief. Thus our belief operator satisfies the standard KD45 axioms of epistemic logic.

Example 1. Consider a model with two atomic propositions, d for having a disease and c for "the condition will improve". Four possible worlds result; we write each as a string of the propositions, with a bar over a proposition indicating that it is false: {cd, cd̄, c̄d, c̄d̄}. Assign probabilities: P(cd) = 0.25, P(cd̄) = 0.6, P(c̄d) = 0.1, and P(c̄d̄) = 0.05. Let r = 0.55. Then K = {cd̄}. The following formulas are true at all worlds: B(¬d) (I believe I do not have the disease); B(¬d ≺ c) (I believe it is more likely that my condition will improve than it is that I do not have the disease); B(c) (I believe that my condition will improve); and also ¬□¬d (I am not certain that I do not have the disease). Thus I believe I do not have the disease even though my probability of not having it is less than 1. If instead r = 0.8, then K = {cd̄, cd}. Now B(¬d) is false, but B(c) remains true. Note that in this and the previous case, each w ∈ K has probability greater than that of W \ K, i.e., we have P-stability.
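The iterative construction described above is easy to implement. The following sketch (our code; world names are chosen for readability) computes the pw-belief set and reproduces the two cases of Example 1.

```python
# A sketch (ours) of the iterative construction of the pw-belief set K
# described above, applied to the probabilities of Example 1.
def pw_belief_set(P, r):
    """P: dict world -> positive probability (summing to 1); r in (0.5, 1]."""
    def conditions_hold(K):
        outside = [w for w in P if w not in K]
        rest = sum(P[w] for w in outside)
        return (sum(P[w] for w in K) >= r
                and all(P[w] > P[v] for w in K for v in outside)
                and all(P[w] > rest for w in K))
    K = set(P)                              # K_0 = W
    smallest = set(K)                       # K_0 always satisfies the conditions
    while K:
        least = min(P[w] for w in K)
        K = {w for w in K if P[w] > least}  # drop the least-probable cell
        if K and conditions_hold(K):
            smallest = set(K)
    return smallest

# Example 1, with "~" marking a false atom in a world's name.
P1 = {"cd": 0.25, "c~d": 0.6, "~cd": 0.1, "~c~d": 0.05}
print(pw_belief_set(P1, 0.55))    # {'c~d'}
print(pw_belief_set(P1, 0.8))     # {'c~d', 'cd'}
```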
Example 2. Consider two more atomic propositions, p and q, representing other possible explanations for the condition being tested. Suppose the probability assignment is: P(cdp̄q̄) = 0.25, P(cd̄pq) = 0.05, P(cd̄pq̄) = 0.25, P(cd̄p̄q) = 0.30, P(c̄dp̄q̄) = 0.1, and P(c̄d̄pq) = 0.05. Note that every event of Example 1 has the same probability as before; the difference is that the outcome cd̄ of Example 1 has been refined into sub-possibilities. Then with r = 0.55, K is {cd̄p̄q, cd̄pq̄, cdp̄q̄}, while in Example 1 it was {cd̄}. Although the formula c ∧ ¬d has the same weight as before (0.6), it is no longer believed, while before (in Example 1) it was. Thus this refinement of Example 1 illustrates how belief takes into account the outcomes and not just the probability of the event in question.

Example 3. I arrive at the parking lot at the university to find my car gone. My belief threshold is 0.9. Consider the propositions: a, "my spouse took the car"; b, "the car has been towed"; c, "the car is stolen". We have the constraint that a, b, c are pairwise mutually exclusive. Consider a model where P(a) = 0.6, P(b) = 0.35, and P(c) = 0.05. Then I contingently believe that either my spouse took the car or it was towed. I do not believe it was stolen.

4 Proof System

We now introduce the proof system for ELL. Recall that µ and ν denote modal formulas; that is, formulas of the form Bφ, ¬Bφ, φ ⪯ ψ, or ¬(φ ⪯ ψ). The schemata (PC) through (K3) are from LQP [Delgrande et al., 2019]; the remainder are particular to this logic. The axiomatisation is not intended to be minimal; we have attempted to present the axioms simply and perspicuously.

Axioms of LQP
(PC) All tautologies of classical propositional logic
(Tran) (Φ ⪯ Ψ) → ((Ψ ⪯ Δ) → (Φ ⪯ Δ))
(Tot) (Φ ⪯ Ψ) ∨ (Ψ ⪯ Φ)
(Sub) (□(φ₁ ↔ φ₂) ∧ □(ψ₁ ↔ ψ₂)) → ((φ₁ ⌢ Φ ⪯ ψ₁ ⌢ Ψ) ↔ (φ₂ ⌢ Φ ⪯ ψ₂ ⌢ Ψ))
(Com) ((Φ₁ ⌢ Φ₂ ⪯ Ψ) ↔ (Φ₂ ⌢ Φ₁ ⪯ Ψ)) ∧ ((Φ ⪯ Ψ₁ ⌢ Ψ₂) ↔ (Φ ⪯ Ψ₂ ⌢ Ψ₁))
(Add) ((Φ₁ ⪯ Ψ₁) ∧ (Φ₂ ⪯ Ψ₂)) → (Φ₁ ⌢ Φ₂ ⪯ Ψ₁ ⌢ Ψ₂)
(Succ) (1 ⌢ Φ ⪯ 1 ⌢ Ψ) → (Φ ⪯ Ψ)
(K3) □¬(φ ∧ ψ) → (φ ⌢ ψ ≈ φ ∨ ψ)

Additional axioms for ELL
(BM) (Bφ ∧ B(φ ⪯ ψ)) → Bψ
(BK) B(φ → ψ) → (Bφ → Bψ)
(BD) Bφ → ¬B¬φ
(□-Red) µ → □µ, for µ a modal formula
(B-Red) µ → Bµ, for µ a modal formula

Inference Rules
(MP) From φ → ψ and φ, infer ψ
(□-Nec) From φ, infer □φ

[Delgrande et al., 2019] shows that the logic underlying □ is KD. Given the additional axioms of reflexivity (Ref) and iterated modalities (□-Red), it is clear that □ is characterised by the modal logic S5. Similarly, given the axioms (B-Red), (BK), and (BD), and the fact (see below) that the rule of necessitation is derivable, B subsumes the modal logic KD45.

The axiom (BM) is interesting in several respects. First, it allows beliefs to be derived from other beliefs together with beliefs regarding likelihood. Second, it is clearly analogous in form to (BK).

We now provide some results of interest. The first few are immediate from the underlying logic, but are nonetheless of interest given our intended interpretations of likelihood (for B) and certainty (for □).

Proposition 4. The following formulas and rules are derivable from the axiomatisation.
1. B(Bφ → φ)
2. (Bφ ∧ Bψ) ↔ B(φ ∧ ψ)
3. (B-Nec): From φ infer Bφ.
4. Consequences of (BM):
(a) (¬Bψ ∧ B(φ ⪯ ψ)) → ¬Bφ
(b) (B¬φ ∧ B(ψ ⪯ φ)) → B¬ψ
(c) (Bφ ∧ ¬Bψ) → B(ψ ≺ φ)
5. (¬B¬ψ ∧ Bφ) → (¬φ ≺ ψ)
6. Bφ → B(¬φ ≺ φ)
7. □φ → Bφ
8. (Bφ ∧ B(φ ⌢ χ ⪯ ψ)) → Bψ

The first two items are theorems of KD45. The first states that, while it may be that an agent's beliefs are false (in that φ ∧ B¬φ may hold for some φ), an agent will believe that its beliefs hold. The second, in the left-to-right direction, states that if an agent believes φ and also ψ, then it believes their conjunction. This is notable since the interpretation of Bφ is that φ is believed by the agent even though the agent's assigned probability for φ may be less than 1.0. The third item shows that Necessitation is a derived rule. The fourth item gives several immediate consequences of (BM); for example, in (a), if an agent does not believe ψ and φ is no more probable than ψ, then the agent does not believe φ. Item 5 is a more intricate consequence of (BM) that justifies the (semantic) property of P-stability (PSt): if w ∈ K, then P(w) > P(W \ K). Informally, we have: if ψ is consistent with an agent's beliefs (i.e., ¬B¬ψ holds), and the agent believes φ, then ¬φ is less likely than ψ according to the agent. Semantically, ψ is characterised by a possible world, and φ could exactly denote all that the agent believes, leading to the informal semantic reading: if w ∈ K, then W \ K is less likely than w. Item 6 states that, if an agent believes that φ holds, then it believes that the probability of φ is greater than 0.5. Item 7 shows that, if a proposition is certain, then it is believed. The final item illustrates that reasoning with beliefs and likelihood extends to formulas in a sequence.
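As a semantic cross-check of the reasoning pattern behind (BM), the sketch below (our code; the encoding of formulas as sets of worlds is ours) evaluates the Example 1 formulas directly against the model.

```python
# Semantically checking the (BM) instance behind Example 1 (a sketch; the
# encoding of formulas as sets of worlds is ours).  With r = 0.55 the text
# gives K = {c~d}; we confirm B(~d), B(~d < c), and hence B(c).
P = {"cd": 0.25, "c~d": 0.6, "~cd": 0.1, "~c~d": 0.05}
K = {"c~d"}                                   # pw-belief set for r = 0.55

def prob(worlds):
    return sum(P[w] for w in worlds)

def B(worlds):                                # M |= B(phi) iff K is a subset of [phi]
    return K <= worlds

C     = {"cd", "c~d"}                         # [c]:  worlds where c holds
NOT_D = {"c~d", "~c~d"}                       # [~d]: worlds where ~d holds

believes_not_d = B(NOT_D)                     # B(~d)
# "~d strictly less likely than c" holds iff P(~d) < P(c); its truth is
# world-independent, so by (B-Red) it is believed whenever it holds.
believes_cmp = prob(NOT_D) < prob(C)
believes_c = B(C)                             # B(c), as (BM) predicts

print(believes_not_d, believes_cmp, believes_c)   # True True True
```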
4.1 Modal Normal Form

In this section we show that, for any formula φ in ELL, there is a formula φ′ such that φ ↔ φ′ is a theorem of ELL and φ′ is in an extended version of conjunctive normal form (CNF) which contains no nested modalities. For completeness, we also develop a dual result where φ′ is in an extended form of disjunctive normal form (DNF).

Definition 5. φ ∈ L is in modal conjunctive normal form (MCNF) if it is of the form ⋀_{i=1}^{n} ⋁_{j=1}^{m} φ_{i,j} where:
1. each φ_{i,j} is a propositional literal or a modal formula; and
2. if φ_{i,j} is a modal formula Bψ, ¬Bψ, Ψ ⪯ Φ, or ¬(Ψ ⪯ Φ), then ψ is in MCNF and Ψ, Φ are sequences of formulas in MCNF.

Definition 6. φ ∈ L is in reduced modal conjunctive normal form (RMCNF) if it is in MCNF and, for any modal formula Bψ, ¬Bψ, Ψ ⪯ Φ, or ¬(Ψ ⪯ Φ) occurring in φ, we have that ψ is a formula of propositional logic and Ψ, Φ are sequences of formulas of propositional logic.

We define (reduced) modal disjunctive normal form, (R)MDNF, analogously. For the main result, for an arbitrary formula φ ∈ L we show that there are formulas φ′ and φ′′, where φ′ is in MCNF and φ′′ is in RMCNF, and where φ ↔ φ′ and φ′ ↔ φ′′, and so φ ↔ φ′′, are theorems of ELL. The first part is straightforward.

Theorem 7. For φ ∈ L there is a φ′ ∈ L where φ′ is in MCNF and ⊢ φ ↔ φ′. The same result is obtained dually for MDNF.

The proof that, for a formula in MCNF, there is an equivalent formula in RMCNF is essentially an inductive argument on the number of modal formulas that appear within the scope of another modal formula. Leading up to this, we have the following results.

Lemma 8. Let φ, ψ be formulas of ELL and µ a modal formula. The following are theorems of ELL:
1. B(µ ∨ ψ) ↔ (µ ∨ Bψ)
2. µ → □((φ ∨ ψ) ↔ (φ ∨ (µ ∧ ψ)))
3. ¬µ → □(φ ↔ (φ ∨ (µ ∧ ψ)))

The following theorem is used in the reduction step of the main result.

Theorem 9. Let µ be a modal formula; let φ and ψ be formulas; and let Δ₁ and Δ₂ be sequences, all in ELL. The following are theorems of ELL.
1. B(φ ∧ (µ ∨ ψ)) ↔ ((¬µ ∧ B(φ ∧ ψ)) ∨ (µ ∧ Bφ))
2. B(φ ∨ (µ ∧ ψ)) ↔ ((¬µ ∧ Bφ) ∨ (µ ∧ B(φ ∨ ψ)))
3. (φ ∧ (µ ∨ ψ)) ⌢ Δ₁ ⪯ Δ₂ ↔ ((¬µ ∧ ((φ ∧ ψ) ⌢ Δ₁ ⪯ Δ₂)) ∨ (µ ∧ (φ ⌢ Δ₁ ⪯ Δ₂)))
4. Δ₁ ⪯ ((µ ∨ ψ) ∧ φ) ⌢ Δ₂ ↔ ((¬µ ∧ (Δ₁ ⪯ (ψ ∧ φ) ⌢ Δ₂)) ∨ (µ ∧ (Δ₁ ⪯ φ ⌢ Δ₂)))
5. (φ ∨ (µ ∧ ψ)) ⌢ Δ₁ ⪯ Δ₂ ↔ ((¬µ ∧ (φ ⌢ Δ₁ ⪯ Δ₂)) ∨ (µ ∧ ((φ ∨ ψ) ⌢ Δ₁ ⪯ Δ₂)))
6. Δ₁ ⪯ ((µ ∧ ψ) ∨ φ) ⌢ Δ₂ ↔ ((¬µ ∧ (Δ₁ ⪯ φ ⌢ Δ₂)) ∨ (µ ∧ (Δ₁ ⪯ (ψ ∨ φ) ⌢ Δ₂)))

Consider the first item in the theorem. The left-hand side of the equivalence is a B modal formula whose argument schema (viz. φ ∧ (µ ∨ ψ)) is applicable to (among others) a formula in MCNF. Since µ is a modal formula, we have a nesting of modalities (a ⪯ or B within a B). The right-hand side gives an equivalent formula in which this nesting is gone. Part 2 of the theorem states an analogous result for a formula in MDNF. The next four parts do the same thing for ⪯, with Parts 3 and 4 applicable to MCNF and Parts 5 and 6 applicable to MDNF. Thus for Part 3, on the left-hand side of the equivalence we have a formula with a nested modality: (φ ∧ (µ ∨ ψ)) ⌢ Δ₁ ⪯ Δ₂. Specifically, the formula φ ∧ (µ ∨ ψ) in the left-hand sequence is in MCNF and contains the modal formula µ. The right-hand side gives an equivalent formula in which µ is no longer nested within ⪯.
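To illustrate Part 1, instantiate it with φ = p, µ = Bq, and ψ = r (an instance we choose purely for illustration):

B(p ∧ (Bq ∨ r)) ↔ ((¬Bq ∧ B(p ∧ r)) ∨ (Bq ∧ Bp)).

On the left, Bq occurs within the scope of B; on the right it does not, and each remaining belief operator applies to a purely propositional formula.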
Theorem 10. For φ ∈ L in MCNF there is a φ′ ∈ L where φ′ is in RMCNF and φ ↔ φ′ is provable in ELL.

Corollary 11. For φ ∈ L there is a formula φ′ such that φ′ is in RMCNF and φ ↔ φ′ is provable in ELL.

Clearly the same argument can be made for a formula in MDNF, by appealing to Theorem 9, Parts 2, 5, and 6.

4.2 Soundness and Completeness

We now establish the soundness and completeness of our axiomatisation of ELL with respect to its semantics.

Theorem 12. The axiomatisation is sound and complete with respect to ELL-semantics: for every formula φ in ELL, ⊢ φ if and only if ⊨ φ.

In the remainder of this section we outline our completeness proof, that is, that if ⊨ φ then ⊢ φ. We do so by proving the contrapositive: we start with a consistent formula and find a model that satisfies it. To establish a suitable probability function for the model we construct in our completeness proof, we use a strengthened version of Scott's Theorem [Scott, 1964, Theorem 1.2] concerning a real linear vector space L(S) with a basis S. A subset N ⊆ L(S) is strongly realised by a linear functional f on X ⊆ L(S) if N = {x ∈ X | f(x) ≥ 0} and N \ (−N) = {x ∈ X | f(x) > 0}, where for any set A of vectors, −A = {−a | a ∈ A}. The strengthened Scott's theorem that we will use is then:

Theorem 13 (Strengthened Scott's Theorem). Let S be a finite nonempty set and let X be a finite, rational, symmetric subset of L(S). For each N ⊆ X, there exists a linear functional f on L(S) that strongly realises N in X if and only if the following conditions are satisfied:
1. for each x ∈ X, we have x ∈ N or −x ∈ N; and
2. for each n ≥ 1 and x₁, …, xₙ ∈ N we have: Σ_{i=1}^{n} xᵢ = 0 implies −x₁ ∈ N.

A Stronger Consistent Formula To Satisfy

We now set up the details for our completeness proof. Suppose χ is a consistent formula. Without loss of generality, we assume it is in RMDNF. We wish to find a model for χ, but will instead find a model for another formula σ, whose model will also be a model for χ. We determine σ as follows. Let χ′ be a consistent disjunct of χ. Let Γ consist of:
1. the conjuncts of χ′;
2. all formulas B¬φ, Bφ, φ ⪯ ψ, 0 ≺ φ, and φ ⪯ 0, where φ is a conjunction of a maximally consistent set of propositional literals over the letters appearing in χ, and ψ is a non-empty disjunction of conjunctions of maximally consistent sets of propositional literals over the letters appearing in χ; and
3. 0 ⪯ 1 and ¬(1 ⪯ 0).

Notice that each formula in Γ is free of modal nesting. The closure of Γ, denoted cl(Γ), consists of the formulas of Γ together with their negations. Let Σ be a maximally consistent subset of cl(Γ) that is consistent with χ′, and let σ = ⋀Σ. Note that σ implies χ; we ultimately will find a model that satisfies σ.
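As an aside, the notion of a realising probability assignment can be illustrated computationally: for a small set of non-strict ⪯ constraints over a few atoms, one can search for a realising assignment over the truth assignments by linear programming. The example constraints, the encoding, and the use of scipy below are our own illustration; the completeness proof instead obtains the assignment from the strengthened Scott's theorem.

```python
# Illustration only (our encoding, not the paper's construction): find a
# probability assignment over truth assignments realising the non-strict
# constraints P(p) <= P(q) and P(p and q) <= P(not q), via linear programming.
from itertools import product
from scipy.optimize import linprog

atoms = ["p", "q"]
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=2)]

constraints = [                                # pairs (phi, psi), read "phi no more likely than psi"
    (lambda w: w["p"],            lambda w: w["q"]),
    (lambda w: w["p"] and w["q"], lambda w: not w["q"]),
]

# Each constraint sum_{w |= phi} x_w <= sum_{w |= psi} x_w becomes a row of
# A_ub @ x <= 0; the variables x_w are world probabilities summing to 1.
A_ub = [[(1.0 if phi(w) else 0.0) - (1.0 if psi(w) else 0.0) for w in worlds]
        for phi, psi in constraints]
b_ub = [0.0] * len(constraints)
res = linprog(c=[0.0] * len(worlds), A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1.0] * len(worlds)], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * len(worlds))
assert res.success
for w, x in zip(worlds, res.x):
    print(w, round(x, 3))
```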
Applying Strengthened Scott's Theorem

Let Y be the set of maximal consistent sets of propositional literals over the letters appearing in χ. For each formula φ, let [φ] = {w ∈ Y | w ⊨ φ}. Given a set E ⊆ Y of worlds, let ι(E) be the characteristic function of E; that is, ι(E) assigns 1 to each w ∈ E and 0 to each w ∈ Y \ E. For a formula φ, let ι[φ] = Σ_{w∈[φ]} ι({w}), which assigns 1 to the worlds that make φ true and 0 to those that make φ false. We then define the tools for Scott's theorem as follows:

N = { Σ_{ψ∈Ψ} ι[ψ] − Σ_{φ∈Φ} ι[φ] | (Φ ⪯ Ψ) ∈ Σ },  X = N ∪ (−N).

Proposition 14. Let φ be a propositional formula using only letters that appear in χ. Then ⊢ φ ↔ ⋁_{w∈[φ]} ⋀w.

Lemma 15 (Satisfaction of Scott's conditions). X is a finite, rational, and symmetric subset of L(Y), and N ⊆ X satisfies the conditions of Scott's Theorem:
1. for each x ∈ X, we have x ∈ N or −x ∈ N; and
2. for each n ≥ 1 and x₁, …, xₙ ∈ N we have: Σ_{i=1}^{n} xᵢ = 0 implies −x₁ ∈ N.

Applying the strengthened Scott's theorem (Theorem 13), we obtain a linear functional f on L(Y) that strongly realises N in X.

Defining the Probability Function

Proposition 16. f(ι(Y)) > 0.

Let W = {v ∈ Y | f(ι({v})) > 0}. We then define the function P, for w ∈ W and E ⊆ W, by

P(w) = f(ι({w})) / f(ι(W)),  P(E) = Σ_{w∈E} P(w).

We show that P is a positive probability function on W.

Proposition 17. P is a probability function on W. By definition of W, P is never zero.

Determining K, r, and Properties of the Model

The set K is uniquely determined from Σ: for each v ∈ W, v ∈ K iff B⋀v ∈ Σ. We define r = P(K).

Proposition 18. K ≠ ∅.

Proposition 19. The model is P-stable; that is, if w ∈ K, then P(w) > P(W \ K). Notice that P-stability guarantees that if w ∈ K and v ∉ K, then P(w) > P(v).

Satisfiability

We find a potential satisfying world in W.

Proposition 20. There exists a w ∈ Y consistent with σ such that w ∈ W.

We then show that σ is satisfied at w.

Theorem 21. σ is satisfied at w in the model.
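To make the passage from f to the model concrete, here is a toy illustration with hypothetical f-values of our own choosing: W keeps the elements of Y on which f is positive, and P is obtained by normalising with f(ι(W)), which by linearity of f is the sum of the singleton values.

```python
# Toy illustration (hypothetical f-values, ours) of the step from a strongly
# realising linear functional f to the probability function P defined above.
f_singleton = {"v1": 2.0, "v2": 1.0, "v3": 0.0}      # f(iota({v})) for v in Y
W = {v for v, fv in f_singleton.items() if fv > 0.0} # worlds kept in the model
total = sum(f_singleton[v] for v in W)               # f(iota(W)), by linearity of f
P = {v: f_singleton[v] / total for v in W}
print(P)                                             # {'v1': 0.666..., 'v2': 0.333...}
```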
5 Conclusions and Future Work

We have presented an approach that combines subjective qualitative probability with epistemic logic, in which an agent may believe that a formula is true even while believing that the formula has probability less than 1. Besides drawing from [Delgrande et al., 2019], our belief modality shares a semantic characterisation with that of [Leitgeb, 2013]. In common with [Delgrande et al., 2019], one can make assertions about relative likelihood; however, we extend this approach by allowing for deductive reasoning that combines belief and likelihood, in which beliefs follow as a consequence of other beliefs and assertions of likelihood. Similarly, Leitgeb gives a possible-world characterisation of belief and identifies the key property of P-stability, with a focus on conditioning with respect to sets of worlds. We generalise this work in several directions. We develop a proof theory that combines this work with qualitative probability, and that is shown to be sound and complete with respect to the probabilistic semantics. We show that P-stability is a natural consequence of the axiom (BM), and so we provide a formal, syntactic justification for this condition. As well, our approach fully combines notions of qualitative probability and epistemic reasoning, and allows arbitrary nesting of our modal operators. Finally, we show that the language is equivalent to a sublanguage without nesting.

Several avenues for future work present themselves. A major topic is to develop a notion of conditionalisation that is the qualitative analogue of those in the probability literature (e.g., Bayesian conditionalisation, but also Jeffrey conditionalisation). We suggest that work in belief revision will be relevant here, at least for conditioning on information that is inconsistent with the agent's beliefs. If successful, this could provide a plausible approach to contingent belief change that is nonetheless based on underlying probability-based intuitions. More prosaically, the complexity of the formal system should be investigated, although related logics suggest that the satisfiability problem will be PSPACE-complete.

Ethical Statement

There are no ethical issues.

Acknowledgements

James Delgrande gratefully acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada. Gerhard Lakemeyer was partially supported by the EU ICT-48 2020 project TAILOR (No. 952215).

References

[Belle, 2017] Vaishak Belle. Logic meets probability: Towards explainable AI systems for uncertain worlds. In Carles Sierra, editor, Proceedings of the International Joint Conference on Artificial Intelligence, pages 5116-5120, 2017.
[Burgess, 1969] J. P. Burgess. Probability logic. Journal of Symbolic Logic, 34(2):264-274, 1969.
[de Finetti, 1937] Bruno de Finetti. La prévision: ses lois logiques, ses sources subjectives. Annales de l'Institut Henri Poincaré, 7:1-68, 1937.
[Delgrande et al., 2019] J. P. Delgrande, Bryan Renne, and Joshua Sack. The logic of qualitative probability. Artificial Intelligence, 275:457-486, 2019.
[Fagin and Halpern, 1994] Ronald Fagin and Joseph Y. Halpern. Reasoning about knowledge and probability. Journal of the Association for Computing Machinery, 41(2):340-367, 1994.
[Fagin et al., 1990] Ronald Fagin, Joseph Y. Halpern, and Nimrod Megiddo. A logic for reasoning about probabilities. Information and Computation, 87:78-128, 1990.
[Gärdenfors, 1975] P. Gärdenfors. Qualitative probability as an intensional logic. Journal of Philosophical Logic, 4(2):171-185, 1975.
[Halpern and McAllester, 1989] J. Y. Halpern and D. A. McAllester. Likelihood, probability, and knowledge. Computational Intelligence, 5(2):151-160, 1989.
[Halpern and Rabin, 1987] Joseph Y. Halpern and Michael O. Rabin. A logic to reason about likelihood. Artificial Intelligence, 32(3):379-405, 1987.
[Herzig and Longin, 2003] Andreas Herzig and Dominique Longin. On modal probability and belief. In Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 7th European Conference, ECSQARU 2003, Aalborg, Denmark, July 2-5, 2003, Proceedings, pages 62-73, 2003.
[Kraft et al., 1959] Charles H. Kraft, John W. Pratt, and A. Seidenberg. Intuitive probability on finite sets. Annals of Mathematical Statistics, 30(2):408-419, 1959.
[Kyburg Jr. and Teng, 2012] Henry E. Kyburg Jr. and Choh Man Teng. The logic of risky knowledge, reprised. International Journal of Approximate Reasoning, 53(3):274-285, 2012.
[Kyburg Jr., 1994] H. E. Kyburg Jr. Believing on the basis of evidence. Computational Intelligence, 10(1):3-20, 1994.
[Leitgeb, 2013] Hannes Leitgeb. Reducing belief simpliciter to degrees of belief. Annals of Pure and Applied Logic, 164(12):1338-1389, 2013.
[Russell, 2015] Stuart Russell. Unifying logic and probability. Communications of the ACM, 58(7):88-97, 2015.
[Scott, 1964] Dana Scott. Measurement structures and linear inequalities. Journal of Mathematical Psychology, 1(2):233-247, 1964.
[Segerberg, 1971] Krister Segerberg. Qualitative probability in a modal setting. In J. E. Fenstad, editor, Proceedings of the Second Scandinavian Logic Symposium, Studies in Logic and the Foundations of Mathematics, volume 63, pages 341-352. Elsevier, 1971.
[van Benthem, 2017] Johan van Benthem. Against all odds: When logic meets probability. In Joost-Pieter Katoen, Rom Langerak, and Arend Rensink, editors, ModelEd, TestEd, TrustEd: Essays Dedicated to Ed Brinksma on the Occasion of His 60th Birthday, pages 239-253, 2017.
[van der Hoek, 1996] Wiebe van der Hoek. Qualitative modalities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 4(1):45-60, 1996.