Trust-Sensitive Belief Revision

Aaron Hunter
British Columbia Institute of Technology
Burnaby, Canada
aaron_hunter@bcit.ca

Richard Booth
Mahasarakham University
Mahasarakham, Thailand
ribooth@gmail.com

Abstract

Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show how trust-sensitive revision is manipulable, in the sense that agents can sometimes have an incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust; this ensures that the most trusted reports will be believed.

1 Introduction

We consider the manner in which trust impacts the process of belief revision. Many approaches to belief revision require that all new information presented for revision must be incorporated; however, this is clearly inappropriate in cases where the information comes from an untrusted source. In this paper, we are concerned with the manner in which an agent uses an external notion of trust to determine how new information should be integrated with some pre-existing set of beliefs. Our basic approach is the following. We introduce a model where each agent only trusts other agents to be able to distinguish between certain states.
We use this notion of trust as a precursor to belief revision, transforming reported information so that we only revise by the part that is trusted to be correct. This is a form of selective revision [Fermé and Hansson, 1999]; we establish key properties of our trust-sensitive revision operator, and formally introduce the notion of manipulability. We then extend our model to a more general setting, by introducing quantitative measures that allow agents to compare degrees of trust.

2 Preliminaries

2.1 Motivation

There are different reasons that an agent may or may not be trusted. In this paper, our primary focus is on trust as a function of the perceived expertise of other agents. One agent will trust information reported by another just in case they view the reporting agent as an authority capable of drawing meaningful distinctions over a particular domain. We introduce a simple motivating example, which we revisit periodically.

Example 1 Consider an agent that visits a doctor, having difficulty breathing. The agent happens to be wearing a necklace that prominently features a jewel on a pendant. During the examination, the doctor checks the patient's throat for swelling; at the same time, the doctor sees the necklace. Following the examination, the doctor tells the patient: "you have a viral infection in your throat - and by the way, you should know that the jewel in your necklace is not a diamond."

Note that the doctor provides information about two distinct domains: human health and jewelry. In practice, a patient is likely to trust the doctor's diagnosis of the viral infection. However, the patient has little reason to trust the doctor's evaluation of the necklace. We suggest that a rational agent should believe the doctor's statement about the infection, while essentially ignoring the comment on the necklace. This approach is dictated by the kind of trust that the patient has in the doctor.
Our aim in this paper is to formalize this kind of domain-specific trust, and then to demonstrate how this form of trust is used to inform belief revision.

2.2 Belief Revision

Belief revision refers to the process in which an agent integrates new information with some pre-existing beliefs. One of the most influential approaches to belief revision is the AGM approach. This approach is defined with respect to a finite propositional vocabulary F. A belief set is a deductively closed set of formulas, representing the beliefs of an agent. A revision operator is a function that takes a belief set and a formula as input, and returns a new belief set. An AGM revision operator is a revision operator that satisfies the AGM postulates, as specified in [Alchourrón et al., 1985].

A state is a propositional interpretation over F, and we write 2^F for the set of all states. It turns out that every AGM revision operator is characterized by a total pre-order over states. To be more precise, a faithful assignment is a function that maps each belief set to a total pre-order over states in which the models of the belief set are minimal. When an agent is presented with a new formula φ for revision, the revised belief set is the set of formulas true in all minimal models of φ in the total pre-order given by the faithful assignment. We refer the reader to [Katsuno and Mendelzon, 1992] for a proof of this result and a discussion of its implications. At present, we simply need to know that each AGM revision operator is associated with a faithful assignment.

Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)

3 A Model of Trust

3.1 Domain-Specific Trust

Assume a fixed propositional signature F and a set of agents A. For each A ∈ A, let ∗_A denote an AGM revision operator. This revision operator represents an ideal revision, in which A has complete trust in the new information.
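The faithful-assignment semantics underlying each ∗_A can be sketched concretely. The following is our own minimal illustration, not code from the paper: states are modelled as frozensets of true atoms, a formula is represented by its set of models, and a ranking assigns each state a plausibility level with the models of K at the minimum.

```python
# A minimal semantic sketch (not from the paper): AGM revision computed
# from a faithful ranking. States are frozensets of true atoms, a formula
# is its set of models, and models of K occupy the minimum rank.

from itertools import combinations

def all_states(atoms):
    """All interpretations over the vocabulary, as frozensets of true atoms."""
    return [frozenset(c) for r in range(len(atoms) + 1)
            for c in combinations(sorted(atoms), r)]

def revise(rank, phi_models):
    """Models of K * phi: the rank-minimal models of phi."""
    if not phi_models:
        return set()  # revising by an inconsistent formula
    m = min(rank[s] for s in phi_models)
    return {s for s in phi_models if rank[s] == m}

# Example: F = {p, q}, K = Cn(q), so the two q-states get rank 0.
rank = {s: (0 if "q" in s else 1) for s in all_states({"p", "q"})}
p_models = {s for s in rank if "p" in s}
print(revise(rank, p_models))  # the single state {p, q}
```

Here revising Cn(q) by p yields belief in both p and q, since among the p-states the ranking prefers the one satisfying the current beliefs.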
We want to modify the way this operator is used, by adding a representation of trust with respect to each agent B ∈ A. We assume that all new information is reported by an agent, so each formula for revision can be labelled with the name of the reporting agent.¹ At this point, we are not concerned with degrees of trust or with conflicts between different sources. We start with a binary notion of trust, where A either trusts B or does not trust B with respect to a particular domain. We encode trust by allowing each agent A to associate a state partition Π^B_A with each agent B.

Definition 1 A state partition Π is a collection of subsets of 2^F that is collectively exhaustive and mutually exclusive. For any s ∈ 2^F, let Π(s) denote the element of Π containing s. If Π = {2^F}, we call Π the trivial partition with respect to F. If Π = {{s} | s ∈ 2^F}, we call Π the unit partition.

Definition 2 For each A ∈ A, the trust function T_A is a function that maps each B ∈ A to a state partition Π^B_A.

The partition Π^B_A specifies the states that A trusts B to distinguish. If Π^B_A(s1) ≠ Π^B_A(s2), then A trusts that B can distinguish between the states s1 and s2. Conversely, if Π^B_A(s1) = Π^B_A(s2), then A does not see B as an authority capable of distinguishing between s1 and s2. We clarify by returning to our motivating example.

Example 2 Let A = {A, D, J} and let F = {sick, diam}. Informally, D represents a doctor, J represents a jeweler, sick is true if A has an illness, and diam is true if A is wearing a diamond. Following standard shorthand notation, we represent a state s by the set of propositional symbols that are true in s. We specify partitions by using the | symbol to visually separate different cells. The following partitions are intuitively plausible in this example:

Π^D_A := {sick, diam}, {sick} | {diam}, ∅
Π^J_A := {sick, diam}, {diam} | {sick}, ∅

Thus, A trusts the doctor D to distinguish between states where A is sick as opposed to states where A is not sick.
¹ In domains involving sensing or other forms of discovery, we allow an agent A to self-report information with complete trust.

However, A does not trust D to distinguish between states that are differentiated by the authenticity of a diamond. We emphasize that a trust partition is an agent's perception of the expertise of others. When the doctor says that the jewel is not a diamond, they may very well be giving an assessment that they believe is correct. In this example, it is actually reasonable to believe that the doctor feels that they can tell diamond states from non-diamond states; so they are not necessarily being dishonest by providing this statement. The trust partition held by A is a reflection of A's view of the doctor; it need not be correct.

3.2 Trust-Sensitive Belief Revision

In this section, we describe how an agent A combines the revision operator ∗_A with the trust function T_A to define a new, trust-sensitive revision operator ∗^B_A. In general, ∗^B_A will not be an AGM operator. In particular, ∗^B_A normally will not satisfy the Success postulate. This is a desirable feature. If A is given a new formula φ for revision, the first thing to consider is the source B and the distinctions B is trusted to make. In other words, if A does not trust B to distinguish between states s and t, then any report from B that provides evidence for s also provides evidence for t. It follows that A need not believe φ after revision; A interprets φ as evidence for every state that is B-indistinguishable from a model of φ. The next definition helps formalize this notion.

Definition 3 Let T_A(B) = Π^B_A. For every formula φ, define:

Π^B_A[φ] = ∪ {Π^B_A(s) | s ⊨ φ}.

So Π^B_A[φ] is the union of all cells that contain a model of φ. Based on the discussion above, a report of φ from B is construed as evidence for each state in Π^B_A[φ].

Definition 4 Let T_A(B) = Π^B_A, and let ∗_A be an AGM revision operator for A.
For any belief set K with corresponding ordering ≤_K given by the underlying faithful assignment, the trust-sensitive revision K ∗^B_A φ is the set of formulas true in min_{≤_K}({s | s ∈ Π^B_A[φ]}).

So rather than taking the minimal models of φ, we take all minimal states among those that B cannot be trusted to distinguish from the models of φ.

Example 3 Returning to our example, we consider a few different formulas for revision: φ1 = sick; φ2 = ¬diam; φ3 = sick ∧ ¬diam. Assume the initial belief set K is given by the pre-order ≤_K:

{diam} <_K ∅ <_K {sick, diam} <_K {sick}.

We have the following results for revision:

(1) K ∗^D_A φ1 = Cn(sick ∧ diam).
(2) K ∗^D_A φ2 = Cn(¬sick ∧ diam).
(3) K ∗^D_A φ3 = Cn(sick ∧ diam).

Result (1) shows that A believes the doctor when told that they are sick; (2) indicates that the doctor is not believed on the subject of jewelry. Finally, (3) shows that an agent is able to incorporate part of a formula: A only incorporates the part of φ3 over which the doctor is trusted.

4 Formal Properties

4.1 Basic Results

We first consider extreme cases for trust-sensitive revision. Intuitively, if T_A(B) is the trivial partition, then A does not trust B to be able to distinguish between any states. Hence, A should not incorporate any information obtained from B.

Proposition 1 If T_A(B) is the trivial partition, then K ∗^B_A φ = K for all K and φ.

The other extreme situation occurs when T_A(B) is the unit partition, consisting of all singleton sets. In this case, A trusts B to distinguish between every possible pair of states.

Proposition 2 If T_A(B) is the unit partition, then ∗^B_A = ∗_A.

Hence, if B is universally trusted, then trust-sensitive revision is just normal AGM revision. Partitions are partially ordered by refinement. We say that Π1 is a refinement of Π2 just in case, for each S1 ∈ Π1, there exists S2 ∈ Π2 such that S1 ⊆ S2. For trust partitions, refinement has a natural interpretation as breadth of trust.
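Definitions 3 and 4 can be checked mechanically. The sketch below is our own illustration (not code from the paper), with the pre-order of Example 3 hard-coded as a ranking; it reproduces the three revision results for the doctor's partition.

```python
# Trust-sensitive revision (Definition 4), checked on Example 3.
# The doctor's partition merges states that differ only on diam;
# the ranking encodes the pre-order {diam} < {} < {sick,diam} < {sick}.

def cell_union(partition, phi_models):
    """Pi[phi] of Definition 3: union of all cells containing a model of phi."""
    return set().union(*(cell for cell in partition if cell & phi_models))

def trust_revise(rank, partition, phi_models):
    """Models of the trust-sensitive revision: minimal states of Pi[phi]."""
    widened = cell_union(partition, phi_models)
    m = min(rank[s] for s in widened)
    return {s for s in widened if rank[s] == m}

s_sd = frozenset({"sick", "diam"})
s_s, s_d, s_0 = frozenset({"sick"}), frozenset({"diam"}), frozenset()
rank = {s_d: 0, s_0: 1, s_sd: 2, s_s: 3}
pi_doctor = [{s_sd, s_s}, {s_d, s_0}]   # trusted on sickness only

phi1 = {s_sd, s_s}   # models of sick
phi2 = {s_s, s_0}    # models of ~diam
phi3 = {s_s}         # models of sick & ~diam
print(trust_revise(rank, pi_doctor, phi1))  # {sick, diam}: result (1)
print(trust_revise(rank, pi_doctor, phi2))  # {diam}: beliefs unchanged, result (2)
print(trust_revise(rank, pi_doctor, phi3))  # {sick, diam}: result (3)
```

In case (2) the report widens to the whole state space, so revision leaves the beliefs untouched; in case (3) only the sick-cell survives, so the sickness claim is absorbed while the jewelry claim is ignored.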
Proposition 3 For any formula φ, if Π^B_A is a refinement of Π^C_A, then K ∗^C_A φ ⊆ K ∗^B_A φ.

So if B is trusted over a greater range of states, then receiving φ from B yields a larger belief set than receiving φ from C.

4.2 Trust-Sensitive Revision as Selective Revision

Trust-sensitive revision is a specialized form of selective revision [Fermé and Hansson, 1999]. An operator ∘ is a selective revision operator if there exist an AGM revision operator ∗ and a transformation function f taking formulas to formulas such that, for all belief sets K and formulas φ,

K ∘ φ = K ∗ f(φ).

The operator ∗^B_A clearly falls under this scheme. It is the particular instance obtained by defining f via the state partition Π^B_A, with f(φ) = φ^B_A such that the models of φ^B_A are precisely the states in Π^B_A[φ]. Fermé and Hansson enumerate several properties of the function f and prove correspondence results between properties of f and postulates for ∘. These results allow us to give a list of sound postulates for trust-sensitive revision. The relevant properties of f from [Fermé and Hansson, 1999] are as follows² (⊢ and ⊣⊢ denote classical logical consequence and equivalence, respectively):

f(⊥) ⊣⊢ ⊥ (falsity preservation)
φ ⊢ f(φ) (implication)
f(f(φ)) ⊣⊢ f(φ) (idempotence)
If φ ⊣⊢ ψ then f(φ) ⊣⊢ f(ψ) (extensionality)
f(φ ∨ ψ) ⊣⊢ f(φ) ∨ f(ψ) (disjunctive distribution)

The above properties are familiar from topology. They essentially express that f is a Kuratowski closure operator on the space of subsets of states [Kuratowski, 1958]. We thus make the following definition.

² The first property here does not actually appear in their paper.

Definition 5 A function f taking formulas to formulas and satisfying the above five properties will be called a Kuratowski transformation function.

The next result is proved in [Fermé and Hansson, 1999].

Proposition 4 ([Fermé and Hansson, 1999]) Let ∗ be an AGM revision operator and f a Kuratowski transformation function.
Then the operator ∘ derived from ∗ and f satisfies all of the following postulates.

K ∘ φ = Cn(K ∘ φ) (Closure)
There is a formula ψ such that K ∘ φ ⊢ ψ, φ ⊢ ψ and K ∘ φ = K ∗ ψ (Proxy success)
K ∘ φ ⊆ Cn(K ∪ {φ}) (Inclusion)
K ∘ φ is consistent iff φ is consistent (Consistency)
If φ ⊣⊢ ψ then K ∘ φ = K ∘ ψ (Extensionality)
If K ⊈ K ∘ φ then K ∪ (K ∘ φ) ⊢ ⊥ (Consistent expansion)
(K ∘ φ) ∩ (K ∘ ψ) ⊆ K ∘ (φ ∨ ψ) (Disjunctive overlap)
If K ∘ (φ ∨ ψ) ⊬ ¬φ then K ∘ (φ ∨ ψ) ⊆ K ∘ φ (Disjunctive inclusion)
K ∘ (φ ∨ ψ) is equal to one of K ∘ φ, K ∘ ψ or (K ∘ φ) ∩ (K ∘ ψ) (Disjunctive factoring)

The above postulates are mostly familiar from the literature on belief change. The Success postulate is replaced by the weaker Proxy success. What, then, are the properties of trust-sensitive revision? We can immediately state the following.

Proposition 5 Let f be defined via a state partition Π. Then f is a Kuratowski transformation function. Thus every trust-sensitive revision operator satisfies all the postulates listed in Proposition 4.

Trust-sensitive revision also satisfies a new postulate:

Proposition 6 Every trust-sensitive revision operator satisfies the following postulate: There exist λ1, . . . , λm such that (i) (K ∘ λi) ∪ (K ∘ λj) ⊢ ⊥ for i ≠ j, and (ii) for all φ there exists a set X ⊆ {1, . . . , m} such that K ∘ φ = ⋂_{i∈X} K ∘ λi (Disjoint outcome basis)

Disjoint outcome basis says that there is some finite basic set of mutually inconsistent revision outcomes K ∘ λ1, . . . , K ∘ λm such that every revision outcome can be expressed as an intersection of them. In fact, roughly speaking, we may take the λi to be the formulas that are closed under f, i.e., such that f(λi) ⊣⊢ λi. It can be shown that this postulate does not hold in general for selective revision operators defined via Kuratowski transformation functions. As we have seen, Success does not hold in general for trust-sensitive revision.
Even the following weaker version (which is also implied by another of the basic AGM revision postulates, namely Vacuity) fails [Hansson, 1997]:

If K ⊬ ¬φ then K ∘ φ ⊢ φ (Weak Success)

For an extreme example of ∗^B_A failing Weak Success, just take T_A(B) to be the trivial partition, so that K ∗^B_A φ does not differ from K for any choice of φ. The failure of this rule is a departure from Fermé and Hansson, who retain it by always assuming that the transformation function satisfies f(φ) ⊣⊢ φ whenever K ⊬ ¬φ. We argue that this assumption makes little sense in our setting: why should we accept information from an untrusted source just because it happens to be consistent with our current beliefs?

Is there an even weaker variant of Weak Success that holds for trust-sensitive revision? As a first idea, one might suggest the following:

If K ⊬ φ and K ∘ ¬φ ⊢ ¬φ then K ∘ φ ⊢ φ (Very Weak Success)

This rule roughly says that, if we trust B when it tells us φ is false, then we should trust B when it tells us φ is true. The decision on whether to accept B's information about φ is judged on B's perceived ability to answer the yes-or-no question of whether φ holds. This intuition is perhaps clearer in the following equivalent formulation:

If K ⊬ φ and K ⊬ ¬φ then [K ∘ φ ⊢ φ iff K ∘ ¬φ ⊢ ¬φ]

Trust-sensitive revision fails this postulate too, in general:

Proposition 7 There exists an AGM revision operator ∗ and a state partition Π such that ∘ defined via ∗ and Π does not satisfy Very Weak Success.

This result follows from the following counterexample, which is given further motivation below.

Example 4 Suppose F = {p, q}, let K = Cn(q), and let ≤_K be the total pre-order in which the models of K are minimal and all other states are maximal. Suppose that the trust partition is Π = {p, q} | {q}, {p} | ∅. Then we have: K ⊬ ¬p and K ∘ ¬p = Cn(¬p ∧ q), so K ∘ ¬p ⊢ ¬p. However, K ∘ p = Cn(q), so K ∘ p ⊬ p.

How disappointed should we be that Very Weak Success fails? We argue that we need not be disappointed at all.
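The computations in Example 4 can be verified mechanically. The sketch below is our own illustration, reusing the frozenset encoding of states from earlier sketches.

```python
# Example 4: Very Weak Success fails. F = {p, q}, K = Cn(q),
# models of K minimal, all other states maximal, and the partition
# {p,q} | {q},{p} | {} (both true / exactly one true / neither true).

def trust_revise(rank, partition, phi_models):
    """Minimal states of the union of cells meeting the models of phi."""
    widened = set().union(*(c for c in partition if c & phi_models))
    m = min(rank[s] for s in widened)
    return {s for s in widened if rank[s] == m}

s_pq, s_p = frozenset({"p", "q"}), frozenset({"p"})
s_q, s_0 = frozenset({"q"}), frozenset()
rank = {s_pq: 0, s_q: 0, s_p: 1, s_0: 1}   # models of Cn(q) are minimal
pi = [{s_pq}, {s_q, s_p}, {s_0}]

after_not_p = trust_revise(rank, pi, {s_q, s_0})   # report ~p
after_p = trust_revise(rank, pi, {s_pq, s_p})      # report p
print(after_not_p)  # {q} only: the report ~p is believed
print(after_p)      # {q} and {p,q}: p is not believed
```

The report ¬p survives because its cells exclude the {p, q} state, while the report p drags in the merged cell containing the q-state, so the result still entails only q.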
The partition Π in the example can only distinguish these cases: both p and q are true, neither is true, exactly one is true. Thus, if we initially believe q, then no report can ever make us believe p ∧ ¬q. There is an asymmetry here: we will trust a report that p is false, but we will not trust a report that it is true.

We can frame the counterexample as the Dead Battery Problem. Suppose a lamp requires two batteries to make a bright light; it is dim with one good battery, and it is off with zero. Now suppose you contribute one battery and your adversary contributes the other. If your adversary tests the lamp in private and tells you that their battery works, you will not trust this conclusion. From your perspective, they may have seen the dim light and jumped to the conclusion that their battery is working. So while you would accept a report that their battery is dead, you will not accept a report that it works.

Although Very Weak Success does not hold, trust-sensitive revision does capture an even weaker variant of Success.

Proposition 8 Every trust-sensitive revision operator satisfies the following postulate: If K ⊬ φ and K ∘ ¬φ ⊢ ¬φ then there exists a consistent formula ψ such that ψ ⊢ φ and K ∘ ψ ⊢ φ. (Feeble Success)

Feeble Success relaxes Very Weak Success by saying that if we trust B when it tells us φ is false (and we did not already believe φ), then B can also bring us to believe φ by telling us φ, perhaps in conjunction with some extra evidence. For instance, in the Dead Battery Problem (Example 4), although you do not accept the report of your adversary when they tell you their battery is working (K ∘ p ⊬ p), you would come to believe it is working if they instead reported that both batteries are working (K ∘ (p ∧ q) = Cn(p ∧ q) ⊢ p).
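Continuing the same encoding (our own illustration, reading p as "the adversary's battery works" and q as "your battery works"), the Feeble Success witness ψ = p ∧ q can be checked directly:

```python
# Dead Battery Problem: reporting p is not believed, but the stronger
# report p & q is, illustrating Feeble Success with witness psi = p & q.

def trust_revise(rank, partition, phi_models):
    """Minimal states of the union of cells meeting the models of phi."""
    widened = set().union(*(c for c in partition if c & phi_models))
    m = min(rank[s] for s in widened)
    return {s for s in widened if rank[s] == m}

s_pq, s_p = frozenset({"p", "q"}), frozenset({"p"})
s_q, s_0 = frozenset({"q"}), frozenset()
rank = {s_pq: 0, s_q: 0, s_p: 1, s_0: 1}   # K = Cn(q)
pi = [{s_pq}, {s_q, s_p}, {s_0}]           # bright / dim / off

report_p = trust_revise(rank, pi, {s_pq, s_p})
report_pq = trust_revise(rank, pi, {s_pq})
print(all("p" in s for s in report_p))    # False: p alone is not believed
print(all("p" in s for s in report_pq))   # True: p & q brings belief in p
```

The stronger report singles out the "bright lamp" cell, which the partition can distinguish, so the adversary's battery is finally believed to work.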
The proof that trust-sensitive revision satisfies Feeble Success makes use of the fact that f defined via a state partition satisfies the following property, which in topological terms essentially says that the complement of a closed set of states is itself closed, i.e., every open set is closed:

f(¬f(φ)) ⊣⊢ ¬f(φ)

Kuratowski transformation functions do not satisfy this property in general, and indeed Feeble Success is not shared by every selective revision operator defined via a Kuratowski transformation function.

Proposition 9 There exists an AGM revision operator ∗ and a Kuratowski transformation function f such that ∘ defined via ∗ and f does not satisfy Feeble Success.

This is proved via the following counterexample.

Example 5 Suppose F = {p} and let f be specified by setting f(⊥) ⊣⊢ ⊥, f(p) ⊣⊢ ⊤, f(¬p) ⊣⊢ ¬p and f(⊤) ⊣⊢ ⊤. (Every other formula in this language is equivalent to precisely one of ⊥, p, ¬p, ⊤, so these four values completely specify f by appeal to equivalence.) One can check that f is a Kuratowski transformation function. Assume K = Cn(⊤), so K ∘ φ = K ∗ f(φ) = Cn(f(φ)) for all φ and any AGM revision operator ∗. Looking at f, we see there is no consistent formula ψ such that f(ψ) ⊢ p, and so there is no ψ such that K ∘ ψ ⊢ p. The consequent of Feeble Success (plugging in φ = p) therefore cannot hold. But we have K ⊬ p and K ∘ ¬p = Cn(¬p) ⊢ ¬p, so the antecedent holds.

4.3 Manipulability

Next we consider a concept that has been extensively studied in areas such as voting theory, preference aggregation and belief merging [Chopra et al., 2006; Everaere et al., 2007; Gibbard, 1973; Satterthwaite, 1975], but that has not yet received attention in the belief change literature, namely manipulability. Let us assume that agent B, in passing information φ to A, does so with the communicative goal of bringing about in A a belief in φ. Under this assumption, we can ask: does B have any incentive to pass on any formula other than φ?
That is, is it possible that A will not believe φ if given φ directly, but will believe it if given some other formula ψ? The following postulate expresses that A can never be manipulated in this way:

If K ∘ φ ⊬ φ and ψ is consistent then K ∘ ψ ⊬ φ (Non-manipulability)

We remark that a slight variant of Non-manipulability has appeared in the literature (with a different motivation) under the name Regularity [Hansson et al., 2001]. Is trust-sensitive revision non-manipulable, i.e., does ∘ defined from an AGM revision operator and a state partition satisfy the above postulate? For our two extreme cases the answer is yes. If Π is the unit partition then ∘ satisfies Success, so B can do no better than telling φ to A to achieve its goal; if Π is the trivial partition then B can say nothing at all to change A's beliefs. In both cases the postulate is trivially satisfied. In general, however, the answer is no, which can be seen from the following.

Proposition 10 If a revision operator satisfies both Feeble Success and Non-manipulability then it satisfies Very Weak Success.

Since we have already seen that trust-sensitive revision satisfies Feeble Success but not, in general, Very Weak Success (Propositions 7 and 8), we can immediately state:

Proposition 11 There exists an AGM revision operator ∗ and a state partition Π such that ∘ defined via ∗ and Π fails Non-manipulability.

So trust-sensitive revision is manipulable. However, this is neither surprising nor undesirable. We have already shown that it is not manipulable in the extreme cases. More informally, the notion of trust means that one agent is willing to believe things said by another. If we trust an agent to be able to draw certain distinctions, then we also accept the consequences of those distinctions when incorporating their reports.

5 Trust Pseudometrics

5.1 Measuring Trust

Thus far, we have assumed that an agent is either trusted to distinguish certain states, or it is not.
This is not sufficient if we consider situations where several different sources may provide conflicting information. In such cases, we need to determine which information source is the most trusted. In order to capture different levels of trust, we introduce a measure of the distance between states. Each agent A associates a distance function d^B_A over states with each agent B. If d^B_A(s, t) = 0, then B cannot be trusted to distinguish between the states s and t. If d^B_A(s, t) is large, then A has a high level of trust in B's ability to distinguish between s and t. We require the distance function to be a pseudo-ultrametric on the state space, which means that it satisfies the following properties for all x, y, z:

1. d(x, x) = 0
2. d(x, y) = d(y, x)
3. d(x, z) ≤ max{d(x, y), d(y, z)}

A pseudo-ultrametric differs from an ultrametric in that d(x, y) = 0 does not imply x = y; this would be undesirable here, since we use distance 0 to represent indistinguishability rather than identity. The third property is the ultrametric inequality, a strengthening of the triangle inequality.

Definition 6 If A ∈ A, an ultrametric trust function T_A is a function mapping each B ∈ A to a pseudo-ultrametric d^B_A on 2^F. The pair (2^F, T_A) is called a trust space.

We associate a sequence of state partitions with each trust space.

Definition 7 Let (2^F, T_A) be a trust space, let B ∈ A, and let i be a number. For each state s, define:

Π^B_A(i)(s) = {t | d^B_A(s, t) ≤ i}.

The following proposition is a known result in the theory of metric spaces, restated in our setting. The proof requires the ultrametric inequality.

Proposition 12 For any number i, the collection of sets {Π^B_A(i)(s) | s ∈ 2^F} is a state partition. Moreover, if i < j, then Π^B_A(i) is a refinement of Π^B_A(j).

The cells of the partition Π^B_A(i) consist of all states that are separated by a distance of no more than i.
Hence, a trust space defines a sequence of partitions for each agent, which gets coarser as the index i increases. Since Definition 4 turns any state partition into a trust-sensitive revision operator, we can now define a trust-sensitive revision operator for any fixed distance i between states. The simplest such operator is the following.

Definition 8 Let (2^F, T_A) be a trust space. The trust-sensitive revision operator for A with respect to B is the trust-sensitive revision operator given by Π^B_A(0).

In other words, given an ultrametric, the simplest revision operator is obtained by saying that states are indistinguishable just in case the distance between them is 0. However, as i increases, we get different partitions that require greater trust in B to distinguish between states. In the next section, we use this sequence of partitions to resolve conflicting reports from agents that are trusted to differing degrees.

Example 6 We give two ultrametrics that produce the same basic trust partition. Assume two doctors: a general practitioner D and a specialist S. The vocabulary includes two symbols: ear and skin. Informally, ear is true if the patient has an ear infection, and skin is true if the patient has skin cancer. An ear infection can be diagnosed by any doctor, whereas skin cancer is typically diagnosed by a specialist. To capture these facts, we define two pseudo-ultrametrics d^D_A and d^S_A over the states labelled as follows: s1 = {ear, skin}; s2 = {ear}; s3 = {skin}; s4 = ∅.

        (s1,s2)  (s1,s3)  (s1,s4)  (s2,s3)  (s2,s4)  (s3,s4)
d^D_A      1        2        2        2        2        1
d^S_A      2        2        2        2        2        2

Note that S is more trusted than D to distinguish states related to skin cancer. As such, S should be believed in the case of conflicting reports from D and S with respect to such states.

5.2 Multiple Reports

Formally, define a report to be a pair (φ, B), where φ is a formula and B ∈ A.
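The construction of Definition 7 can be sketched on the distances of Example 6. The helper below (our own illustration) builds the partition at each threshold from d^D_A and checks the refinement claim of Proposition 12:

```python
# Threshold partitions Pi(i) from the pseudo-ultrametric d_D of Example 6:
# distance 1 for the pairs differing only on skin, distance 2 otherwise.

def threshold_partition(states, d, i):
    """Cells {t | d(s,t) <= i}; a genuine partition because d is a pseudo-ultrametric."""
    cells = []
    for s in states:
        cell = frozenset(t for t in states if d(s, t) <= i)
        if cell not in cells:
            cells.append(cell)
    return cells

def refines(p1, p2):
    """p1 refines p2: every cell of p1 is contained in some cell of p2."""
    return all(any(c1 <= c2 for c2 in p2) for c1 in p1)

states = ["s1", "s2", "s3", "s4"]
close = [frozenset({"s1", "s2"}), frozenset({"s3", "s4"})]
def d_doctor(x, y):
    return 0 if x == y else (1 if frozenset({x, y}) in close else 2)

p0 = threshold_partition(states, d_doctor, 0)  # unit partition
p1 = threshold_partition(states, d_doctor, 1)  # {s1,s2} | {s3,s4}
p2 = threshold_partition(states, d_doctor, 2)  # trivial partition
print(p1)
print(refines(p0, p1) and refines(p1, p2))  # True: smaller i, finer partition
```

At threshold 0 both d^D_A and d^S_A yield the unit partition, which is why the two ultrametrics induce the same basic trust partition in the sense of Definition 8; they diverge only at threshold 1, where d^D_A merges the skin-cancer distinctions.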
For simplicity, in this section we assume that φ is consistent, and also that, given a finite set of reports Φ = {(φ_i, B_i) | i < n} incoming to A, we have B_i ≠ B_j for i ≠ j. We are interested in using ultrametric trust functions to address the situation where an agent simultaneously receives a set of reports from different agents.

Proposition 13 Let {A} ∪ B ⊆ A, and let Φ = {(φ_i, B_i) | i < n} be a finite set of reports. There exists a number m such that ⋂_i