# Computing Preferences Based on Agents' Beliefs

Jian Luo, Fuan Pu, Yulai Zhang, Guiming Luo
School of Software, Tsinghua University
Tsinghua National Laboratory for Information Science and Technology, Beijing 100084, China
{j-luo10@mails, pfa12@mails, zhangyl08@mails, gluo@mail}.tsinghua.edu.cn

In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-14).

This paper considers uncertainty in the knowledgebase together with preferences over arguments. The uncertainty is captured by a weighted satisfiability degree, while a preference relation over arguments is derived from the beliefs of an agent.

## Introduction

Argumentation is a reasoning model based on the construction and evaluation of arguments (Chesñevar, Maguitman, and Loui 2000; Prakken and Vreeswijk 2002). In logical argumentation, arguments are usually constructed from a knowledgebase Δ, possibly including conflicting information (Besnard and Hunter 2006; 2008). The knowledgebase may also contain uncertain information; for example, "it will rain in New York tomorrow" is uncertain to some degree. How, then, should such uncertain information be quantified?

In order to decide on a pairwise basis whether one argument defeats another, preferences over arguments have been harnessed in argumentation theory. Different preference relations between arguments have been defined in (Simari and Loui 1992; Benferhat, Dubois, and Prade 1993; Prakken and Sartor 1997). A preference relation captures differences in the strengths of arguments. However, it is not always clear what these preferences mean or where they come from. Whether an argument is believable depends not only on its intrinsic merits (it must, of course, rest on plausible premises and be sound) but also on the audience to which it is addressed. Hunter introduced the beliefs of an agent into argument judgement (Hunter 2004; Besnard and Hunter 2008); however, only certain information was considered in the knowledgebase.
In this paper, we define the weighted satisfiability degree to capture the uncertainty of the knowledgebase. Based on the beliefs of an agent, a preference relation between arguments is defined. Finally, a preference-based argumentation framework can be derived.

This work was supported by the Funds NSFC61171121 and the Science Foundation of Chinese Ministry of Education - China Mobile 2012. Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

## Satisfiability Degree

Let P = {p1, p2, ..., pn} be a finite, non-empty set of atoms. We use L_P to denote the set of all propositional formulae over P, formed from the logical connectives ¬, ∧, ∨, and →. Let p, q, r, ... represent atoms; α, β, γ, ... denote formulae; and Δ, Φ, Ψ, ... denote finite sets of formulae. The tautology is denoted by ⊤, and ⊥ represents contradiction (falsity). We begin with the concept of satisfiability degree introduced by Luo et al. (Luo, Yin, and Hu 2009).

An assignment of truth values to the elements of a set P = {p1, p2, ..., pn} is called an interpretation relative to P. Let Ω_P = {0, 1}^n be the set of all 2^n different interpretations; thus, for any formula α ∈ L_P and ω ∈ Ω_P, α(ω) ∈ {0, 1}. For any α ∈ L_P, a subset Ω_α ⊆ Ω_P is defined as

Ω_α = {ω | α(ω) = 1, ω ∈ Ω_P}.

Definition 1. Given the set of propositional formulae L_P and the global interpretation field Ω_P, with Ω_α defined as above, the function S : L_P → [0, 1] is called the satisfiability degree (s.d.) on Ω_P if, for every α ∈ L_P,

S(α) = card(Ω_α) / card(Ω_P)   (1)

where card(X) denotes the cardinality of the set X. For more details about the satisfiability degree, including algorithms and properties, please refer to (Luo, Yin, and Hu 2009; Luo, Luo, and Xia 2011).

We write Φ ⊢ α to mean that the set of formulae Φ entails the formula α. The notation ⋀Φ denotes the conjunction of all formulae in Φ; in particular, ⋀∅ = ⊤.

Definition 2. An argument is a pair ⟨Φ, α⟩ such that:
Φ is a consistent, finite set of formulae; α is a formula such that Φ ⊢ α; and no proper subset of Φ entails α. The set of all arguments generated from Δ is denoted by A(Δ). If A = ⟨Φ, α⟩ is an argument for α, we use support(A) = Φ to denote the support of A and claim(A) = α to denote the claim of A.

## Preferences Analysis

The set of atoms contained in α is given by Atoms(α), and the set of atoms used in Φ is computed by Atoms(Φ) = ⋃_{α∈Φ} Atoms(α). The weight vector ϖ of Δ represents the weight assignments for the atoms in Atoms(Δ).

Consider a formula α as a whole, with the two interpretations true and false. Assign weights weight(1) and weight(0) to the true and false interpretations, respectively, such that weight(1) + weight(0) = 1 with weight(1), weight(0) ∈ [0, 1]. Let P = {p1, p2, ..., pn}, where each p_i has the weighted interpretations weight^ϖ_{p_i}(1) = ϖ(p_i) and weight^ϖ_{p_i}(0) = 1 − ϖ(p_i).

Definition 3. Let Ω_P be the global field and ϖ a weight vector on P. The weight of ω ∈ Ω_P, denoted weight_ϖ(ω), is defined as

weight_ϖ(ω) = ∏_{i=1}^{n} [ω_i · ϖ(p_i) + (1 − ω_i) · (1 − ϖ(p_i))]   (2)

where ω = (ω_1, ..., ω_n) and ω_i ∈ {0, 1} (1 ≤ i ≤ n).

Proposition 1. Given a weight vector ϖ on an atom set P and the global field Ω_P for interpreting L_P,

∑_{ω∈Ω_P} weight_ϖ(ω) = 1.

Definition 4. Given a weight vector ϖ on P and the global interpretation field Ω_P, the weighted satisfiability degree (w.s.d.) of α ∈ L_P is defined as

S_ϖ(α) = ∑_{ω∈Ω_α} weight_ϖ(ω).   (3)

The w.s.d. describes the satisfiable extent of any formula in L_P given ϖ. In particular, for the weight vector ϖ_m with ϖ_m(p_i) = 1/2 for every p_i ∈ P, we have weight_{ϖ_m}(ω) = 1/2^n (where card(P) = n) for every ω ∈ Ω_P; in this case, the w.s.d. of α coincides with the s.d. of α.

Proposition 2. Given a weight vector ϖ on atom set P and any formulae α, β ∈ L_P, the w.s.d.
satisfies:

S_ϖ(⊥) = 0, S_ϖ(⊤) = 1, S_ϖ(α) + S_ϖ(¬α) = 1,
S_ϖ(α ∨ β) + S_ϖ(α ∧ β) = S_ϖ(α) + S_ϖ(β).

Definition 5. Given a weight vector ϖ on atom set P and α, β ∈ L_P with S_ϖ(β) > 0, the weighted conditional satisfiability degree (w.c.s.d.) of α given β is

S_ϖ(α | β) = S_ϖ(α ∧ β) / S_ϖ(β).

Proposition 3. Let ϖ be a weight vector on atom set P. For any formulae α, β, γ ∈ L_P with S_ϖ(γ) > 0:

S_ϖ(α | ⊤) = S_ϖ(α), S_ϖ(γ | γ) = 1,
S_ϖ(⊤ | γ) = 1, S_ϖ(⊥ | γ) = 0,
S_ϖ(α ∨ β | γ) + S_ϖ(α ∧ β | γ) = S_ϖ(α | γ) + S_ϖ(β | γ).

Example 1. Let P = {p1, p2, p3}; then Ω_P contains the following 8 interpretations: ω1 = (1, 1, 1), ω2 = (1, 1, 0), ω3 = (1, 0, 1), ω4 = (1, 0, 0), ω5 = (0, 1, 1), ω6 = (0, 1, 0), ω7 = (0, 0, 1), ω8 = (0, 0, 0). For α = (p1 ∨ p2) ∧ ¬p3 and β = ¬p1 ∨ ¬p2 ∨ p3, we have Ω_α = {ω2, ω4, ω6}, Ω_β = Ω_P \ {ω2}, and Ω_{α∧β} = {ω4, ω6}. Consider the weight vector ϖ(p1, p2, p3) = (1/3, 1/2, 3/4); the resulting values are S_ϖ(α) = 1/6, S_ϖ(β) = 23/24, and S_ϖ(β | α) = 3/4. So β is more satisfiable than α, since S_ϖ(β) > S_ϖ(α).

A beliefbase B on Δ is a set of formulae such that S_ϖ(⋀B) > 0. The preference defined below reflects how much an agent supports an argument under his or her beliefs.

Definition 6. Let A ∈ A(Δ), let ϖ be a weight vector on Atoms(Δ), and let B be a beliefbase. The preference for A under B, denoted κ(A, B), is defined as

κ(A, B) = S_ϖ(⋀support(A) | ⋀B).   (4)

Example 2. Consider the argument A1 = ⟨{¬p ∧ q}, ¬p ∨ q⟩ and the weight vector ϖ(p, q) = (1/2, 2/3). Given the beliefbase B = {p ∨ q}, the preference is κ(A1, B) = 1/9.

Proposition 4. Given a knowledgebase Δ, a weight vector ϖ on Atoms(Δ), and a beliefbase B, a preference-based argumentation framework (A, R, prefs) can be derived where: 1) A ⊆ A(Δ); 2) R ⊆ A × A and (A1, A2) ∈ R iff S_ϖ(⋀support(A2) | claim(A1)) = 0; 3) prefs ⊆ A × A such that (A1, A2) ∈ prefs iff κ(A1, B) > κ(A2, B).

## Conclusions

In this paper, classical logic is used for generating arguments. A weight vector is given to represent the weight assignments for the atoms of the uncertain knowledgebase. This vector is then used to induce the weighted satisfiability degree.
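The definitions of the preceding section lend themselves to a direct brute-force implementation. The following short Python sketch (not part of the paper; function names are ours) enumerates all interpretations to compute the weighted satisfiability degree (Definitions 3 and 4) and the weighted conditional satisfiability degree (Definition 5), reproducing the values of Example 1 under the reading α = (p1 ∨ p2) ∧ ¬p3 and β = ¬p1 ∨ ¬p2 ∨ p3, which matches the listed sets Ω_α and Ω_β.

```python
from fractions import Fraction
from itertools import product

def weight(omega, w):
    """Weight of an interpretation omega (tuple of 0/1) under
    weight vector w: the product in Definition 3."""
    result = Fraction(1)
    for bit, wi in zip(omega, w):
        result *= wi if bit == 1 else 1 - wi
    return result

def wsd(formula, w):
    """Weighted satisfiability degree (Definition 4): sum the weights of
    all interpretations satisfying `formula` (a 0/1-tuple predicate)."""
    return sum(weight(omega, w)
               for omega in product((0, 1), repeat=len(w))
               if formula(omega))

def wcsd(alpha, beta, w):
    """Weighted conditional satisfiability degree of alpha given beta
    (Definition 5); requires wsd(beta, w) > 0."""
    joint = wsd(lambda o: alpha(o) and beta(o), w)
    return joint / wsd(beta, w)

# Example 1: P = {p1, p2, p3}, weight vector (1/3, 1/2, 3/4)
w = (Fraction(1, 3), Fraction(1, 2), Fraction(3, 4))
alpha = lambda o: (o[0] or o[1]) and not o[2]       # (p1 v p2) ^ ~p3
beta  = lambda o: not (o[0] and o[1] and not o[2])  # ~p1 v ~p2 v p3

print(wsd(alpha, w))         # 1/6
print(wsd(beta, w))          # 23/24
print(wcsd(beta, alpha, w))  # 3/4
```

The preference of Definition 6 is then just `wcsd` applied to the conjunction of an argument's support and the conjunction of the beliefbase; exact `Fraction` arithmetic avoids rounding artifacts when comparing κ values in the prefs relation.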
Based on the beliefs of an agent, a preference is proposed to measure how much an agent supports an argument. Finally, a preference-based argumentation framework can be derived from the knowledgebase given the beliefbase.

## References

Benferhat, S.; Dubois, D.; and Prade, H. 1993. Argumentative inference in uncertain and inconsistent knowledge bases. In Proceedings of UAI'93, 411–419.

Besnard, P., and Hunter, A. 2006. Knowledgebase compilation for efficient logical argumentation. In KR'06, 123–133.

Besnard, P., and Hunter, A. 2008. Elements of Argumentation. Cambridge, MA: MIT Press.

Chesñevar, C. I.; Maguitman, A. G.; and Loui, R. P. 2000. Logical models of argument. ACM Computing Surveys 32(4):337–383.

Hunter, A. 2004. Making argumentation more believable. In AAAI'04, 269–274.

Luo, J.; Luo, G.; and Xia, M. 2011. An algorithm for satisfiability degree computation. In IJCCI (ECTA-FCTA)'11, 501–504.

Luo, G.; Yin, C.; and Hu, P. 2009. An algorithm for calculating the satisfiability degree. In Proceedings of FSKD (2)'09, volume 7, 322–326.

Prakken, H., and Sartor, G. 1997. Argument-based extended logic programming with defeasible priorities. Journal of Applied Non-Classical Logics 7:25–75.

Prakken, H., and Vreeswijk, G. 2002. Logics for defeasible argumentation. Handbook of Philosophical Logic 4(5):219–318.

Simari, G., and Loui, R. 1992. A mathematical treatment of defeasible reasoning and its implementation. Artificial Intelligence 53:125–157.