# On the Relationship between P-log and LPMLN

Evgenii Balai and Michael Gelfond
Texas Tech University, Lubbock, Texas
{evgenii.balai, michael.gelfond}@ttu.edu

Abstract

The paper investigates the relationship between the knowledge representation languages P-log [Baral et al., 2004] and LPMLN [Lee et al., 2015], designed for representing and reasoning with logic and probability. We give a translation from an important subset of LPMLN to P-log which preserves the probability functions defined by LPMLN programs. It complements recent research [Lee and Wang, 2016] by the authors of LPMLN, where they give a similar translation from a subset of P-log to their language. This work sheds light on the different ways the two languages treat inconsistency.

This work was sponsored by NASA under the grant CertWare ABSA: Argument Based Safety Assurance, Grant #: NRA NNH11ZEA001N-SSAT2.

1 Introduction

Combining logic and probability has been one of the most important directions of artificial intelligence research in recent years. Many different languages and formalisms have been developed to represent and reason about both probabilistic and logical arguments, such as ProbLog [De Raedt et al., 2007; Fierens et al., 2015], PRISM [Sato, 1995; 2009], LPADs [Vennekens et al., 2004], CP-Logic [Vennekens et al., 2009], MLN [Richardson and Domingos, 2006], and others.

In this paper we focus on two such languages, P-log and LPMLN. They are distinguished from the other mentioned alternatives by their common logical base, Answer Set Prolog (ASP) [Gelfond and Lifschitz, 1991], a logical formalism modeling the beliefs of a rational agent. ASP is powerful enough to naturally represent defaults, non-monotonically update the knowledge base with new information, define relations recursively, reason about causal effects of actions, etc. The language serves as the foundation of the so-called Answer Set Programming paradigm [Marek and Truszczynski, 1999; Niemela, 1998] and has been used in a large number of applications [Brewka et al., 2011]. An agent associated with an ASP knowledge base reasons with three degrees of belief: it can believe that p is true, believe that p is false, or remain uncommitted about its belief in p. In the latter case the truth of p remains unknown.

An extension of ASP, called P-log, allows the reasoner to express and reason with a finer, numerically expressed gradation of the strength of its beliefs. In other words, it preserves the power of ASP and, in addition, allows an agent to do sophisticated probabilistic reasoning. The main goal of the P-log designers was to provide a language and reasoning mechanism which can be used for clear and transparent modeling of knowledge involving logical and probabilistic arguments. A number of nontrivial scenarios are formalized in [Baral et al., 2009]. More computationally challenging scenarios, including probabilistic planning and diagnosis, can be found in [Zhu, 2012], where their P-log representations and performance analysis are given. A new version of P-log, introduced in [Gelfond and Rushton, 2010], replaces ASP by its extension CR-Prolog [Balduccini and Gelfond, 2003], which expands the logical power of ASP (and hence of the original P-log) by allowing so-called consistency restoring rules (cr-rules), used for restoring consistency of the program by a certain form of abductive reasoning. Despite the presence of cr-rules, the underlying philosophy of P-log requires the corresponding knowledge base to be consistent.
A rational reasoner is assumed to trust its rules and refuses to deal with a knowledge base containing statements p and ¬p. Possible means of ensuring consistency of the program should be supplied by a knowledge engineer. This requirement is natural given the theoretical goals of the authors, but it is also important in many practical applications, where inconsistency of the knowledge base may be a sign of errors in its design and should therefore be addressed by making the necessary changes.

The language LPMLN, introduced in [Lee et al., 2015], is based on a different philosophy. Its first goal seems to be similar to that of P-log: it is supposed to provide means for combining ASP-based reasoning with reasoning about probability. But, in addition, the new language is aimed at providing a powerful (though somewhat less predictable) way of resolving inconsistencies which may appear in LPMLN programs due to mechanical combination of different knowledge bases, designer mistakes, or other reasons. The design of the language was influenced by Markov Logic Networks [Richardson and Domingos, 2006] and seems to be practically independent from P-log. As a result, the relationship between these two languages with seemingly similar goals remains unclear. This paper is a step toward remedying this situation. In particular, we give a translation from an important subset of LPMLN to P-log which preserves the probability functions defined by LPMLN programs. The work complements recent research [Lee and Wang, 2016] by Lee and Wang in which the authors give a translation from a subset of P-log to LPMLN.

The rest of the paper is organized as follows. In Section 2 we define the subset of P-log used in this paper. In Section 3 we briefly describe the syntax and semantics of LPMLN. In Section 4 we describe a translation from LPMLN to P-log and define the correspondence between LPMLN programs and their P-log translations precisely. Section 5 uses the results from Section 4 to describe certain properties of probabilities defined by LPMLN programs. Section 6 concludes the paper by summarizing the obtained results and outlining future work.

2 Language P-log

In this section we introduce a simplified version of P-log with consistency restoring rules from [Gelfond and Rushton, 2010] which is sufficient to define the translation from LPMLN. We do so by considering a simple domain consisting of two adjacent rooms, r1 and r2, and a robot initially located in r1. We present a P-log program, Π0, modeling the direct effects of the action move of the robot attempting to enter the second room. The program is presented to illustrate the language constructs, so we ignore concerns about its generality, elaboration tolerance, etc.

We start with declarations of the objects and functions of the domain. In P-log such functions are usually referred to as attributes, while expressions of the form f(x̄), where f is an attribute, are called attribute terms. We need the sort

step = {0, 1}

where 0 and 1 denote the time steps before and after the execution of the action respectively. We also need the sort

room = {r1, r2}

and attributes

move, broken : boolean
loc : step → room

Here move is true iff at step 0 the robot has attempted to move to room r2; broken holds if the robot has been broken and hence may exhibit some non-deterministic behavior; loc gives the location of the robot at a given step. Declarations of P-log are followed by the program rules.
In our case we will have the rules

loc(0) = r1. (1)
move. (2)

indicating that at step 0 the robot, located in room r1, attempted to move to room r2. Here move is a shorthand for move = true. We use this convention for all boolean functions: f(x̄) = true and f(x̄) = false are written as f(x̄) and ¬f(x̄) respectively. As expected, the effect of the action move for the well-functioning robot is given by the rule:

loc(1) = r2 ← not broken, move. (3)

If the robot is malfunctioning, however, we need to state that the effect of move is random: the robot can still successfully move to room r2 or stay in room r1. In P-log this is expressed by the following random selection rule r:

[r] random(loc(1)) ← broken, move. (4)

which says that if the malfunctioning robot attempts to move to room r2 then, in the resulting state, attribute term loc(1) will randomly take a value from the range of loc.

The rules of the program described so far can be easily translated into regular ASP rules: we simply need to replace random(loc(1)) in the last rule by (loc(1) = r1 or loc(1) = r2), replace atoms of the form f(x̄) = y by f(x̄, y) and, for every term f(x̄), add a constraint

← f(x̄) = y1, f(x̄) = y2, y1 ≠ y2.

In general, an atom of the form random(f(x̄)) is replaced by the disjunction f(x̄) = y1 or ... or f(x̄) = yk, where {y1, ..., yk} is the range of f. Answer sets of the translation of a P-log program Π into ASP are viewed as possible worlds of the probabilistic model defined by Π. It is easy to see that program Π0, consisting of rules (1)-(4), has one such possible world W0 = {move, loc(0) = r1, loc(1) = r2} (for convenience we will often identify original P-log literals with corresponding ASP ones, e.g., we will sometimes write loc(1) = r1 in place of loc(1, r1)). Program Π1 = Π0 ∪ {broken} will have two possible worlds, W1 = {broken, move, loc(0) = r1, loc(1) = r2} and W2 = {broken, move, loc(0) = r1, loc(1) = r1}.

In the case of multiple possible worlds we need some device for specifying the numeric probabilities of possible values of random attributes. This is expressed in P-log through causal probability statements, or, simply, pr-atoms. A pr-atom takes the form

pr_r(f(x̄) = y) = v

where f(x̄) is a random attribute term, y is a value from the range of f, and v ∈ [0, 1] is the probability of y being selected as the value of f(x̄) as the result of firing random selection rule r. In the case of Π1 such pr-atoms may look as, say,

pr_r(loc(1) = r1) = 0.3
pr_r(loc(1) = r2) = 0.7

The unnormalized probabilistic measure of a possible world W is defined as the product of the probabilities of the random atoms f1(x̄1) = y1, ..., fk(x̄k) = yk from W. These probabilities are obtained from the corresponding pr-atoms. Normalized probabilistic measures and the probability function on sets of possible worlds and on literals of the language are defined as usual. Let P0 and P1 be the probability functions defined by Π0 and Π1 respectively. W0 has no random atoms, the empty product is 1, and hence the probabilistic measure of W0 and P0(loc(1) = r2) are both equal to 1. The probabilistic measures of W1 and W2 are 0.7 and 0.3 respectively, and hence P1(loc(1) = r1) = 0.3.
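To make the computation of P1 concrete, here is a minimal Python sketch of the normalization step (our illustration, not part of any P-log solver; the possible worlds and their pr-atom values are hard-coded from the example above):

```python
# Minimal sketch: unnormalized measures of the possible worlds of Pi_1
# as products of pr-atom values of their random atoms, then normalization.
# W0 of Pi_0 would get the empty list, whose product is 1.
possible_worlds = {
    "W1": [0.7],  # rule r selected loc(1) = r2, pr_r(loc(1) = r2) = 0.7
    "W2": [0.3],  # rule r selected loc(1) = r1, pr_r(loc(1) = r1) = 0.3
}

def measure(pr_values):
    m = 1.0
    for p in pr_values:  # product of pr-atom values; empty product is 1
        m *= p
    return m

total = sum(measure(ps) for ps in possible_worlds.values())
probs = {w: measure(ps) / total for w, ps in possible_worlds.items()}
print(probs)  # {'W1': 0.7, 'W2': 0.3}; hence P1(loc(1) = r1) = 0.3
```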
As mentioned in the introduction, a P-log program can be inconsistent. For instance, program Π2 = Π0 ∪ {loc(1) = r1} has no possible worlds. To avoid this particular inconsistency the program designer can expand Π0 by a cr-rule:

broken ←⁺ . (5)

which allows restoring consistency of Π2 by assuming that the robot is broken. Since the original program Π0 is consistent, the resulting program, Π0^new, will define the same probabilistic model as Π0. The program Π2^new, consisting of Π0^new and the fact loc(1) = r1, unlike the program Π2, will be consistent and have one possible world, W2. The extension of Π0 by new information changed the probability of the robot being in room r2 after execution of move from 1 to 0.

3 Language LPMLN

In this section we give a brief summary of LPMLN [Lee et al., 2015]. We limit our attention to ground programs whose rules contain no double default negation not not and no disjunction. To the best of our knowledge, no example in the literature demonstrating the use of LPMLN for formalization of knowledge uses these constructs. As usual, we may use rules with variables, viewed as shorthands for the sets of their ground instances. A program of the language is a finite set of LPMLN rules: ground ASP rules preceded by a weight, either the symbol α or a real number. Rules of the first type are called hard while rules of the second type are referred to as soft. Despite their name, the hard rules are not really "hard". Their behavior is reminiscent of that of defaults: according to the semantics of the language, the reasoner associated with a program constructs possible worlds with non-zero probability by trying to satisfy as many hard rules as possible. The satisfiability requirement for the soft rules and the use of their weights for assigning the probability measure to possible worlds of M are more subtle. In what follows we give the necessary definitions and illustrate them with an example of an LPMLN program.

Sometimes we abuse the notation and identify an LPMLN program M with its ASP counterpart obtained from M by dropping the weights of its rules. Stable models of such a counterpart will be referred to as ASP models of M. By M_I we denote the set of rules of M which are satisfied by an interpretation I of M. An interpretation W is a possible world of M if it is an ASP model of M_W. We will say that such a possible world W is supported by M_W; more generally, W is supported by M′ ⊆ M if W is an ASP model of M′. As usual, by Ω_M we denote the set of all possible worlds of M. The unnormalized measure of a possible world W ∈ Ω_M, denoted by w_M(W), is exp(γ), where γ is the sum of the weights of all rules of M satisfied by W. Note that, in case M contains rules with α-weights satisfied by W, w_M(W) is not a numerical value and should be understood as a symbolic expression. The probability function P_M defined by program M is

P_M(W) = lim_{α→∞} w_M(W) / Σ_{V ∈ Ω_M} w_M(V)

It is easy to check that P_M maps possible worlds of M into the interval [0, 1] and satisfies the standard axioms of probability. As expected, the probabilistic model defined by M consists of Ω_M and P_M.
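Here is a small Python sketch of this limit (our illustration; the enumeration of possible worlds and the counts of satisfied rules are taken as given, and the numbers anticipate program M1 constructed below). Since w_M(W) = exp(n·α + s) for a world satisfying n hard rules and soft rules with weight sum s, as α → ∞ only the worlds with the maximal n retain non-zero probability, in proportion to exp(s):

```python
import math

def lpmln_probabilities(worlds):
    """worlds maps a world name to a pair (h, s): the number of hard rules
    and the sum of the weights of the soft rules the world satisfies.
    In the limit alpha -> infinity, only worlds with the maximal hard-rule
    count keep non-zero probability, in proportion to exp(s)."""
    max_hard = max(h for h, _ in worlds.values())
    z = sum(math.exp(s) for h, s in worlds.values() if h == max_hard)
    return {name: (math.exp(s) / z if h == max_hard else 0.0)
            for name, (h, s) in worlds.items()}

# Hypothetical summary of three possible worlds of M1 (defined below):
# W1 satisfies all 5 hard rules and the soft rule with weight ln(0.7);
# W2 satisfies all 5 hard rules and the soft rule with weight ln(0.3);
# V = {move} violates two hard rules but satisfies both soft rules vacuously.
worlds = {
    "W1": (5, math.log(0.7)),
    "W2": (5, math.log(0.3)),
    "V":  (3, math.log(0.3) + math.log(0.7)),
}
print(lpmln_probabilities(worlds))  # {'W1': 0.7, 'W2': 0.3, 'V': 0.0}
```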
Let us now use LPMLN to formalize the stories from the previous section. Program M0 will capture the first such story, corresponding to P-log program Π0. It clearly should contain rules (1)-(3) of Π0. In addition, for every attribute f it must include a constraint

α : ← f(X) = Y1, f(X) = Y2, Y1 ≠ Y2 (6)

which is hidden in the P-log semantics of Π0. All these rules, however, should be supplied with some weights. Since we strongly believe that the rules are correct, we would like to preserve as many of them as possible; hence, we view them as hard. LPMLN does not have a direct analog of rule (4), but it seems natural to represent it by two rules:

ln(0.3) : loc(1) = r1 ← broken, move (7)
ln(0.8) is not used here; instead
ln(0.7) : loc(1) = r2 ← broken, move (8)

where the logarithms are added to the probabilities to cancel the exponentiation in the definition of the unnormalized measure w_M. In addition, the hard rule

α : ← not loc(1) = r1, not loc(1) = r2 (9)

is added to force loc(1) to take a value (in P-log this is guaranteed by the semantics of disjunction). This concludes the construction of M0. It is worth noting that M0 is similar to the program obtained from Π0 by the general translation from P-log to LPMLN described in [Lee and Wang, 2016].

We will show that there is a simple relationship between the probabilistic models defined by Π0 and M0. The possible worlds of Π0 correspond to the possible worlds of M0 with non-zero probability (also called probabilistic stable models of M0 in [Lee et al., 2015]). Moreover, the probability functions P_M0 and P_Π0 coincide on the probabilistic stable models of M0. Let us first notice that W0 = {move, loc(0) = r1, loc(1) = r2} is an ASP model of (M0)_W0 and hence is a possible world of M0. The probability of W0 is 1. Clearly, W0 is the only possible world of M0 satisfying all its hard rules. M0, however, has other possible worlds. For instance, V = {move} satisfies all the rules of M0 \ {(1), (3), (9)} and is the stable model of this program. Therefore, it is a possible world of M0. It is easy to check, however, that V is not a probabilistic stable model of M0. In fact, this is a consequence of a general result in [Lee et al., 2015] which says that if there is a possible world of an LPMLN program M which satisfies all its hard rules, then every probabilistic stable model of M also satisfies them. The result clearly implies that W0 is the only probabilistic stable model of M0.

The program M1 = M0 ∪ {α : broken} is again similar to Π1. It has two probabilistic stable models, W1 and W2, with probabilities equal to 0.7 and 0.3 respectively. As before, there are other possible worlds, but none of them satisfies all the hard rules of M1 and hence they have probability 0.

A more serious difference can be observed, however, between the program Π2 and the new program M2 obtained from M0 by adding the rule

α : loc(1) = r1 (10)

Since the rules of Π2 are strict and, therefore, should be satisfied by possible worlds, the program Π2 is inconsistent. M2, however, does not have such a restriction. It has three probabilistic stable models. The first one is an ASP model of M2 \ {(2)}; it resolves the contradiction by assuming that the robot did not attempt to move. The second is an ASP model of M2 \ {(3)}; here the contradiction is removed by abandoning the causal law (3). A third possible explanation is given by an ASP model of M2 \ {(10)}, which assumes that our observation of the robot being in r1 after the execution of move is incorrect. This seems to be a reasonable answer.

Finally, to model the effect of consistency restoring rule (5), we extend M0 with the rule

w : broken (11)

where w is a very large negative weight. The resulting program M0^new will have three probabilistic stable models; in two of them the robot is believed to be broken, but the probabilities of both are very low. This behavior is quite different from (and, in some sense, less elegant than) the similar case in P-log, where the cr-rule (5) was not used since the program Π0 is consistent. Similarly to Π2^new, the program M2^new, consisting of M0^new and the fact loc(1) = r1, has exactly one probabilistic stable model, W2 (which satisfies all the hard rules of the program). As in P-log, the semantics of LPMLN allows updating the probability of the robot being in room r2 at step 1 from 1 to 0 by adding new information. However, unlike in P-log, extending the original program M0 with the soft counterpart (11) of the cr-rule (5) introduces new probabilistic stable models of the program with negligible probabilities.
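The effect of the large negative weight w in (11) can be checked numerically (a self-contained Python sketch; the rule-satisfaction sums reflect our reading of the example: all three models satisfy the same hard rules, rules (7) and (8) are vacuously satisfied in W0, and w = -1000 is an arbitrary stand-in for "very large negative"):

```python
import math

# All three probabilistic stable models of M0^new satisfy the same hard
# rules, so their limit probabilities are proportional to exp(soft sum).
w = -1000.0  # hypothetical "very large negative weight" of rule (11)
soft_sums = {
    "W0":  math.log(0.3) + math.log(0.7),  # (7), (8) vacuous; (11) unsatisfied
    "W1'": math.log(0.7) + w,              # broken, still reached r2
    "W2'": math.log(0.3) + w,              # broken, stayed in r1
}
z = sum(math.exp(s) for s in soft_sums.values())
print({name: math.exp(s) / z for name, s in soft_sums.items()})
# W0 gets essentially all the mass; W1' and W2' are negligible
```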
4 From LPMLN to P-log

In this section we state the main result of this paper, establishing a relationship between LPMLN and P-log programs. First we need a definition. Let M be an LPMLN program and At(M) be the set of atoms of M.

Definition 1 (Counterpart). A P-log program Π is called a counterpart of M if there exists a bijection φ from the set of probabilistic stable models of M to the set of possible worlds of Π such that

1. for every probabilistic stable model W of M, if P_M and P_Π are the probability functions defined by M and Π respectively, then P_M(W) = P_Π(φ(W));
2. for every probabilistic stable model W of M, W and φ(W) coincide on the atoms of M.

Main Theorem. For every LPMLN program M (as defined in Section 3) there exists a P-log counterpart of M, τ(M), which is linear-time constructible.

The theorem immediately implies the following important corollary:

Corollary 1. If A is an atom of an LPMLN program M, then P_M(A) = P_τ(M)(A), where the probabilities P_M(A) and P_τ(M)(A) are defined as the sums of the probabilities of the possible worlds of the corresponding program which contain A.

To prove the theorem we need the following lemma, which gives an alternative characterization of probabilistic stable models of M.

Lemma 1. Let W be an interpretation satisfying n hard rules of M. W is a probabilistic stable model of M if and only if
1. W is a possible world of M supported by some M′ ⊆ M, and
2. no possible world of M supported by some M″ ⊆ M satisfies more than n hard rules of M.

Proof of the Main Theorem: Due to space limitations, we only provide the construction of τ(M) for a given program M and define the map φ from Definition 1 (the complete proof of the theorem, including a proof of Lemma 1 and a short outline, can be found in the extended version of the paper, available at http://www.depts.ttu.edu/cs/research/krlab/pdfs/papers/mlplogfull.pdf). We will assume that all atoms of M are of the form p, where p is an identifier (in the general case, atoms of the form p(t1, ..., tn) can be translated into unique identifiers). In what follows we construct a P-log program which chooses a subprogram M′ of M and computes the ASP models of M supported by M′ such that no possible world of M is supported by a program containing more hard rules than M′. By Lemma 1, these models are probabilistic stable models of M. Appropriate probability atoms will ensure that the corresponding probabilities match. Let r1, ..., rn be an enumeration of the rules of M. We will refer to i as the label of ri. The translation τ(M) is defined as follows:

1. τ(M) contains
(a) declarations of the sorts hard and soft, the sets of labels of hard and soft rules of M respectively;
(b) a declaration a : boolean for each atom a in the signature of M;
(c) declarations of the auxiliary attributes

h, b, selected, sat : soft → boolean
ab : hard → boolean

whose meaning will be explained later. We refer to this part of the translation as the declaration part of τ(M).

2. For every hard rule ri of the form

α : head ← body (12)

τ(M) contains the rules:

head ← body, not ab(i) (13)
ab(i) ←⁺ . (14)

The auxiliary relation ab(i) says that rule ri is abnormal (or not applicable). The addition of not ab(i) turns the translation (13) of M's rule (12) into a default rule of P-log.
The cr-rule (14), called the Contingency Axiom [Gelfond and Rushton, 2010], says that the reasoner may possibly believe ab(i). This possibility, however, may be used only if there is no way to obtain a consistent set of beliefs using only the regular rules of the program. It is commonly used to capture indirect exceptions to defaults [Gelfond and Kahl, 2014]. Together, these rules make it possible to stop the application of a minimal number of the hard rules of M, thus avoiding possible inconsistency and conforming to the semantics of such rules in LPMLN. This completes the translation for programs consisting of hard rules only.

3. For every soft rule ri of the form

w : head ← body (15)

τ(M) contains the rules:

head ← body, selected(i) (16)
random(selected(i)) (17)
← ¬selected(i), sat(i) (18)

The auxiliary relation selected(i) says that the rule with label i is selected; the relation sat(i) stands for "the rule with label i is satisfied". The addition of selected(i) to the body of the translation (16) of M's rule (15), together with random selection rule (17), allows a reasoner to select the soft rules of a candidate subprogram M′ of M. Constraint (18) is used to ensure that the computed models of M′ satisfy condition 1 of Lemma 1. Of course, to make this work we need the definition of sat, which is given by the following rules:

sat(i) ← b(i), h(i) (19)
sat(i) ← not b(i) (20)
b(i) ← B (21)

where B is the body of soft rule ri, and

h(i) ← l (22)

for every literal l in the head of soft rule ri. As expected, b(i) stands for "the body of ri is satisfied" and h(i) for "the head of ri is satisfied".

4. Finally, for every soft rule ri, τ(M) contains the probability atom

pr(selected(i)) = e^wi / (1 + e^wi) (23)

which says that the soft rule ri with weight wi is selected (that is, added to M′) with probability e^wi / (1 + e^wi).

It is easy to see that the size of τ(M) is linear in the size of M. Moreover, τ(M) is modular, that is, it can be easily extended if new rules are added to M. The map φ is defined as follows:

φ(W) = W ∪ {ab(i) | i ∈ hard, W does not satisfy ri}
         ∪ {sat(i) | i ∈ soft, W satisfies ri}
         ∪ {¬selected(i) | i ∈ soft, W does not satisfy ri}
         ∪ {selected(i) | i ∈ soft, W satisfies ri}
         ∪ {b(i) | i ∈ soft, W satisfies the body of ri}
         ∪ {h(i) | i ∈ soft, W satisfies the head of ri}

The rest of the proof can be outlined as follows. We first show that for every probabilistic stable model W, φ(W) is a possible world of τ(M). This can be done using standard techniques suitable for CR-Prolog programs. After that, we show the bijectivity of φ. This step can be split into two parts. First, the surjectivity of φ follows from the fact that the probabilistic stable model V of M obtained from a possible world W of τ(M) by dropping all newly introduced literals satisfies φ(V) = W. Second, the injectivity follows trivially from the definition of φ. Finally, the required probability equality from Definition 1 follows from the definitions of the probability functions in the two languages.
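The construction above is mechanical enough to write down directly. Below is a Python sketch of a generator for the rules of τ(M) (our illustration, not the authors' implementation): ASCII `<-`, `<-+` and `-` stand for ←, ←⁺ and ¬, a rule is a triple (weight, head, body) with weight None for α, heads are restricted to at most one literal, and the declaration part is omitted for brevity:

```python
import math

def translate(rules):
    """Emit the P-log rules of tau(M) as strings, following (13)-(23)."""
    hard = [i for i, (w, _, _) in enumerate(rules, 1) if w is None]
    soft = [i for i, (w, _, _) in enumerate(rules, 1) if w is not None]
    out = [f"hard = {{{', '.join(map(str, hard))}}}.",
           f"soft = {{{', '.join(map(str, soft))}}}."]
    for i, (w, head, body) in enumerate(rules, 1):
        if w is None:
            # hard rule: default (13) plus Contingency Axiom (14)
            guard = f"{body}, not ab({i})" if body else f"not ab({i})"
            out += [(f"{head} <- {guard}." if head else f"<- {guard}."),
                    f"ab({i}) <-+ ."]
        else:
            # soft rule: rules (16)-(22) plus pr-atom (23)
            guard = f"{body}, selected({i})" if body else f"selected({i})"
            out.append(f"{head} <- {guard}." if head else f"<- {guard}.")
            out.append(f"random(selected({i})).")                     # (17)
            out.append(f"<- -selected({i}), sat({i}).")               # (18)
            out.append(f"sat({i}) <- b({i}), h({i}).")                # (19)
            out.append(f"sat({i}) <- not b({i}).")                    # (20)
            out.append(f"b({i}) <- {body}." if body else f"b({i}).")  # (21)
            if head:
                out.append(f"h({i}) <- {head}.")                      # (22)
            p = math.exp(w) / (1 + math.exp(w))
            out.append(f"pr(selected({i})) = {p:.4g}.")               # (23)
    return out

# Usage: the concert example from [Lee et al., 2015], discussed next.
M = [(None, "concertBooked", ""),
     (None, "longDrive", "concertBooked, not cancelled"),
     (math.log(0.2), "cancelled", ""),
     (math.log(0.8), "", "cancelled")]
print("\n".join(translate(M)))
```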
The following is an example of the translation.

Example. Consider the following LPMLN program M from [Lee et al., 2015]:

α : concertBooked.
α : longDrive ← concertBooked, not cancelled.
ln(0.2) : cancelled.
ln(0.8) : ← cancelled.

The program has two probabilistic stable models (each of which satisfies both of its hard rules):

1. V1 = {concertBooked, cancelled}
2. V2 = {concertBooked, longDrive}

with corresponding probabilities equal to 0.2 and 0.8. The corresponding translation τ(M) looks as follows:

% declaration part:
soft = {3, 4}.
hard = {1, 2}.
concertBooked, cancelled, longDrive : boolean.
b, h, selected, sat : soft → boolean.
ab : hard → boolean.

% translation of hard rules:
concertBooked ← not ab(1).
longDrive ← concertBooked, not cancelled, not ab(2).
ab(R) ←⁺ .

% translation of soft rules:
cancelled ← selected(3).
← cancelled, selected(4).
random(selected(R)).
← ¬selected(R), sat(R).

% definition of satisfiability:
sat(R) ← not b(R).
sat(R) ← b(R), h(R).
b(3).
b(4) ← cancelled.
h(3) ← cancelled.

% probability atoms:
pr(selected(3)) = 0.2/(1 + 0.2).
pr(selected(4)) = 0.8/(1 + 0.8).

The translation τ(M) has two possible worlds:

1. U1 = {selected(3), ¬selected(4), h(3), cancelled, b(3), b(4), sat(3), concertBooked}
2. U2 = {¬selected(3), selected(4), b(3), longDrive, concertBooked, sat(4)}

As expected, on the atoms of M, U1 and U2 coincide with the corresponding probabilistic stable models {cancelled, concertBooked} and {concertBooked, longDrive} of M (more specifically, U1 = φ(V1) and U2 = φ(V2)). It can be easily checked that, as promised, P_τ(M)(U1) = P_M(V1) = 0.2 and P_τ(M)(U2) = P_M(V2) = 0.8.

5 Probabilities of Soft Rules in LPMLN

Let M be an LPMLN program with at least one soft rule ri of the form wi : head ← body. The authors of LPMLN view ri as an implication and define the probability P_M(ri) as follows:

P_M(ri) = Σ_{W ∈ Ω_M, W ⊨ ri} P_M(W) (24)

Note that replacing Ω_M with the set of all probabilistic stable models of M gives an equivalent definition. We will use the result obtained in the previous section to investigate the relationship between the reasoner's confidence in ri, i.e., its weight wi, and its probability P_M(ri). It seems natural to expect that P_M(ri) grows with wi. However, this is not necessarily the case. Let us consider the following program:

ln(3) : a.
ln(3) : ← a.
ln(2) : b.

Despite its larger weight, the first rule has smaller probability than the third one (the corresponding probabilities are equal to 1/2 and 2/3 respectively). Informally speaking, this happens because the first rule is inconsistent with the second one, while the third one has no such restriction.

We next use the results from the previous section to obtain an alternative understanding of the probability P_M(ri). Let τ(M) be the counterpart of M described there and φ be the bijection from Definition 1. It can easily be seen that ri is satisfied by a possible world W of M iff φ(W) contains selected(i). This, together with the first clause of Definition 1, implies that:

P_M(ri) = P_τ(M)(selected(i)) (25)

That is, the probability of ri in M is equal to the probability of selected(i) in the P-log program τ(M). In general, this probability depends on all the possible worlds of τ(M) and their probabilities. However, in some cases it is determined uniquely by the weight of ri. This is always the case if τ(M) belongs to the class of coherent P-log programs, where this probability is equal to the value of the pr-atom pr(selected(i)) in (23). This class, and sufficient conditions for a P-log program to belong to it, are given in [Baral et al., 2009]. For instance, it can be checked that the translation of the program M3, consisting of the soft facts ln(3) : a and ln(2) : b, is coherent. Thus, the fact that the probability of a is equal to e^ln(3)/(1 + e^ln(3)) = 3/4 can be obtained directly from the corresponding pr-atom (23) of τ(M3). Note that, in general, to compute the probability of an atom we may need to perform fairly complex inference (e.g., compute the possible worlds of the program).
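The probabilities 1/2 and 2/3 above are easy to verify by brute force. The following Python sketch (ours; it relies on the observation that in this particular program every subset of {a, b} is a possible world, since each subset is a stable model of exactly the rules it satisfies) enumerates the worlds, weighs them, and sums the weights of the worlds satisfying each rule:

```python
import math
from itertools import combinations

# Brute-force check of P_M(r1) = 1/2 and P_M(r3) = 2/3 for the program
# M = { ln(3): a,  ln(3): <- a,  ln(2): b }.
rules = [
    (math.log(3), lambda W: "a" in W),      # ln(3) : a.      (a fact)
    (math.log(3), lambda W: "a" not in W),  # ln(3) : <- a.   (a constraint)
    (math.log(2), lambda W: "b" in W),      # ln(2) : b.      (a fact)
]
worlds = [frozenset(c) for k in range(3) for c in combinations(["a", "b"], k)]
weight = {W: math.exp(sum(w for w, sat in rules if sat(W))) for W in worlds}
z = sum(weight.values())

def rule_prob(i):
    """Sum of normalized weights of the worlds satisfying rule i."""
    return sum(wt for W, wt in weight.items() if rules[i][1](W)) / z

print(rule_prob(0), rule_prob(2))  # 0.5 and 0.666..., despite ln(3) > ln(2)
```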
6 Conclusion and Future Work

We have defined a linear-time constructible, modular translation τ from LPMLN programs into P-log programs. The non-zero probability possible worlds of an LPMLN program M coincide with the possible worlds of τ(M) on the atoms of M. Moreover, the probability functions defined by M and τ(M) coincide on the atoms of M. The work allowed us to better understand both languages, including their treatment of potential inconsistencies, and opened a way to the development of LPMLN solvers based on P-log inference engines. We also believe that this work, together with the new complementary results from [Lee and Wang, 2016], will make it possible to use the theory developed for one language to discover properties of the other. Our plans for future work are as follows.

1. We plan to complete the current work on the development of an efficient inference engine for P-log and use the translation to turn it into a solver for LPMLN.
2. In the near future, we expect the appearance of an LPMLN solver based on the algorithm from Section 3.4 of [Lee et al., 2015]. It will be interesting to use the translation from [Lee and Wang, 2016] to turn it into a P-log solver and compare its performance with that of the one mentioned above.
3. We plan to investigate the possibility of adapting inference methods developed for MLN [Gogate and Domingos, 2012] and LPMLN to improve the efficiency of P-log solvers.

Acknowledgments

We would like to thank Joohyung Lee and Yi Wang for useful discussions on the topic.

References

[Balduccini and Gelfond, 2003] Marcello Balduccini and Michael Gelfond. Logic programs with consistency-restoring rules. In International Symposium on Logical Formalization of Commonsense Reasoning, AAAI 2003 Spring Symposium Series, volume 102. The AAAI Press, 2003.

[Baral et al., 2004] Chitta Baral, Michael Gelfond, and Nelson Rushton. Probabilistic reasoning with answer sets. In Logic Programming and Nonmonotonic Reasoning, pages 21-33. Springer, 2004.

[Baral et al., 2009] Chitta Baral, Michael Gelfond, and Nelson Rushton. Probabilistic reasoning with answer sets. Theory and Practice of Logic Programming, 9(1):57-144, 2009.

[Brewka et al., 2011] Gerhard Brewka, Thomas Eiter, and Mirosław Truszczyński. Answer set programming at a glance. Communications of the ACM, 54(12):92-103, 2011.

[De Raedt et al., 2007] Luc De Raedt, Angelika Kimmig, and Hannu Toivonen. ProbLog: A probabilistic Prolog and its application in link discovery. In IJCAI, volume 7, pages 2462-2467, 2007.

[Fierens et al., 2015] Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, and Luc De Raedt. Inference and learning in probabilistic logic programs using weighted boolean formulas. Theory and Practice of Logic Programming, 15(3):358-401, 2015.

[Gelfond and Kahl, 2014] Michael Gelfond and Yulia Kahl. Knowledge Representation, Reasoning, and the Design of Intelligent Agents: The Answer-Set Programming Approach. Cambridge University Press, 2014.

[Gelfond and Lifschitz, 1991] Michael Gelfond and Vladimir Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing, 9(3/4):365-386, 1991.

[Gelfond and Rushton, 2010] Michael Gelfond and Nelson Rushton. Causal and probabilistic reasoning in P-log. In Heuristics, Probabilities and Causality: A Tribute to Judea Pearl, pages 337-359, 2010.

[Gogate and Domingos, 2012] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. arXiv preprint arXiv:1202.3724, 2012.

[Lee and Wang, 2016] Joohyung Lee and Yi Wang.
Weighted rules under the stable model semantics. In Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR), 2016.

[Lee et al., 2015] Joohyung Lee, Yunsong Meng, and Yi Wang. Markov logic style weighted rules under the stable model semantics. In Technical Communications of the 31st International Conference on Logic Programming, 2015.

[Marek and Truszczynski, 1999] Victor W. Marek and Miroslaw Truszczynski. Stable models and an alternative logic programming paradigm. In The Logic Programming Paradigm: a 25-Year Perspective, pages 375-398. Springer Verlag, Berlin, 1999.

[Niemela, 1998] Ilkka Niemelä. Logic programs with stable model semantics as a constraint programming paradigm. In Proceedings of the Workshop on Computational Aspects of Nonmonotonic Reasoning, pages 72-79, June 1998.

[Richardson and Domingos, 2006] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107-136, 2006.

[Sato, 1995] Taisuke Sato. A statistical learning method for logic programs with distribution semantics. In Proceedings of the 12th International Conference on Logic Programming (ICLP'95), 1995.

[Sato, 2009] Taisuke Sato. Generative modeling by PRISM. In Logic Programming, pages 24-35. Springer, 2009.

[Vennekens et al., 2004] Joost Vennekens, Sofie Verbaeten, and Maurice Bruynooghe. Logic programs with annotated disjunctions. In Logic Programming, pages 431-445. Springer, 2004.

[Vennekens et al., 2009] Joost Vennekens, Marc Denecker, and Maurice Bruynooghe. CP-logic: A language of causal probabilistic events and its relation to logic programming. TPLP, 9(3):245-308, 2009.

[Zhu, 2012] Weijun Zhu. Plog: Its Algorithms and Applications. PhD thesis, Texas Tech University, 2012.