On the Tractability of SHAP Explanations

Journal of Artificial Intelligence Research 74 (2022) 851-886. Submitted 08/2021; published 06/2022.

Guy Van den Broeck (guyvdb@cs.ucla.edu), University of California, 404 Westwood Plaza, Los Angeles, CA 90095, USA
Anton Lykov (alykov@cs.washington.edu), Maximilian Schleich (schleich@cs.washington.edu), Dan Suciu (suciu@cs.washington.edu), University of Washington, 185 E Stevens Way NE, Seattle, WA 98195, USA

Abstract

Shap explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether Shap explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the Shap explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the Shap explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the Shap computation, yet our results show that the computation can be intractable for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing Shap explanations is already intractable for a very simple setting: computing Shap explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing Shap over the empirical distribution is #P-hard.

1. Introduction

Machine learning is increasingly applied in high-stakes decision making. As a consequence, there is growing demand for the ability to explain the predictions of machine learning models.
One popular explanation technique is to compute feature-attribution scores, in particular using the Shapley values from cooperative game theory (Roth, 1988) as a principled aggregation measure to determine the influence of individual features on the model's prediction. Shapley-value-based explanations have several desirable properties (Datta, Sen, and Zick, 2016), which is why they have attracted a lot of interest in academia as well as industry in recent years (see e.g., Gade et al. (2019)). Štrumbelj and Kononenko (2014) show that Shapley values can be used to explain arbitrary machine learning models. Datta, Sen, and Zick (2016) use Shapley-value-based explanations as part of a broader framework for algorithmic transparency. Lundberg and Lee (2017) use Shapley values in a framework that unifies various explanation techniques, and they coined the term Shap explanation. They show that the Shap explanation is effective in explaining predictions in the medical domain; see Lundberg et al. (2020). More recently there has been a lot of work on the tradeoffs of variants of the original Shap explanations, e.g., Sundararajan and Najmi (2020), Kumar et al. (2020), Janzing, Minorics, and Bloebaum (2020), Merrick and Taly (2020), and Aas, Jullum, and Løland (2019).

Despite all of this interest, there is considerable confusion about the tractability of computing Shap explanations. Shap explanations determine the influence of a given feature by systematically computing the expected value of the model given subsets of the features. As a consequence, the complexity of computing Shap explanations depends on the predictive model as well as on assumptions about the underlying data distribution. Lundberg et al.
(2020) describe a polynomial-time algorithm for computing the Shap explanation over decision trees, but online discussions have pointed out that this algorithm is not correct as stated. We present a concrete example of this shortcoming in Section 2.3. In contrast, for fully-factorized distributions, Bertossi et al. (2020) prove that there are models for which computing the Shap explanation is #P-hard. A contemporaneous paper by Arenas et al. (2020) shows that computing the Shap explanation for tractable logical circuits over uniform and fully-factorized binary data distributions is tractable. In general, the complexity of the Shap explanation is open.

In this paper we consider the original formulation of the Shap explanation by Lundberg and Lee (2017) and analyze its computational complexity under the following data distributions and model classes:

1. First, we consider fully-factorized distributions, which are the simplest possible data distributions. Fully-factorized distributions capture the assumption that the model's features are independent, a commonly used assumption to simplify the computation of Shap explanations; see for example Lundberg and Lee (2017). For fully-factorized distributions and any prediction model, we show that the complexity of computing the Shap explanation is the same as the complexity of computing the expected value of the model. It follows that there are classes of models for which the computation is tractable (e.g., linear regression, decision trees, tractable circuits), while for other models, including commonly used ones such as logistic regression and neural nets with sigmoid activation functions, it is #P-hard.

2. Going beyond fully-factorized distributions, we show that computing the Shap explanation becomes intractable already for the simplest probabilistic model that does not assume feature independence: naive Bayes.
As a consequence, the complexity of computing Shap explanations over such data distributions is also intractable for many classes of models, including linear and logistic regression.

3. Finally, we consider the empirical distribution, and prove that computing Shap explanations is #P-hard for this class of distributions. This result implies that the algorithm by Lundberg et al. (2020) cannot be fixed to compute the exact Shap explanations over decision trees in polynomial time.

2. Background and Problem Statement

Suppose our data is described by n indexed features X = {X_1, . . . , X_n}. Each feature variable X takes a value from a finite domain dom(X). A data instance x = (x_1, . . . , x_n) consists of a value x ∈ dom(X) for every feature X. The instance space is denoted X = dom(X_1) × · · · × dom(X_n). We are also given a learned function F : X → R that computes a prediction F(x) on each instance x. Throughout this paper we assume that the prediction F(x) can be computed in polynomial time in n. For a particular prediction F(x), the goal of local explanations is to clarify why the function F gave its prediction on instance x, usually by attributing credit to the features. We focus on local explanations that are inspired by game-theoretic Shapley values (Datta, Sen, and Zick, 2016; Lundberg and Lee, 2017). Specifically, we work with the Shap explanations as defined by Lundberg and Lee (2017).

2.1 Shap Explanations

To produce Shap explanations, one needs an additional ingredient: a probability distribution Pr(X) over the features, which we call the data distribution. We will use this distribution to reason about partial instances. Concretely, for a set of indices S ⊆ [n] = {1, . . . , n}, we let x_S denote the restriction of the complete instance x to those features X_S with indices in S. Abusing notation, we will also use x_S to denote the probabilistic event X_S = x_S.
Under this data distribution, it now becomes possible to ask for the expected value of the predictive function F. Clearly, for a complete data instance x we have that E[F | x] = F(x), as there is no uncertainty about the features. However, for a partial instance x_S, which does not assign values to the features outside of X_S, we appeal to the data distribution Pr to compute the expectation of the function F as E_Pr[F | x_S] = Σ_{x∈X} F(x) · Pr(x | x_S).

The Shap explanation framework draws from Shapley values in cooperative game theory. Given a particular instance x, it considers the features X to be players in a coalition game: the game of making a prediction for x. Shap explanations are defined in terms of a set function v_{F,x,Pr} : 2^X → R. Its purpose is to evaluate the value of each coalition of players/features X_S ⊆ X in making the prediction F(x) under data distribution Pr. Concretely, following Lundberg and Lee (2017), this value function is the conditional expectation of the function F:

    v_{F,x,Pr}(X_S) := E_Pr[F | x_S].    (1)

We will elide F, x, and Pr when they are clear from context. Our goal, however, is to assign credit to individual features. In the context of a coalition X_S, the contribution of an individual feature X ∉ X_S is given by

    c(X, X_S) := v(X_S ∪ {X}) − v(X_S),    (2)

where each term is implicitly w.r.t. the same F, x, and Pr. Finally, the Shap explanation computes a score for each feature X ∈ X averaged over all possible contexts, and thus measures the influence feature X has on the outcome. Let π be a permutation on the set of features X, i.e., π fixes a total order on all features, and let X_{π<X} denote the set of features that precede X in this order. The Shap explanation of feature X is its contribution averaged over all permutations:

    Shap_{F,x,Pr}(X) := (1/n!) · Σ_π c(X, X_{π<X}).    (3)

Equivalently, grouping permutations by the set S of features that precede X:

    Shap_{F,x,Pr}(X) = Σ_{S ⊆ X∖{X}} (|S|! · (n − |S| − 1)! / n!) · c(X, S).    (4)

Summing these scores over all features telescopes into the identity

    Σ_{X∈X} Shap_{F,x,Pr}(X) = F(x) − E_Pr[F],    (5)

which we will use repeatedly.

2.2 Problem Statement

Given a class F of functions and a class PR of data distributions, F-SHAP(F, PR) denotes the following computational problem: given a function F ∈ F, a data distribution Pr ∈ PR, an instance x, and a feature X, compute Shap_{F,x,Pr}(X). We may also fix the function F, and consider the problems F-SHAP({F}, PR) or D-SHAP({F}, PR), where the only input is the data distribution Pr ∈ PR. To establish the complexities of these problems, we use standard notions of reductions.
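As a concrete illustration of Equations (1)-(3), the following sketch (our own illustration, not from the paper; the test function and marginals are arbitrary) computes exact Shap scores over a fully-factorized binary distribution by enumerating all permutations. It is exponential in n and intended only to make the definitions precise.

```python
import itertools
import math

def shap_scores(F, p, x):
    """Exact Shap scores per Equations (1)-(3): average, over all feature
    permutations, the contribution of each feature to the value function
    v(S) = E[F | x_S] under a fully-factorized distribution with
    marginals p[i] = Pr(X_i = 1). Exponential in n; illustration only."""
    n = len(x)

    def v(S):
        # E[F | x_S]: features in S are fixed to x, the rest follow p
        total = 0.0
        for z in itertools.product([0, 1], repeat=n):
            if any(z[i] != x[i] for i in S):
                continue
            weight = 1.0
            for i in range(n):
                if i not in S:
                    weight *= p[i] if z[i] == 1 else 1 - p[i]
            total += weight * F(z)
        return total

    scores = [0.0] * n
    for perm in itertools.permutations(range(n)):
        for pos, i in enumerate(perm):
            before = set(perm[:pos])
            scores[i] += v(before | {i}) - v(before)  # c(X_i, X_before)
    return [s / math.factorial(n) for s in scores]
```

For a linear model the resulting scores equal the closed form w_i · (x_i − E[X_i]), and in general they sum to F(x) − E[F], as in Equation (5).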
A polynomial-time reduction from a problem A to a problem B, denoted A ≤_P B and also called a Cook reduction, is a polynomial-time algorithm for problem A with access to an oracle for problem B. We write A ≡_P B when both A ≤_P B and B ≤_P A. In the remainder of this paper, we study the computational complexity of these problems for natural hypothesis classes F that are popular in machine learning, and common classes of data distributions PR, including those most often used to compute Shap explanations.

Algorithm 1: Algorithm to compute the value function v, from Lundberg, Erion, and Lee (2018)

    procedure EXPVALUE(x, S, root = {left, right, feature, threshold, value})
        procedure G(n)
            if n is a leaf then
                return n.value
            else if n.feature ∈ S then
                if x[n.feature] ≤ n.threshold then return G(n.left)
                else return G(n.right)
            else
                return (G(n.left) · count(n.left) + G(n.right) · count(n.right)) / count(n)
        return G(root)

2.3 Discussion on the Tree SHAP Algorithm

Lundberg, Erion, and Lee (2018) propose Tree SHAP, a variant of Shap explanations for tree-based machine learning models such as decision trees, random forests, and gradient boosted trees. The authors claim that, when both the model and the probability distribution¹ are defined by a tree-based model, the algorithm computes the exact Shap explanations in polynomial time. However, it has been pointed out in GitHub discussions² that the Tree SHAP algorithm does not compute the Shap explanation as defined in Section 2.1. In this section, we provide a concrete example of this shortcoming. The main shortcoming of the Tree SHAP algorithm is captured by Algorithm 1. The authors claim that Algorithm 1 computes the conditional expectation E[F | x_S] for a given set of features S and tree-based model F. We first describe the algorithm and then show by example that it does not accurately compute this conditional expectation.
Algorithm 1 takes as input a feature vector x, a set of features S, and the root node of a binary tree that represents the tree-based model. Each internal node n encodes a binary decision that compares a feature against a constant threshold, and has left and right pointers to the children of n. Each leaf node stores a value, which represents the prediction of the model. We use count(n) as an auxiliary function that returns the number of data samples that fall into the subtree rooted at node n. The algorithm proceeds recursively in a top-down traversal of the tree. At an inner node, it follows the decision path for x if the split feature is in S, and takes the weighted average of both branches if the split feature is not in S. At a leaf node, it returns the value of the node. The following simple example shows that the value returned by Algorithm 1 does not represent the conditional expectation E[F | x_S].

Example 1. We consider the following dataset and decision tree model. The dataset has two binary variables X_1 and X_2, and each instance (x_1, x_2) is weighted by its occurrence count (i.e., the instance (0,0) occurs twice in the dataset). The decision tree branches on X_1 at the root and on X_2 below it, with one leaf F(x_1, x_2) for each of the four instances. We want to compute E[F(X_1, X_2) | X_2 = 0], where F(X_1, X_2) is the outcome of the decision tree.

    X_1   X_2   #
     0     0    2
     0     1    1
     1     0    1
     1     1    2

The correct value is:

    E[F(X_1, X_2) | X_2 = 0] = 2/3 · F(0, 0) + 1/3 · F(1, 0)

This is because there are three instances with X_2 = 0, and their conditional probabilities are 2/3 and 1/3.

1. Lundberg, Erion, and Lee (2018) compute the probability distribution recursively from the tree-based model, as the weighted average of the frequencies for the output variable returned by the left and right subtrees. The weight is given by the number of data samples that fall into the respective subtree.
2. See, for instance, https://github.com/christophM/interpretable-ml-book/issues/142 (accessed on August 20, 2021).
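Example 1 can also be traced in code. The sketch below (our own encoding of the dataset and tree; leaf predictions are kept symbolic as coefficient dictionaries over the values F(·,·)) evaluates both the true conditional expectation over the empirical distribution and the value produced by the G recursion of Algorithm 1 for S = {X_2}.

```python
# Dataset from Example 1: (x1, x2) -> occurrence count
data = {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 2}

def leaf(x1, x2):
    # symbolic prediction: coefficient 1.0 on the leaf value F(x1, x2)
    return {(x1, x2): 1.0}

def combine(d1, w1, d2, w2):
    # weighted sum of two symbolic predictions
    out = {}
    for d, w in ((d1, w1), (d2, w2)):
        for k, v in d.items():
            out[k] = out.get(k, 0.0) + w * v
    return out

def count(pred):
    return sum(c for (a, b), c in data.items() if pred(a, b))

# True conditional expectation E[F | X2 = 0] over the empirical distribution
n_cond = count(lambda a, b: b == 0)                    # 3 samples with X2 = 0
true_val = combine(leaf(0, 0), data[(0, 0)] / n_cond,
                   leaf(1, 0), data[(1, 0)] / n_cond)  # weights 2/3 and 1/3

# Algorithm 1's G recursion with S = {X2} and x2 = 0: the root splits on X1,
# which is not in S, so both branches are averaged by sample counts; the X2
# nodes split on a feature in S, so the x2 = 0 branch is followed.
n_left = count(lambda a, b: a == 0)                    # 3 samples with X1 = 0
n_right = count(lambda a, b: a == 1)                   # 3 samples with X1 = 1
alg_val = combine(leaf(0, 0), n_left / (n_left + n_right),
                  leaf(1, 0), n_right / (n_left + n_right))
```

Here true_val carries coefficients 2/3 and 1/3 on F(0,0) and F(1,0), whereas alg_val carries 1/2 and 1/2: precisely the discrepancy discussed next.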
Algorithm 1, however, returns:

    G(root) = 1/2 · F(0, 0) + 1/2 · F(1, 0),

and thus does not compute E[F(X_1, X_2) | X_2 = 0]. The algorithm fails to accurately compute the conditional expectation E[F | x_S] because it does not normalize the expectation by the probability of the condition. Without this normalization, Lundberg, Erion, and Lee (2018) are able to compute all expectations required by Shap in one pass over the tree-based model. When we consider the normalization, however, the expectations depend on the values of x, and it is not obvious whether Shap can still be computed efficiently. We address this question in the next sections.

3. Shap over Fully-Factorized Distributions

We start our study of the complexity of Shap by considering the simplest probability distribution: a fully-factorized distribution, where all features are independent. There are both practical and computational reasons why it makes sense to assume a fully-factorized data distribution when computing Shap explanations. First, functions F are often the product of a supervised learning algorithm that does not have access to a generative model of the data: it is purely discriminative. Hence, it is convenient to make the practical assumption that the data distribution is fully factorized, and therefore easy to estimate. Second, fully-factorized distributions are highly tractable; for example, they make it easy to compute expectations of linear regression functions (Khosravi et al., 2019b) and other hard inference tasks (Vergari et al., 2020). Lundberg and Lee (2017) indeed observe that computing the Shap explanation on an arbitrary data distribution is challenging and consider using fully-factorized distributions (Sec. 4, Eq. 11). Other prior work on computing explanations also uses fully-factorized feature distributions, e.g., Datta, Sen, and Zick (2016); Štrumbelj and Kononenko (2014).
As we will show, the Shap explanation can be computed efficiently for several popular classifiers when the distribution is fully factorized. Yet such simple data distributions are no guarantee of tractability: computing Shap scores is intractable for some other common classifiers.

3.1 Equivalence to Computing Expectations

Before studying various function classes, we prove a key result that connects the complexity of Shap explanations to the complexity of computing expectations. Let IND_n be the class of fully-factorized probability distributions over n discrete and independent random variables X_1, . . . , X_n. That is, for every instance (x_1, . . . , x_n) ∈ X, we have that Pr(X_1 = x_1, . . . , X_n = x_n) = Π_i Pr(X_i = x_i). Let IND := ∪_{n≥0} IND_n. We show that for every function class F, the complexity of F-SHAP(F, IND) is the same as that of the fully-factorized expectation problem.

Definition 3 (Fully-Factorized Expectation Problem). Let F be a class of real-valued functions with discrete inputs. The fully-factorized expectation problem for F, denoted E(F), is the following: given a function F ∈ F and a probability distribution Pr ∈ IND, compute E_Pr(F).

We know from Equation (5) that E[F] = F(x) − Σ_{i=1,n} Shap_F(X_i), and thus for any function F over n features, E({F}) ≤_P F-SHAP({F}, IND_n). In this section we prove that the converse holds too:

Theorem 2. For any function F : X → R, we have that F-SHAP({F}, IND_n) ≡_P E({F}).

In other words, for any function F, the complexity of computing the Shap scores is the same as the complexity of computing the expected value E[F] under a fully-factorized data distribution. One direction of the proof is immediate: E({F}) ≤_P F-SHAP({F}, IND_n) because, if we are given an oracle to compute Shap_F(X_i) for every feature X_i, then we can obtain E[F] from Equation (5) (recall that we assumed that F(x) is computable in polynomial time).
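This easy direction can be checked numerically on a linear model, for which the Shap scores under a fully-factorized distribution have the known closed form Shap(X_i) = w_i · (x_i − E[X_i]) (Štrumbelj and Kononenko, 2014; see also Section 3.3). The weights, marginals, and instance below are arbitrary; this is a sketch, not part of the proof.

```python
import itertools
import math

w = [2.0, -1.0, 0.5]   # linear weights (arbitrary illustration)
p = [0.5, 0.25, 0.8]   # Pr(X_i = 1), fully factorized
x = (1, 0, 1)          # instance being explained

def F(xs):
    return sum(wi * xi for wi, xi in zip(w, xs))

# Closed-form Shap scores for a linear model over independent features
shap = [wi * (xi - pi) for wi, xi, pi in zip(w, x, p)]

# Equation (5): E[F] = F(x) - sum_i Shap(X_i), i.e. a Shap oracle yields E[F]
E_from_shap = F(x) - sum(shap)

# Brute-force expectation, for comparison
E_direct = sum(F(xs) * math.prod(pi if xi else 1 - pi for pi, xi in zip(p, xs))
               for xs in itertools.product([0, 1], repeat=3))
```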
The hard part of the proof is the opposite direction: we will show in Sec. 3.2 how to compute Shap_F(X_i) given an oracle for computing E[F]. Theorem 2 immediately extends to classes of functions F, and to any number of variables, and therefore implies that F-SHAP(F, IND) ≡_P E(F). Sections 3.3 and 3.4 discuss the consequences of this result, by delineating which function classes support tractable Shap explanations and which do not. The next section is devoted to proving our main technical result.

3.2 Proof of Theorem 2

We start with the special case when all features X are binary: dom(X) = {0, 1}. We denote by INDB_n the class of fully-factorized distributions over binary domains.

Theorem 3. For any function F : {0, 1}^n → R, we have that F-SHAP({F}, INDB_n) ≡_P E({F}).

Proof. We prove only F-SHAP({F}, INDB_n) ≤_P E({F}); the opposite direction follows immediately from Equation (5). We will assume w.l.o.g. that F has n + 1 binary features X = {X_0, X_1, . . . , X_n}, and show how to compute Shap_F(X_0) using repeated calls to an oracle for computing E[F], i.e., the expectation of the same function F, but over fully-factorized distributions with different probabilities. The probability distribution Pr is given to us by n + 1 rational numbers, p_i := Pr(X_i = 1) for i = 0, . . . , n; obviously, Pr(X_i = 0) = 1 − p_i. We assume w.l.o.g. (by renaming feature values) that the instance whose outcome we want to explain is e = (1, . . . , 1). Recall that for any set S ⊆ [n] we write e_S for the event ⋀_{i∈S}(X_i = 1). Then we have that

    Shap(X_0) = Σ_{k=0,n} (k! · (n − k)! / (n + 1)!) · D_k,  where    (6)

    D_k = Σ_{S⊆[n]:|S|=k} (E[F | e_{S∪{0}}] − E[F | e_S]).

This follows from Equation (3) by the following argument. With n + 1 features, the quantity n! in Equation (3) becomes (n + 1)!. If |S| = k, then there are exactly k!(n − k)! permutations π that place all elements of S before X_0, and all elements not in S after X_0. Finally, the expression E[F | e_{S∪{0}}] − E[F | e_S] is precisely the contribution c(X_0, X_S) of the feature X_0 in the context S, as defined by Equation (2).
Let F_0 := F[X_0 := 0] and F_1 := F[X_0 := 1] (both are functions of the n binary features X_1, . . . , X_n). Since conditioning on e_{S∪{0}} fixes X_0 = 1, while conditioning on e_S leaves X_0 random:

    E[F | e_{S∪{0}}] = E[F_1 | e_S]
    E[F | e_S] = (1 − p_0) · E[F_0 | e_S] + p_0 · E[F_1 | e_S]

and therefore D_k is given by:

    D_k = (1 − p_0) · Σ_{S⊆[n]:|S|=k} (E[F_1 | e_S] − E[F_0 | e_S])

Recall that we defined v_{G,e,Pr}(X_S) = E[G | e_S] in Equation (1). Abusing notation, we write v_{G,k} for the sum of these quantities over all sets S of cardinality k:

    v_{G,k} := Σ_{S⊆[n]:|S|=k} E[G | e_S].    (7)

In summary so far, we have expressed Shap(X_0) in Equation (6), where D_k is given by:

    D_k = (1 − p_0) · (v_{F_1,k} − v_{F_0,k})

To compute Shap(X_0), it therefore suffices to compute v_{G,k} for k = 0, . . . , n, where G is one of F_0 or F_1. We will prove the following claim.

Claim 1. Let G be a function over n binary variables. Then the n + 1 quantities v_{G,0}, . . . , v_{G,n} can be computed in polynomial time, using n + 1 calls to an oracle for E({G}).

Note that an oracle for E({F}) is also an oracle for both E({F_0}) and E({F_1}), by simply setting Pr(X_0 = 1) = 0 or Pr(X_0 = 1) = 1, respectively. Therefore, Claim 1 proves Theorem 3: we apply it once to F_0 and once to F_1 in order to derive all the quantities v_{F_0,k} and v_{F_1,k}, thereby computing each D_k, and finally compute Shap_F(X_0) using Equation (6).

It remains to prove Claim 1. Fix a function G over n binary variables and let v_k = v_{G,k}. Let p_j = Pr(X_j = 1), for j = 1, . . . , n, define the distribution over which we need to compute v_0, . . . , v_n. We will prove the following additional claim.

Claim 2. Given any real number z > 0, consider the distribution Pr_z(X_j = 1) = p'_j := (p_j + z)/(1 + z), for j = 1, . . . , n, and let E_z[G] denote E[G] under distribution Pr_z. Then:

    Σ_{k=0,n} z^k · v_k = (1 + z)^n · E_z[G].    (8)

Assuming Claim 2 holds, we prove Claim 1. The LHS of Equation (8) is a polynomial in z with coefficients v_k, while the RHS gives us an oracle for computing the value of this polynomial for any input z > 0.
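This interpolation argument can be checked end to end: evaluate the right-hand side of Equation (8) at n + 1 distinct values of z and solve the resulting (Vandermonde) linear system for v_0, . . . , v_n. In the sketch below (our own illustration), a brute-force expectation stands in for the oracle, G is an arbitrary test function, and exact rational arithmetic avoids round-off.

```python
import itertools
from fractions import Fraction

def prod(factors):
    out = Fraction(1)
    for f in factors:
        out *= f
    return out

n = 3
p = [Fraction(1, 2), Fraction(1, 4), Fraction(2, 3)]  # Pr(X_i = 1)

def G(x):  # arbitrary test function over {0,1}^3
    return 1 + 2 * x[0] + 3 * x[0] * x[2] - x[1]

def expectation(probs):  # E[G] under the fully-factorized distribution probs
    return sum(G(x) * prod(probs[i] if x[i] else 1 - probs[i] for i in range(n))
               for x in itertools.product([0, 1], repeat=n))

def cond_expectation(S):  # E[G | e_S], conditioning on X_i = 1 for i in S
    return sum(G(x) * prod(p[i] if x[i] else 1 - p[i]
                           for i in range(n) if i not in S)
               for x in itertools.product([0, 1], repeat=n)
               if all(x[i] == 1 for i in S))

# Direct computation of v_k = sum over |S| = k of E[G | e_S]
v_direct = [sum(cond_expectation(set(S))
                for S in itertools.combinations(range(n), k))
            for k in range(n + 1)]

# Oracle evaluations: sum_k z^k v_k = (1+z)^n E_z[G] at n+1 distinct z
zs = [Fraction(z) for z in range(1, n + 2)]
rhs = [(1 + z) ** n * expectation([(pi + z) / (1 + z) for pi in p]) for z in zs]

# Solve the Vandermonde system by Gauss-Jordan elimination
A = [[z ** k for k in range(n + 1)] + [r] for z, r in zip(zs, rhs)]
for col in range(n + 1):
    piv = next(r for r in range(col, n + 1) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    A[col] = [a / A[col][col] for a in A[col]]
    for r in range(n + 1):
        if r != col and A[r][col] != 0:
            A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
v_recovered = [A[k][n + 1] for k in range(n + 1)]
```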
Using this oracle, we need to compute the coefficients v_k of the polynomial. This can be done in many ways; we briefly describe one possibility. Choose any n + 1 distinct values for z, use the oracle to compute the quantities E_{z_0}[G], . . . , E_{z_n}[G], and form a system of n + 1 linear equations (8) with unknowns v_0, . . . , v_n. Its matrix is a non-singular Vandermonde matrix, hence the system has a unique solution, which can be computed in polynomial time.

It remains to prove Claim 2. Because of independence, the probability of an instance x ∈ {0, 1}^n is Pr(x) = Π_{i:x_i=1} p_i · Π_{i:x_i=0} (1 − p_i), where x_i looks up the value of feature X_i in instance x. Similarly, Pr_z(x) = Π_{i:x_i=1} p'_i · Π_{i:x_i=0} (1 − p'_i). Using direct calculations (and the identity 1 − p'_i = (1 − p_i)/(1 + z)) we derive:

    Π_{i:x_i=1} (p_i + z) · Π_{i:x_i=0} (1 − p_i) = (1 + z)^n · Pr_z(x)    (9)

Separately, we also derive the following identity, using the fact that Pr(e_S) = Π_{i∈S} p_i by independence:

    E[G | e_S] = (1 / Π_{i∈S} p_i) · Σ_{x : x_S = e_S} G(x) · Pr(x)    (10)

We are now in a position to prove Claim 2:

    Σ_{k=0,n} z^k · v_k = Σ_{k=0,n} z^k · Σ_{S⊆[n]:|S|=k} E[G | e_S]
                        = Σ_{S⊆[n]} z^{|S|} · E[G | e_S]
                        = Σ_{S⊆[n]} (z^{|S|} / Π_{i∈S} p_i) · Σ_{x : x_S = e_S} G(x) · Pr(x)

The last line follows from Equation (10). Next, we exchange the summations over S and x, after which we apply the identity Σ_{S⊆A} Π_{i∈S} u_i = Π_{i∈A} (1 + u_i) with u_i = z/p_i:

    (continuing) = Σ_{x∈{0,1}^n} G(x) · Pr(x) · Σ_{S⊆{i : x_i=1}} z^{|S|} / Π_{i∈S} p_i
                 = Σ_{x∈{0,1}^n} G(x) · Pr(x) · Π_{i:x_i=1} (1 + z/p_i)
                 = Σ_{x∈{0,1}^n} G(x) · Π_{i:x_i=1} (p_i + z) · Π_{i:x_i=0} (1 − p_i)
                 = (1 + z)^n · Σ_{x∈{0,1}^n} G(x) · Pr_z(x) = (1 + z)^n · E_z[G]

The final line uses Equation (9). This completes the proof of Claim 2, as well as of Theorem 3.

Next, we generalize this result from binary features to arbitrary discrete features. Fix a function with n inputs, F : X → R, where X := Π_i dom(X_i) and each domain is an arbitrary finite set, dom(X_i) = {1, 2, . . . , m_i}; we assume w.l.o.g. that m_i > 1. A fully-factorized probability space Pr ∈ IND_n is defined by numbers p_ij ∈ [0, 1], for i = 1, . . . , n and j = 1, . . . , m_i, such that, for all i, Σ_j p_ij = 1. Given F and Pr over the domain Π_i dom(X_i), we define their projections F_π, Pr_π over the binary domain {0, 1}^n as follows.
For any instance x ∈ {0, 1}^n, let T(x) denote the event asserting that X_j = 1 iff x_j = 1. Formally,

    T(x) := ⋀_{j:x_j=1} (X_j = 1) ∧ ⋀_{j:x_j=0} (X_j ≠ 1).    (11)

Then the projections are defined as follows, for x ∈ {0, 1}^n:

    Pr_π(x) := Pr(T(x))        F_π(x) := E[F | T(x)]    (12)

Notice that F_π depends both on F and on the probability distribution Pr. Intuitively, the projections only distinguish between X_j = 1 and X_j ≠ 1; for example:

    F_π(1, 0, 0) = E[F | X_1 = 1, X_2 ≠ 1, X_3 ≠ 1]
    Pr_π(1, 0, 0) = Pr(X_1 = 1, X_2 ≠ 1, X_3 ≠ 1)

Proposition 4. Let F : X → R be a function with n input features, and Pr ∈ IND_n a fully-factorized distribution over X. Then the following hold: (1) For any feature X_j, Shap_{F,Pr}(X_j) = Shap_{F_π,Pr_π}(X_j); in particular, E({F}) ≤_P E({F_π}). And (2) E({F_π}) ≤_P E({F}).

Item (1) states that the Shap score of F computed over the probability space Pr is the same as that of its projection F_π (which depends on Pr) over the projected probability space Pr_π. Item (2) says that, for any probability space over {0, 1}^n (not necessarily Pr_π), we can compute E[F_π] in polynomial time given access to an oracle for computing E[F]. Before we prove the proposition, we show how to use it to complete the proof of Theorem 2, by showing that F-SHAP({F}, IND_n) ≤_P E({F}). We are given a function F over domain X, and have access to an oracle for computing E[F] over any fully-factorized probability distribution on X. Given a probability space Pr ∈ IND_n, our goal is to compute Shap_{F,Pr}(X_j). By item (1) of Proposition 4 it suffices to compute Shap_{F_π,Pr_π}(X_j). By Theorem 3, we can compute the latter given access to an oracle for computing E[F_π] over an arbitrary fully-factorized probability distribution on {0, 1}^n. Finally, by item (2) of the proposition, we can compute E[F_π] given the oracle for computing E[F].

In the rest of the section we prove Proposition 4.

Proof. We start with item (1). Recall that dom(X_i) = {1, 2, 3, . . . , m_i}. We denote by p_i1, p_i2, . . . , p_im_i the probabilities of these values; thus Σ_{j=1,m_i} p_ij = 1.
By definition, the projected distribution is: Pr_π(X_i = 1) := p_i1, and Pr_π(X_i = 0) = 1 − p_i1. We denote by E_π the corresponding expectation. Our goal is to prove Shap_{F,Pr}(X_j) = Shap_{F_π,Pr_π}(X_j).

Let e_S again denote the event ⋀_{i∈S}(X_i = 1). Note that, by construction, for any set S, Pr(e_S) = Pr_π(e_S). Given the definition of T(x) in Equation (11), for any instance x ∈ {0, 1}^n:

    Pr(T(x)) = Π_{i:x_i=1} p_i1 · Π_{i:x_i=0} (p_i2 + p_i3 + · · · + p_im_i) = Π_{i:x_i=1} p_i1 · Π_{i:x_i=0} (1 − p_i1) = Pr_π(x).

There are 2^n disjoint events T(x), and they partition the space X. Therefore, for every set S:

    E[F · 1_{e_S}] = Σ_{x : ∀i∈S, x_i=1} E[F | T(x)] · Pr(T(x)) = Σ_{x : ∀i∈S, x_i=1} F_π(x) · Pr_π(x) = E_π[F_π · 1_{e_S}]

This implies that E[F | e_S] = E_π[F_π | e_S] for every set S, and Shap_{F,Pr}(X_j) = Shap_{F_π,Pr_π}(X_j) for all j then follows from Equation (6).

We now prove item (2). As before, we are given F and Pr over the domain X, which in turn define Pr_π and F_π via Equation (12). In addition, we are given an oracle for computing E'[F] over an arbitrary distribution Pr'. We need to compute E[F_π] over some arbitrary distribution on {0, 1}^n, which we denote Pr'_π (not to be confused with Pr_π). To compute E[F_π], we will construct a distribution Pr' over X, use the oracle to compute the expectation E'[F] over Pr', and then show how to use this answer to compute E[F_π]. To define Pr', we need some notation. Recall that p_ij denotes the probability Pr(X_i = j). Denote q_i := Pr'_π(X_i = 1), thus Pr'_π(X_i = 0) = 1 − q_i, and further denote:

    w_i := (1 − q_i)/q_i        Z := Π_{i=1,n} q_i        W := Π_{i=1,n} (1 + Σ_{j=2,m_i} (p_ij · w_i)/(1 − p_i1)) = Π_{i=1,n} (1 + w_i)

With this notation, we define Pr'(X_i = j) := p'_ij, where:

    p'_i1 := 1/(1 + w_i)        p'_ij := (p_ij · w_i) / ((1 + w_i)(1 − p_i1))    for i = 1, . . . , n and j = 2, . . . , m_i

One can check that the numbers p'_ij indeed define a probability space on X; in other words, p'_ij ∈ [0, 1] and, for all i = 1, . . . , n: Σ_{j=1,m_i} p'_ij = 1. We denote by Pr' the probability space that they define, and by E'[F] the expectation of F in this space. We claim:

Claim 3.
E[F_π] = Z · W · E'[F].

The claim immediately proves item (2) of Proposition 4: to compute E[F_π] over the probability distribution Pr'_π, we construct the distribution Pr', invoke the oracle to compute E'[F], then multiply by the quantities Z and W, both of which are computable in polynomial time.

It remains to prove the claim. We start with some observations and notation. We denote an outcome in X = Π_{i=1,n} dom(X_i) by τ, and view it either as a vector (τ_1, τ_2, . . .) or as a function with domain {1, 2, . . . , n}, where τ(i) ∈ dom(X_i). Then the expectation of a function G : X → R can be written as E[G] = Σ_{τ∈X} G(τ) · Pr(τ) = Σ_{τ∈X} G(τ) · Π_i p_iτ_i. Furthermore, τ^{−1}(1) denotes the set {i | τ(i) = 1}. With this notation, we compute the projection F_π, defined in Equation (12), explicitly in terms of the probabilities p_ij. For x ∈ {0, 1}^n:

    F_π(x) = E[F | T(x)] = E[F · 1_{T(x)}] / Pr(T(x))
           = Σ_{τ∈X : x^{−1}(1)=τ^{−1}(1)} F(τ) · Π_i p_iτ_i / (Π_{i:x_i=1} p_i1 · Π_{i:x_i=0} (1 − p_i1))
           = Σ_{τ∈X : x^{−1}(1)=τ^{−1}(1)} F(τ) · Π_{i:τ_i≠1} p_iτ_i / (1 − p_i1).

We now prove the claim by applying the definition of E[F_π] directly:

    E[F_π] = Σ_{θ∈{0,1}^n} F_π(θ) · Π_{i:θ_i=1} q_i · Π_{i:θ_i=0} (1 − q_i)
           = Z · Σ_{θ∈{0,1}^n} F_π(θ) · Π_{i:θ_i=0} w_i
           = Z · Σ_{τ∈X} F(τ) · Π_{i:τ_i≠1} (p_iτ_i · w_i) / (1 − p_i1)
           = Z · W · Σ_{τ∈X} F(τ) · Π_i p'_iτ_i = Z · W · E'[F]

We explain the transition from line 2 to line 3. In the summation in line 2, the assignment θ ∈ {0, 1}^n is used in two places: to restrict the range of τ ∈ X in the sum defining F_π(θ), and in the condition θ_i = 0. The condition θ^{−1}(1) = τ^{−1}(1) says that θ_i = 1 iff τ_i = 1, and therefore θ_i = 0 iff τ_i ≠ 1. Thus, we can replace the condition θ_i = 0 with τ_i ≠ 1. Furthermore, the assignment τ ∈ X uniquely defines θ, hence θ can be dropped from the summation. In line 4 we simply used the definition of p'_ij introduced earlier. This completes the proof of the claim, and of Proposition 4.
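The construction can be sanity-checked on a small categorical example. The sketch below (our own illustration; the domains, probabilities, function F, and binary marginals q_i are all arbitrary choices) builds F_π and the reweighted distribution Pr' explicitly and verifies Claim 3 with exact arithmetic.

```python
import itertools
from fractions import Fraction

def prod(factors):
    out = Fraction(1)
    for f in factors:
        out *= f
    return out

doms = [3, 2, 3]   # |dom(X_i)|, values encoded as 1..m_i
p = [[Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)],
     [Fraction(1, 3), Fraction(2, 3)],
     [Fraction(1, 6), Fraction(1, 2), Fraction(1, 3)]]  # p[i][j-1] = Pr(X_i = j)

def F(tau):  # arbitrary function on the categorical domain
    return tau[0] + 10 * tau[1] * tau[2]

space = list(itertools.product(*[range(1, m + 1) for m in doms]))
Pr = {tau: prod(p[i][tau[i] - 1] for i in range(3)) for tau in space}

def F_pi(x):  # F_pi(x) = E[F | T(x)], Equation (12)
    match = [tau for tau in space
             if all((tau[i] == 1) == (x[i] == 1) for i in range(3))]
    return sum(F(t) * Pr[t] for t in match) / sum(Pr[t] for t in match)

# An arbitrary binary distribution Pr'_pi with marginals q_i
q = [Fraction(2, 5), Fraction(1, 2), Fraction(3, 7)]
E_F_pi = sum(F_pi(x) * prod(q[i] if x[i] else 1 - q[i] for i in range(3))
             for x in itertools.product([0, 1], repeat=3))

# The reweighted categorical distribution Pr' from the proof of Claim 3
w = [(1 - qi) / qi for qi in q]
Z = prod(q)
W = prod(1 + wi for wi in w)
p_new = [[Fraction(1) / (1 + w[i])] +
         [p[i][j] * w[i] / ((1 + w[i]) * (1 - p[i][0])) for j in range(1, doms[i])]
         for i in range(3)]
E_prime = sum(F(tau) * prod(p_new[i][tau[i] - 1] for i in range(3))
              for tau in space)
```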
3.3 Tractable Function Classes

Given the polynomial-time equivalence between computing Shap explanations and computing expectations under fully-factorized distributions, a natural next question is: which real-world hypothesis classes in machine learning support efficient computation of Shap scores?

Corollary 5. For the following function classes F, computing Shap scores, i.e., F-SHAP(F, IND), is in polynomial time in the size of the representations of the function F ∈ F and the fully-factorized distribution Pr ∈ IND.

1. Linear regression models
2. Decision and regression trees
3. Random forests or additive tree ensembles
4. Factorization machines, regression circuits
5. Boolean functions in d-DNNF, binary decision diagrams
6. Bounded-treewidth Boolean functions in CNF

These are all consequences of Theorem 2, and of the fact that computing fully-factorized expectations E(F) for these function classes F is in polynomial time. Concretely, we have the following observations about fully-factorized expectations:

1. Expectations of linear regression functions are efficiently computed by mean imputation (Khosravi et al., 2019b). The tractability of Shap on linear regression models is well known. In fact, Štrumbelj and Kononenko (2014) provide a closed-form formula for this case.

2. Paths from root to leaf in a decision or regression tree are mutually exclusive. The tree's expected value is therefore the sum of the expected values of its paths, which are tractable to compute within IND; see Khosravi et al. (2020).

3. Additive mixtures of trees, as obtained through bagging or boosting, are tractable by the linearity of expectation.

4. Factorization machines extend linear regression models with feature interaction terms and factorize the parameters of the higher-order terms (Rendle, 2010). Their expectations remain easy to compute. Regression circuits are a graph-based generalization of linear regression. Khosravi et al.
(2019a) provide an algorithm to efficiently take their expectation w.r.t. a probabilistic circuit distribution, which is trivial to construct in the fully-factorized case.

The remaining tractable cases are Boolean functions. Computing fully-factorized expectations of Boolean functions is widely known as the weighted model counting (WMC) task (Sang, Beame, and Kautz, 2005; Chavira and Darwiche, 2008). WMC has been extensively studied in both the theory and the AI communities, and the precise complexity of E(F) is known for many families of Boolean functions F. These results immediately carry over to the F-SHAP(F, IND) problem through Theorem 2:

5. Expectations can be computed in time linear in the size of various circuit representations, called d-DNNF, which include binary decision diagrams (OBDD, FBDD) and SDDs (Bryant, 1986; Darwiche and Marquis, 2002).³

6. Bounded-treewidth CNFs are efficiently compiled into OBDD circuits (Ferrara, Pan, and Vardi, 2005), and thus enjoy tractable expectations.

To conclude this section, the reader may wonder about the algorithmic complexity of solving F-SHAP(F, IND) with an oracle for E(F) under the reduction in Section 3.2. Briefly, we require a linear number of calls to the oracle, as well as time in O(n³) for solving a system of linear equations. Hence, for those classes, such as d-DNNF circuits, where expectations are linear in the size of the (circuit) representation of F, computing F-SHAP(F, IND) is also linear in the representation size and polynomial in n.

3. In contemporaneous work, Arenas et al. (2020) also show that the Shap explanation is tractable for d-DNNFs, but for the more restricted class of uniform data distributions.

3.4 Intractable Function Classes

The polynomial-time equivalence of Theorem 2 also implies that computing Shap scores must be intractable whenever computing fully-factorized expectations is intractable.
This section reviews some of those function classes F, including some for which the computational hardness of E(F) is well known. We begin, however, with a more surprising result. Logistic regression is one of the simplest and most widely used machine learning models, yet it is conspicuously absent from Corollary 5. We prove that computing the expectation of a logistic regression model is #P-hard, even under a uniform data distribution, which is of independent interest.

A logistic regression model is a parameterized function $F(x) \stackrel{\text{def}}{=} \sigma(w \cdot x)$, where $w = (w_0, w_1, \ldots, w_n)$ is a vector of weights, $\sigma(z) = 1/(1 + e^{-z})$ is the logistic function, $x \stackrel{\text{def}}{=} (1, x_1, x_2, \ldots, x_n)$, and $w \cdot x \stackrel{\text{def}}{=} \sum_{i=0}^{n} w_i x_i$ is the dot product. Note that we define the logistic regression function to output probabilities, not data labels. Let LOGIT$_n$ denote the class of logistic regression functions with n variables, and LOGIT $= \bigcup_n$ LOGIT$_n$. We prove the following:

Theorem 6. Computing the expectation of a logistic regression model w.r.t. a uniform data distribution is #P-hard.

Proof. The proof is by reduction from counting solutions to the number partitioning problem. The number partitioning problem, NUMPAR, is the following: given n natural numbers $k_1, \ldots, k_n$, decide whether there exists a subset $S \subseteq [n]$ that partitions the numbers into two sets with equal sums: $\sum_{i \in S} k_i = \sum_{i \notin S} k_i$. NUMPAR is known to be NP-complete. The corresponding counting problem, in notation #NUMPAR, asks for the number of sets S such that $\sum_{i \in S} k_i = \sum_{i \notin S} k_i$, and is #P-hard. We show that we can solve the #NUMPAR problem using an oracle for $E_U[F]$, where F is a logistic regression function and U is the uniform probability distribution. This implies that computing the expectation of a logistic regression function is #P-hard. Fix an instance of NUMPAR, $k_1, \ldots, k_n$, and assume w.l.o.g. that the sum of the numbers $k_i$ is even, $\sum_i k_i = 2c$ for some natural number c.
Let
$$P \stackrel{\text{def}}{=} \{S \mid S \subseteq [n], \textstyle\sum_{i \in S} k_i = c\} \quad (13)$$
For each set $S \subseteq [n]$, denote by $\bar{S}$ its complement. Obviously, $S \in P$ iff $\bar{S} \in P$, therefore $|P|$ is an even number. We next describe an algorithm that computes $|P|$ using an oracle for $E_U[F]$, where F is a logistic regression function and U is the uniform probability distribution. Let m be a natural number large enough, to be chosen later, and define the following weights:
$$w_0 \stackrel{\text{def}}{=} -\frac{m}{2} - mc \qquad w_i \stackrel{\text{def}}{=} m k_i \quad i = 1, \ldots, n$$
Let $w = (w_1, \ldots, w_n)$; then $F(x_1, \ldots, x_n) \stackrel{\text{def}}{=} \sigma(w_0 + w \cdot x)$ is the logistic regression function defined by the weights $w_0, \ldots, w_n$.

Claim 4. Let $\varepsilon \stackrel{\text{def}}{=} 1/2^{n+3}$. If m satisfies both $2\sigma(-m/2) \le \varepsilon$ and $1 - \sigma(m/2) \le \varepsilon$, then:
$$|P| = \left\lceil 2^n - \frac{2^{n+1} E[F]}{1 - \varepsilon} \right\rceil$$
The claim immediately proves the theorem: in order to solve the #NUMPAR problem, compute E[F] and then use the formula above to derive $|P|$. To prove the claim, for each set $S \subseteq [n]$ denote by:
$$\mathrm{weight}(S) \stackrel{\text{def}}{=} -\frac{m}{2} - mc + m \sum_{i \in S} k_i$$
Let U denote the uniform probability distribution over the domain $\{0,1\}^n$. Then,
$$E[F] = \frac{1}{2^n} \sum_{x} \sigma(w_0 + w \cdot x) = \frac{1}{2^n} \sum_{S \subseteq [n]} \sigma(\mathrm{weight}(S)) = \frac{1}{2^{n+1}} \sum_{S \subseteq [n]} \left(\sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S}))\right)$$
If S is a solution to the number partitioning problem ($S \in P$), then:
$$\sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) = 2\sigma(-m/2)$$
Otherwise, one of $\mathrm{weight}(S)$, $\mathrm{weight}(\bar{S})$ is $\ge m/2$ and the other is $\le -3m/2$, and therefore:
$$\sigma(m/2) \le \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \le 1 + \sigma(-3m/2)$$
Since $\varepsilon = 1/2^{n+3}$, and m satisfies both $2\sigma(-m/2) \le \varepsilon$ and $1 - \sigma(m/2) \le \varepsilon$, we have:
$$S \in P: \quad 0 \le \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \le \varepsilon$$
$$S \notin P: \quad 1 - \varepsilon \le \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \le 1 + \varepsilon$$
This implies:
$$(2^n - |P|)(1 - \varepsilon) \le 2^{n+1} E[F] \le |P|\,\varepsilon + (2^n - |P|)(1 + \varepsilon)$$
$$2^n - \frac{2^{n+1} E[F]}{1 - \varepsilon} \le |P| \le 2^n(1 + \varepsilon) - 2^{n+1} E[F]$$
Thus, we have a lower and an upper bound for $|P|$. Since $E[F] \le 1$, the difference between the two bounds is $< 1$ and there exists at most one integer number between them, hence $|P|$ is equal to the ceiling of the lower bound (and also to the floor of the upper bound), proving the claim.
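The reduction can be checked numerically on a small instance: with m large enough, a single call to E[F] recovers $|P|$ (simple rounding suffices at this precision). A brute-force sketch (the instance ks and the choice m = 64 are illustrative values of ours):

```python
import itertools, math

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

ks = [1, 2, 3, 2]                  # toy NUMPAR instance, sum = 2c = 8
n, c = len(ks), sum(ks) // 2
m = 64                             # large enough that 2*sigma(-m/2) <= 1/2**(n+3)

# |P| counted directly, by enumerating all subsets
P = sum(1 for S in itertools.product([0, 1], repeat=n)
        if sum(k for k, s in zip(ks, S) if s) == c)

# |P| recovered from a single expectation of the logistic regression model
w0 = -m / 2 - m * c
EF = sum(sigma(w0 + m * sum(k for k, s in zip(ks, S) if s))
         for S in itertools.product([0, 1], repeat=n)) / 2 ** n
print(P, round(2 ** n - 2 ** (n + 1) * EF))   # both equal 2
```

For this instance the two solutions are $\{k_1, k_3\}$ and its complement $\{k_2, k_4\}$, so $|P| = 2$, as the oracle-based formula confirms.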
Because the uniform distribution is contained in IND, and following Theorem 2, we immediately obtain:

Corollary 7. The computational problems E(LOGIT) and F-SHAP(LOGIT, IND) are both #P-hard.

We are now ready to list general function classes for which computing the Shap explanation is #P-hard.

Corollary 8. For the following function classes F, computing Shap scores F-SHAP(F, IND) is #P-hard in the size of the representations of the function $F \in \mathcal{F}$ and fully-factorized distribution $\Pr \in \mathit{IND}$.
1. Logistic regression models (Corollary 7)
2. Neural networks with sigmoid activation functions
3. Naive Bayes classifiers, logistic circuits
4. Boolean functions in CNF or DNF

Our intractability results stem from these observations:
2. Each neuron is a logistic regression model, and therefore this class subsumes LOGIT.
3. The conditional distribution used by a naive Bayes classifier is known to be equivalent to a logistic regression model (Ng and Jordan, 2002). Logistic circuits are a graph-based classification model that subsumes logistic regression (Liang and Van den Broeck, 2019).
4. For general CNFs and DNFs, weighted model counting, and therefore E(F), is #P-hard. This is true even for very restricted classes, such as monotone 2CNF and 2DNF functions, and Horn clause logic (Wei and Selman, 2005).

4. Beyond Fully-Factorized Distributions

Features in real-world data distributions are not independent. In order to capture more realistic assumptions about the data when computing Shap scores, one needs a more intricate probabilistic model. In this section we prove that computing the Shap-explanation quickly becomes intractable, even over the simplest probabilistic models, namely naive Bayes models. To make computing the Shap-explanation as easy as possible, we will assume that the function F simply outputs the value of one feature.
We show that even in this case, even for function classes that are tractable under fully-factorized distributions, computing Shap explanations becomes computationally hard.

Let NBN$_n$ denote the family of naive Bayes networks over n+1 variables $X = \{X_0, \ldots, X_n\}$ with binary domains, where $X_0$ is a parent of all features: $\Pr(X) = \Pr(X_0) \prod_{i=1}^{n} \Pr(X_i \mid X_0)$. As usual, the class NBN $\stackrel{\text{def}}{=} \bigcup_{n \ge 0}$ NBN$_n$. We write $X_0$ for the function F that returns the value of feature $X_0$; that is, $F(x) = x_0$. We prove the following.

Theorem 9. The decision problem D-SHAP({$X_0$}, NBN) is NP-hard.

Proof. We use a reduction from the number partitioning problem, similar to the proof of Corollary 7. We note that the subset sum problem was also used to prove related hardness results, e.g., for proving hardness of the Shapley value in network games (Elkind et al., 2008). As before, we assume w.l.o.g. that the sum of the numbers $k_i$ is even, $\sum_i k_i = 2c$ for some natural number c. Let m be a large natural number to be defined later. We reduce the NUMPAR problem to the D-SHAP({$X_0$}, NBN) problem. The naive Bayes network consists of n+1 binary random variables $X_0, \ldots, X_n$. Let $X_i$, $\bar{X}_i$ denote the events $X_i = 1$ and respectively $X_i = 0$. We define the following probabilities of the NBN:
$$\frac{\Pr(X_0)}{\Pr(\bar{X}_0)} = e^{-\frac{m}{2} - mc} \qquad \frac{\Pr(X_i \mid X_0)}{\Pr(X_i \mid \bar{X}_0)} = e^{m k_i}$$
The probabilities $\Pr(\bar{X}_0)$ and $\Pr(X_i \mid \bar{X}_0)$ can be chosen arbitrarily (with the obvious constraints $\Pr(\bar{X}_0) \le e^{\frac{m}{2} + mc}$ and $\Pr(X_i \mid \bar{X}_0) \le e^{-m k_i}$). As required, our classifier is $F(X_0, \ldots, X_n) \stackrel{\text{def}}{=} X_0$. Let $a_k \stackrel{\text{def}}{=} \frac{k!(n-k)!}{(n+1)!}$ and let $\varepsilon > 0$ be any number such that $\varepsilon \le a_k$ for all $k = 0, 1, \ldots, n$. We prove:

Claim 5. Let $\varepsilon$ be the value defined above. If m satisfies both $2\sigma(-m/2) \le \varepsilon$ and $1 - \sigma(m/2) \le \varepsilon$, then NUMPAR has a solution iff $\mathrm{Shap}_F(X_0) > \frac{1}{2}(1 + \varepsilon)$.

The claim implies Theorem 9. To prove the claim, we express the Shap explanation using Eq. (6). Let $X_S$ denote the event $\bigwedge_{i \in S}(X_i = 1)$.
Then, we can write the Shap explanation as:
$$\mathrm{Shap}_F(X_0) = \sum_{S \subseteq [n]} a_{|S|} \left( E[F \mid X_{S \cup \{0\}}] - E[F \mid X_S] \right)$$
Obviously, $E[X_0 \mid X_{S \cup \{0\}}] = 1$. In addition, we have $\sum_{S \subseteq [n]} a_{|S|} = 1$, because there are $\binom{n}{k}$ sets of size k, hence $\sum_{S \subseteq [n]} a_{|S|} = \sum_{k=0}^{n} \binom{n}{k} \frac{k!(n-k)!}{(n+1)!} = 1$. Therefore $\mathrm{Shap}_F(X_0) = 1 - D$, where:
$$D \stackrel{\text{def}}{=} \sum_{S \subseteq [n]} a_{|S|}\, E[X_0 \mid X_S] \quad (14)$$
To compute D, we expand:
$$E[X_0 \mid X_S] = \Pr(X_0 \mid X_S) = \frac{\Pr(X_0, X_S)}{\Pr(X_S)} = \frac{\prod_{i \in S} \Pr(X_i \mid X_0) \Pr(X_0)}{\prod_{i \in S} \Pr(X_i \mid X_0) \Pr(X_0) + \prod_{i \in S} \Pr(X_i \mid \bar{X}_0) \Pr(\bar{X}_0)} = \frac{1}{1 + \frac{\Pr(\bar{X}_0)}{\Pr(X_0)} \prod_{i \in S} \frac{\Pr(X_i \mid \bar{X}_0)}{\Pr(X_i \mid X_0)}} = \sigma(\mathrm{weight}(S))$$
where
$$\sigma(x) \stackrel{\text{def}}{=} \frac{1}{1 + e^{-x}} \qquad \mathrm{weight}(S) \stackrel{\text{def}}{=} -\frac{m}{2} - mc + m \sum_{i \in S} k_i$$
We compute D in Eq. (14) by grouping each set S with its complement $\bar{S} \stackrel{\text{def}}{=} [n] \setminus S$:
$$D = \frac{1}{2} \sum_{S \subseteq [n]} a_{|S|} \left( \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \right) \quad (15)$$
If S is a solution to the number partitioning problem, then:
$$\sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) = 2\sigma(-m/2)$$
Otherwise, one of $\mathrm{weight}(S)$, $\mathrm{weight}(\bar{S})$ is $\ge m/2$ and the other is $\le -3m/2$ and therefore:
$$\sigma(m/2) \le \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \le 1 + \sigma(-3m/2)$$
As in the proof of Theorem 6, we obtain:
$$S \in P: \quad 0 \le \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \le \varepsilon$$
$$S \notin P: \quad 1 - \varepsilon \le \sigma(\mathrm{weight}(S)) + \sigma(\mathrm{weight}(\bar{S})) \le 1 + \varepsilon$$
Therefore, using the fact that $\sum_{S \subseteq [n]} a_{|S|} = 1$, we derive these bounds for the expression (15) for D. If the number partitioning problem has no solution, then $D \ge \frac{1}{2}(1 - \varepsilon)$, and $\mathrm{Shap}_F(X_0) \le \frac{1}{2}(1 + \varepsilon)$. Otherwise, let S be any solution to the NUMPAR problem, and $k = |S|$; then:
$$D \le \frac{1}{2}(1 + \varepsilon) - a_k(1 + \varepsilon) + a_k \varepsilon$$
and therefore $\mathrm{Shap}_F(X_0) > \frac{1}{2}(1 + \varepsilon)$.

This result is in sharp contrast with the complexity of the Shap score over fully-factorized distributions in Section 3. There, the complexity was dictated by the choice of function class F. Here, the function is as simple as possible, yet computing Shap is hard. This ruins any hope of achieving tractability by restricting the function, and it motivates us to restrict the probability distribution in the next section.
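The key identity in the proof, $E[X_0 \mid X_S = 1] = \sigma(\mathrm{weight}(S))$, can be checked by brute force on a tiny naive Bayes network. A sketch (the numeric parameters m, ks, and the arbitrary conditionals 0.9 are illustrative choices of ours, not from the formal reduction):

```python
import itertools, math

m, ks = 2.0, [1, 2, 3]
n, c = len(ks), 3.0                              # sum(ks) = 6 = 2c
# choose the probabilities so the likelihood ratios match the reduction
p0 = 1.0 / (1.0 + math.exp(m / 2 + m * c))       # Pr(X0=1): ratio e^{-m/2 - mc}
pi1 = [0.9] * n                                  # Pr(Xi=1 | X0=1), arbitrary
pi0 = [0.9 * math.exp(-m * k) for k in ks]       # Pr(Xi=1 | X0=0): ratio e^{m ki}

def pr(x):                                       # joint probability of (x0, ..., xn)
    p = p0 if x[0] else 1 - p0
    for i in range(n):
        q = pi1[i] if x[0] else pi0[i]
        p *= q if x[i + 1] else 1 - q
    return p

def cond_exp_x0(S):                              # E[X0 | X_S = 1], by enumeration
    num = den = 0.0
    for x in itertools.product([0, 1], repeat=n + 1):
        if all(x[i + 1] == 1 for i in S):
            den += pr(x); num += x[0] * pr(x)
    return num / den

for S in [(), (0,), (0, 2), (0, 1, 2)]:
    weight = -m / 2 - m * c + m * sum(ks[i] for i in S)
    assert abs(cond_exp_x0(S) - 1 / (1 + math.exp(-weight))) < 1e-9
print("identity verified")
```

The assertions hold exactly (up to floating point) because, by conditional independence, the features outside S marginalize away, leaving only the likelihood ratios fixed by the reduction.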
This result is also surprising because it is efficient to compute marginal probabilities (such as the expectation of $X_0$) and conditional probabilities in naive Bayes distributions. Theorem 9 immediately extends to a large class of probability distributions and functions. We say that F depends only on $X_i$ if there exist two constants $c_0 \neq c_1$ such that $F(x) = c_0$ when $x_i = 0$ and $F(x) = c_1$ when $x_i = 1$. In other words, F ignores all variables other than $X_i$, but does depend on $X_i$. We then have the following.

Corollary 10. The problem D-SHAP(F, PR) is NP-hard, when PR is any of the following classes of distributions:
1. Naive Bayes, bounded-treewidth Bayesian networks
2. Bayesian networks, Markov networks, factor graphs
3. Decomposable probabilistic circuits
and when F is any class that contains some function F that depends only on $X_0$, including the class of linear regression models and all the classes listed in Corollaries 5 and 8.

This corollary follows from two simple observations. First, each of the classes of probability distributions listed in the corollary can represent a naive Bayes network over binary variables X. For example, a Markov network will consist of n factors $f_1(X_0, X_1), f_2(X_0, X_2), \ldots, f_n(X_0, X_n)$; similar arguments prove that all the other classes can represent naive Bayes, including tractable probabilistic circuits such as sum-product networks (Vergari et al., 2020). Second, for each function that depends only on $X_0$, there exist two distinct constants $c_0 \neq c_1 \in \mathbb{R}$ such that $F(x) = c_0$ when $x_0 = 0$ and $F(x) = c_1$ when $x_0 = 1$. For example, if we consider the class of logistic regression functions $F(x) = \sigma(\sum_i w_i x_i)$, then we choose the weights $w_0 = 1, w_1 = \ldots = w_n = 0$ and we obtain $F(x) = 1/2$ when $x_0 = 0$ and $F(x) = 1/(1 + e^{-1})$ when $x_0 = 1$.
Then, over the binary domain {0, 1} the function is equivalent to $F(x) = (c_1 - c_0)x_0 + c_0$, and, therefore, by the linearity of the Shap explanation (Equation (4)) we have $\mathrm{Shap}_F(X_0) = (c_1 - c_0)\,\mathrm{Shap}_{X_0}(X_0)$ (because the Shap explanation of a constant function $c_0$ is 0), for which, by Theorem 9, the decision problem is NP-hard.

We end this section by proving that Theorem 9 continues to hold even if the prediction function F is the value of some leaf node of a (bounded treewidth) Bayesian network. In other words, the hardness of the Shap explanation is not tied to the function returning the root of the network, and applies to more general functions.

Corollary 11. The Shap decision problem for Bayesian networks with latent variables is NP-hard, even if the function F returns a single leaf variable of the network.

Proof. (Sketch) We use a reduction from the NUMPAR problem, as in the proof of Theorem 9. We start by constructing the NBN with variables $X_0, X_1, \ldots, X_n$ (as for Theorem 9), then add two more variables $X_{n+1}, X_{n+2}$, and edges $X_0 \to X_{n+1} \to X_{n+2}$, and define the random variables $X_{n+1}, X_{n+2}$ to be identical to $X_0$, i.e. $X_0 = X_{n+1} = X_{n+2}$. The prediction function is $F = X_{n+2}$, i.e. it returns the feature $X_{n+2}$, and the variables $X_0, X_{n+1}$ are latent. Thus, the new BN is identical to the NBN, and, since both models have exactly the same number of non-latent variables, the Shap-explanation is the same.

5. Shap on Empirical Distributions

In supervised learning one does not require a generative model of the data; instead, the model is trained on some concrete data set: the training data. When some probabilistic model is needed, then the training data itself is conveniently used as a probability model, called the empirical distribution. This distribution captures dependencies between features, while its set of possible worlds is limited to those in the data set.
For example, the intent of the Kernel SHAP algorithm by Lundberg and Lee (2017) is to compute the Shap explanation on the empirical distribution. In another example, Aas, Jullum, and Løland (2019) extend Kernel SHAP to work with dependent features, by estimating the conditional probabilities from the empirical distribution. Compared to the data distributions considered in the previous sections, the empirical distribution has one key advantage: it has many fewer possible worlds with positive probability, which suggests increased tractability. Unfortunately, in this section, we prove that computing the Shap explanation over the empirical distribution is #P-hard in general.

To simplify the presentation, this section assumes that all features are binary: $\mathrm{dom}(X_j) = \{0, 1\}$. The probability distribution is given by a 0/1-matrix $d = (x_{ij})_{i \in [m], j \in [n]}$, where each row $(x_{i1}, \ldots, x_{in})$ is an outcome with probability 1/m. One can think of d as a dataset with n features and m data instances, where each row $(x_{i1}, \ldots, x_{in})$ is one data instance. Repeated rows are possible: if a row occurs k times, then its probability is k/m. We denote by X the class of empirical distributions. The predictive function can be any function $F : \{0,1\}^n \to \mathbb{R}$. As our data distribution is no longer strictly positive, we adopt the standard convention that $E[F \mid X_S = 1] = 0$ when $\Pr(X_S = 1) = 0$. Recall from Section 2.2 that, by convention, we compute the Shap-explanation w.r.t. instance $e = (1, 1, \ldots, 1)$, which is without loss of generality.

Somewhat surprisingly, the complexity of computing the Shap-explanation of a function F over the empirical distribution given by a matrix d is related to the problem of computing the expectation of a certain CNF formula associated to d.

Definition 4. The positive, partitioned 2CNF formula, PP2CNF, associated to a matrix $d \in \{0,1\}^{m \times n}$ is: $\Phi_d \stackrel{\text{def}}{=} \bigwedge_{(i,j):\, x_{ij} = 0} (U_i \vee V_j)$. Thus, a PP2CNF formula is over m + n variables $U_1, \ldots, U_m, V_1, \ldots, V_n$, and has only positive clauses.
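As a quick illustration of Definition 4, the clauses of $\Phi_d$ are read directly off the zero entries of the matrix, and its expectation under a fully-factorized distribution can be computed by brute force for small instances (the toy matrix below is our own example, not from the formal development):

```python
import itertools

# One clause (U_i or V_j) for every zero entry d[i][j].
def pp2cnf_clauses(d):
    return [(i, j) for i, row in enumerate(d) for j, x in enumerate(row) if x == 0]

def expectation(d, pU, pV):
    """E[Phi_d] under a fully-factorized distribution: pU[i] = Pr(U_i = 1)."""
    clauses = pp2cnf_clauses(d)
    m, n = len(d), len(d[0])
    total = 0.0
    for U in itertools.product([0, 1], repeat=m):
        for V in itertools.product([0, 1], repeat=n):
            if all(U[i] or V[j] for i, j in clauses):
                w = 1.0
                for i in range(m): w *= pU[i] if U[i] else 1 - pU[i]
                for j in range(n): w *= pV[j] if V[j] else 1 - pV[j]
                total += w
    return total

d = [[1, 0], [0, 1]]
print(expectation(d, [0.5] * 2, [0.5] * 2))   # clauses (U1 v V2), (U2 v V1): 9/16
```

Setting some pU or pV entries to 1 gives exactly the quasi-symmetric distributions used below, since a variable with probability 1 effectively erases its clauses.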
The matrix d dictates which clauses are present. A quasi-symmetric probability distribution is a fully factorized distribution over the m + n variables for which there exist two numbers $p, q \in [0, 1]$ such that for every $i = 1, \ldots, m$, $\Pr(U_i = 1) = p$ or $\Pr(U_i = 1) = 1$, and for every $j = 1, \ldots, n$, $\Pr(V_j = 1) = q$ or $\Pr(V_j = 1) = 1$. In other words, all variables $U_1, \ldots, U_m$ have the same probability p, or have probability 1, and similarly for the variables $V_1, \ldots, V_n$. We denote by EQS(PP2CNF) the expectation computation problem for PP2CNF over quasi-symmetric probability distributions. EQS(PP2CNF) is #P-hard, because computing $E[\Phi_d]$ under the uniform distribution (i.e. $\Pr(U_1 = 1) = \cdots = \Pr(V_n = 1) = 1/2$) is #P-hard (Provan and Ball, 1983). We prove:

Theorem 12. Let X be the class of empirical distributions, and F be any class of functions such that, for each i, it includes some function that depends only on $X_i$. Then, we have that F-SHAP(F, X) $\equiv^P$ EQS(PP2CNF). As a consequence, the problem F-SHAP(F, X) is #P-hard in the size of the empirical distribution.

The theorem is surprising, because the set of possible outcomes of an empirical distribution is small. This is unlike all the distributions discussed earlier, for example those mentioned in Corollary 10, which have $2^n$ possible outcomes, where n is the number of features. In particular, given an empirical distribution d, one can compute the expectation E[F] in polynomial time for any function F, by doing just one iteration over the data. Yet, computing the Shap explanation of F is #P-hard. Theorem 12 implies hardness of Shap explanations on the empirical distribution for a large class of functions.

Corollary 13. The problem F-SHAP(F, X) is #P-hard, when X is the class of empirical distributions, and F is any class such that for each feature $X_i$, the class contains some function that depends only on $X_i$. This includes all the function classes listed in Corollaries 5 and 8.
For instance, any class of Boolean functions that contains the n single-variable functions $F \stackrel{\text{def}}{=} X_i$, for $i = 1, \ldots, n$, falls under this corollary. Section 4 showed an example of how the class of logistic regression functions falls under this corollary as well. The proof of Theorem 12 follows from the following technical lemma, which is of independent interest:

Lemma 14. We have that:
1. For every matrix d, F-SHAP(F, d) $\le^P$ EQS({$\Phi_d$}).
2. EQS(PP2CNF) $\le^P$ F-SHAP(F, X).

The proof of the Lemma is given in Sections 5.1 and 5.2. The first item says that we can compute the Shap-explanation in polynomial time using an oracle for computing $E[\Phi_d]$ over quasi-symmetric distributions. The oracle is called only on the PP2CNF $\Phi_d$ associated to the data d, but may perform repeated calls, with different probabilities of the Boolean variables. This is somewhat surprising because the Shap explanation is over an empirical distribution, while $E[\Phi_d]$ is taken over a fully-factorized distribution; there is no connection between these two distributions. This item immediately implies F-SHAP(F, X) $\le^P$ EQS(PP2CNF), where X is the class of empirical distributions d, since the formula $\Phi_d$ is in the class PP2CNF. The second item says that a weak form of converse also holds. It states that we can compute in polynomial time the expectation $E[\Phi]$ over a quasi-symmetric probability distribution by using an oracle for computing Shap explanations, over several matrices d, but not necessarily restricted to the matrix associated to $\Phi$. Together, the two items of the lemma prove Theorem 12.

We end this section with a comment on the Tree SHAP algorithm in Lundberg et al. (2020), which is computed over a distribution defined by a tree-based model. Our result implies that the problem that Tree SHAP tries to solve is #P-hard. This follows immediately by observing that every empirical distribution d can be represented by a binary tree of size polynomial in the size of d.
The tree examines the attributes in the order $X_1, X_2, \ldots, X_n$, and each decision node for $X_i$ has two branches: $X_i = 0$ and $X_i = 1$. A branch that does not exist in the matrix d will end in a leaf with label 0. A complete branch that corresponds to a row $x_{i1}, x_{i2}, \ldots, x_{in}$ in d ends in a leaf with label 1/m (or k/m if that row occurs k times in d). The size of this tree is no larger than twice the size of the matrix (because of the extra dead-end branches). This concludes our study of Shap explanations on the empirical distribution.

5.1 Proof of Lemma 14 (1)

Fix a PP2CNF $\Phi = \bigwedge (U_i \vee V_j)$. A symmetric probability space is defined by two numbers $p, q \in [0, 1]$ and consists of the fully-factorized distribution where $\Pr(U_1) = \Pr(U_2) = \cdots = p$ and $\Pr(V_1) = \Pr(V_2) = \cdots = q$. A quasi-symmetric probability space consists of two sets of indices I, J and two numbers p, q such that:
$$\Pr(U_i) = \begin{cases} p & \text{when } i \in I \\ 1 & \text{when } i \notin I \end{cases} \qquad \Pr(V_j) = \begin{cases} q & \text{when } j \in J \\ 1 & \text{when } j \notin J \end{cases}$$
In this and the following section we prove Lemma 14: computing the Shap-explanation over an empirical distribution is polynomial-time equivalent to computing the expectation of PP2CNF formulas over a (quasi-)symmetric distribution. Provan and Ball (1983) proved that computing the expectation of a PP2CNF over uniform distributions is #P-hard in general. Since uniform distributions are symmetric (namely p = q = 1/2), it follows that computing Shap-explanations is #P-hard in general.

In this section we prove item (1) of Lemma 14. Fix a 0/1-matrix x defining an empirical distribution, and let F be a real-valued prediction function over these features. Let $\Phi_x$ be the PP2CNF associated to x (see Definition 4). We will assume w.l.o.g. that x has n + 1 features (columns), denoted $X_0, X_1, \ldots, X_n$. The prediction function F is any function $F : \{0,1\}^{n+1} \to \mathbb{R}$. We prove:

Proposition 15. One can compute $\mathrm{Shap}_F(X_0)$ in polynomial time using an oracle for computing $E[\Phi_x]$ over quasi-symmetric distributions.
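For reference in the proof that follows, the quantity in Proposition 15 can of course be computed directly, in time exponential in n, from the definition of Shap over the empirical distribution. A brute-force sketch (the matrix and row values y below are an illustrative example of ours):

```python
import itertools
from math import factorial

x = [[1, 1, 0, 1],      # column 0 is the feature X0 being explained
     [1, 0, 1, 1],
     [0, 1, 0, 1]]
y = [1, 0, 0]           # F takes value y[i] on row i (here: indicator of row 1)
m = len(x)
n = len(x[0]) - 1       # the remaining features X1..Xn

def cond_exp(S):        # E[F | X_S = 1]; by convention 0 when no row satisfies X_S = 1
    rows = [i for i in range(m) if all(x[i][j] == 1 for j in S)]
    return sum(y[i] for i in rows) / len(rows) if rows else 0.0

shap_x0 = 0.0
for k in range(n + 1):
    a_k = factorial(k) * factorial(n - k) / factorial(n + 1)
    for S in itertools.combinations(range(1, n + 1), k):
        shap_x0 += a_k * (cond_exp(S + (0,)) - cond_exp(S))
print(shap_x0)          # 5/36 for this matrix
```

The point of the proof below is to replace the exponential enumeration of coalitions S with a polynomial number of oracle calls to $E[\Phi_x]$.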
Denote by $y_i \stackrel{\text{def}}{=} F(x_{i0}, x_{i1}, \ldots, x_{in})$, for $i = 1, \ldots, m$, the value of F on the i-th row of the matrix x. Since the only possible outcomes of the probability space are the m rows of the matrix, the quantity $\mathrm{Shap}_F(X_0)$ depends only on the vector $y \stackrel{\text{def}}{=} (y_1, \ldots, y_m)$. Furthermore, by the linearity of the Shap explanation (Eq. (4)), it suffices to compute the Shap explanation in the case when y has a single value = 1 and all others are = 0. By permuting the rows of the matrix, we will assume w.l.o.g. that $y_1 = 1$, and $y_2 = y_3 = \cdots = y_m = 0$. In summary, denoting $F_1$ the function that is 1 on the first row of the matrix x and is 0 on all other rows, our task is to compute $\mathrm{Shap}_{F_1}(X_0)$. For that we use the following expression for Shap (see also Sec. 3):
$$\mathrm{Shap}_{F_1}(X_0) = \sum_{k=0}^{n} \frac{k!(n-k)!}{(n+1)!} \sum_{S \subseteq [n]: |S| = k} \left( E[F_1 \mid X_{S \cup \{0\}} = 1] - E[F_1 \mid X_S = 1] \right) \quad (16)$$
We will only show how to compute the quantity
$$v_{F_1,k} \stackrel{\text{def}}{=} \sum_{S \subseteq [n]: |S| = k} E[F_1 \mid X_S = 1] \quad (17)$$
using an oracle for $E[\Phi_x]$, because the quantity $\sum_{S:|S|=k} E[F_1 \mid X_{S \cup \{0\}} = 1]$ is computed similarly, by restricting the matrix x to the rows i where $x_{i0} = 1$. The PP2CNF $\Phi$ associated to this restricted matrix is obtained from $\Phi_x$ as follows. Let $I \stackrel{\text{def}}{=} \{i \mid x_{i0} = 1\}$ be the set of rows of the matrix where the feature $X_0$ is 1. Then, we need to remove all clauses of the form $(U_i \vee V_j)$ for $i \notin I$. This is equivalent to setting $U_i := 1$ in $\Phi_x$. Therefore, we can compute the expectation of the restricted $\Phi$ by using our oracle for $E[\Phi_x]$, and running it over the probability space where we define $\Pr(U_i) \stackrel{\text{def}}{=} 1$ for all $i \notin I$. Hence, it suffices to show only how to compute the expression (17). Notice that the quantity $v_{F_1,k}$ is the same as what we defined earlier in Eq. (7). Column $X_0$ of the matrix is not used in expression (17), because the set S ranges over subsets of [n]. Hence w.l.o.g. we can drop feature $X_0$ and denote by x (with some abuse) the matrix that only has the features $X_1, \ldots, X_n$. In other words, $x \in \{0,1\}^{m \times n}$.
The PP2CNF formula for the modified matrix is obtained from $\Phi_x$ by setting $V_0 := 1$, hence we can compute its expectation by using our oracle for $E[\Phi_x]$. We introduce the following quantities associated to the matrix $x \in \{0,1\}^{m \times n}$. For all $S \subseteq [n]$, $\ell \le m$, $k \le n$, we define:
$$g(S) \stackrel{\text{def}}{=} \{i \mid \forall j \in S:\ x_{ij} = 1\} \quad (18)$$
$$a_{\ell k} \stackrel{\text{def}}{=} |\{S \mid |S| = k, |g(S)| = \ell\}| \quad (19)$$
We define the sequence $v_k$, $k = 0, 1, \ldots, n$:
$$v_k \stackrel{\text{def}}{=} \sum_{\ell=1}^{m} \frac{a_{\ell k}}{\ell} \quad (20)$$
We define the value V:
$$V \stackrel{\text{def}}{=} \sum_{k=0}^{n} \frac{k!(n-k)!}{(n+1)!}\, v_k \quad (21)$$
We prove that, under a certain condition, the value $v_k$ in Eq. (20) is equal to Eq. (17); this justifies the notation $v_k$, since it turns out to be the same as $v_{F_1,k}$ from Eq. (7).

Definition 5. Call the matrix x "good" if $\forall i, j$: $x_{1j} \ge x_{ij}$. In other words, the matrix is good if the first row dominates all others.

In general the matrix x need not be good, however we can make it good by removing all columns where row 1 has a value 0. More precisely, let $J^{(1)} \stackrel{\text{def}}{=} \{j \mid x_{1j} = 1\}$ denote the non-zero positions of the first row, and let $x^{(1)}$ denote the sub-matrix of x consisting of the columns $J^{(1)}$. Obviously, $x^{(1)}$ is good, because its first row is $(1, 1, \ldots, 1)$. The following hold:
$$\text{If } S \subseteq J^{(1)}: \quad E_x[F_1 \mid X_S = 1] = E_{x^{(1)}}[F_1 \mid X_S = 1]$$
$$\text{If } S \not\subseteq J^{(1)}: \quad E_x[F_1 \mid X_S = 1] = 0$$
(When $S \not\subseteq J^{(1)}$ then the quantity $E_{x^{(1)}}[F_1 \mid X_S = 1]$ is undefined.) Therefore:
$$\sum_{S \subseteq [n]: |S| = k} E_x[F_1 \mid X_S = 1] = \sum_{S \subseteq J^{(1)}: |S| = k} E_{x^{(1)}}[F_1 \mid X_S = 1]$$
It follows that, in order to compute the values in Eq. (17), we can consider the matrix $x^{(1)}$ instead of x; its associated PP2CNF is obtained from $\Phi_x$ by setting $V_j := 1$ for all $j \in [n] \setminus J^{(1)}$, hence we can compute its expectation over a quasi-symmetric space by using our oracle for computing $E[\Phi_x]$ over quasi-symmetric spaces. To simplify the notation, we will still use the name x for the matrix instead of $x^{(1)}$, and assume w.l.o.g. that the first row of the matrix x is $(1, 1, \ldots, 1)$. We prove that, when x is good, then $v_k$ is indeed the quantity in Eq. (17) that we want to compute.
This holds for any good matrix, not just matrices with $(1, 1, \ldots, 1)$ in the first row, and we need this more general result later in Sec. 5.2.

Claim 6. If the matrix x is good, then, for any $k = 0, \ldots, n$:
$$v_k = \sum_{S: |S| = k} E[F_1 \mid X_S = 1]$$
Proof. Recall that $J^{(1)} \stackrel{\text{def}}{=} \{j \mid x_{1j} = 1\}$. Let $S \subseteq [n]$ be any set of columns. We consider two cases, depending on whether S is a subset of $J^{(1)}$ or not:
$$S \subseteq J^{(1)}: \quad |g(S)| > 0, \quad E[F_1 \mid X_S = 1] = \frac{1}{|g(S)|}$$
$$S \not\subseteq J^{(1)}: \quad |g(S)| = 0, \quad E[F_1 \mid X_S = 1] = 0$$
Therefore:
$$\sum_{S \subseteq [n]: |S| = k} E[F_1 \mid X_S = 1] = \sum_{S \subseteq J^{(1)}: |S| = k} E[F_1 \mid X_S = 1] = \sum_{S: |S| = k, |g(S)| > 0} \frac{1}{|g(S)|} = \sum_{\ell=1}^{m} \frac{a_{\ell k}}{\ell} = v_k$$
At this point we introduce two polynomials, P and Q.

Definition 6. Fix an $m \times n$ matrix x with 0/1-entries. The polynomials P(u, v) and Q(u, v) in real variables u, v associated to the matrix x are the following:
$$P(u, v) \stackrel{\text{def}}{=} \sum_{S \subseteq [n]} u^{|g(S)|} v^{|S|} \qquad Q(u, v) \stackrel{\text{def}}{=} \sum_{T \subseteq [m],\, S \subseteq [n]:\ \forall (i,j) \in T \times S:\ x_{ij} = 1} u^{|T|} v^{|S|}$$
The polynomials are defined by summing over exponentially many sets $S \subseteq [n]$, or pairs of sets $S \subseteq [n]$, $T \subseteq [m]$. In the definition of P, we use the function g(S) associated to the matrix x, see Eq. (18). In the definition of Q(u, v) we sum over only those pairs T, S where $\forall i \in T, \forall j \in S$: $x_{ij} = 1$. While their definition involves exponentially many terms, these polynomials have only (m+1)(n+1) terms, because the degrees of the variables u, v are m and n respectively. We claim that these terms are as follows:

Claim 7. The following identities hold:
$$P(u, v) = \sum_{\ell=0}^{m} \sum_{k=0}^{n} a_{\ell k}\, u^{\ell} v^{k} \qquad Q(u, v) = P(1 + u, v)$$
Proof. The identity for P(u, v) follows immediately from the definition of $a_{\ell k}$. We prove the identity for Q. From the definition of g(S) in Eq. (18) we derive the following equivalence:
$$(\forall i \in T, \forall j \in S:\ x_{ij} = 1) \iff T \subseteq g(S)$$
Which implies:
$$Q(u, v) = \sum_{S \subseteq [n],\, T \subseteq g(S)} u^{|T|} v^{|S|}$$
and the claim follows from $\sum_{T \subseteq g(S)} u^{|T|} = (1 + u)^{|g(S)|}$. Thus, in order to compute the quantities $v_k$ for $k = 0, 1, \ldots$
, n it suffices to compute the coefficients $a_{\ell k}$ of the polynomial P(u, v), and, for that, it suffices to compute the coefficients of the polynomial Q(u, v). For that, we establish the following important connection between $E[\Phi_x]$ and the polynomial Q(u, v). Fix any two positive real values u, v > 0, and let $p \stackrel{\text{def}}{=} 1/(1+u)$, $q \stackrel{\text{def}}{=} 1/(1+v)$; notice that $p, q \in (0, 1)$. Consider the probability space over independent Boolean variables $U_1, \ldots, U_m, V_1, \ldots, V_n$ where $\forall i \in [m]$: $\Pr(U_i) = p$, and $\forall j \in [n]$: $\Pr(V_j) = q$. Then:

Claim 8. Given the notations above, the following identity holds:
$$E[\Phi_x] = \frac{1}{(1+u)^m (1+v)^n}\, Q(u, v) \quad (22)$$
Proof. A truth assignment for $\Phi_x$ consists of two assignments, $\theta \in \{0,1\}^m$ for the variables $U_i$, and $\tau \in \{0,1\}^n$ for the variables $V_j$. Defining $T \stackrel{\text{def}}{=} \{i \mid \theta(U_i) = 0\}$ and $S \stackrel{\text{def}}{=} \{j \mid \tau(V_j) = 0\}$, we observe that $\Phi_x[\theta, \tau] = \text{true}$ iff $\forall i \in T, \forall j \in S$: $x_{ij} = 1$, and therefore:
$$E[\Phi_x] = \sum_{\theta,\tau:\, \Phi[\theta,\tau]=1} \Pr(\theta)\Pr(\tau) = \sum_{T \subseteq [m],\, S \subseteq [n]:\ \forall (i,j) \in T \times S:\ x_{ij}=1} p^{m-|T|}(1-p)^{|T|}\, q^{n-|S|}(1-q)^{|S|} = p^m q^n\, Q\!\left(\frac{1-p}{p}, \frac{1-q}{q}\right)$$
Finally, to prove Lemma 14 (1), it suffices to show how to use an oracle for $E[\Phi_x]$ to compute the coefficients of the polynomial Q(u, v). We denote by $b_{\ell k}$ these coefficients, in other words:
$$Q(u, v) = \sum_{\ell=0}^{m} \sum_{k=0}^{n} b_{\ell k}\, u^{\ell} v^{k} \quad (23)$$
To compute the coefficients $b_{\ell k}$, we proceed as follows. Choose m + 1 distinct values $u_0, u_1, \ldots, u_m > 0$, and choose n + 1 distinct values $v_0, v_1, \ldots, v_n > 0$, and for all $i = 0, \ldots, m$ and $j = 0, \ldots, n$, use the oracle for $E[\Phi_x]$ to compute $Q(u_i, v_j)$ as per identity (22). This leads to a system of (m+1)(n+1) equations whose unknowns are the coefficients $b_{\ell k}$ (see Eq. (23)) and whose coefficients are $u_i^{\ell} v_j^{k}$.
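Before analyzing this linear system, Claim 8 itself is easy to sanity-check numerically on a small random matrix, by enumerating both sides of identity (22) (a brute-force sketch of ours, exponential time):

```python
import itertools, random

random.seed(1)
m, n = 3, 3
x = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
u, v = 2.0, 0.5
p, q = 1 / (1 + u), 1 / (1 + v)        # p = 1/(1+u), q = 1/(1+v) as in the claim

def phi(U, V):        # Phi_x has a clause (U_i or V_j) for every x[i][j] == 0
    return all(U[i] or V[j] for i in range(m) for j in range(n) if x[i][j] == 0)

# Left-hand side: E[Phi_x] over the symmetric product distribution
E = sum(p ** sum(U) * (1 - p) ** (m - sum(U)) * q ** sum(V) * (1 - q) ** (n - sum(V))
        for U in itertools.product([0, 1], repeat=m)
        for V in itertools.product([0, 1], repeat=n) if phi(U, V))

# Right-hand side: Q(u, v) from Definition 6, summed over valid pairs (T, S)
Q = sum(u ** len(T) * v ** len(S)
        for rT in range(m + 1) for T in itertools.combinations(range(m), rT)
        for rS in range(n + 1) for S in itertools.combinations(range(n), rS)
        if all(x[i][j] == 1 for i in T for j in S))

assert abs(E - Q / ((1 + u) ** m * (1 + v) ** n)) < 1e-9
print("Claim 8 verified")
```

Each oracle call in the reduction corresponds to one such evaluation of E at a chosen pair (p, q), i.e. one evaluation of Q at a chosen pair (u, v).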
The matrix A of this system of equations is an $[(m+1)(n+1)] \times [(m+1)(n+1)]$ matrix, whose rows are indexed by pairs (i, j), and whose columns are indexed by pairs $(\ell, k)$:
$$A_{(ij),(\ell k)} = u_i^{\ell} v_j^{k}$$
We prove that this matrix is non-singular, and for that we observe that it is the Kronecker product of two Vandermonde matrices. Recall that the $t \times t$ Vandermonde matrix defined by t numbers $z_1, \ldots, z_t$ is:
$$V(z_1, \ldots, z_t) = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_t \\ z_1^2 & z_2^2 & \cdots & z_t^2 \\ \vdots & & & \vdots \\ z_1^{t-1} & z_2^{t-1} & \cdots & z_t^{t-1} \end{pmatrix}$$
It is known that $\det(V(z_1, \ldots, z_t)) = \prod_{1 \le i < j \le t} (z_j - z_i)$, which is non-zero when the numbers $z_1, \ldots, z_t$ are distinct; hence A is non-singular and the system determines the coefficients $b_{\ell k}$ uniquely. This completes Step 1 of the reduction.

Step 2: We show how to recover the values $v_0, v_1, \ldots, v_n$ of Eq. (20) using an oracle for the quantity V of Eq. (21). For $\ell = 0, \ldots, n$, let $x^{(\ell)}$ denote the matrix obtained from x by appending n new columns, of which the first $\ell$ are all 1 and the remaining $n - \ell$ are all 0, and let $v_p^{(\ell)}$, $p = 0, \ldots, 2n$, denote the quantities (20) associated to $x^{(\ell)}$. The all-ones columns do not change g(S), and any set S containing an all-zeros column has $g(S) = \emptyset$, hence $v_p^{(\ell)} = \sum_{k=0}^{\min(p,n)} \binom{\ell}{p-k} v_k$; in particular, if $p > n + \ell$, then $v_p^{(\ell)} = 0$. We use the oracle to compute the quantity $V^{(\ell)}$, which is:
$$V^{(\ell)} = \sum_{p=0}^{2n} \frac{p!\,(2n-p)!}{(2n+1)!}\, v_p^{(\ell)} = \frac{1}{2n+1} \sum_{k=0}^{n} A_{\ell,k}\, v_k$$
Thus, after running the oracle on all matrices $x^{(0)}, \ldots, x^{(n)}$, we obtain a system of n+1 equations with n+1 unknowns $v_0, v_1, \ldots, v_n$. It remains to prove that the system's matrix, $A_{\ell,k}$, is non-singular. Let us denote the following matrices:
$$A_{\ell,k} \stackrel{\text{def}}{=} \sum_{q} \binom{\ell}{q} \frac{1}{\binom{2n}{k+q}} \quad \ell, k = 0, \ldots, n; \qquad B_{\ell,q} \stackrel{\text{def}}{=} \binom{\ell}{q} \quad \ell, q = 0, \ldots, n; \qquad C_{q,k} \stackrel{\text{def}}{=} \frac{1}{\binom{2n}{k+q}} \quad q, k = 0, \ldots, n$$
It is immediate to verify that $A = B \cdot C$, so it suffices to prove $\det(B) \neq 0$, $\det(C) \neq 0$. We start with B, and for that consider the Vandermonde matrix $X \stackrel{\text{def}}{=} V(x_0, x_1, \ldots, x_n)$, $X_{qt} \stackrel{\text{def}}{=} x_t^q$. Denoting $Y \stackrel{\text{def}}{=} B \cdot X$, we have that $Y_{\ell t} = \sum_{q=0}^{n} B_{\ell,q} X_{q,t} = \sum_q \binom{\ell}{q} x_t^q = (1 + x_t)^{\ell}$, hence Y is also a Vandermonde matrix, $Y = V(1+x_0, 1+x_1, \ldots, 1+x_n)$. We have $\det(Y) \neq 0$ when $x_0, x_1, \ldots, x_n$ are distinct, proving that $\det(B) \neq 0$.

Finally, we prove $\det(C) \neq 0$. For that, we prove a slightly more general result. For any $N \ge 2n$, denote by $C^{(n,N)}$ the following $(n+1) \times (n+1)$ matrix:
$$C^{(n,N)} \stackrel{\text{def}}{=} \begin{pmatrix} \frac{1}{\binom{N}{0}} & \frac{1}{\binom{N}{1}} & \cdots & \frac{1}{\binom{N}{n}} \\ \frac{1}{\binom{N}{1}} & \frac{1}{\binom{N}{2}} & \cdots & \frac{1}{\binom{N}{n+1}} \\ \vdots & & & \vdots \\ \frac{1}{\binom{N}{n}} & \frac{1}{\binom{N}{n+1}} & \cdots & \frac{1}{\binom{N}{2n}} \end{pmatrix}$$
We will prove that $\det(C^{(n,N)}) \neq 0$; our claim follows from the special case N = 2n. For the base case, n = 0, $C^{(0,N)}$ is a $1 \times 1$ matrix equal to $1/\binom{N}{0} = 1$, hence $\det(C^{(0,N)}) = 1$.
To show the induction step, we will perform elementary column operations (which preserve the determinant) to make the last row of the resulting matrix consist of zeros, except for the last entry. Consider an arbitrary row i, and two adjacent columns j, j+1 in that row:
$$\cdots \quad \frac{1}{\binom{N}{i+j}} \quad \frac{1}{\binom{N}{i+j+1}} \quad \cdots$$
We use the fact that $\binom{N}{i+j} = \binom{N}{i+j+1} \frac{i+j+1}{N-i-j}$ and rewrite the two adjacent elements as:
$$\cdots \quad \frac{1}{\binom{N}{i+j+1}} \cdot \frac{N-i-j}{i+j+1} \quad \frac{1}{\binom{N}{i+j+1}} \quad \cdots$$
Now, for each $j = 0, 1, 2, \ldots, n-1$, we subtract column j+1, multiplied by $\frac{N-(n+j)}{(n+j)+1}$, from column j. The last row becomes $(0, 0, \ldots, 0, \frac{1}{\binom{N}{2n}})$, which means that $\det(C^{(n,N)})$ is equal to $\frac{1}{\binom{N}{2n}}$ times the upper-left $(n \times n)$ minor.

Now, we check what happens with the element at (i, j). After the subtraction, it becomes:
$$\frac{1}{\binom{N}{i+j+1}} \left( \frac{N-(i+j)}{(i+j)+1} - \frac{N-(n+j)}{(n+j)+1} \right)$$
This expression can be rewritten as:
$$\frac{(N-i-j-1)!\,(i+j+1)!}{N!} \cdot \frac{(N+1)(n-i)}{(i+j+1)(n+j+1)} = \frac{(N-i-j-1)!\,(i+j)!}{(N-1)!\,N} \cdot \frac{(N+1)(n-i)}{n+j+1} = \frac{1}{\binom{N-1}{i+j}} \cdot \frac{(N+1)(n-i)}{N(n+j+1)}$$
Note that this expression holds within the whole $(n \times n)$ upper-left minor of $C^{(n,N)}$: the element in the lower-right corner of the matrix remains $1/\binom{N}{2n}$. Observe that the (i, j)-th entry of this minor is precisely the (i, j)-entry of $C^{(n-1,N-1)}$, multiplied by $\frac{(N+1)(n-i)}{N(n+j+1)}$. Here N is a global constant, $n-i$ is the same constant in the entire row i, and $\frac{1}{n+j+1}$ is the same constant in the entire column j. We factor out the global constant $\frac{N+1}{N}$, factor out $n-i$ from each row i, and factor out $\frac{1}{n+j+1}$ from each column j, and obtain the following recurrence:
$$\det(C^{(n,N)}) = \frac{1}{\binom{N}{2n}} \left(\frac{N+1}{N}\right)^{n} \frac{\prod_{i=0}^{n-1}(n-i)}{\prod_{j=0}^{n-1}(n+j+1)}\, \det(C^{(n-1,N-1)})$$
It follows by induction on n that $\det(C^{(n,N)}) \neq 0$.

Step 3: Let x be a good matrix (Definition 5). Then $V \le^P$ Shap. More precisely, we claim that we can compute the quantity V associated to a matrix x as defined in Eq.
(21) in polynomial time, by using an oracle for computing Shap_{F_1}(X_j) over any matrix x′. We modify the matrix x as follows. We add a new attribute X_0 whose value is 1 only in the first row, and let F_1 = X_0 denote the function that returns the value of feature X_0. We show here the new matrix x′, augmented with the values of the function F_1:

    X_0  X_1   X_2   ...  X_n   | F_1
     1   x_11  x_12  ...  x_1n  |  1
     0   x_21  x_22  ...  x_2n  |  0
    ...  ...   ...   ...  ...   | ...
     0   x_m1  x_m2  ...  x_mn  |  0

We run our oracle to compute Shap_{F_1}(X_0) over the matrix x′. The value Shap_{F_1}(X_0) is given by Eq. (16), but notice that the matrix x′ has n + 1 columns, while Eq. (16) is given for a matrix with n columns. Therefore, since E[F_1 | X_{S ∪ {0}} = 1] = 1 for any set S, we have:

    Shap_{F_1}(X_0) = 1 - \sum_{S \subseteq \{1, ..., n\}} \frac{|S|!\,(n-|S|)!}{(n+1)!} E[F_1 | X_S = 1]

Since x is good, so is the new matrix x′ and, by Claim 6, for any k = 0, ..., n:

    \sum_{S : |S| = k} E[F_1 | X_S = 1] = v_k

This implies that we can use the value Shap_{F_1}(X_0) returned by the oracle to compute the quantity:

    \sum_{S} \frac{|S|!\,(n-|S|)!}{(n+1)!} E[F_1 | X_S = 1] = \sum_{k=0}^{n} \frac{k!\,(n-k)!}{(n+1)!} v_k = V

which completes Step 3.

Putting It Together. Given a PP2CNF formula Φ = ⋀ (U_i ∨ V_j), and two probability values p = Pr(U_1) = ··· = Pr(U_m) and q = Pr(V_1) = ··· = Pr(V_n), to compute E[Φ] we proceed as follows:

- Construct the 0/1-matrix associated to Φ; denote it x.
- Construct m + 1 matrices x^{(Γ)}, Γ = 1, ..., m + 1, by adding Γ rows (1, 1, ..., 1) at the beginning of the matrix.
- For each matrix x^{(Γ)}, construct n + 1 matrices x^{(Γ,ℓ)}, ℓ = 0, ..., n, by adding n columns, of which the first ℓ columns are 1 and the others are 0.
- For each x^{(Γ,ℓ)}, construct one new matrix (x^{(Γ,ℓ)})′ by adding a column (1, 0, 0, ..., 0); call this new column X_0. Use the oracle to compute Shap_{F_1}(X_0); from here, compute the value V^{(Γ,ℓ)} associated with the matrix x^{(Γ,ℓ)}.
- Using the values V^{(Γ,0)}, V^{(Γ,1)}, ..., V^{(Γ,n)}, compute the values v_0^{(Γ)}, v_1^{(Γ)}, ..., v_n^{(Γ)} associated to the matrix x^{(Γ)}.
- For each k = 0, ..., n, use the values v_k^{(1)}, v_k^{(2)}, ...
, v_k^{(m+1)} to compute the coefficients a_{0k}, a_{1k}, ..., a_{mk} associated to the matrix x.

At this point we have all coefficients a_{ℓk} of the polynomial P(u, v). Compute the coefficients b_{ℓk} of the polynomial Q(u, v) = P(1 + u, v). Finally, return:

    E[Φ] = p^m q^n \, Q\left( \frac{1-p}{p}, \frac{1-q}{q} \right)

This concludes the entire proof of Lemma 14.

6. Perspectives and Conclusions

We establish the complexity of computing the Shap explanation in three important settings. First, we consider fully-factorized data distributions and show that, for any prediction model, the complexity of computing the Shap explanation is the same as the complexity of computing the expected value of the model. It follows that there are commonly used models, such as logistic regression, for which computing Shap explanations is intractable. Going beyond fully-factorized distributions, we show that computing Shap explanations is already intractable for simple functions over simple distributions, namely naive Bayes and empirical distributions.

The recent literature on Shap explanations predominantly studies tradeoffs between variants of the original Shap formulation, and relies on approximation algorithms to compute the explanations. These approximation algorithms, however, tend to make simplifying assumptions which can lead to counter-intuitive explanations; see, e.g., Slack et al. (2020). We believe that more focus should be given to the computational complexity of Shap explanations. In particular, which classes of machine learning models can be explained efficiently using the Shap scores? Our results show that, under the assumption of fully-factorized data distributions, there are classes of models for which the Shap explanations can be computed in polynomial time.
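To illustrate one such tractable case (a sketch of ours, not taken from the paper; the helper names `cond_exp` and `shap` are our own): for a linear model over a fully-factorized distribution, each conditional expectation E[F | X_S] has a closed form, so the exponential Shapley sum can be evaluated exactly, and it collapses to the well-known closed form w_i (x_i - E[X_i]):

```python
from itertools import combinations
from math import factorial

# Linear model f(x) = b + sum_i w[i] * x[i] over independent features with
# means mu. For a set S of known features:
#   E[f | X_S = x_S] = b + sum_{i in S} w[i] * x[i] + sum_{i not in S} w[i] * mu[i]
def cond_exp(w, b, mu, x, S):
    return b + sum(w[i] * (x[i] if i in S else mu[i]) for i in range(len(w)))

def shap(w, b, mu, x, i):
    # Brute-force Shapley sum over all subsets S of the remaining features.
    n = len(w)
    rest = [j for j in range(n) if j != i]
    total = 0.0
    for k in range(n):
        for S in combinations(rest, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (cond_exp(w, b, mu, x, set(S) | {i})
                               - cond_exp(w, b, mu, x, set(S)))
    return total

w, b, mu = [2.0, -1.0, 0.5], 3.0, [0.1, 0.4, 0.0]
x = [1.0, 1.0, 2.0]
for i in range(3):
    # closed form for linear models with independent features: w_i * (x_i - mu_i)
    assert abs(shap(w, b, mu, x, i) - w[i] * (x[i] - mu[i])) < 1e-9
```

Every marginal contribution of feature i here equals w_i (x_i - mu_i) regardless of S, so the Shapley weights simply sum to 1; this is exactly the kind of structure that makes the fully-factorized case tractable for some model classes, in contrast to the hardness results above.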
In future work, we plan to explore whether there are classes of models for which the Shap explanations remain tractable under more complex data distributions, such as those defined by tractable probabilistic circuits (Vergari et al., 2020) or tractable symmetric probability spaces (Van den Broeck, Meert, and Darwiche, 2014; Beame et al., 2015).

Acknowledgements

This work is partially supported by NSF grants IIS-1907997, IIS-1954222, IIS-1943641, IIS-1956441, CCF-1837129, DARPA grant N66001-17-2-4032, a Sloan Fellowship, and gifts by Intel and Facebook Research. Schleich is supported by a RelationalAI fellowship. The authors would like to thank YooJung Choi for valuable discussions on the proof of Theorem 6.

References

Aas, K.; Jullum, M.; and Løland, A. 2019. Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. arXiv preprint arXiv:1903.10464.

Arenas, M.; Barceló, P.; Bertossi, L.; and Monet, M. 2020. The Tractability of SHAP-Score-Based Explanations over Deterministic and Decomposable Boolean Circuits. arXiv preprint arXiv:2007.14045.

Beame, P.; Van den Broeck, G.; Gribkoff, E.; and Suciu, D. 2015. Symmetric Weighted First-Order Model Counting. In Proceedings of the 34th ACM Symposium on Principles of Database Systems, PODS 2015, Melbourne, Victoria, Australia, May 31 - June 4, 2015, 313-328.

Bertossi, L.; Li, J.; Schleich, M.; Suciu, D.; and Vagena, Z. 2020. Causality-Based Explanation of Classification Outcomes. In Proceedings of the Fourth International Workshop on Data Management for End-to-End Machine Learning, DEEM '20. New York, NY, USA: Association for Computing Machinery.

Bryant, R. E. 1986. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers 100(8): 677-691.

Chavira, M.; and Darwiche, A. 2008. On probabilistic inference by weighted model counting. Artificial Intelligence 172(6-7): 772-799.
Darwiche, A.; and Marquis, P. 2002. A knowledge compilation map. Journal of Artificial Intelligence Research 17: 229-264.

Datta, A.; Sen, S.; and Zick, Y. 2016. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems. In IEEE Symposium on Security and Privacy, SP 2016, San Jose, CA, USA, May 22-26, 2016, 598-617.

Elkind, E.; Goldberg, L. A.; Goldberg, P. W.; and Wooldridge, M. J. 2008. A tractable and expressive class of marginal contribution nets and its applications. In 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), Estoril, Portugal, May 12-16, 2008, Volume 2, 1007-1014.

Ferrara, A.; Pan, G.; and Vardi, M. Y. 2005. Treewidth in verification: Local vs. global. In International Conference on Logic for Programming, Artificial Intelligence and Reasoning, 489-503. Springer.

Gade, K.; Geyik, S. C.; Kenthapadi, K.; Mithal, V.; and Taly, A. 2019. Explainable AI in Industry. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, 3203-3204. New York, NY, USA: Association for Computing Machinery.

Janzing, D.; Minorics, L.; and Bloebaum, P. 2020. Feature relevance quantification in explainable AI: A causal problem. Volume 108 of Proceedings of Machine Learning Research, 2907-2916. PMLR.

Khosravi, P.; Choi, Y.; Liang, Y.; Vergari, A.; and Van den Broeck, G. 2019a. On Tractable Computation of Expected Predictions. In Advances in Neural Information Processing Systems 32 (NeurIPS).

Khosravi, P.; Liang, Y.; Choi, Y.; and Van den Broeck, G. 2019b. What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, 2716-2724.

Khosravi, P.; Vergari, A.; Choi, Y.; Liang, Y.; and Van den Broeck, G. 2020. Handling Missing Data in Decision Trees: A Probabilistic Approach.
In The Art of Learning with Missing Values Workshop at ICML (Artemiss).

Kumar, I. E.; Venkatasubramanian, S.; Scheidegger, C.; and Friedler, S. 2020. Problems with Shapley-value-based explanations as feature importance measures. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119.

Liang, Y.; and Van den Broeck, G. 2019. Learning Logistic Circuits. In Proceedings of the 33rd Conference on Artificial Intelligence (AAAI).

Lundberg, S. M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J. M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; and Lee, S. 2020. From Local Explanations to Global Understanding with Explainable AI for Trees. Nature Machine Intelligence 2: 56-67.

Lundberg, S. M.; Erion, G. G.; and Lee, S.-I. 2018. Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888.

Lundberg, S. M.; and Lee, S. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems (NIPS), 4765-4774.

Merrick, L.; and Taly, A. 2020. The Explanation Game: Explaining Machine Learning Models Using Shapley Values. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, 17-38. Springer.

Ng, A. Y.; and Jordan, M. I. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems, 841-848.

Provan, J. S.; and Ball, M. O. 1983. The Complexity of Counting Cuts and of Computing the Probability that a Graph is Connected. SIAM J. Comput. 12(4): 777-788.

Rendle, S. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining, 995-1000. IEEE.

Roth, A. E., ed. 1988. The Shapley Value: Essays in Honor of Lloyd S. Shapley. Cambridge Univ. Press.

Sang, T.; Beame, P.; and Kautz, H. A. 2005. Performing Bayesian inference by weighted model counting. In AAAI, volume 5, 475-481.
Slack, D.; Hilgard, S.; Jia, E.; Singh, S.; and Lakkaraju, H. 2020. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. In AAAI/ACM Conference on AI, Ethics, and Society (AIES).

Štrumbelj, E.; and Kononenko, I. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems 41(3): 647-665.

Sundararajan, M.; and Najmi, A. 2020. The many Shapley values for model explanation. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119.

Van den Broeck, G.; Meert, W.; and Darwiche, A. 2014. Skolemization for Weighted First-Order Model Counting. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fourteenth International Conference, KR 2014, Vienna, Austria, July 20-24, 2014.

Vergari, A.; Choi, Y.; Peharz, R.; and Van den Broeck, G. 2020. Probabilistic Circuits: Representations, Inference, Learning and Applications. AAAI Tutorial.

Wei, W.; and Selman, B. 2005. A new approach to model counting. In International Conference on Theory and Applications of Satisfiability Testing, 324-339. Springer.