# Partial Awareness

The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

Joseph Y. Halpern, Computer Science Department, Cornell University, halpern@cs.cornell.edu
Evan Piermont, Economics Department, Royal Holloway, University of London, evan.piermont@rhul.ac.uk

Copyright (c) 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

We develop a modal logic to capture partial awareness. The logic has three building blocks: objects, properties, and concepts. Properties are unary predicates on objects; concepts are Boolean combinations of properties. We take an agent to be partially aware of a concept if she is aware of the concept without being aware of the properties that define it. The logic allows for quantification over objects and properties, so that the agent can reason about her own unawareness. We then apply the logic to contracts, which we view as syntactic objects that dictate outcomes based on the truth of formulas. We show that when agents are unaware of some relevant properties, referencing concepts that agents are only partially aware of can improve welfare.

1 Introduction

Standard models of epistemic logic assume that agents are logically omniscient: they know all valid formulas and all logical consequences of their knowledge. There have been many attempts to find models of knowledge that do not satisfy logical omniscience. One of the most common approaches involves awareness. Roughly speaking, an agent i cannot know a valid formula ϕ if i is unaware of ϕ. For example, an agent cannot know that either quantum computers are faster than conventional computers or they are not if she is not aware of the notion of quantum computer. There have been many attempts to capture unawareness in the computer science, economics, and philosophy literature, ranging from syntactic approaches (Fagin and Halpern 1988), to semantic approaches involving lattices (Heifetz, Meier, and Schipper 2006), to identifying the lack of awareness of ϕ with an agent neither knowing ϕ nor knowing that she does not know ϕ (Modica and Rustichini 1994; 1999). Most of the attempts involved propositional (modal) logics, although there are papers that use first-order quantification as well (Board and Chung 2009; Sillari 2008). However, none of these approaches is rich enough to capture what we will call partial unawareness.

Perhaps the most common interpretation of lack of awareness identifies the lack of awareness of ϕ with the sentiment "ϕ is not on my radar screen". With this interpretation, partial awareness becomes "some aspects of ϕ are on my radar screen".[1] Consider an agent who is in the market for a new computer. She might be completely unaware of quantum computers, never having heard of one at all. Such an agent cannot reason about her value for having a quantum computer, nor think about the tasks for which a quantum computer would be useful. But this is an extreme case. A slightly more aware agent might be aware of (the concept of) quantum computers, having read a magazine article about them. She might understand some properties of quantum computers, for example, that they can factor integers faster than a conventional computer, but be unaware of the notion of qubit state on which quantum computing is based. Such an agent may well be able to reason about her value of a quantum computer despite her less than full awareness.

[1] Lack of awareness of ϕ has also been identified with the inability to compute whether ϕ is true (due to computational limitations). We do not consider this interpretation in this paper, but partial awareness makes sense for it as well: now partial awareness becomes "I can compute whether some aspects of ϕ are true".
To capture such partial awareness more formally, we consider a logic with three building blocks: objects, properties, and concepts. We take a property to be a unary predicate, so it denotes a subset of objects in the domain;[2] a concept is a Boolean combination of properties. In each state (possible world), each agent is aware of a subset of objects, properties, and concepts. The use of concepts in the context of awareness, which is (to the best of our knowledge) original to this paper, is critical in our approach, and is how we capture partial awareness. For a simple example of how we use it, suppose that a quantum computer (Q) is defined as a computer (C) that possesses an additional quantum property QP. That is, Q is defined to be C ∧ QP (more precisely, we will have ∀x(Q(x) ⇔ C(x) ∧ QP(x)) as a domain axiom). A partially aware agent might be aware of the concept of a quantum computer but unaware of the specific Boolean combination of properties that characterizes it. Contrast this to the cases where the agent is fully unaware (she is unaware of even the concept of a quantum computer) or fully aware (she is aware of both the concept of a quantum computer and also what it means to be one, i.e., the properties C and QP).

[2] We could easily extend our approach to allow arbitrary k-ary predicates, but this would complicate the presentation.

Once we have awareness in the language, we need to consider what an agent knows about her own awareness (and lack of it). This is critical in order to capture many interesting economic behaviors. For example, an agent might know (or at least believe it is possible) that there are aspects of a quantum computer about which she is unaware (in our example, this happens to be QP). In the spirit of Halpern and Rêgo (2009; 2013) (HR from now on), we capture this using quantification over properties: Ki(∃P ∀x(Q(x) ⇔ C(x) ∧ P(x))); although i is unaware of how quantum computers differ from conventional computers, she knows that there is a distinction that is captured by some property P. While the agent is unaware of the property QP, so cannot reason explicitly about it, she is aware that there is some property that relates C and Q. This allows her to reason at a sophisticated level about QP. For example, for an arbitrary property R, the statement Ki(∃P ∀x((Q(x) ⇔ C(x) ∧ P(x)) ∧ (P(x) ⇒ R(x)))), combined with the definition of quantum computer, implies that the agent knows that QP implies R, even though she is unaware of QP. Despite her (partial) unawareness of the notion of quantum computer, the agent can reach some substantive conclusions.

With unawareness, an agent may in general be uncertain about the relation between C and Q; for example, she might also envision a state where a quantum computer is a computer that satisfies one of two properties, either QP or QP′, but cannot articulate statements that distinguish QP from QP′. So if the agent wishes to purchase a QP-computer but not a QP′-computer, she cannot do so. This lack of awareness has important consequences in market settings.
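To make the three levels of awareness concrete, here is a small Python sketch. It is purely illustrative and not part of the formal development: the dictionary CONCEPT_DEFS, the helper names, and the tuple encoding of the domain axiom are conveniences, not the paper's notation.

```python
# Illustrative sketch: the three levels of awareness of the concept Q
# ("quantum computer"), whose domain axiom is  forall x. Q(x) <-> C(x) & QP(x).
# CONCEPT_DEFS and the helper names are conveniences, not the paper's notation.

CONCEPT_DEFS = {"Q": ("and", "C", "QP")}   # Q is defined as C AND QP

def symbols_in(defn):
    """Property symbols appearing in a concept's defining Boolean combination."""
    if isinstance(defn, str):
        return {defn}
    _op, *args = defn
    return set().union(*(symbols_in(a) for a in args))

def awareness_level(concept, aware_of):
    """Classify awareness of `concept` given the symbols the agent is aware of."""
    if concept not in aware_of:
        return "fully unaware"
    if symbols_in(CONCEPT_DEFS[concept]) <= aware_of:
        return "fully aware"
    return "partially aware"   # aware of the concept, not of all defining properties

print(awareness_level("Q", {"C"}))               # fully unaware
print(awareness_level("Q", {"Q", "C"}))          # partially aware (unaware of QP)
print(awareness_level("Q", {"Q", "C", "QP"}))    # fully aware
```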
Consider a seller of quantum computers who is fully aware and has the ability to teach the buyer; specifically, he can expand the buyer's awareness, allowing her to discriminate between QP and QP′ computers. It is instructive to compare the case of unawareness with the more standard case of uncertainty (with full awareness) in this setting. In environments of pure uncertainty, the buyer and seller are assumed to have a common (and fully understood) space of uncertainty, where each state fully resolves all payoff-relevant uncertainty. That is, although they may have uncertainty, both the buyer and seller understand exactly what information is required in order to resolve the uncertainty. A state contains all the relevant information, so, given a state, both the buyer and seller can (at least in principle) place a price on all the relevant options. Suppose that we model the example above using two states: s, in which c is a QP-computer, and s′, in which c is a QP′-computer. If the buyer knows that the seller knows the state, and the seller can reveal his information in a credible way, then we can assume without loss of generality that he will always do so and a transaction will take place only in state s. To see why, note that in state s, the seller's dominant strategy is to reveal his information, ensuring a sale. Because the buyer knows that the seller knows the state, she will interpret no information as a signal that the state is s′. Thus, in either case the state is revealed.[3]

[3] This argument does not rely on the fact that there are only two states. Suppose that there are n states, say s1, . . . , sn, and the buyer is willing to pay pi for the computer if the true state is si, with p1 ≥ p2 ≥ . . . ≥ pn. Assume that the seller is willing to sell at any positive price. An easy induction on k shows that, without loss of generality, if the true state is sk, the seller might as well reveal this fact, as long as the buyer puts positive probability on a state sk′ with k′ < k. (A formalization of this argument also requires common knowledge of rationality, or, more precisely, sufficiently deep knowledge of rationality.)

If the buyer does not know whether the seller knows the state, or if the seller cannot credibly reveal the state, the argument above fails; receiving no information could plausibly be the consequence of an uninformed seller. If and when information is revealed or a transaction takes place now depends on the beliefs of the agents. However, an enforceable contract can remedy the situation: a contract that stipulates the sale of the computer conditional on the true state being s results in essentially the same outcome as the case where the buyer knows that the seller knows the true state of the world: the buyer ends up with the computer if and only if the state is s. The fact that the buyer and seller agree on the underlying state space (i.e., the set of possible states of the world) makes it possible for an enforceable contract to overcome information asymmetries.

We now turn to the situation with unawareness. If the buyer does not know what the seller is aware of, then we get a significant divergence between the situation with unawareness and the situation with pure uncertainty. With unawareness, the seller will again not volunteer information, but the buyer cannot draw up a contract guaranteeing her the product she wants, since she cannot articulate the difference between the states. The efficacy of contracts relies critically on the parties' common knowledge of the state space, which in general does not hold in the presence of unawareness. Note the critical role of the partialness of awareness here. If the buyer were fully aware of the concept, in the sense of her being aware of the properties that define it, she could write the relevant contracts and we would have a case of pure uncertainty. On the other hand, if she were completely unaware of quantum computers, she would not be able to reason about her value, or even consider buying one.
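The unraveling argument in footnote [3] can be illustrated with a short computation. The sketch below is not from the paper; it assumes a skeptical buyer who, on hearing nothing, offers the price of the cheapest state she still thinks might be pooling on silence, and a seller who reveals whenever revealing strictly helps him.

```python
# Toy illustration of the unraveling argument in footnote [3] (not the paper's
# model): states s1..sn with prices p1 >= ... >= pn.  Under the assumptions in
# the lead-in, silence survives only for the lowest-price state(s).

def silent_states(prices):
    """Return indices of states that still pool on silence at the fixed point."""
    silent = set(range(len(prices)))
    while True:
        offer_if_silent = min(prices[i] for i in silent)   # skeptical buyer's offer
        deviators = {i for i in silent if prices[i] > offer_if_silent}
        if not deviators:
            return silent            # no seller gains by revealing: fixed point
        silent -= deviators          # these sellers reveal their state instead

print(silent_states([10, 7, 4]))     # {2}: only the lowest-price state stays silent
```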
The introduction of concepts allows us to consider agents with different levels of awareness. For example, perhaps a buyer becomes aware of a particular company that is offering a commercial quantum computer understood to be characterized by the concept T. If the buyer knows that T is such that ∀x(T(x) ⇔ C(x) ∧ QP′(x)), then the buyer, without being explicitly aware of QP or QP′, can nonetheless articulate her desire to purchase a QP but not a QP′ computer. Indeed, the buyer can write a contract that gives her the right to return the computer c in the event that T(c) is true. In other words, the concept T acts as a proxy for the property QP, allowing the buyer to circumvent her scant awareness.

Note that the observations above help to explain the prevalence of costly litigation and contractual disputes. In the world of pure uncertainty, it can be shown that contracts are always upheld in equilibrium. Indeed, if uncertainty resolved in such a way as to make some party renege, then this could be foreseen, and could be addressed by an appropriate contract, avoiding costly litigation. However, when parties are aware of different concepts, optimal complete contracts cannot be drawn up, setting up a barrier to efficient trade. A legal system that punishes the strategic concealment of information can help to facilitate trade, as it provides recourse for unaware buyers who get swindled.

2 A logic of partial awareness

In this section, we introduce our logic of partial awareness.

2.1 Syntax

The syntax of our logic has the following building blocks:

- A countable set O of constant symbols, representing objects. Following Levesque (1990), we assume that O consists of a nonempty set of standard names d1, d2, . . ., which may be finite or countably infinite.[4] Intuitively, the standard names will represent the domain elements. We explain the need for these shortly.
- A countably infinite set VO of object variables, which range over objects.
- A countable set P of unary predicate symbols.
- A countably infinite set VP of predicate variables.
- A countable set C of concept symbols.

[4] Levesque required there to be infinitely many standard names.

If d ∈ O, x ∈ VO, P ∈ P, Y ∈ VP, and C ∈ C, then P(d), P(x), Y(d), Y(x), C(d), and C(x) are atomic formulas. Starting with these atomic formulas, we construct the set of all formulas recursively. As usual, the set of formulas is closed under conjunction and negation, so if ϕ and ψ are formulas, then so are ¬ϕ and ϕ ∧ ψ. We allow quantification over objects and over unary predicates, so that if ϕ is a formula, x ∈ VO, and Y ∈ VP, then ∀xϕ and ∀Y ϕ are formulas. Finally, we have two families of modal operators: taking {1, . . . , n} to denote the set of agents, we have modal operators A1, . . . , An and K1, . . . , Kn, representing awareness and (explicit) knowledge, respectively. Thus, if ϕ is a formula, then so are Aiϕ and Kiϕ. Let L(O, P, C) denote the resulting language. A formula that contains no free variables is called a sentence.
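As a concrete data representation, the grammar above can be encoded as a small abstract syntax tree. The sketch below is one possible encoding (class and field names are illustrative, not part of the logic).

```python
# A possible (illustrative) encoding of the syntax of L(O, P, C) as an AST.
# Formulas are built from atoms P(t), Y(t), C(t), closed under not, and,
# forall over object and predicate variables, and the modalities A_i and K_i.
from dataclasses import dataclass
from typing import Union

Term = str  # a standard name like "d1" or an object variable like "x"

@dataclass(frozen=True)
class Atom:           # P(t), Y(t), or C(t): a symbol applied to a term
    symbol: str
    term: Term

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class ForallObj:      # forall x. phi, x an object variable
    var: str
    body: "Formula"

@dataclass(frozen=True)
class ForallPred:     # forall Y. phi, Y a predicate variable
    var: str
    body: "Formula"

@dataclass(frozen=True)
class Aware:          # A_i phi
    agent: int
    sub: "Formula"

@dataclass(frozen=True)
class Knows:          # K_i phi
    agent: int
    sub: "Formula"

Formula = Union[Atom, Not, And, ForallObj, ForallPred, Aware, Knows]

# Example: K_1 not forall Y. A_1(Y(d1)) -- agent 1 knows she is not aware of
# every property of d1.
phi = Knows(1, Not(ForallPred("Y", Aware(1, Atom("Y", "d1")))))
print(phi)
```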
2.2 Semantics

A model over the language L(O, P, C) has to give meaning to each of the syntactic elements in the language. We use the standard possible-worlds semantics of knowledge. Thus, a model includes a set Ω of possible states or worlds (we use the two words interchangeably) and, for each agent i, a binary relation Ki on worlds. The intuition is that (ω, ω′) ∈ Ki (sometimes denoted ω′ ∈ Ki(ω)) if, in world ω, agent i considers ω′ possible. Following HR, we assume that each state ω is associated with a language. Formally, there is a function Φ on states such that Φ(ω) = (Oω, Pω, Cω), where Oω = O, Pω ⊆ P, and Cω ⊆ C. We discuss the reason for associating a language with each state below. Let L(Φ(ω)) denote the language associated with state ω. We also assume that associated with each state ω and agent i, there is the set of constant, predicate, and concept symbols that the agent is aware of; this is given by the function Ai. At state ω, each agent can only be aware of symbols that are in Φ(ω). Thus, Ai(ω) ⊆ Φ(ω). We assume that all agents are aware of the standard names at every state, so that Ai(ω) includes O.

Like Levesque (1990), we take the domain D of a model over L(O, P, C) to consist of the standard names in O. An interpretation I assigns meaning to the constant and predicate symbols in each state; more precisely, for each state ω, we have a function Iω taking O to elements of the domain D, P to subsets of D, and C to Boolean combinations of properties (i.e., predicates). This last item requires some explanation. Although elements of O and P are mapped to semantic objects (elements in the domain and sets of elements in the domain, respectively), elements of C are mapped to syntactic objects: Boolean combinations of properties. Let Lbc denote the set of Boolean combinations of properties; if P′ ⊆ P, let Lbc(P′) denote the set of Boolean combinations of properties in P′. We require that Iω(C) ∈ Lbc(Φ(ω)), so that the Boolean combination defining C in state ω must be expressible in L(Φ(ω)), the language of ω. We sometimes write c^I_ω rather than Iω(c), P^I_ω rather than Iω(P), and C^I_ω rather than Iω(C). We assume that standard names are mapped to themselves, so that (di)^I_ω = di.

Putting this together, a model for partial awareness has the form M = (Ω, D, Φ, A1, . . . , An, K1, . . . , Kn, I). The truth of a sentence ϕ ∈ L(O, P, C) at a state ω in M is defined recursively as follows:

- (M, ω) |= P(d) iff P(d) ∈ L(Φ(ω)) and d ∈ P^I_ω;
- (M, ω) |= ¬ϕ iff ϕ ∈ L(Φ(ω)) and (M, ω) ⊭ ϕ;
- (M, ω) |= ϕ ∧ ψ iff (M, ω) |= ϕ and (M, ω) |= ψ;
- (M, ω) |= C(d) iff C(d) ∈ L(Φ(ω)) and (M, ω) |= C^I_ω(d);
- (M, ω) |= ∀xϕ iff (M, ω) |= ϕ[x/d] for all constant symbols d ∈ O, where ϕ[x/d] denotes the result of replacing all free occurrences of x in ϕ by d;
- (M, ω) |= ∀Y ϕ iff (M, ω) |= ϕ[Y/ψ] for all ψ ∈ Lbc(Φ(ω));[5]
- (M, ω) |= Aiϕ iff ϕ ∈ L(Ai(ω));
- (M, ω) |= Kiϕ iff (M, ω) |= Aiϕ and (M, ω′) |= ϕ for all ω′ ∈ Ki(ω).

[5] There is an abuse of notation here. For example, if ψ is P ∧ (Q ∨ R) and ϕ is Y(d), then ϕ[Y/ψ] is P(d) ∧ (Q(d) ∨ R(d)); that is, we apply the arguments of ϕ to all predicates in ψ. We hope that the intended formula is clear in all cases.

Note that what we are calling knowledge here is what has been called explicit knowledge in earlier work (Fagin and Halpern 1988; Halpern and Rêgo 2009; 2013): for agent i to know a formula ϕ, i must also be aware of it. Traditionally, Ki has been reserved for implicit knowledge (where no awareness has been required), and Xi has been used to denote explicit knowledge (where Xiϕ is defined as Kiϕ ∧ Aiϕ). Since we do not use implicit knowledge in this paper, we have decided to use the more mnemonic Ki for knowledge, even though it represents explicit knowledge.
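The truth clauses can be turned into a small model checker for finite models. The sketch below is an illustration, not the authors' implementation: formulas are nested tuples, the dictionary keys and helper names are invented, and, to keep the clause for ∀Y effective, it quantifies over a finite candidate set of Boolean combinations supplied with the model rather than over all of Lbc(Φ(ω)).

```python
# A runnable sketch of the truth clauses for finite models (an illustration,
# not the authors' code).  Formulas are nested tuples:
#   ("atom", S, t)               S(t), S a predicate, concept, or predicate variable
#   ("not", f), ("and", f, g), ("or", f, g)
#   ("forallx", "x", f), ("forallY", "Y", f), ("A", i, f), ("K", i, f)
# Simplification: the forallY clause ranges over M["bc_candidates"][w], a finite
# candidate set, rather than over all of Lbc(Phi(w)) as in the text.

def apply_bc(bc, t):
    """Apply a Boolean combination of predicate symbols to the term t."""
    if isinstance(bc, str):
        return ("atom", bc, t)
    if bc[0] == "not":
        return ("not", apply_bc(bc[1], t))
    return (bc[0], apply_bc(bc[1], t), apply_bc(bc[2], t))

def subst(phi, var, value, where):
    """Replace an object variable (where='term') or a predicate variable
    (where='symbol'); variable shadowing is ignored in this sketch."""
    if phi[0] == "atom":
        sym, t = phi[1], phi[2]
        if where == "term" and t == var:
            t = value
        if where == "symbol" and sym == var:
            return apply_bc(value, t)
        return ("atom", sym, t)
    return (phi[0],) + tuple(subst(a, var, value, where) if isinstance(a, tuple) else a
                             for a in phi[1:])

def nonlogical(phi, universe):
    """Predicate and concept symbols of the model that occur in phi."""
    if phi[0] == "atom":
        return {phi[1]} & universe
    return set().union(set(), *(nonlogical(a, universe)
                                for a in phi[1:] if isinstance(a, tuple)))

def holds(M, w, phi):
    """(M, w) |= phi, following the truth clauses in the text."""
    universe = set(M["all_preds"]) | set(M["all_cons"])
    preds, cons = M["Phi"][w]
    in_lang = nonlogical(phi, universe) <= (preds | cons)   # phi in L(Phi(w))?
    kind = phi[0]
    if kind == "atom":
        S, d = phi[1], phi[2]
        if S in M["all_cons"]:                       # C(d): unfold I_w(C)
            return in_lang and holds(M, w, apply_bc(M["I_con"][(w, S)], d))
        return in_lang and d in M["I_pred"][(w, S)]  # P(d)
    if kind == "not":
        return in_lang and not holds(M, w, phi[1])
    if kind == "and":
        return holds(M, w, phi[1]) and holds(M, w, phi[2])
    if kind == "or":
        return holds(M, w, phi[1]) or holds(M, w, phi[2])
    if kind == "forallx":
        return all(holds(M, w, subst(phi[2], phi[1], d, "term")) for d in M["domain"])
    if kind == "forallY":
        return all(holds(M, w, subst(phi[2], phi[1], bc, "symbol"))
                   for bc in M["bc_candidates"][w])
    if kind == "A":                                  # A_i phi: phi in L(A_i(w))
        return nonlogical(phi[2], universe) <= M["A"][(phi[1], w)]
    # kind == "K":  K_i phi
    return holds(M, w, ("A", phi[1], phi[2])) and all(
        holds(M, v, phi[2]) for v in M["K"][(phi[1], w)])

# One-state example: agent 1 is aware of C but not QP, and knows that there is
# some property of d1 she is not aware of.
M = {
    "domain": ["d1"], "all_preds": {"C", "QP"}, "all_cons": set(),
    "Phi": {"w1": ({"C", "QP"}, set())},
    "A": {(1, "w1"): {"C"}},
    "K": {(1, "w1"): ["w1"]},
    "I_pred": {("w1", "C"): {"d1"}, ("w1", "QP"): {"d1"}},
    "I_con": {},
    "bc_candidates": {"w1": ["C", "QP", ("and", "C", "QP")]},
}
phi = ("K", 1, ("not", ("forallY", "Y", ("A", 1, ("atom", "Y", "d1")))))
print(holds(M, "w1", phi))   # True
```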
For the remainder of the paper, we restrict to models where agents know what they are aware of and knowledge essentially satisfies what are called the S5 properties. Specifically, we restrict to models M where each Ki is an equivalence relation and if ω ∈ Ki(ω′), then Ai(ω) = Ai(ω′). This implies that Ai(ω) ⊆ L(Φ(ω′)) and Ai(ω′) ⊆ L(Φ(ω)). Since Ki is an equivalence relation, it partitions the states in Ω. Thus, we can define Ki by describing the partition. (In the economics literature, this partition is called i's information partition.)

Given these assumptions, we can now explain why we need different languages at different states. Consider an agent who considers it possible that she is aware of the whole language. Thus, the agent considers possible a state ω′ such that Ai(ω′) = Φ(ω′). If we used the same language at all states, because the agent knows what she is aware of, that would mean that, at all states that the agent considered possible, she would know the whole language. Thus, if an agent is aware of all formulas, then she would know that she is aware of all formulas. This is a rather unreasonable property of awareness. It was precisely to avoid this property that Halpern and Rêgo (2013) allowed different languages to be associated with different states.

We can also explain our use of standard names. Note that to give semantics to ∀xϕ and ∀Y ϕ, we do syntactic replacements. In the case of ∀xϕ, we consider all ways of replacing x by a constant; in the case of ∀Y ϕ, we consider all ways of replacing Y by a Boolean combination of properties. This is critical because we define awareness syntactically. Consider a formula such as ∀Y Ai(Y(d)). The standard approach to giving semantics to such a quantified formula would say, roughly speaking, that (M, ω) |= ∀Y Ai(Y(d)) if (M, ω) |= Ai(Y(d)) no matter which set of objects Y represents. The "no matter which property (i.e., set of objects) Y represents" would typically be captured by including a valuation V on the left-hand side of |=, where V interprets Y as a set of objects; we would then consider all valuations V′ that agree with V on their interpretations of all predicate variables but Y. Since we treat awareness syntactically, this approach will not work. We need to replace Y by a syntactic object and then evaluate whether agent i is aware of the resulting formula. This is exactly what we do: (M, ω) |= ∀Y Ai(Y(d)) if (M, ω) |= Ai(ψ(d)) for all ψ ∈ Lbc(Φ(ω)).

A sentence ϕ is satisfiable if there exists a model M and a state ω in M such that (M, ω) |= ϕ. Given a model M, ϕ is valid in M, denoted M |= ϕ, if (M, ω) |= ϕ for all ω ∈ Ω such that ϕ ∈ L(Φ(ω)). Likewise, for a class N of models, ϕ is valid in N, denoted N |= ϕ, if M |= ϕ for all M ∈ N. Note that when we consider the validity of ϕ, we follow Halpern and Rêgo (2013) in requiring only that ϕ be true in states ω such that ϕ ∈ L(Φ(ω)). Thus, ϕ is valid if ϕ is true in all states ω where ϕ is part of the language of ω; we are not interested in whether ϕ is true if ϕ is not in the language (indeed, ϕ is guaranteed not to be true in this case).

This completes our description of the syntax and semantics. Our language is quite expressive. Among other things, we can faithfully embed in it the propositional approach considered by HR and the object-based awareness approach of Board and Chung (2009).
In more detail, HR consider a propositional logic of knowledge and awareness that allows existential quantification over propositions, so it has formulas of the form ∃X(Ai(X) ∧ ¬Aj(X)) (there is a formula that agent i is aware of that agent j is not aware of). We can capture the HR language by replacing each primitive proposition p by the atomic formula P(d), for some fixed standard name d, and replacing each proposition variable X by X(d). This replacement allows us to convert a formula ϕ in the HR language to a sentence ϕ^r in our language (the HR language has no analogue of concepts). We can then convert an HR model M to a model M^r in our framework by using the same set of states, taking O = {d}, and taking C = ∅, so that there are no concepts and a single object. It is easy to see that (M, ω) |= ϕ iff (M^r, ω) |= ϕ^r. We can also accommodate the object-based awareness models of Board and Chung (2009). Here the construction is more straightforward: a model where C = ∅ and Pω = P for all states ω will do the trick.

3 Utility under introspective unawareness

In order to explore how the appeal to concepts might be valuable in a contracting environment, we must add a bit of structure to our problem, dictating the agents' preferences when they are unaware. This is more complicated than in previous work because we have unawareness of properties. Thus, unless the agent knows that she is aware of all properties, it is always possible that there exists some property that she is unaware of. We focus on a setting that makes sense for contracting: namely, one where an agent's utility is determined by which set of objects he ends up with. We further simplify things by assuming that the utility of a set of objects is separable, so it is the sum of the utilities of the individual objects. Thus, there is no complementarity or substitutability (e.g., it is not the case that the objects are a left shoe and a right shoe, so that having one shoe is useless, while having both has high utility); each agent's preferences can be characterized by a utility function defined on objects. In this section, we consider various assumptions on the utility function, which, roughly speaking, correspond to different ways of saying that all that an agent cares about are the properties and concepts in the agent's language that the objects satisfy. In the next section, we consider the consequences of these assumptions on a contracting scenario.

Fix a model M = (Ω, D, Φ, A1, . . . , An, K1, . . . , Kn, I) over a language L(O, P, C). Let U = {U_{i,ω} : D → R}_{i ∈ {1,...,n}, ω ∈ Ω} describe the agents' preferences; specifically, U_{i,ω} describes agent i's preferences in world ω by associating with each object its utility (a real number). Define, for notational expediency, the maps PROP_ω : D → 2^P and CON_ω : D → 2^C by taking PROP_ω(d) = {P ∈ Pω : d ∈ P^I_ω} and CON_ω(d) = {C ∈ Cω : d ∈ C^I_ω}. Thus, PROP_ω and CON_ω take a domain element to the set of properties (resp., concepts) it satisfies at ω. Further define PROP^{Ai}_ω(d) = PROP_ω(d) ∩ Ai(ω) and CON^{Ai}_ω(d) = CON_ω(d) ∩ Ai(ω); thus PROP^{Ai}_ω and CON^{Ai}_ω are the restrictions of PROP_ω and CON_ω to agent i's awareness.
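To illustrate the maps just defined, here is a small sketch for a single state of a finite model. The dictionaries I_pred and I_con and the function names are illustrative stand-ins for I_ω; they are not the paper's notation.

```python
# Illustrative sketch of PROP_w, CON_w and their restrictions to agent i's
# awareness, for a single state w of a finite model.

def satisfies(bc, props_of_d):
    """Does an object with property set props_of_d satisfy Boolean combination bc?"""
    if isinstance(bc, str):
        return bc in props_of_d
    if bc[0] == "not":
        return not satisfies(bc[1], props_of_d)
    if bc[0] == "and":
        return satisfies(bc[1], props_of_d) and satisfies(bc[2], props_of_d)
    return satisfies(bc[1], props_of_d) or satisfies(bc[2], props_of_d)   # "or"

def PROP(w, d, I_pred):
    return {P for (state, P), ext in I_pred.items() if state == w and d in ext}

def CON(w, d, I_pred, I_con):
    props = PROP(w, d, I_pred)
    return {C for (state, C), bc in I_con.items() if state == w and satisfies(bc, props)}

# Example: at w, dcmp has properties {C, QP}; the concept Q is interpreted as C AND QP.
I_pred = {("w", "C"): {"dcmp"}, ("w", "QP"): {"dcmp"}}
I_con = {("w", "Q"): ("and", "C", "QP")}
aware_1 = {"C", "Q"}                               # agent 1 is unaware of QP

print(PROP("w", "dcmp", I_pred))                   # {'C', 'QP'}
print(CON("w", "dcmp", I_pred, I_con))             # {'Q'}
print(PROP("w", "dcmp", I_pred) & aware_1)         # restriction to awareness: {'C'}
print(CON("w", "dcmp", I_pred, I_con) & aware_1)   # restriction to awareness: {'Q'}
```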
Our assumptions relate the agents' preferences over domain elements to the properties (and concepts) that these elements possess. Because we allow for introspection, it is possible for an agent to care about aspects of an object that she is unaware of, although she will not be able to articulate exactly why she has such a preference. Thus, we take as a starting point the minimal restriction ensuring that an agent's subjective valuation depends only on properties and concepts that the agent can articulate.

A1. For all d, d′ ∈ D and all states ω, ω′ ∈ Ω, if PROP_ω(d) = PROP_{ω′}(d′) then U_{i,ω}(d) = U_{i,ω′}(d′).

A1 can be viewed as the conjunction of two assumptions: first, if two different objects d and d′ possess exactly the same properties at a state ω, then they are valued identically at ω; second, if a given object d has the same properties at two different states ω and ω′, then d has the same value at both ω and ω′. Since each concept is defined (at a possible state) as a Boolean combination of properties, if two objects satisfy the same properties then they must also be instances of the same concepts. Thus, A1 is equivalent to the assumption that if PROP_ω(d) = PROP_{ω′}(d′) and CON_ω(d) = CON_{ω′}(d′) then U_{i,ω}(d) = U_{i,ω′}(d′). Under A1, an agent does not care about the label assigned to an object: if d and d′ satisfy the same properties (and so the same concepts), the agent does not care that d is called d and d′ is called d′. A1 also rules out social preferences, or preferences that depend on epistemic conditions. For example, A1 does not allow agent i to value an object d according to agent j's valuation, or even agent j's current knowledge of i's valuations, although this might be relevant if i is interested in reselling d to j. Finally, A1 rules out the case where an agent values the same properties differently in different states.

A1 allows an agent to value d and d′ differently even if she can express no distinction between d and d′ (conditional on the state). This can happen if d and d′ differ on properties that the agent is unaware of. Our next assumption reduces this flexibility in valuations, mandating that an agent's valuation depends only on aspects of the state of which she is aware.

A2. If PROP^{Ai}_ω(d) = PROP^{Ai}_{ω′}(d′) and CON^{Ai}_ω(d) = CON^{Ai}_{ω′}(d′) then U_{i,ω}(d) = U_{i,ω′}(d′).

A2 says that the DM's valuation of objects cannot differ unless the objects are distinguished in some way the agent is aware of. If two objects are the same in every way that the agent can articulate, then she assigns them the same value. It is consistent with A2 that an agent i ascribes different utilities to d and d′ even though i is not aware of any property that distinguishes d and d′. This can happen if d and d′ satisfy different concepts. In that case, i knows that some property must distinguish d and d′, but it is a property that i is not aware of. For instance, an agent might value a quantum computer more than a conventional one even when she does not understand exactly how to define a quantum computer. As this discussion shows, unlike A1, in A2 we must explicitly refer to concepts; even though concepts are built from properties, the agent can be aware of concepts that are defined by properties that the agent is not aware of.

A3. If PROP^{Ai}_ω(d) = PROP^{Ai}_{ω′}(d′) then U_{i,ω}(d) = U_{i,ω′}(d′).

A3 says that an agent bases her preferences only on the properties of which she is aware (and not concepts). A3 can be thought of as a principle of neutrality towards unawareness; although two objects are distinguishable (say, d is an instance of C while d′ is not), the agent values them identically as long as they are not distinguishable by properties that the agent is aware of. That is, while the agent knows there must be a property that separates d and d′, because she is unaware of any such property, she places no value on it. For example, while the agent might understand that there is a difference between classical and quantum computers, because she does not understand how these entities differ, she values them equally.

It is immediate that A3 implies both A1 and A2. Moreover, in a model without unawareness of properties or concepts, A1, A2, and A3 collapse to the same restriction.
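The three assumptions can be checked mechanically on a finite model. The following brute-force sketch is illustrative only: the dictionaries stand in for PROP_ω(d), CON_ω(d), Ai(ω), and U_{i,ω}, and the concrete numbers are invented. It exhibits a two-state model in the spirit of the quantum-computer discussion that satisfies A1 and A2 but violates A3.

```python
# Brute-force check of A1-A3 over a finite model (an illustration; the data
# below are invented stand-ins for PROP_w(d), CON_w(d), A_i(w), and U_{i,w}).
from itertools import product

def check(assumption, states, objects, agents, PROPS, CONS, AWARE, U):
    for i, (w, d), (w2, d2) in product(agents, product(states, objects),
                                       product(states, objects)):
        if assumption == "A1":
            same = PROPS[w, d] == PROPS[w2, d2]
        elif assumption == "A2":
            same = ((PROPS[w, d] & AWARE[i, w]) == (PROPS[w2, d2] & AWARE[i, w2])
                    and (CONS[w, d] & AWARE[i, w]) == (CONS[w2, d2] & AWARE[i, w2]))
        else:  # "A3"
            same = (PROPS[w, d] & AWARE[i, w]) == (PROPS[w2, d2] & AWARE[i, w2])
        if same and U[i, w, d] != U[i, w2, d2]:
            return False
    return True

states, objects, agents = ["w1", "w2"], ["dcmp"], [1]
PROPS = {("w1", "dcmp"): {"C", "QP"}, ("w2", "dcmp"): {"C"}}
CONS  = {("w1", "dcmp"): {"Q"},       ("w2", "dcmp"): set()}
AWARE = {(1, "w1"): {"C", "Q"}, (1, "w2"): {"C", "Q"}}     # unaware of QP everywhere
U = {(1, "w1", "dcmp"): 10, (1, "w2", "dcmp"): 5}          # valuation tracks the concept Q

for a in ("A1", "A2", "A3"):
    print(a, check(a, states, objects, agents, PROPS, CONS, AWARE, U))
# A1 True, A2 True, A3 False: the agent's valuation tracks a concept she is aware
# of (Q) even though no property she is aware of distinguishes the two cases.
```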
While it may at first seem unreasonable to base preferences on properties of which you are unaware (as is allowed by A1), it actually is not so uncommon. People may prefer stock d to d′ although they cannot articulate a reason that d is better. Nevertheless, because they see other people buying d and not d′, they assume that there is some significant property P of d (of which they are not aware) that d′ does not possess that accounts for other people's preferences. This can be modeled using the fact that different states have different languages associated with them. An agent i might not be aware of P at a world ω, but considers a world ω′ possible such that P(d) holds and P(d′) does not. Moreover, he considers P a good property; objects that have property P get a higher utility than those that do not, all else being equal. We remark that such reasoning certainly seems to play a role in the high valuation of some cryptocurrencies! Of course, if i has a predicate in the language that says "other people like it", then d and d′ might be distinguishable using that predicate. This observation emphasizes the fact that A1 and A2 are very much language-dependent. If the agent cannot express various properties in his language, then he may not be able to make distinctions relevant to preferences. (See Bjorndahl, Halpern, and Pass (2013) for an approach to game theory and decision theory that assumes that utilities are determined by the description of a state in some language.)

4 Contracts and conceptual unawareness

We now consider the effect of A1, A2, and A3 on simple interpersonal contracts. Unlike the bulk of the economics literature, where contracts are functions from a state space to outcomes, we here take a contract to be a syntactic object: it makes direct reference to the language of our logic. Real-world contracts are syntactic, in that they must literally articulate the contingencies on which they are based. But our motivation for considering syntactic contracts is more than a pursuit of descriptive accuracy. In models of unawareness, the set of contingencies that can be contracted on is a direct consequence of what agents can articulate. So, by considering the language that the agents use, we can directly examine the welfare implications of awareness.[6]

[6] For contracts that do not involve concepts, what we are doing could also be done in a purely semantic framework, for example, that of Heifetz, Meier, and Schipper (2006). However, contracts that involve concepts cannot be expressed in a purely semantic framework, since concepts represent syntactic objects. Agents can be aware of a concept without being aware of the properties that the concept represents (as in the case of quantum computers); because of this, concepts allow us to indirectly get at awareness of unawareness. Awareness of unawareness seems difficult to express in a purely semantic framework (and, indeed, cannot be expressed in the framework of Heifetz, Meier, and Schipper). It seems to us that the use of concepts is indispensable in understanding how unawareness drives novel behavior in contracting environments; this is one of our reasons for modeling unawareness syntactically. Syntactic contracts were considered by Piermont (2017), with similar motivation.

Suppose that we have two agents, 1 and 2. Let M be the model that describes their uncertainty and awareness, and suppose that their preferences are characterized by U. Assume that agent i is initially endowed with the set of domain elements Endi, for i = 1, 2, where End1 and End2 are disjoint. For simplicity, we assume (as is standard) that each agent will consume (i.e., use) exactly one object. Without trade, each agent can consume only an object from her own endowment; with trade, they may be able to do better. Let End1 ⊗ End2 denote the set of all pairs (d1, d2) of elements in End1 ∪ End2 such that d1 ≠ d2. Note that we might have (d1, d2) ∈ End1 ⊗ End2 even if d1 and d2 are both in End2 (and not in End1); the agents may both consume something that was in agent 2's initial endowment. A contract is a pair ⟨Λ, c⟩, where Λ is a finite set of sentences and c is a function from Λ to End1 ⊗ End2.
Let ci denote the ith component of c; that is, if c(λ) = (d1, d2), then ci(λ) = di. The intuition is that ci dictates which object should be consumed by agent i, contingent on the truth of the sentences in Λ. Given a model M, a contract ⟨Λ, c⟩ must satisfy

1. M |= ⋁_{ϕ ∈ Λ} ϕ;
2. M |= ¬(ϕ ∧ ψ) for all distinct sentences ϕ, ψ ∈ Λ.

The first condition states that some sentence in Λ is true in every state (so the contract is complete), the second that the true sentence is unique at every state (so the contract is well defined). A contract is articulable in state ω∗, sometimes denoted ω∗-articulable, if Λ ⊆ L(A1(ω∗) ∩ A2(ω∗)), that is, if both agents are aware of all the statements in the contract. (Presumably, the act of reading the contract makes them aware of all the statements even if they weren't aware of them beforehand.[7]) Note that if a contract is articulable in ω∗ then Λ ⊆ L(Φ(ω∗)).

[7] The fact that agents might not be aware of all statements in a contract before the contract is written clearly has strategic implications. Agent 1 may prefer to leave a clause out of a contract rather than making agent 2 aware of the issue. This issue is studied to some extent by Filiz-Ozbay (2012) and Ozbay (2007).

By conditions 1 and 2, in each state ω, there is a unique sentence in Λ that is true in ω; call that sentence ϕ_ω. We take the outcome of the contract in state ω to be c(ϕ_ω). We abuse notation and write c(ω) = (c1(ω), c2(ω)) to denote the outcome of the contract in state ω. Thus, the value of the contract to agent i in state ω is U_{i,ω}(ci(ω)).

We are interested in the value of concepts as a contracting device. To get at this, we must examine the difference between the optimal (or equilibrium) contract when Λ can be any subset of the language and the optimal (or equilibrium) contract when Λ cannot refer to concepts. Given M, U, and endowments End1, End2 ⊆ D, say that a contract ⟨Λ, c⟩ is ω-efficient if there is no pair of objects (d1, d2) ∈ End1 ⊗ End2 such that U_{i,ω}(di) ≥ U_{i,ω}(ci(ω)) for i ∈ {1, 2}, with at least one inequality strict, and efficient if it is ω-efficient for all ω ∈ Ω. In other words, a contract is efficient if it realizes all the gains from trade, so there is no trade that would leave both agents better off. Finally, a contract is ω-acceptable for agent i if U_{i,ω′}(ci(ω′)) ≥ max_{d ∈ Endi} U_{i,ω′}(d) for all ω′ ∈ Ki(ω). Agent i facing a take-it-or-leave-it offer for an acceptable contract will prefer that contract to her outside option (i.e., consuming an object in Endi).
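The definitions of ω-efficiency and ω-acceptability translate directly into code once formula evaluation is abstracted away. In the sketch below (illustrative only), true_clause[w] names the unique sentence of Λ that holds at state w, c maps each sentence to a pair of objects, and the utility numbers are invented; it is run on data in the spirit of Example 4.1 below, restricted to two states.

```python
# Sketch of the contract conditions, with formula evaluation abstracted away
# (an illustration; data and names are invented).

def pairs(End1, End2):
    """End1 (x) End2: pairs of distinct objects drawn from End1 union End2."""
    pool = End1 | End2
    return [(d1, d2) for d1 in pool for d2 in pool if d1 != d2]

def outcome(w, true_clause, c):
    return c[true_clause[w]]

def efficient_at(w, true_clause, c, End1, End2, U):
    c1, c2 = outcome(w, true_clause, c)
    for d1, d2 in pairs(End1, End2):
        if (U[1, w, d1] >= U[1, w, c1] and U[2, w, d2] >= U[2, w, c2]
                and (U[1, w, d1] > U[1, w, c1] or U[2, w, d2] > U[2, w, c2])):
            return False          # (d1, d2) Pareto-dominates the contract at w
    return True

def acceptable_at(w, i, K_i, true_clause, c, End, U):
    return all(U[i, v, outcome(v, true_clause, c)[i - 1]]
               >= max(U[i, v, d] for d in End[i])
               for v in K_i[w])

# Tiny usage, in the spirit of Example 4.1 below (two states, Q true only at w1):
End = {1: {"d$"}, 2: {"dcmp"}}
U = {(1, "w1", "dcmp"): 10, (1, "w1", "d$"): 5, (2, "w1", "dcmp"): 0, (2, "w1", "d$"): 8,
     (1, "w2", "dcmp"): 2,  (1, "w2", "d$"): 5, (2, "w2", "dcmp"): 0, (2, "w2", "d$"): 8}
true_clause = {"w1": "Q(dcmp)", "w2": "not Q(dcmp)"}
c = {"Q(dcmp)": ("dcmp", "d$"), "not Q(dcmp)": ("d$", "dcmp")}
print(efficient_at("w1", true_clause, c, End[1], End[2], U))                 # True
print(acceptable_at("w1", 1, {"w1": ["w1", "w2"]}, true_clause, c, End, U))  # True
```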
The following examples illustrate how limited awareness can impede the efficiency of contracts and how the reference to concepts can help an agent articulate her preference, tempering the effect of unawareness.

Example 4.1. A buyer (agent 1) is trying to purchase a computer from a firm (agent 2). End1 = {d$} and End2 = {dcmp}, where d$ is a fixed amount of money and dcmp is the computer in question. There are three states: Ω = {ω1, ω2, ω3}. There are three predicates, P = {P, Q, R}. We have R^I_{ω1} = R^I_{ω2} = R^I_{ω3} = {d$}; in addition, P^I_{ω1} = P^I_{ω2} = Q^I_{ω1} = {dcmp} and P^I_{ω3} = Q^I_{ω2} = Q^I_{ω3} = ∅, so that, in the three states, dcmp has properties P and Q, property P only, and no properties, respectively. There is also a single concept, that of a quantum computer, QC: QC^I_{ω1} = QC^I_{ω2} = P ∧ Q and QC^I_{ω3} = ¬P ∧ ¬Q ∧ ¬R. Therefore dcmp is an instance of QC in states ω1 and ω3. The buyer prefers to purchase the computer if and only if it has property Q; thus, the buyer's utility is such that U_{1,ω1}(dcmp) > U_{1,ω1}(d$) and U_{1,ωk}(d$) > U_{1,ωk}(dcmp) for k ∈ {2, 3}. Moreover, U_{i,ω}(d$) = U_{i,ω′}(d$) for all agents i and states ω, ω′. The firm wants to sell the computer in all states. For now, assume that both agents have full awareness, and that their information partitions are given by K1 = {{ω1, ω2, ω3}} and K2 = {{ω1, ω2}, {ω3}}. Since there is no unawareness, this model trivially satisfies A3 (and hence A1 and A2). Because the buyer does not know the state, she is unwilling to make any unconditional trade (all constant contracts are unacceptable for the buyer). However, this is easily remedied by the use of a contract. The obvious contract, where Λ = {Q(dcmp), ¬Q(dcmp)} and the function c is given by

Q(dcmp) ↦ (dcmp, d$)
¬Q(dcmp) ↦ (d$, dcmp),

is clearly efficient and acceptable to all parties.

Example 4.1 highlights how contracting can facilitate trade in uncertain environments. Despite the fact that the agents do not know which state has obtained, they can eliminate uncertainty by appealing to contracts. The next example illustrates the issues that arise when awareness is limited.

Example 4.2. Let M and U be as in Example 4.1, except that now the agents are not completely aware. Specifically, Ai(ω) = (O, {P, R}, C) for all ω ∈ Ω and i ∈ {1, 2}. Both agents are unaware of Q. This model satisfies assumptions A1 and A2, but not A3. The contract described in the previous example is no longer articulable. We can circumvent the agents' linguistic limitations by writing a contract in terms of the concept QC. Indeed, the contract ⟨Λ, c⟩, where Λ = {P(dcmp) ∧ QC(dcmp), ¬(P(dcmp) ∧ QC(dcmp))} and c is given by

P(dcmp) ∧ QC(dcmp) ↦ (dcmp, d$)
¬(P(dcmp) ∧ QC(dcmp)) ↦ (d$, dcmp),

implements the same consumption outcomes as the contract in Example 4.1.

In Example 4.2, the buyer wants to purchase dcmp only when Q(dcmp) is true. Since she is unaware of the property Q, and knows only that there is some property (that is a conjunct of QC) that is desirable, she cannot directly demand a computer with property Q. Before analyzing the contract above, notice that if the buyer knew the true state was not ω3, then she could get away with the simple contract that demands dcmp whenever QC(dcmp) is true. In states ω1 and ω2, the interpretation of QC is constant, and, given that P(dcmp) is true in both states, QC(dcmp) is equivalent to Q(dcmp) in these states, so the buyer could use the concept of a quantum computer as a proxy for the property Q. This simpler contract is not acceptable when the buyer considers all three states possible. In state ω3, the interpretation of a quantum computer is different, so that while dcmp is an instance of QC in ω3, it does not satisfy Q. The buyer is uncertain about the definition of a quantum computer; while she is unaware of the exact definition in each state, she can articulate the difference: in some states, P is a property of quantum computers, while in others it is not. By exploiting this difference, she can construct the welfare-optimal contract: she demands dcmp whenever it possesses the property that defines a quantum computer in addition to P.
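Examples 4.1 and 4.2 can be checked mechanically. The sketch below is an illustration (the helper names are not the paper's, and it uses the interpretation of QC at ω3 given above); it verifies that the concept-based contract of Example 4.2 yields the same outcome as the Q-based contract of Example 4.1 in every state, and that the simpler "QC only" contract would misfire at ω3.

```python
# Encoding of Examples 4.1 and 4.2 as data (utilities are not needed to compare
# outcomes; helper names are illustrative).
PRED = {  # extensions of the predicates P, Q, R at each state
    "w1": {"P": {"dcmp"}, "Q": {"dcmp"}, "R": {"d$"}},
    "w2": {"P": {"dcmp"}, "Q": set(),    "R": {"d$"}},
    "w3": {"P": set(),    "Q": set(),    "R": {"d$"}},
}
QC = {  # interpretation of the concept QC at each state
    "w1": ("and", "P", "Q"),
    "w2": ("and", "P", "Q"),
    "w3": ("and", ("not", "P"), ("and", ("not", "Q"), ("not", "R"))),
}

def sat(bc, w, d):
    """Does object d satisfy Boolean combination bc at state w?"""
    if isinstance(bc, str):
        return d in PRED[w][bc]
    if bc[0] == "not":
        return not sat(bc[1], w, d)
    return sat(bc[1], w, d) and sat(bc[2], w, d)   # only "and"/"not" are needed here

def outcome_41(w):   # Example 4.1: the buyer gets dcmp iff Q(dcmp) holds
    return ("dcmp", "d$") if sat("Q", w, "dcmp") else ("d$", "dcmp")

def outcome_42(w):   # Example 4.2: the buyer gets dcmp iff P(dcmp) and QC(dcmp) hold
    return ("dcmp", "d$") if sat("P", w, "dcmp") and sat(QC[w], w, "dcmp") else ("d$", "dcmp")

for w in PRED:
    print(w, outcome_41(w), outcome_42(w), outcome_41(w) == outcome_42(w))
# The two contracts agree in every state: the buyer gets the computer only at w1.
print(sat(QC["w3"], "w3", "dcmp"))   # True: dcmp is a QC at w3 even though Q fails,
                                     # which is why the "QC only" contract misfires.
```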
These examples show how the awareness of agents can affect the set of trading outcomes that can be implemented via (syntactic) contracts. Collectively, they suggest a connection between the efficacy of contracting and the relationship between preferences and properties as embodied by assumptions A1–A3. Under the additional assumption that the agents are aware of the same things in the actual world (a reasonable assumption if we assume that the language talks only about contract-relevant features, and both agents have read the contract, so are aware of all the properties and concepts that the contract mentions), this connection is made formal in the following result, whose proof (like that of all other theorems) is left to the full paper, which can be found on arXiv.

Theorem 4.1. Given a model M = (Ω, D, Φ, A1, . . . , An, K1, . . . , Kn, I), preferences U, endowments End1, End2 ⊆ D, and a state ω∗ ∈ Ω such that A1(ω∗) = A2(ω∗) and K1(ω∗) ∪ K2(ω∗) is finite, the following hold:

(a) If (M, U) satisfies A1 and A2, then there exists a contract that is ω∗-articulable, ω∗-efficient, and ω∗-acceptable for i = 1, 2.
(b) If, in addition, for i = 1, 2, Ai is constant on K1(ω∗) ∪ K2(ω∗), then there exists a contract that is ω∗-articulable, ω∗-efficient, and ω-acceptable for all ω ∈ K1(ω∗) ∪ K2(ω∗).
(c) If, in addition, (M, U) satisfies A3, then there exists a contract ⟨Λ, c⟩ that is ω∗-articulable, ω∗-efficient, and ω-acceptable for i = 1, 2 at all ω ∈ K1(ω∗) ∪ K2(ω∗), such that Λ ⊆ Lbc.

Theorem 4.1(a) says that if agents' preferences can depend only on properties and concepts that they are aware of, then gains from trade can be fully realized. Even if agents are unaware of some preference-relevant properties, as long as they do not strictly prefer one object to another without being aware of some tangible way that the objects differ, they can still articulate an optimal contract. As Example 4.2 shows, this contract might need to mention concepts. Part (b) states that if, in addition, each agent knows what the other is aware of, then each of them knows that gains from trade can be achieved. That is, both agents know that, no matter what the true state of the world is (from their perspective), trading is worthwhile. Theorem 4.1(c) says that if agents' preferences depend only on the properties that they are aware of, then they gain nothing from the ability to contract over concepts; there is a contract that they know to be efficient that makes reference only to properties.

Theorem 4.1 requires agents to be aware of the same properties in ω∗. As we argued above, this is a reasonable assumption. As we show by example in the supplementary material, the assumption is also necessary.

5 Axiomatization and complexity

We can adapt the axioms used by Halpern and Rêgo (2013) to get a sound and complete axiomatization for our logic, provided that the set P of predicates is infinite.
This assumption seems reasonable, given that we are mainly interested in agents who are never sure that they are aware of all predicates. (HR make an analogous assumption.) Consider the following axiom system, which we call AX.

Axioms:

Prop. All substitution instances of valid formulas of propositional logic.
AGP. Aiϕ ⇔ (⋀_{P ∈ P ∩ Φ(ϕ)} Ai P(x)) ∧ (⋀_{C′ ∈ C ∩ Φ(ϕ)} Ai C′(x)), where x is an arbitrary object variable and Φ(ϕ) consists of all the predicate and concept symbols in P ∪ C that appear in ϕ.[8]
KA. Aiϕ ⇒ Ki Aiϕ.
K. (Kiϕ ∧ Ki(ϕ ⇒ ψ)) ⇒ Kiψ.
T. Kiϕ ⇒ ϕ.
4. Kiϕ ⇒ Ki Kiϕ.
5. (¬Kiϕ ∧ Aiϕ) ⇒ Ki ¬Kiϕ.
A0. Kiϕ ⇒ Aiϕ.
Con. ∃X(∀x(C(x) ⇔ X(x))).
1∀x. ∀xψ ⇒ ψ[x/c] for c ∈ O.
1∀X. ∀Xϕ ⇒ ϕ[X/ψ] if ψ is either in Lbc or a concept.
K∀x. ∀x(ϕ ⇒ ψ) ⇒ (∀xϕ ⇒ ∀xψ).
K∀X. ∀X(ϕ ⇒ ψ) ⇒ (∀Xϕ ⇒ ∀Xψ).
N∀x. ϕ ⇒ ∀xϕ if x is not free in ϕ.
N∀X. ϕ ⇒ ∀Xϕ if X is not free in ϕ.
Barcan∀x. ∀x Kiϕ ⇒ Ki ∀xϕ.
Barcan∀X. (Ai(∀Xϕ) ∧ ∀X(Ai(X(c)) ⇒ Kiϕ)) ⇒ Ki(∀X Ai(X(c)) ⇒ ∀Xϕ).
FA∀X. ∃X ¬Ai(X(c)) ⇒ Ki(∃X ¬Ai(X(c))).
Fin∀x. If O = {c1, . . . , cn}, then ∀xϕ ⇔ ϕ[x/c1] ∧ . . . ∧ ϕ[x/cn].

[8] As usual, the empty conjunction is taken to be vacuously true, so that Aiϕ is vacuously true if no symbols in P ∪ C occur in ϕ.

Rules of Inference:

MP. From ϕ and ϕ ⇒ ψ infer ψ (modus ponens).
GenK. From ϕ ∧ Aiϕ infer Kiϕ.
Gen∀x. From ϕ infer ∀xϕ[c/x], where c ∈ O.
Gen∀X. If P is a predicate symbol, then from ϕ infer ∀Xϕ[P/X].

Theorem 5.1. AX is a sound and complete axiomatization of L(O, P, C) with respect to the class of models of partial awareness, if P is infinite.

Since the logic is axiomatizable, the validity problem is recursively enumerable. This is also a lower bound on its complexity, even if we do not allow quantification over predicates, since first-order epistemic logic with just two unary predicates was shown by Kripke (1962) to be undecidable. Kripke's proof used the well-known fact that first-order logic with a single binary predicate R is undecidable, and the observation that R(x, y) can be represented as ¬K¬(P(x) ∧ Q(y)). (We must add the formula ∀x(A(P(x) ∧ Q(x))) to ensure that awareness does not cause a problem.) We thus get:

Theorem 5.2. The validity problem for the language L(O, P, C) in the class of models of partial awareness is r.e.-complete if |P| ≥ 2.

6 Conclusion

We have defined and axiomatized a modal logic that captures partial unawareness by allowing an agent to be aware of a concept without being aware of the properties that define it. The logic also allows agents to reason about their own unawareness. We show that such a logic is critical for analyzing interpersonal contracts, and that referencing concepts that agents are only partially aware of can improve welfare. We believe that the logic should also be applicable to other domains, such as analyzing communication between people. We hope to consider such applications in the future.

Our analysis of contracts assumed that both agents were aware of all statements in a contract. This makes sense after the contract has been signed, but may well not be true before the contract is written. We believe that an extension of our language to deal with the effects of making other agents aware of certain formulas will allow us to explore and analyze the dynamic process of contract writing.

Acknowledgments: Halpern was supported in part by NSF grants IIS-1703846 and IIS-1718108, ARO grant W911NF-17-1-0592, and a grant from the Open Philanthropy project.

References

Bjorndahl, A.; Halpern, J. Y.; and Pass, R. 2013. Language-based games. In Theoretical Aspects of Rationality and Knowledge: Proc. 14th Conference (TARK 2013), 39–48.
Board, O., and Chung, K.-S. 2009.
Object-based unawareness: theory and applications. Working paper 378, University of Pittsburgh.
Fagin, R., and Halpern, J. Y. 1988. Belief, awareness, and limited reasoning. Artificial Intelligence 34:39–76.
Filiz-Ozbay, E. 2012. Incorporating unawareness into contract theory. Games and Economic Behavior 76:181–194.
Halpern, J. Y., and Rêgo, L. C. 2009. Reasoning about knowledge of unawareness. Games and Economic Behavior 67(2):503–525.
Halpern, J. Y., and Rêgo, L. C. 2013. Reasoning about knowledge of unawareness revisited. Mathematical Social Sciences 66(2):73–84.
Heifetz, A.; Meier, M.; and Schipper, B. 2006. Interactive unawareness. Journal of Economic Theory 130:78–94.
Kripke, S. 1962. The undecidability of monadic modal quantification theory. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 8:113–116.
Levesque, H. J. 1990. All I know: a study in autoepistemic logic. Artificial Intelligence 42(3):263–309.
Modica, S., and Rustichini, A. 1994. Awareness and partitional information structures. Theory and Decision 37:107–124.
Modica, S., and Rustichini, A. 1999. Unawareness and partitional information structures. Games and Economic Behavior 27(2):265–298.
Ozbay, E. 2007. Unawareness and strategic announcements in games with uncertainty. In Theoretical Aspects of Rationality and Knowledge: Proc. 11th Conference (TARK 2007), 231–238.
Piermont, E. 2017. Introspective unawareness and observable choice. Games and Economic Behavior 106:134–152.
Sillari, G. 2008. Quantified logic of awareness and impossible possible worlds. Review of Symbolic Logic 1(4):514–529.