# Bounded Optimal Exploration in MDP

Kenji Kawaguchi
Massachusetts Institute of Technology
Cambridge, MA, 02139
kawaguch@mit.edu

## Abstract

Within the framework of probably approximately correct Markov decision processes (PAC-MDP), much theoretical work has focused on methods to attain near optimality after a relatively long period of learning and exploration. However, practical concerns require the attainment of satisfactory behavior within a short period of time. In this paper, we relax the PAC-MDP conditions to reconcile theoretically driven exploration methods and practical needs. We propose simple algorithms for discrete and continuous state spaces, and illustrate the benefits of our proposed relaxation via theoretical analyses and numerical examples. Our algorithms also maintain anytime error bounds and average loss bounds. Our approach accommodates both Bayesian and non-Bayesian methods.

## Introduction

The formulation of sequential decision making as a Markov decision process (MDP) has been successfully applied to a number of real-world problems. MDPs provide the ability to design adaptable agents that can operate effectively in uncertain environments. In many situations, the environment we wish to model has unknown aspects, and thus the agent needs to learn an MDP by interacting with the environment. In other words, the agent has to explore the unknown aspects of the environment to learn the MDP.

A considerable amount of theoretical work on MDPs has focused on efficient exploration, and a number of principled methods have been derived with the aim of learning an MDP to obtain a near-optimal policy. For example, Kearns and Singh (2002) and Strehl and Littman (2008a) considered discrete state spaces, whereas Bernstein and Shimkin (2010) and Pazis and Parr (2013) examined continuous state spaces. In practice, however, heuristics are still commonly used (Li 2012). The focus of theoretical work (learning a near-optimal policy within a polynomial yet long time) has apparently diverged from practical needs (learning a satisfactory policy within a reasonable time). In this paper, we modify the prevalent theoretical approach to develop theoretically driven methods that come closer to practical needs.

## Preliminaries

An MDP (Puterman 2004) can be represented as a tuple $(S, A, R, P, \gamma)$, where $S$ is a set of states, $A$ is a set of actions, $P$ is the transition probability function, $R$ is a reward function, and $\gamma$ is a discount factor. The value of policy $\pi$ at state $s$, $V^\pi(s)$, is the cumulative (discounted) expected reward, which is given by

$$V^\pi(s) = \mathbb{E}\left[\sum_{i=0}^{\infty} \gamma^i R(s_i, \pi(s_i), s_{i+1}) \,\Big|\, s_0 = s, \pi\right],$$

where the expectation is over the sequence of states $s_{i+1} \sim P(\cdot \mid s_i, \pi(s_i))$ for all $i \ge 0$. Using Bellman's equation, the value of the optimal policy, or the optimal value $V^*(s)$, can be written as

$$V^*(s) = \max_a \sum_{s'} P(s' \mid s, a)\left[R(s, a, s') + \gamma V^*(s')\right].$$
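As a quick point of reference, the Bellman optimality equation above can be solved by standard value iteration when $P$ and $R$ are known. The snippet below is a minimal sketch, not taken from the paper; the array layout and the stopping tolerance `tol` are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-2):
    """Solve V*(s) = max_a sum_{s'} P(s'|s,a) [R(s,a,s') + gamma V*(s')].

    P, R : arrays of shape (|A|, |S|, |S|) indexed as [a, s, s'].
    """
    V = np.zeros(P.shape[1])
    while True:
        # Backed-up Q-values: Q[a, s] = sum_{s'} P[a, s, s'] * (R[a, s, s'] + gamma * V[s'])
        Q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```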
In many situations, the transition function $P$ and/or the reward function $R$ are initially unknown. Under such conditions, we often want a policy of an algorithm at time $t$, $A_t$, to yield a value $V^{A_t}(s_t)$ that is close to the optimal value $V^*(s_t)$ after some exploration. Here, $s_t$ denotes the current state at time $t$. More precisely, we may want the following: for all $\epsilon > 0$ and for all $\delta \in (0, 1)$, $V^{A_t}(s_t) \ge V^*(s_t) - \epsilon$ with probability at least $1 - \delta$ when $t \ge \tau$, where $\tau$ is the exploration time.

The algorithm with a policy $A_t$ is said to be probably approximately correct for MDPs (PAC-MDP) (Strehl 2007) if this condition holds with $\tau$ being at most polynomial in the relevant quantities of the MDP. The notion of PAC-MDP has a strong theoretical basis and is widely applicable, avoiding the need for additional assumptions such as reachability in the state space (Jaksch, Ortner, and Auer 2010), access to a reset action (Fiechter 1994), or access to a parallel sampling oracle (Kearns and Singh 1999). However, the PAC-MDP approach often results in an algorithm over-exploring the state space, causing a low reward per unit time for a long period of time. Accordingly, past studies that proposed PAC-MDP algorithms have rarely presented a corresponding experimental result, or have done so by tuning the free parameters, which renders the relevant algorithm no longer PAC-MDP (Strehl, Li, and Littman 2006; Kolter and Ng 2009; Sorg, Singh, and Lewis 2010). This problem was noted in previous work (Kolter and Ng 2009; Brunskill 2012; Kawaguchi and Araya 2013). Furthermore, in many problems, it may not even be possible to guarantee $V^{A_t}$ close to $V^*$ within the agent's lifetime. Li (2012) noted that, despite the strong theoretical basis of the PAC-MDP approach, heuristic-based methods remain popular in practice. This would appear to be a result of the above issues. In summary, there seems to be a dissonance between a strong theoretical approach and practical needs.

## Bounded Optimal Learning

The practical limitation of the PAC-MDP approach lies in its focus on correctness without accommodating the time constraints that occur naturally in practice. To overcome this limitation, we first define the notion of reachability in model learning, and then relax the PAC-MDP objective based on it. For brevity, we focus on the transition model.

### Reachability in Model Learning

For each state-action pair $(s, a)$, let $\mathcal{M}_{(s,a)}$ be a set of all transition models and $\hat{P}_t(\cdot \mid s, a) \in \mathcal{M}_{(s,a)}$ be the current model at time $t$ (i.e., $\hat{P}_t(\cdot \mid s, a) : S \to [0, \infty)$). Define $S'_{(s,a)} = \{s' \mid P(s' \mid s, a) > 0\}$ to be the set of possible future samples. Let $f_{(s,a)} : \mathcal{M}_{(s,a)} \times S'_{(s,a)} \to \mathcal{M}_{(s,a)}$ represent the model update rule; $f_{(s,a)}$ maps a model (in $\mathcal{M}_{(s,a)}$) and a new sample (in $S'_{(s,a)}$) to a corresponding new model (in $\mathcal{M}_{(s,a)}$). We can then write $L = (\mathcal{M}, f)$ to represent a learning method of an algorithm, where $\mathcal{M} = \bigcup_{(s,a) \in (S,A)} \mathcal{M}_{(s,a)}$ and $f = \{f_{(s,a)}\}_{(s,a) \in (S,A)}$.

The set of $h$-reachable models, $\mathcal{M}_{L,t,h,(s,a)}$, is recursively defined as
$$\mathcal{M}_{L,t,h,(s,a)} = \left\{\hat{P} \in \mathcal{M}_{(s,a)} \,\middle|\, \hat{P} = f_{(s,a)}(\hat{P}', s') \text{ for some } \hat{P}' \in \mathcal{M}_{L,t,h-1,(s,a)} \text{ and } s' \in S'_{(s,a)}\right\},$$
with the boundary condition $\mathcal{M}_{L,t,0,(s,a)} = \{\hat{P}_t(\cdot \mid s, a)\}$.

Intuitively, the set of $h$-reachable models, $\mathcal{M}_{L,t,h,(s,a)} \subseteq \mathcal{M}_{(s,a)}$, contains the transition models that can be obtained if the agent updates the current model at time $t$ using any combination of $h$ additional samples $s'_1, s'_2, \ldots, s'_h \sim P(\cdot \mid s, a)$. Note that the set of $h$-reachable models is defined separately for each state-action pair. For example, $\mathcal{M}_{L,t,h,(s_1,a_1)}$ contains only those models that are reachable using the $h$ additional samples drawn from $P(\cdot \mid s_1, a_1)$.
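To make the recursion concrete, the sketch below enumerates the $h$-reachable set for a single state-action pair under the sample-mean update rule used later in the paper. The function name, array layout, and the use of multisets of future samples (the sample mean is order-invariant) are illustrative assumptions, not the paper's code.

```python
from itertools import combinations_with_replacement
import numpy as np

def h_reachable_models(counts, h):
    """Enumerate the h-reachable set of sample-mean transition models
    for one state-action pair.

    counts : np.ndarray of shape (|S|,), current counts n_t(s, a, s').
    Returns one probability vector per multiset of h hypothetical
    future observations (order is irrelevant for the sample mean).
    """
    n_states = len(counts)
    models = []
    for extra in combinations_with_replacement(range(n_states), h):
        new_counts = counts.astype(float)
        for s_next in extra:
            new_counts[s_next] += 1
        models.append(new_counts / new_counts.sum())
    return models

# Example: with counts [2, 1, 0] and h = 1, each of the three possible
# next-state observations yields one reachable model.
print(h_reachable_models(np.array([2, 1, 0]), h=1))
```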
We define the $h$-reachable optimal value $V^{*d}_{L,t,h}(s)$ with respect to a distance function $d$ as
$$V^{*d}_{L,t,h}(s) = \max_a \sum_{s'} \hat{P}^d_{L,t,h}(s' \mid s, a)\left[R(s, a, s') + \gamma V^{*d}_{L,t,h}(s')\right],$$
where
$$\hat{P}^d_{L,t,h}(\cdot \mid s, a) = \arg\min_{\hat{P} \in \mathcal{M}_{L,t,h,(s,a)}} d\big(\hat{P}(\cdot \mid s, a),\, P(\cdot \mid s, a)\big).$$

Intuitively, the $h$-reachable optimal value, $V^{*d}_{L,t,h}(s)$, is the optimal value estimated with the best model in the set of $h$-reachable models (here, the term "best" is in terms of the distance function $d(\cdot, \cdot)$).

### PAC in Reachable MDP

Using the concept of reachability in model learning, we define the notion of probably approximately correct in an $h$-reachable MDP (PAC-RMDP($h$)). Let $\mathcal{P}(x_1, x_2, \ldots, x_n)$ denote a polynomial in $x_1, x_2, \ldots, x_n$ and $|\text{MDP}|$ be the complexity of an MDP (Li 2012).

**Definition 1.** (PAC-RMDP($h$)) An algorithm with a policy $A_t$ and a learning method $L$ is PAC-RMDP($h$) with respect to a distance function $d$ if, for all $\epsilon > 0$ and for all $\delta \in (0, 1)$:
1) there exists $\tau = O(\mathcal{P}(1/\epsilon, 1/\delta, 1/(1-\gamma), |\text{MDP}|, h))$ such that for all $t \ge \tau$, $V^{A_t}(s_t) \ge V^{*d}_{L,t,h}(s_t) - \epsilon$ with probability at least $1-\delta$, and
2) there exists $h^*(\epsilon, \delta) = O(\mathcal{P}(1/\epsilon, 1/\delta, 1/(1-\gamma), |\text{MDP}|))$ such that for all $t \ge 0$, $|V^*(s_t) - V^{*d}_{L,t,h^*(\epsilon,\delta)}(s_t)| \le \epsilon$ with probability at least $1-\delta$.

The first condition ensures that the agent efficiently learns the $h$-reachable models. The second condition guarantees that the learning method $L$ and the distance function $d$ are not arbitrarily poor.

In the following, we relate PAC-RMDP($h$) to PAC-MDP and near-Bayes optimality. The proofs are given in the appendix. The appendix is included in an extended version of the paper that can be found here: http://lis.csail.mit.edu/new/publications.php.

**Proposition 1.** (PAC-MDP) If an algorithm is PAC-RMDP($h^*(\epsilon, \delta)$), then it is PAC-MDP, where $h^*(\epsilon, \delta)$ is given in Definition 1.

**Proposition 2.** (Near-Bayes optimality) Consider model-based Bayesian reinforcement learning (Strens 2000). Let $H$ be a planning horizon in the belief space $b$. Assume that the Bayesian optimal value function, $V^*_{b,H}$, converges to the $H$-reachable optimal value function such that, for all $\epsilon > 0$, $|V^{*d}_{L,t,H}(s_t) - V^*_{b,H}(s_t, b_t)| \le \epsilon$ for all but polynomially many time steps. Then, a PAC-RMDP($H$) algorithm with a policy $A_t$ obtains an expected cumulative reward $V^{A_t}(s_t) \ge V^*_{b,H}(s_t, b_t) - 2\epsilon$ for all but polynomially many time steps with probability at least $1-\delta$.

Note that $V^{A_t}(s_t)$ is the actual expected cumulative reward, with the expectation taken over the true dynamics $P$, whereas $V^*_{b,H}(s_t, b_t)$ is the believed expected cumulative reward, with the expectation taken over the current belief $b_t$ and its belief evolution. In addition, whereas the PAC-RMDP($H$) condition guarantees convergence to an $H$-reachable optimal value function, Bayesian optimality does not¹. In this sense, Proposition 2 suggests that the theoretical guarantee of PAC-RMDP($H$) would be stronger than that of near-Bayes optimality with an $H$-step lookahead.

¹ A Bayesian estimation with random samples converges to the true value under certain assumptions. However, for exploration, the selection of actions can cause the Bayesian optimal agent to ignore some state-action pairs, removing the guarantee of convergence. This effect was well illustrated by Li (2009, Example 9).

Summarizing the above, PAC-RMDP($h^*(\epsilon, \delta)$) implies PAC-MDP, and PAC-RMDP($H$) is related to near-Bayes optimality. Moreover, as $h$ decreases in the range $(0, h^*)$ or $(0, H)$, the theoretical guarantee of PAC-RMDP($h$) becomes weaker than these previous theoretical objectives. This accommodates the practical need to improve the trade-off between the theoretical guarantee (i.e., optimal behavior after a long period of exploration) and practical performance (i.e., satisfactory behavior after a reasonable period of exploration) via the concept of reachability. We discuss the relationship to bounded rationality (Simon 1982) and bounded optimality (Russell and Subramanian 1995), as well as the corresponding notions of regret and average loss, in the appendix of the extended version.
## Discrete Domain

To illustrate the proposed concept, we first consider a simple case involving finite state and action spaces with an unknown transition function $P$. Without loss of generality, we assume that the reward function $R$ is known. Let $\tilde V^A(s)$ be the internal value function used by the algorithm to choose an action, and let $V^A(s)$ be the actual value function according to the true dynamics $P$.

To derive the algorithm, we use the principle of optimism in the face of uncertainty, such that $\tilde V^A(s) \ge V^{*d}_{L,t,h}(s)$ for all $s \in S$. This can be achieved using the following internal value function:
$$\tilde V^A(s) = \max_{a,\ \hat{P} \in \mathcal{M}_{L,t,h,(s,a)}} \sum_{s'} \hat{P}(s' \mid s, a)\left[R(s, a, s') + \gamma \tilde V^A(s')\right]. \qquad (1)$$

The pseudocode is shown in Algorithm 1.

**Algorithm 1** Discrete PAC-RMDP
Parameter: $h \ge 0$
For time step $t = 1, 2, 3, \ldots$:
  Action: take an action based on $\tilde V^A(s_t)$ in Equation (1)
  Observation: save the sufficient statistics
  Estimate: update the model $\hat P_t$

In the following, we consider the special case in which we use the sample mean estimator (which determines $L$). That is, we use $\hat{P}_t(s' \mid s, a) = n_t(s, a, s')/n_t(s, a)$, where $n_t(s, a)$ is the number of samples for the state-action pair $(s, a)$, and $n_t(s, a, s')$ is the number of samples for the transition from $s$ to $s'$ given an action $a$. In this case, the maximum over the models in Equation (1) is achieved when all $h$ future observations are transitions to the state with the best value. Thus, $\tilde V^A$ can be computed by
$$\tilde V^A(s) = \max_a \left(\sum_{s' \in S} \frac{n_t(s, a, s')}{n_t(s, a) + h}\left[R(s, a, s') + \gamma \tilde V^A(s')\right] + \max_{s'} \frac{h}{n_t(s, a) + h}\left[R(s, a, s') + \gamma \tilde V^A(s')\right]\right).$$
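The closed form above can be turned directly into a value-iteration loop. The following is a minimal sketch of that computation, assuming the sample-mean estimator and a known reward function; the array layout, convergence tolerance, and the requirement that $h > 0$ or every $(s, a)$ has been visited at least once are illustrative assumptions.

```python
import numpy as np

def optimistic_values(n_sas, R, h, gamma=0.95, tol=1e-2):
    """Internal values V~A(s) of Algorithm 1 with the sample-mean model.

    n_sas[s, a, s'] : visit counts n_t(s, a, s')
    R[s, a, s']     : known reward function
    h               : PAC-RMDP optimism parameter (h >= 0)

    Each backup places the h hypothetical future samples on the next
    state with the largest backed-up value, as in the closed form above.
    Assumes h > 0 or n_t(s, a) > 0 for every (s, a).
    """
    n_states, n_actions, _ = n_sas.shape
    n_sa = n_sas.sum(axis=2)                        # n_t(s, a)
    V = np.zeros(n_states)
    while True:
        backup = R + gamma * V[None, None, :]       # R(s,a,s') + gamma V(s')
        weights = n_sas / (n_sa[:, :, None] + h)    # n_t(s,a,s') / (n_t(s,a) + h)
        Q = (weights * backup).sum(axis=2) \
            + (h / (n_sa + h)) * backup.max(axis=2)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```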
### Analysis

We first show that Algorithm 1 is PAC-RMDP($h$) for all $h \ge 0$ (Theorem 1), maintains an anytime error bound and an average loss bound (Corollary 1 and the following discussion), and is related to previous algorithms (Remarks 1 and 2). We then analyze its explicit exploration runtime (Definition 3). We assume that Algorithm 1 is used with the sample mean estimator, which determines $L$. We fix the distance function as $d(\hat{P}(\cdot \mid s, a), P(\cdot \mid s, a)) = \|\hat{P}(\cdot \mid s, a) - P(\cdot \mid s, a)\|_1$. The proofs are given in the appendix of the extended version.

**Theorem 1.** (PAC-RMDP) Let $A_t$ be a policy of Algorithm 1. Let $z = \max\!\left(h,\ \frac{\ln(2^{|S|}|S||A|/\delta)}{\epsilon(1-\gamma)}\right)$. Then, for all $\epsilon > 0$, for all $\delta \in (0, 1)$, and for all $h \ge 0$: 1) for all but at most $O\!\left(\frac{z|S||A|}{\epsilon^2(1-\gamma)^2}\ln\frac{|S||A|}{\delta}\right)$ time steps, $V^{A_t}(s_t) \ge V^{*d}_{L,t,h}(s_t) - \epsilon$ with probability at least $1-\delta$; and 2) there exists $h^*(\epsilon, \delta) = O(\mathcal{P}(1/\epsilon, 1/\delta, 1/(1-\gamma), |\text{MDP}|))$ such that $|V^*(s_t) - V^{*d}_{L,t,h^*(\epsilon,\delta)}(s_t)| \le \epsilon$ with probability at least $1-\delta$.

**Definition 2.** (Anytime error) The anytime error $\epsilon_{t,h} \in \mathbb{R}$ is the smallest value such that $V^{A_t}(s_t) \ge V^{*d}_{L,t,h}(s_t) - \epsilon_{t,h}$.

**Corollary 1.** (Anytime error bound) With probability at least $1-\delta$, if $h \le \frac{\ln(2^{|S|}|S||A|/\delta)}{\epsilon_{t,h}(1-\gamma)}$, then $\epsilon_{t,h} = O\!\left(\left(\frac{|S||A|}{t(1-\gamma)^3}\ln\frac{|S||A|}{\delta}\ln\frac{2^{|S|}|S||A|}{\delta}\right)^{1/3}\right)$; otherwise, $\epsilon_{t,h} = O\!\left(\left(\frac{h|S||A|}{t(1-\gamma)^2}\ln\frac{|S||A|}{\delta}\right)^{1/2}\right)$.

The anytime $T$-step average loss is equal to $\frac{1}{T}\sum_{t=1}^{T}(1-\gamma^{T+1-t})\,\epsilon_{t,h}$.

Moreover, in this simple problem, we can relate Algorithm 1 to a particular PAC-MDP algorithm and a near-Bayes optimal algorithm.

**Remark 1.** (Relation to MBIE) Let $m = O\!\left(\frac{|S|}{\epsilon^2(1-\gamma)^4} + \frac{1}{\epsilon^2(1-\gamma)^4}\ln\frac{|S||A|}{\epsilon(1-\gamma)\delta}\right)$. Let $h^*(s, a) = \frac{n(s,a)\,z(s,a)}{1 - z(s,a)}$, where $z(s, a) = \frac{1}{2}\sqrt{2\left[\ln(2^{|S|}-2) - \ln(\delta/(2|S||A|m))\right]/n(s,a)}$. Then, Algorithm 1 with the input parameter $h = h^*(s, a)$ behaves identically to a PAC-MDP algorithm, Model Based Interval Estimation (MBIE) (Strehl and Littman 2008a), the sample complexity of which is $O\!\left(\frac{|S||A|}{\epsilon^3(1-\gamma)^6}\left(|S| + \ln\frac{|S||A|}{\epsilon(1-\gamma)\delta}\right)\ln\frac{1}{\delta}\ln\frac{1}{\epsilon(1-\gamma)}\right)$.

**Remark 2.** (Relation to BOLT) Let $h = H$, where $H$ is a planning horizon in the belief space $b$. Assume that Algorithm 1 is used with an independent Dirichlet model for each $(s, a)$, which determines $L$. Then, Algorithm 1 behaves identically to a near-Bayes optimal algorithm, Bayesian Optimistic Local Transitions (BOLT) (Araya-López, Thomas, and Buffet 2012), the sample complexity of which is $O\!\left(\frac{H^2|S||A|}{\epsilon^2(1-\gamma)^2}\ln\frac{|S||A|}{\delta}\right)$.

As expected, the sample complexity for PAC-RMDP($h$) (Theorem 1) is smaller than that for PAC-MDP (Remark 1) (at least when $h \le |S|(1-\gamma)^{-3}$), but larger than that for near-Bayes optimality (Remark 2) (at least when $h \ge H$). Note that BOLT is not necessarily PAC-RMDP($h$), because misleading priors can violate both conditions in Definition 1.

### Further Discussion

An important observation is that, when $h \le \frac{|S|}{\epsilon(1-\gamma)}\ln\frac{|S||A|}{\delta}$, the sample complexity of Algorithm 1 is dominated by the number of samples required to refine the model, rather than by the explicit exploration of unknown aspects of the world. Recall that the internal value function $\tilde V^A$ is designed to force the agent to explore, whereas the use of the currently estimated value function $V^{*d}_{L,t,0}(s)$ results in exploitation. The difference between $\tilde V^A$ and $V^{*d}_{L,t,0}(s)$ decreases at a rate of $O(h/n_t(s,a))$, whereas the error between $\tilde V^A$ and $V^*$ decreases at a rate of $O(1/\sqrt{n_t(s,a)})$. Thus, Algorithm 1 stops the explicit exploration much sooner (when $\tilde V^A$ and $V^{*d}_{L,t,0}(s)$ become close), and begins exploiting the model while still refining it, so that $V^{*d}_{L,t,0}(s)$ tends to $\tilde V^A$. In contrast, PAC-MDP algorithms are forced to explore until the error between $\tilde V^A$ and $V^*$ becomes sufficiently small, where the error decreases at a rate of $O(1/\sqrt{n_t(s,a)})$. This provides some intuition to explain why a PAC-RMDP($h$) algorithm with small $h$ may avoid over-exploration and yet, in some cases, learn the true dynamics to a reasonable degree, as shown in the experimental examples. In the following, we formalize the above discussion.

**Definition 3.** (Explicit exploration runtime) The explicit exploration runtime is the smallest integer $\tau$ such that for all $t \ge \tau$, $|\tilde V^{A_t}(s_t) - V^{*d}_{L,t,0}(s_t)| \le \epsilon$.

**Corollary 2.** (Explicit exploration bound) With probability at least $1-\delta$, the explicit exploration runtime of Algorithm 1 is $O\!\left(\frac{h|S||A|}{\epsilon(1-\gamma)\Pr[A_K]}\ln\frac{|S||A|}{\delta}\right) = O\!\left(\frac{h|S||A|}{\epsilon^2(1-\gamma)^2}\ln\frac{|S||A|}{\delta}\right)$, where $A_K$ is the escape event defined in the proof of Theorem 1.

If we assume $\Pr[A_K]$ to stay larger than a fixed constant, or to be very small ($\le \frac{\epsilon(1-\gamma)}{3R_{\max}}$, so that $\Pr[A_K]$ does not appear in Corollary 2, as shown in the corresponding case analysis for Theorem 1), the explicit exploration runtime can be reduced to $O\!\left(\frac{h|S||A|}{\epsilon(1-\gamma)}\ln\frac{|S||A|}{\delta}\right)$. Intuitively, this happens when the given MDP does not have an initially unknown transition whose probability is low (yet not too low) and whose consequence is high. Naturally, such an MDP is difficult to learn, as reflected in Corollary 2.

### Experimental Example
We compare the proposed algorithm with MBIE (Strehl and Littman 2008a), variance-based exploration (VBE) (Sorg, Singh, and Lewis 2010), Bayesian Exploration Bonus (BEB) (Kolter and Ng 2009), and BOLT (Araya-López, Thomas, and Buffet 2012). These algorithms were designed to be PAC-MDP or near-Bayes optimal, but have been used with parameter settings that render them neither PAC-MDP nor near-Bayes optimal. In contrast to the experiments in previous research, we present results with $\epsilon$ set to several theoretically meaningful values² as well as one theoretically non-meaningful value to illustrate its properties³. Because our algorithm is deterministic, with no sampling and no assumptions on the input distribution, we do not compare it with algorithms that use sampling or rely heavily on knowledge of the input distribution.

² MBIE is PAC-MDP with the parameters $\delta$ and $\epsilon$. VBE is PAC-MDP in the assumed (prior) input distribution with the parameter $\delta$. BEB and BOLT are near-Bayes optimal algorithms whose parameters $\beta$ and $\eta$ are fully specified by their analyses, namely $\beta = 2H^2$ and $\eta = H$. Following Araya-López, Thomas, and Buffet (2012), we set $\beta$ and $\eta$ using the $\epsilon'$-approximated horizon $H \approx \log_\gamma(\epsilon'(1-\gamma)) = 148$. We use the sample mean estimator for the PAC-MDP and PAC-RMDP($h$) algorithms, and an independent Dirichlet model for the near-Bayes optimal algorithms.

³ We can interpolate their qualitative behaviors with values of $\epsilon$ other than those presented here, because the principle behind our results is that small values of $\epsilon$ cause over-exploration due to the focus on near-optimality.

We consider a five-state chain problem (Strens 2000), which is a standard toy problem in the literature. In this problem, the optimal policy is to move toward the state farthest from the initial state, but the reward structure explicitly encourages an exploitation agent, or even an $\epsilon$-greedy agent, to remain in the initial state. We use a discount factor of $\gamma = 0.95$ and a convergence criterion for the value iteration of $\epsilon' = 0.01$.

Figure 1: Average total reward per time step for the chain problem. The algorithm parameters are shown as PAC-RMDP($h$), MBIE($\epsilon$, $\delta$), VBE($\delta$), BEB($\beta$), and BOLT($\eta$).

Figure 1 shows the numerical results in terms of the average reward per time step (averaged over 1000 runs). As can be seen from the figure, the proposed algorithm performed better. MBIE and VBE work reasonably well if we discard the theoretical guarantee. As the maximum reward is $R_{\max} = 1$, the upper bound on the value function is $\sum_{i=0}^{\infty}\gamma^i R_{\max} = \frac{1}{1-\gamma}R_{\max} = 20$. Thus, $\epsilon$-closeness does not yield any useful information when $\epsilon \ge 20$. A similar problem was noted by Kolter and Ng (2009) and Araya-López, Thomas, and Buffet (2012). In the appendix of the extended version, we present the results for a problem with low-probability, high-consequence transitions, in which PAC-RMDP(8) produced the best result.

## Continuous Domain

In this section, we consider the problem of a continuous state space and a discrete action space. The transition function is possibly nonlinear, but can be linearly parameterized as $s^{(i)}_{t+1} = \theta^T_{(i)}\Phi_{(i)}(s_t, a_t) + \zeta^{(i)}_t$, where the state $s_t \in S \subseteq \mathbb{R}^{n_S}$ is represented by $n_S$ state parameters ($s^{(i)} \in \mathbb{R}$ with $i \in \{1, \ldots, n_S\}$), and $a_t \in A$ is the action at time $t$. We assume that the basis functions $\Phi_{(i)} : S \times A \to \mathbb{R}^{n_i}$ are known, but the weights $\theta_{(i)} \in \mathbb{R}^{n_i}$ are unknown. The noise term $\zeta^{(i)}_t \in \mathbb{R}$ is given by $\zeta^{(i)}_t \sim N(0, \sigma^2_{(i)})$. In other words, $P(s^{(i)}_{t+1} \mid s_t, a_t) = N(\theta^T_{(i)}\Phi_{(i)}(s_t, a_t), \sigma^2_{(i)})$. For brevity, we focus on unknown transition dynamics, but our method is directly applicable to unknown reward functions if the reward is represented in the above form. This problem is a slightly generalized version of those considered by Abbeel and Ng (2005), Strehl and Littman (2008b), and Li et al. (2011).
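Under this linear parameterization, each $\hat\theta_{(i)}$ can be fit by ordinary least squares on the observed feature and next-state pairs. The sketch below shows one way to do this; the function name, data layout, and the use of `np.linalg.lstsq` are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def fit_linear_dynamics(Phi_rows, next_states):
    """Least-squares estimate of theta_(i) for each state component.

    Phi_rows    : list over components i of arrays X_{t,i} of shape (t, n_i),
                  whose rows are Phi_(i)(s_k, a_k)^T for k = 1..t.
    next_states : array of shape (t, n_S) whose i-th column holds s^{(i)}_{k+1}.
    Returns a list of weight vectors theta_hat_(i), one per component.
    """
    theta_hat = []
    for i, X in enumerate(Phi_rows):
        y = next_states[:, i]
        # Solve min_theta ||X theta - y||_2 for the i-th component.
        theta_i, *_ = np.linalg.lstsq(X, y, rcond=None)
        theta_hat.append(theta_i)
    return theta_hat
```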
### Algorithm

We first define the variables used in our algorithm, and then explain how the algorithm works. Let $\hat\theta_{(i)}$ be the vector of model parameters for the $i$th state component. Let $X_{t,i} \in \mathbb{R}^{t \times n_i}$ consist of the $t$ input vectors $\Phi^T_{(i)}(s, a) \in \mathbb{R}^{1 \times n_i}$ observed up to time $t$. We then denote the eigenvalue decomposition of the input matrix as $X^T_{t,i}X_{t,i} = U_{t,i}\,D_{t,i}(\lambda_{(1)}, \ldots, \lambda_{(n)})\,U^T_{t,i}$, where $D_{t,i}(\lambda_{(1)}, \ldots, \lambda_{(n)}) \in \mathbb{R}^{n_i \times n_i}$ represents a diagonal matrix. For simplicity of notation, we arrange the eigenvectors and eigenvalues such that the diagonal elements satisfy $\lambda_{(1)}, \ldots, \lambda_{(j)} \ge 1$ and $\lambda_{(j+1)}, \ldots, \lambda_{(n)} < 1$ for some $0 \le j \le n$. We now define the main variables used in our algorithm: $z_{t,i} := (X^T_{t,i}X_{t,i})^{-1}$, $g_{t,i} := U_{t,i}\,D_{t,i}\!\left(\frac{1}{\lambda_{(1)}}, \ldots, \frac{1}{\lambda_{(j)}}, 0, \ldots, 0\right)U^T_{t,i}$, and $w_{t,i} := U_{t,i}\,D_{t,i}(0, \ldots, 0, 1_{(j+1)}, \ldots, 1_{(n)})\,U^T_{t,i}$.

Let $\Delta_{(i)} \ge \sup_{s,a}|(\theta_{(i)} - \hat\theta_{(i)})^T\Phi_{(i)}(s, a)|$ be an upper bound on the model error. Define $\varsigma(M) = \sqrt{2\ln(\pi^2 M^2 n_S h/(6\delta))}$, where $M$ is the number of calls to $I_h$ (i.e., the number of times $\tilde r$ is computed in Algorithm 2). With the above variables, we define the $h$-reachable model interval $I_h$ as
$$\frac{I_h(\Phi_{(i)}(s, a), X_{t,i})}{h\left(\Delta_{(i)} + \varsigma(M)\,\sigma_{(i)}\right)} = \left|\Phi^T_{(i)}(s, a)\,g_{t,i}\,\Phi_{(i)}(s, a)\right| + \sqrt{\Phi^T_{(i)}(s, a)\,z_{t,i}\,w_{t,i}\,\Phi_{(i)}(s, a)}.$$

The $h$-reachable model interval is a function that maps a new state-action pair considered in the planning phase, $\Phi_{(i)}(s, a)$, and the agent's experience, $X_{t,i}$, to an upper bound on the error of the model prediction. We define the column vector consisting of the $n_S$ $h$-reachable intervals as $I_h(s, a, X_t) = [I_h(\Phi_{(1)}(s, a), X_{t,1}), \ldots, I_h(\Phi_{(n_S)}(s, a), X_{t,n_S})]^T$. We also leverage the continuity of the internal value function $\tilde V$ to avoid an expensive computation (translating the error in the model to the error in the value).

**Assumption 1.** (Continuity) There exists $L \in \mathbb{R}$ such that, for all $s, s' \in S$, $|\tilde V(s) - \tilde V(s')| \le L\|s - s'\|$.

**Algorithm 2** Linear PAC-RMDP
Parameters: $h$, $\delta$
Optional: $\Delta_{(i)}$, $L$
Initialize: $\hat\theta$, $\Delta_{(i)}$, and $L$
For time step $t = 1, 2, 3, \ldots$:
  Action: take an action based on $\hat p(s' \mid s, a) = N(\hat\theta^T\Phi(s, a), \sigma^2 I)$ and $\tilde r(s, a, s') = R(s, a, s') + L\,\|I_h(s, a, X_{t-1})\|$
  Observation: save the input-output pair $(s_{t+1}, \Phi_t(s_t, a_t))$
  Estimate: estimate $\hat\theta_{(i)}$, $\Delta_{(i)}$ (if not given), and $L$ (if not given)

We set the degree of optimism for a state-action pair to be proportional to the uncertainty of the associated model. Using the $h$-reachable model interval, this can be achieved by simply adding a reward bonus that is proportional to the interval. The pseudocode for this is shown in Algorithm 2. Following previous work (Strehl and Littman 2008b; Li et al. 2011), we assume access to an exact planning algorithm. This assumption could be relaxed by using a planning method that provides an error bound.
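For concreteness, the exploration bonus $L\,\|I_h(s, a, X_{t-1})\|$ used in Algorithm 2 can be assembled from the quantities defined above as in the sketch below. This is a hedged illustration, not the paper's code: it follows the interval formula as reconstructed here, treats $\Delta_{(i)}$ and $\varsigma(M)$ as given constants, assumes $X^T_{t,i}X_{t,i}$ is invertible, and uses the Euclidean norm.

```python
import numpy as np

def h_reachable_interval(phi, X, h, delta_i, sigma_i, varsigma):
    """Compute I_h(Phi_(i)(s,a), X_{t,i}) for one state component.

    phi      : feature vector Phi_(i)(s, a), shape (n_i,)
    X        : past inputs X_{t,i}, shape (t, n_i); X^T X assumed invertible
    h        : PAC-RMDP parameter
    delta_i  : upper bound Delta_(i) on the current model error
    sigma_i  : noise standard deviation sigma_(i)
    varsigma : confidence factor varsigma(M)
    """
    A = X.T @ X
    lam, U = np.linalg.eigh(A)                  # eigen-decomposition of X^T X
    big = lam >= 1.0                            # lambda_(1..j) >= 1 vs. the rest
    g = (U[:, big] / lam[big]) @ U[:, big].T    # U D(1/lam_(1..j), 0, ..., 0) U^T
    w = U[:, ~big] @ U[:, ~big].T               # U D(0, ..., 0, 1_(j+1..n)) U^T
    z = np.linalg.inv(A)                        # (X^T X)^{-1}
    quad_g = abs(phi @ g @ phi)
    quad_zw = max(phi @ z @ w @ phi, 0.0)       # clipped for numerical safety
    return h * (delta_i + varsigma * sigma_i) * (quad_g + np.sqrt(quad_zw))

def exploration_bonus(phis, Xs, h, deltas, sigmas, varsigma, L):
    """Reward bonus L * ||I_h(s, a, X_t)|| over all n_S state components."""
    I = np.array([h_reachable_interval(phi, X, h, d, s, varsigma)
                  for phi, X, d, s in zip(phis, Xs, deltas, sigmas)])
    return L * np.linalg.norm(I)
```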
We assume that Algorithm 2 is used with least-squares estimation, which determines $L$. We fix the distance function as $d(\hat P(\cdot \mid s, a), P(\cdot \mid s, a)) = |\mathbb{E}_{s' \sim \hat P(\cdot \mid s, a)}[s'] - \mathbb{E}_{s' \sim P(\cdot \mid s, a)}[s']|$ (since the unknown aspect is the mean, this choice makes sense). In the following, we use $n$ to represent the average value of $\{n_{(1)}, \ldots, n_{(n_S)}\}$. The proofs are given in the appendix of the extended version.

**Lemma 3.** (Sample complexity of PAC-MDP) For our problem setting, the PAC-MDP algorithm proposed by Strehl and Littman (2008b) and Li et al. (2011) has sample complexity $O\!\left(\frac{n_S^2 n^2}{\epsilon^5(1-\gamma)^{10}}\right)$.

**Theorem 2.** (PAC-RMDP) Let $A_t$ be the policy of Algorithm 2. Let $z = \max\!\left(h^2\ln\frac{m^2 n_S h}{\delta},\ L^2 n_S n \ln^2\frac{m}{\delta}\right)$. Then, for all $\epsilon > 0$, for all $\delta \in (0, 1)$, and for all $h \ge 0$: 1) for all but at most $m' = O\!\left(\frac{z\,L^2 n_S n \ln^2\frac{m}{\delta}}{\epsilon^3(1-\gamma)^2}\ln^2\frac{n_S}{\delta}\right)$ time steps (with $m \le m'$), $V^{A_t}(s_t) \ge V^{*d}_{L,t,h}(s_t) - \epsilon$ with probability at least $1-\delta$; and 2) there exists $h^*(\epsilon, \delta) = O(\mathcal{P}(1/\epsilon, 1/\delta, 1/(1-\gamma), |\text{MDP}|))$ such that $|V^*(s_t) - V^{*d}_{L,t,h^*(\epsilon,\delta)}(s_t)| \le \epsilon$ with probability at least $1-\delta$.

**Corollary 3.** (Anytime error bound) With probability at least $1-\delta$, if $h^2\ln\frac{m^2 n_S h}{\delta} \le L^2 n_S n \ln^2\frac{m}{\delta}$, then $\epsilon_{t,h} = O\!\left(\left(\frac{L^4 n_S^2 n^2 \ln^2\frac{m}{\delta}}{t(1-\gamma)}\ln^3\frac{n_S}{\delta}\right)^{1/3}\right)$; otherwise, $\epsilon_{t,h} = O\!\left(\left(\frac{h^2 L^2 n_S n \ln^2\frac{m}{\delta}}{t(1-\gamma)}\ln^2\frac{n_S}{\delta}\right)^{1/3}\right)$. The anytime $T$-step average loss is equal to $\frac{1}{T}\sum_{t=1}^{T}(1-\gamma^{T+1-t})\,\epsilon_{t,h}$.

**Corollary 4.** (Explicit exploration runtime) With probability at least $1-\delta$, the explicit exploration runtime of Algorithm 2 is $O\!\left(\frac{h^2 L^2 n_S n \ln\frac{m}{\delta}}{\epsilon^2\Pr[A_K]}\ln^2\frac{n_S}{\delta}\right) = O\!\left(\frac{h^2 L^2 n_S n \ln\frac{m}{\delta}}{\epsilon^3(1-\gamma)}\ln^2\frac{n_S}{\delta}\right)$, where $A_K$ is the escape event defined in the proof of Theorem 2.

### Experimental Examples

We consider two examples: the mountain car problem (Sutton and Barto 1998), which is a standard toy problem in the literature, and the HIV problem (Ernst et al. 2006), which originates from a real-world problem. For both examples, we compare the proposed algorithm with a directly related PAC-MDP algorithm (Strehl and Littman 2008b; Li et al. 2011). For the PAC-MDP algorithm, we present the results with $\epsilon$ set to several theoretically meaningful values and one theoretically non-meaningful value to illustrate its properties⁴. We used $\delta = 0.9$ for the PAC-MDP and PAC-RMDP algorithms⁵. The $\epsilon$-greedy algorithm is executed with $\epsilon = 0.1$. In the planning phase, $L$ is estimated as $\hat L = \max_{s,s' \in \Omega}|\tilde V^A(s) - \tilde V^A(s')|/\|s - s'\|$, where $\Omega$ is the set of states that are visited in the planning phase (i.e., fitted value iteration and a greedy roll-out method). For both problems, more detailed descriptions of the experimental settings are available in the appendix of the extended version.

⁴ See footnote 3 on the consideration of different values of $\epsilon$.

⁵ We considered $\delta \in \{0.5, 0.8, 0.9, 0.95\}$, but there was no change in any qualitative behavior of interest in our discussion.

#### Mountain Car

In the mountain car problem, the reward is negative everywhere except at the goal. To reach the goal, the agent must first travel far away from it, and must explore the world to learn this mechanism. Each episode consists of 2000 steps, and we conduct simulations for 100 episodes.

Figure 2: Total reward per episode for the mountain car problem with PAC-RMDP($h$) and PAC-MDP($\epsilon$).

The numerical results are shown in Figure 2. As in the discrete case, we can see that the PAC-RMDP($h$) algorithm worked well. The best performance, in terms of the total reward, was achieved by PAC-RMDP(10). Since this problem requires a number of consecutive exploration steps, the random exploration employed by the $\epsilon$-greedy algorithm did not allow the agent to reach the goal. As a result of exploration and the randomness in the environment, the PAC-MDP algorithm reached the goal several times, but kept exploring the environment to ensure near-optimality. From Figure 2, we can see that the PAC-MDP algorithm quickly converges to good behavior if we discard the theoretical guarantee (the difference between the values in the optimal value function had an upper bound of 120, and the total reward had an upper bound of 2000; hence, $\epsilon > 2000$ does not yield a useful theoretical guarantee).
#### Simulated HIV Treatment

This problem is described by a set of six ordinary differential equations (Ernst et al. 2006). An action corresponds to whether the agent administers each of two treatments (RTIs and PIs) to patients (thus, there are four actions). Two types of exploration are required: one to learn the effect of using treatments on the viruses, and another to learn the effect of not using treatments on the immune system. Learning the former is necessary to reduce the population of viruses, but the latter is required to prevent the overuse of treatments, which weakens the immune system. Each episode consists of 1000 steps (i.e., days), and we conduct simulations for 30 episodes.

Figure 3: Total reward per episode for the HIV problem with PAC-RMDP($h$) and PAC-MDP($\epsilon$).

As shown in Figure 3, the PAC-MDP algorithm worked reasonably well with $\epsilon = 3010$. However, the best total reward did not exceed 3010, and so the PAC-MDP guarantee with $\epsilon = 3010$ does not seem to be useful. The $\epsilon$-greedy algorithm did not work well, as this example required sequential exploration at certain periods to learn the effects of the treatments.

## Conclusion

In this paper, we have proposed the PAC-RMDP framework to bridge the gap between theoretical objectives and practical needs. Although the PAC-RMDP($h$) algorithms worked well in our experimental examples with small $h$, it is possible to devise a problem in which the PAC-RMDP algorithm should be used with large $h$. In extreme cases, the algorithm would reduce to PAC-MDP. Thus, the adjustable theoretical guarantee of PAC-RMDP($h$) via the concept of reachability seems to be a reasonable objective. Whereas the development of algorithms with traditional objectives (PAC-MDP or regret bounds) requires the consideration of confidence intervals, PAC-RMDP($h$) concerns a set of $h$-reachable models. For a flexible model, the derivation of a confidence interval would be a difficult task, but a set of $h$-reachable models can simply be computed (or approximated) via lookahead using the model update rule. Thus, future work includes the derivation of a PAC-RMDP algorithm with a more flexible and/or structured model.

## Acknowledgment

The author would like to thank Prof. Michael Littman, Prof. Leslie Kaelbling, and Prof. Tomás Lozano-Pérez for their thoughtful comments and suggestions. We gratefully acknowledge support from NSF grant 1420927, from ONR grant N00014-14-1-0486, and from ARO grant W911NF1410433. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

## References

Abbeel, P., and Ng, A. Y. 2005. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning (ICML).

Araya-López, M.; Thomas, V.; and Buffet, O. 2012. Near-optimal BRL using optimistic local transitions. In Proceedings of the 29th International Conference on Machine Learning (ICML).

Bernstein, A., and Shimkin, N. 2010. Adaptive-resolution reinforcement learning with polynomial exploration in deterministic domains. Machine Learning 81(3):359-397.

Brunskill, E. 2012. Bayes-optimal reinforcement learning for discrete uncertainty domains. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Ernst, D.; Stan, G.-B.; Goncalves, J.; and Wehenkel, L. 2006. Clinical data based optimal STI strategies for HIV: a reinforcement learning approach. In Proceedings of the 45th IEEE Conference on Decision and Control.

Fiechter, C.-N. 1994. Efficient reinforcement learning. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory (COLT).

Jaksch, T.; Ortner, R.; and Auer, P. 2010. Near-optimal regret bounds for reinforcement learning. The Journal of Machine Learning Research (JMLR) 11:1563-1600.

Kawaguchi, K., and Araya, M. 2013. A greedy approximation of Bayesian reinforcement learning with probably optimistic transition model. In Proceedings of the AAMAS 2013 Workshop on Adaptive Learning Agents, 53-60.

Kearns, M., and Singh, S. 1999. Finite-sample convergence rates for Q-learning and indirect algorithms. In Proceedings of Advances in Neural Information Processing Systems (NIPS).

Kearns, M., and Singh, S. 2002. Near-optimal reinforcement learning in polynomial time. Machine Learning 49(2-3):209-232.

Kolter, J. Z., and Ng, A. Y. 2009. Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML).

Li, L.; Littman, M. L.; Walsh, T. J.; and Strehl, A. L. 2011. Knows what it knows: a framework for self-aware learning. Machine Learning 82(3):399-443.

Li, L. 2009. A unifying framework for computational reinforcement learning theory. Ph.D. Dissertation, Rutgers, The State University of New Jersey.

Li, L. 2012. Sample complexity bounds of exploration. In Reinforcement Learning. Springer. 175-204.

Pazis, J., and Parr, R. 2013. PAC optimal exploration in continuous space Markov decision processes. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI).

Puterman, M. L. 2004. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons.

Russell, S. J., and Subramanian, D. 1995. Provably bounded-optimal agents. Journal of Artificial Intelligence Research (JAIR) 575-609.

Simon, H. A. 1982. Models of Bounded Rationality, Volumes 1 and 2. MIT Press.

Sorg, J.; Singh, S.; and Lewis, R. L. 2010. Variance-based rewards for approximate Bayesian reinforcement learning. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI).

Strehl, A. L., and Littman, M. L. 2008a. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences 74(8):1309-1331.

Strehl, A. L., and Littman, M. L. 2008b. Online linear regression and its application to model-based reinforcement learning. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 1417-1424.

Strehl, A. L.; Li, L.; and Littman, M. L. 2006. Incremental model-based learners with formal learning-time guarantees. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI).

Strehl, A. L. 2007. Probably approximately correct (PAC) exploration in reinforcement learning. Ph.D. Dissertation, Rutgers University.

Strens, M. 2000. A Bayesian framework for reinforcement learning. In Proceedings of the 16th International Conference on Machine Learning (ICML).

Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. MIT Press, Cambridge.