# Provably Efficient Q-Learning with Low Switching Cost

Yu Bai (Stanford University, yub@stanford.edu); Tengyang Xie, Nan Jiang (UIUC, {tx10, nanjiang}@illinois.edu); Yu-Xiang Wang (UC Santa Barbara, yuxiangw@cs.ucsb.edu)

Abstract

We take initial steps in studying PAC-MDP algorithms with limited adaptivity, that is, algorithms that change their exploration policy as infrequently as possible during regret minimization. This is motivated by the difficulty of running fully adaptive algorithms in real-world applications (such as medical domains), and we propose to quantify adaptivity using the notion of local switching cost. Our main contribution, Q-Learning with UCB2 exploration, is a model-free algorithm for $H$-step episodic MDPs that achieves sublinear regret and whose local switching cost in $K$ episodes is $O(H^3 SA \log K)$, and we provide a lower bound of $\Omega(HSA)$ on the local switching cost for any no-regret algorithm. Our algorithm can be naturally adapted to the concurrent setting [13], which yields nontrivial results that improve upon prior work in certain aspects.

1 Introduction

This paper is concerned with reinforcement learning (RL) under limited adaptivity or low switching cost, a setting in which the agent is allowed to act in the environment for a long period but is constrained to switch its policy at most $N$ times. A small switching cost $N$ restricts the agent from frequently adjusting its exploration strategy based on feedback from the environment.

There are strong practical motivations for developing RL algorithms under limited adaptivity. The setting of restricted policy switching captures various real-world scenarios where deploying a new policy comes at a cost. For example, in medical applications where actions correspond to treatments, it is often unrealistic to execute fully adaptive RL algorithms; instead, one can only run a fixed policy approved by domain experts to collect data, and a separate approval process is required every time one would like to switch to a new policy [19, 2, 3]. In personalized recommendation [25], it is computationally impractical to adjust the policy online based on instantaneous data (for instance, consider online video recommendation, where millions of users generate feedback every second); a more common practice is to aggregate data over a long period before deploying a new policy. When RL is used for compiler optimization [4], hardware placement [20], or learning to optimize databases [18], it is often desirable to limit the frequency of policy changes, since it is costly to recompile the code, run profiling, reconfigure an FPGA device, or restructure a deployed relational database. The problem is even more prominent in RL-guided new material discovery, as it takes time to fabricate the materials and set up the experiments [24, 21]. In many of these applications, adaptivity turns out to be the real bottleneck.

Understanding limited adaptivity RL is also important from a theoretical perspective. First, algorithms with low adaptivity (a.k.a. batched algorithms) that are as effective as their fully sequential counterparts have been established in bandits [23, 12], online learning [8], and optimization [11], and it would be interesting to extend such understanding to RL.
Second, algorithms with few policy switches are naturally easy to parallelize, as there is no need for parallel agents to communicate if they just execute the same policy. Third, limited adaptivity is closely related to off-policy RL¹ and offers a relaxation that is less challenging than the pure off-policy setting. We would also like to note that limited adaptivity can be viewed as a constraint for designing RL algorithms, which is conceptually similar to those in constrained MDPs [9, 26].

In this paper, we take initial steps towards studying theoretical aspects of limited adaptivity RL by designing low-regret algorithms with limited adaptivity. We focus on model-free algorithms, in particular Q-Learning, which was recently shown by Jin et al. [16] to achieve an $\tilde{O}(\sqrt{\mathrm{poly}(H)\,SAT})$ regret bound with UCB exploration and a careful stepsize choice. Our goal is to design Q-Learning type algorithms that achieve similar regret bounds with a bounded switching cost. The main contributions of this paper are summarized as follows:

- We propose a notion of local switching cost that captures the adaptivity of an RL algorithm in episodic MDPs (Section 2). Algorithms with lower local switching cost make fewer switches in their deployed policies.
- Building on insights from the UCB2 algorithm for multi-armed bandits [5] (Section 3), we propose our main algorithms, Q-Learning with UCB2-{Hoeffding, Bernstein} exploration. We prove that these two algorithms achieve $\tilde{O}(\sqrt{H^{\{4,3\}}SAT})$ regret (respectively) and $O(H^3 SA \log(K/A))$ local switching cost (Section 4). The regret matches that of their vanilla counterparts in [16], but the switching cost is only logarithmic in the number of episodes.
- We show how our low switching cost algorithms can be applied in the concurrent RL setting [13], in which multiple agents act in parallel (Section 5). The parallelized versions of our algorithms with UCB2 exploration give rise to Concurrent Q-Learning algorithms, which achieve a nearly linear speedup in execution time and compare favorably against existing concurrent algorithms in sample complexity for exploration.
- We show a simple $\Omega(HSA)$ lower bound on the switching cost for any sublinear-regret algorithm, which is at most an $O(H^2 \log(K/A))$ factor away from the upper bound (Section 7).

1.1 Prior work

Low-regret RL. Sample-efficient RL has been studied extensively since the classical work of Kearns and Singh [17] and Brafman and Tennenholtz [7], with a focus on obtaining a near-optimal policy in polynomial time, i.e. PAC guarantees. A subsequent line of work initiated the study of regret in RL and provided algorithms that achieve regret $\tilde{O}(\sqrt{\mathrm{poly}(H,S,A)\,T})$ [15, 22, 1]. In our episodic MDP setting, the information-theoretic lower bound on the regret is $\Omega(\sqrt{H^2 SAT})$, which is matched in recent work by the UCBVI [6] and ORLC [10] algorithms. On the other hand, while all the above low-regret algorithms are essentially model-based, the recent work of [16] shows that model-free algorithms such as Q-Learning are able to achieve $\tilde{O}(\sqrt{H^{\{4,3\}}SAT})$ regret, which is only $O(\sqrt{H})$ worse than the lower bound.

Low switching cost / batched algorithms. Auer et al. [5] propose UCB2 for bandit problems, which achieves the same regret bound as UCB but has switching cost only $O(\log T)$ instead of the naive $O(T)$. Cesa-Bianchi et al.
[8] study the switching cost in online learning in both the adversarial and stochastic settings, and design an algorithm for stochastic bandits that achieves optimal regret and $O(\log\log T)$ switching cost. Learning algorithms whose switching cost is bounded by a fixed $O(1)$ constant are often referred to as batched algorithms. Minimax rates for batched algorithms have been established in various problems such as bandits [23, 12] and convex optimization [11]. In all these scenarios, minimax optimal $M$-batch algorithms are obtained for all $M$, and their rate matches that of fully adaptive algorithms once $M = O(\log\log T)$.

¹In particular, $N = 0$ corresponds to off-policy RL, where the algorithm can only choose one data collection policy [14].

2 Problem setup

In this paper, we consider undiscounted episodic tabular MDPs of the form $(H, \mathcal{S}, P, \mathcal{A}, r)$. The MDP has horizon $H$ with trajectories of the form $(x_1, a_1, \ldots, x_H, a_H, x_{H+1})$, where $x_h \in \mathcal{S}$ and $a_h \in \mathcal{A}$. The state space $\mathcal{S}$ and action space $\mathcal{A}$ are discrete with $|\mathcal{S}| = S$ and $|\mathcal{A}| = A$. The initial state $x_1$ can be either adversarial (chosen by an adversary with access to our algorithm) or stochastic, specified by some distribution $P_0(x_1)$. For any $(h, x_h, a_h) \in [H] \times \mathcal{S} \times \mathcal{A}$, the transition probability is denoted by $P_h(x_{h+1} \,|\, x_h, a_h)$. The reward is denoted by $r_h(x_h, a_h) \in [0,1]$, which we assume to be deterministic². We assume in addition that $r_{H+1}(x) = 0$ for all $x$, so that the last state $x_{H+1}$ is effectively an (uninformative) absorbing state.

A deterministic policy $\pi$ consists of $H$ sub-policies $\pi_h(\cdot): \mathcal{S} \to \mathcal{A}$. For any deterministic policy $\pi$, let $V^\pi_h(\cdot): \mathcal{S} \to \mathbb{R}$ and $Q^\pi_h(\cdot,\cdot): \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ denote its value function and state-action value function at the $h$-th step, respectively. Let $\pi^\star$ denote an optimal policy, and let $V^\star_h = V^{\pi^\star}_h$ and $Q^\star_h = Q^{\pi^\star}_h$ denote the optimal V and Q functions for all $h$. As a convenient shorthand, we denote $[P_h V_{h+1}](x, a) := \mathbb{E}_{x' \sim P_h(\cdot \mid x, a)}[V_{h+1}(x')]$, and also use $[\hat{P}_h V_{h+1}](x_h, a_h) := V_{h+1}(x_{h+1})$ in the proofs to denote the observed transition. Unless otherwise specified, we focus on deterministic policies in this paper, which is without loss of generality as there exists at least one deterministic policy $\pi^\star$ that is optimal.

Regret. We focus on regret for measuring the performance of RL algorithms. Let $K$ be the number of episodes that the agent can play (so that the total number of steps is $T := KH$). The regret of an algorithm is defined as
$$\mathrm{Regret}(K) := \sum_{k=1}^{K} \left[ V^\star_1(x^k_1) - V^{\pi_k}_1(x^k_1) \right],$$
where $\pi_k$ is the policy the algorithm employs before episode $k$ starts, and $V^\star_1$ is the optimal value function for the entire episode.

Miscellaneous notation. We use standard Big-Oh notation in this paper: $A_n = O(B_n)$ means that there exists an absolute constant $C > 0$ such that $A_n \le C B_n$ (similarly, $A_n = \Omega(B_n)$ for $A_n \ge C B_n$). $A_n = \tilde{O}(B_n)$ means that $A_n \le C_n B_n$ where $C_n$ depends at most poly-logarithmically on all the problem parameters.

2.1 Measuring adaptivity through local switching cost

To quantify the adaptivity of RL algorithms, we consider the following notion of local switching cost.

Definition 2.1. The local switching cost (henceforth also "switching cost") between any pair of policies $(\pi, \pi')$ is defined as the number of $(h, x)$ pairs on which $\pi$ and $\pi'$ differ:
$$n_{\mathrm{switch}}(\pi, \pi') := \left| \left\{ (h, x) \in [H] \times \mathcal{S} : \pi_h(x) \neq \pi'_h(x) \right\} \right|.$$
For an RL algorithm that employs policies $(\pi_1, \ldots, \pi_K)$, its local switching cost is defined as
$$N_{\mathrm{switch}} := \sum_{k=1}^{K-1} n_{\mathrm{switch}}(\pi_k, \pi_{k+1}).$$
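To make Definition 2.1 concrete, here is a minimal Python sketch that computes $n_{\mathrm{switch}}$ and $N_{\mathrm{switch}}$ for a sequence of deterministic tabular policies. The encoding of a policy as an $H \times S$ integer array of actions is an illustrative assumption, not something prescribed by the paper.

```python
import numpy as np

def local_switch(pi, pi_next):
    """n_switch(pi, pi'): number of (h, x) pairs where two deterministic policies
    disagree. Policies are encoded as H x S integer arrays of actions (assumed encoding)."""
    return int(np.sum(np.asarray(pi) != np.asarray(pi_next)))

def total_local_switching_cost(policies):
    """N_switch = sum_k n_switch(pi_k, pi_{k+1}) over the deployed policies (pi_1, ..., pi_K)."""
    return sum(local_switch(policies[k], policies[k + 1])
               for k in range(len(policies) - 1))

# Example: H = 2 steps, S = 3 states; changing one action at (h, x) = (1, 2) costs 1.
pi1 = np.array([[0, 1, 0],
                [1, 1, 0]])
pi2 = pi1.copy()
pi2[1, 2] = 1
print(local_switch(pi1, pi2))                       # 1
print(total_local_switching_cost([pi1, pi2, pi2]))  # 1  (pi2 -> pi2 adds nothing)
```

Redeploying an identical policy contributes zero, which is exactly what makes algorithms that keep their policy fixed for long stretches cheap under this measure.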
Note that (1) $N_{\mathrm{switch}}$ is in general a random variable, as $\pi_k$ can depend on the outcomes in the MDP; (2) we have the trivial bounds $n_{\mathrm{switch}}(\pi, \pi') \le HS$ for any $(\pi, \pi')$ and $N_{\mathrm{switch}}(\mathcal{A}) \le HS(K-1)$ for any algorithm $\mathcal{A}$.³

Remark. The local switching cost naturally extends the notion of switching cost in online learning [8] and is suitable in scenarios where the cost of deploying a new policy scales with the portion of $(h, x)$ pairs on which the action $\pi_h(x)$ is changed.

²Our results can be straightforwardly extended to the case of stochastic rewards.
³To avoid confusion, we also note that the local switching cost does not measure the change of the sub-policy $\pi_h$ between timesteps $h$ and $h+1$ (which is in any case needed due to potential non-stationarity), but rather the change of the entire policy $\pi_k = \{\pi^k_h\}_h$ between episodes $k$ and $k+1$.

A closely related notion of adaptivity is the global switching cost, which simply measures how many times the algorithm switches its entire policy:
$$N^{\mathrm{gl}}_{\mathrm{switch}} := \sum_{k=1}^{K-1} \mathbf{1}\{\pi_k \neq \pi_{k+1}\}.$$
As $\pi_k \neq \pi_{k+1}$ implies $n_{\mathrm{switch}}(\pi_k, \pi_{k+1}) \ge 1$, we have the trivial bound $N^{\mathrm{gl}}_{\mathrm{switch}} \le N_{\mathrm{switch}}$. However, the global switching cost can be substantially smaller for algorithms that tend to change the policy entirely rather than locally. In this paper, we focus on bounding $N_{\mathrm{switch}}$, and leave the task of obtaining tighter bounds on $N^{\mathrm{gl}}_{\mathrm{switch}}$ as future work.

3 UCB2 for multi-armed bandits

To gain intuition about the switching cost, we briefly review the UCB2 algorithm [5] for multi-armed bandit problems, which achieves the same regret bound as the original UCB but has a substantially lower switching cost. The multi-armed bandit problem can be viewed as an RL problem with $H = 1$, $S = 1$, so that the agent only needs to play one action $a \in \mathcal{A}$ and observe the (random) reward $r(a) \in [0,1]$. The distributions of the $r(a)$'s are unknown to the agent, and the goal is to achieve low regret.

The UCB2 algorithm is a variant of the celebrated UCB (Upper Confidence Bound) algorithm for bandits. UCB2 also maintains upper confidence bounds on the true means $\mu_1, \ldots, \mu_A$, but plays each arm multiple times rather than just once when it is found to maximize the upper confidence bound. Specifically, when an arm is found to maximize the UCB for the $r$-th time, UCB2 plays it $\tau(r) - \tau(r-1)$ times, where $\tau(r) = (1+\eta)^r$ for $r = 0, 1, 2, \ldots$ and some parameter $\eta \in (0,1)$ to be determined.⁴ The full UCB2 algorithm is presented in Algorithm 1.

Algorithm 1 UCB2 for multi-armed bandits
input Parameter $\eta \in (0,1)$.
Initialize: $r_j = 0$ for $j = 1, \ldots, A$. Play each arm once. Set $t \leftarrow 0$ and $T \leftarrow T - A$.
while $t \le T$ do
  Select the arm $j$ that maximizes $\bar{r}_j + a_{r_j}$, where $\bar{r}_j$ is the average reward obtained from arm $j$ and $a_r = O(\sqrt{\log T / \tau(r)})$ (with some specific choice).
  Play arm $j$ exactly $\tau(r_j + 1) - \tau(r_j)$ times.
  Set $t \leftarrow t + \tau(r_j + 1) - \tau(r_j)$ and $r_j \leftarrow r_j + 1$.
end while

Theorem 1 (Auer et al. [5]). For $T \ge \max_{i: \mu_i < \mu^\star} \frac{1}{2\Delta_i^2}$, the UCB2 algorithm achieves the same $O\big(\sum_{i: \Delta_i > 0} \log T / \Delta_i\big)$ expected regret bound as UCB, where $\Delta_i := \mu^\star - \mu_i$ is the gap between arm $i$ and the optimal arm. Further, the switching cost is at most $O\big(A \log(T/A) / \eta\big)$.

The switching cost bound in Theorem 1 follows directly from the fact that $\sum_{i=1}^{A} (1+\eta)^{r_i} \le T$ implies $\sum_{i=1}^{A} r_i \le O(A \log(T/A)/\eta)$, by the convexity of $r \mapsto (1+\eta)^r$ and Jensen's inequality. This approach is fairly general, and we follow it in the sequel to develop RL algorithms with low switching cost.

⁴For convenience, here we treat $(1+\eta)^r$ as an integer. In Q-Learning we cannot make this approximation (as we choose $\eta$ very small), and we massage the sequence $\tau(r)$ to deal with it.
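The following Python sketch illustrates the epoch structure of Algorithm 1 on Bernoulli arms. The exact bonus $a_r$ in UCB2 has specific constants; the bonus below is a simplified stand-in, so this is a sketch of the scheduling idea rather than a faithful reimplementation of the paper's constants.

```python
import numpy as np

def ucb2(means, T, eta=0.5, seed=0):
    """Minimal UCB2 sketch (Algorithm 1) on Bernoulli arms with the given means."""
    rng = np.random.default_rng(seed)
    A = len(means)
    tau = lambda r: np.ceil((1.0 + eta) ** np.asarray(r, dtype=float)).astype(int)
    pulls = np.zeros(A, dtype=int)    # times each arm has been played
    rew = np.zeros(A)                 # cumulative reward of each arm
    epoch = np.zeros(A, dtype=int)    # UCB2 epoch counter r_j
    epochs_run = 0                    # number of epochs, an upper bound on arm switches
    for j in range(A):                # play each arm once
        rew[j] += float(rng.random() < means[j])
        pulls[j] += 1
    t = A
    while t < T:
        # simplified UCB2-style bonus a_{r_j}; the paper's exact constants differ
        bonus = np.sqrt(np.log(np.e * T / tau(epoch)) / (2 * tau(epoch)))
        j = int(np.argmax(rew / pulls + bonus))
        # play arm j for a whole epoch: tau(r_j + 1) - tau(r_j) pulls
        n_plays = max(int(min(tau(epoch[j] + 1) - tau(epoch[j]), T - t)), 1)
        rew[j] += float((rng.random(n_plays) < means[j]).sum())
        pulls[j] += n_plays
        epoch[j] += 1
        epochs_run += 1
        t += n_plays
    pseudo_regret = max(means) * T - float(np.dot(means, pulls))  # expected-reward gap
    return pseudo_regret, epochs_run

if __name__ == "__main__":
    reg, sw = ucb2([0.5, 0.6, 0.7, 0.9], T=100_000, eta=0.5)
    print(f"pseudo-regret ~ {reg:.0f}, epochs (>= #switches) = {sw}")
```

Because each arm's pull count grows geometrically across its epochs, the total number of epochs (and hence switches) grows only as $O(A \log(T/A)/\eta)$, which is the counting argument invoked after Theorem 1.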
4 Q-Learning with UCB2 exploration

In this section, we propose our main algorithm, Q-Learning with UCB2 exploration, and show that it achieves sublinear regret as well as logarithmic local switching cost.

4.1 Algorithm description

High-level idea. Our algorithm maintains two sets of optimistic Q estimates: a running estimate $\tilde{Q}$, which is updated after every visit to a state-action pair, and a delayed estimate $\bar{Q}$, which is only updated occasionally but is used to select actions. In between two updates to $\bar{Q}$, the policy stays fixed, so the number of policy switches is bounded by the number of updates to $\bar{Q}$.

To describe our algorithm, let $\tau(r)$ be defined as $\tau(r) = \lceil (1+\eta)^r \rceil$ for $r = 1, 2, \ldots$, and define the triggering sequence as
$$\{t_n\}_{n \ge 1} = \{1, 2, \ldots, \tau(r^\star)\} \cup \{\tau(r^\star+1), \tau(r^\star+2), \ldots\}, \qquad (1)$$
where the parameters $(\eta, r^\star)$ are inputs to the algorithm. Define for all $t \in \{1, 2, \ldots\}$ the quantities
$$\tau_{\mathrm{last}}(t) := \max\{t_n : t_n \le t\} \quad \text{and} \quad \alpha_t = \frac{H+1}{H+t}.$$

Two-stage switching strategy. The triggering sequence (1) defines a two-stage strategy for switching policies. Suppose that at a given $(h, x_h)$, the algorithm takes some particular action $a_h$ for the $t$-th time, observes $(r_h, x_{h+1})$, and updates the running estimate $\tilde{Q}_h(x_h, a_h)$ accordingly. Then, whether to also update the delayed estimate $\bar{Q}$ (which defines the policy) is decided as follows:
- Stage I: if $t \le \tau(r^\star)$, always perform the update $\bar{Q}_h(x_h, a_h) \leftarrow \tilde{Q}_h(x_h, a_h)$.
- Stage II: if $t > \tau(r^\star)$, perform the above update only if $t$ is in the triggering sequence, that is, $t = \tau(r)$ for some $r > r^\star$.
In other words, for any state-action pair, the algorithm performs eager policy updates during the first $\tau(r^\star)$ visits, and switches to delayed policy updates afterwards according to the UCB2 schedule.

Optimistic exploration bonus. We employ either a Hoeffding-type or a Bernstein-type exploration bonus to ensure that our running Q estimates are optimistic. The full algorithm with the Hoeffding-style bonus is presented in Algorithm 2.

4.2 Regret and switching cost guarantee

We now present our main results.

Theorem 2 (Q-Learning with UCB2H exploration achieves sublinear regret and low switching cost). Choosing $\eta = \frac{1}{2H(H+1)}$ and $r^\star = \left\lceil \frac{\log(10H^2)}{\log(1+\eta)} \right\rceil$, with probability at least $1 - p$, the regret of Algorithm 2 is bounded by $\tilde{O}(\sqrt{H^4 SAT})$. Further, the local switching cost is bounded as $N_{\mathrm{switch}} \le O(H^3 SA \log(K/A))$.

Theorem 2 shows that the total regret of Q-Learning with UCB2 exploration is $\tilde{O}(\sqrt{H^4 SAT})$, the same as the UCB version of [16]. In addition, the local switching cost of our algorithm is only $O(H^3 SA \log(K/A))$, which is logarithmic in $K$, whereas the UCB version can in the worst case attain the trivial bound $HS(K-1)$. We give a high-level overview of the proof of Theorem 2 in Section 6, and defer the full proof to Appendix A.

Bernstein version. Replacing the Hoeffding bonus with a Bernstein-type bonus, we can achieve $\tilde{O}(\sqrt{H^3 SAT})$ regret (a factor of $\sqrt{H}$ better than UCB2H) and the same switching cost bound.

Algorithm 2 Q-Learning with UCB2-Hoeffding (UCB2H) Exploration
input Parameters $\eta \in (0,1)$, $r^\star \in \mathbb{Z}_{>0}$, and $c > 0$.
Initialize: $\tilde{Q}_h(x,a) \leftarrow H$, $\bar{Q}_h \leftarrow \tilde{Q}_h$, $N_h(x,a) \leftarrow 0$ for all $(x, a, h) \in \mathcal{S} \times \mathcal{A} \times [H]$.
for episode $k = 1, \ldots, K$ do
  Receive $x_1$.
  for step $h = 1, \ldots, H$ do
    Take action $a_h \leftarrow \arg\max_{a'} \bar{Q}_h(x_h, a')$, and observe $x_{h+1}$.
    $t = N_h(x_h, a_h) \leftarrow N_h(x_h, a_h) + 1$; $b_t = c\sqrt{H^3 \iota / t}$ (Hoeffding-type bonus, with $\iota$ a logarithmic factor).
    $\tilde{Q}_h(x_h, a_h) \leftarrow (1 - \alpha_t)\,\tilde{Q}_h(x_h, a_h) + \alpha_t\big[r_h(x_h, a_h) + \tilde{V}_{h+1}(x_{h+1}) + b_t\big]$.
    $\tilde{V}_h(x_h) \leftarrow \min\big\{H, \max_{a' \in \mathcal{A}} \tilde{Q}_h(x_h, a')\big\}$.
    if $t \in \{t_n\}_{n \ge 1}$ (where $t_n$ is defined in (1)) then
      (Update policy) $\bar{Q}_h(x_h, \cdot) \leftarrow \tilde{Q}_h(x_h, \cdot)$.
    end if
  end for
end for
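The sketch below exercises Algorithm 2 on a tiny random MDP so that the two-stage switching rule is visible in code. The ToyMDP class, its reset/step interface, and the concrete choice of the logarithmic factor $\iota$ are illustrative assumptions for this sketch, not part of the paper.

```python
import numpy as np

class ToyMDP:
    """Tiny random episodic MDP used only to exercise the sketch (not from the paper)."""
    def __init__(self, H, S, A, seed=0):
        rng = np.random.default_rng(seed)
        self.H, self.S, self.A = H, S, A
        self.P = rng.dirichlet(np.ones(S), size=(H, S, A))  # P_h(x'|x,a)
        self.R = rng.random((H, S, A))                      # deterministic rewards in [0,1]
        self.rng = rng
    def reset(self):
        return 0
    def step(self, h, x, a):
        x_next = int(self.rng.choice(self.S, p=self.P[h, x, a]))
        return self.R[h, x, a], x_next

def qlearning_ucb2h(env, K, c=1.0, p=0.01):
    """Sketch of Q-Learning with UCB2-Hoeffding exploration (Algorithm 2)."""
    H, S, A = env.H, env.S, env.A
    eta = 1.0 / (2 * H * (H + 1))
    r_star = int(np.ceil(np.log(10 * H**2) / np.log(1 + eta)))
    tau = lambda r: int(np.ceil((1 + eta) ** r))
    # triggering sequence {1, ..., tau(r*)} followed by {tau(r*+1), tau(r*+2), ...}, up to K visits
    triggers = set(range(1, tau(r_star) + 1))
    r = r_star + 1
    while tau(r) <= K:
        triggers.add(tau(r)); r += 1
    iota = np.log(S * A * K * H / p)        # logarithmic factor (simplified stand-in)
    Q_run = np.full((H, S, A), float(H))    # running estimate, updated at every visit
    Q_pol = Q_run.copy()                    # delayed estimate; the policy is greedy w.r.t. it
    N = np.zeros((H, S, A), dtype=int)
    n_switch = 0
    for k in range(K):
        x = env.reset()
        for h in range(H):
            a = int(np.argmax(Q_pol[h, x]))             # act using the DELAYED estimate
            reward, x_next = env.step(h, x, a)
            N[h, x, a] += 1
            t = N[h, x, a]
            alpha = (H + 1) / (H + t)
            b = c * np.sqrt(H**3 * iota / t)            # Hoeffding-type bonus
            V_next = 0.0 if h == H - 1 else min(H, Q_run[h + 1, x_next].max())
            Q_run[h, x, a] = (1 - alpha) * Q_run[h, x, a] + alpha * (reward + V_next + b)
            if t in triggers:                           # two-stage switching rule
                old_a = int(np.argmax(Q_pol[h, x]))
                Q_pol[h, x] = Q_run[h, x]               # update only the visited (h, x) row
                n_switch += int(np.argmax(Q_pol[h, x]) != old_a)  # local switch at (h, x)
            x = x_next
    return Q_pol, n_switch

if __name__ == "__main__":
    env = ToyMDP(H=3, S=4, A=3, seed=1)
    _, n_switch = qlearning_ucb2h(env, K=2000)
    print("local policy switches:", n_switch)  # O(H^3 SA log(K/A)) in theory
```

Since each $(h, x)$ is visited at most once per episode and the greedy action at $(h, x)$ can only change when the row of $\bar{Q}$ is refreshed, counting greedy-action changes at update time tracks the local switching cost of the deployed policies.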
Theorem 3 (Q-Learning with UCB2B exploration achieves sublinear regret and low switching cost). Choosing $\eta = \frac{1}{2H(H+1)}$ and $r^\star = \left\lceil \frac{\log(10H^2)}{\log(1+\eta)} \right\rceil$, with probability at least $1 - p$, the regret of Q-Learning with UCB2-Bernstein exploration is bounded by $\tilde{O}(\sqrt{H^3 SAT})$ as long as $T = \tilde{\Omega}(H^6 S^2 A^2)$. Further, the local switching cost is bounded as $N_{\mathrm{switch}} \le O(H^3 SA \log(K/A))$.

The full algorithm description, as well as the proof of Theorem 3, is deferred to Appendix B. Compared with Q-Learning with UCB [16], Theorems 2 and 3 demonstrate that vanilla low-regret RL algorithms such as Q-Learning can be turned into low switching cost versions without any sacrifice in the regret bound.

4.3 PAC guarantee

Our low switching cost algorithms also achieve PAC learnability guarantees. Specifically, we have the following.

Corollary 4 (PAC bound for Q-Learning with UCB2 exploration). Suppose (WLOG) that $x_1$ is deterministic. For any $\varepsilon > 0$, Q-Learning with {UCB2H, UCB2B} exploration can output a (stochastic) policy $\hat{\pi}$ such that with high probability $V^\star_1(x_1) - V^{\hat{\pi}}_1(x_1) \le \varepsilon$ after $K = \tilde{O}(H^{\{5,4\}} SA / \varepsilon^2)$ episodes.

The proof of Corollary 4 turns the regret bounds of Theorems 2 and 3 into PAC bounds using the online-to-batch conversion, similar to [16]. The full proof is deferred to Appendix C.

5 Application: Concurrent Q-Learning

Our low switching cost Q-Learning can be applied to developing algorithms for concurrent RL [13], a setting in which multiple RL agents act in parallel and can thereby accelerate exploration in wall-clock time.

Setting. We assume there are $M$ agents/machines, where each machine interacts with an independent copy of the episodic MDP (so that the transitions and rewards on the $M$ MDPs are mutually independent). Within each episode, the $M$ machines must play synchronously and cannot communicate; they can only exchange information after the entire episode has finished. Note that our setting is in this sense more stringent than [13], which allows communication after each timestep. We define a round as the duration in which the $M$ machines simultaneously finish one episode and (optionally) communicate and update their policies. We measure the performance of a concurrent algorithm by the number of rounds it requires to find an $\varepsilon$ near-optimal policy. With larger $M$, we expect this number of rounds to be smaller, and the best we can hope for is a linear speedup in which the number of rounds scales as $M^{-1}$.

Concurrent Q-Learning. Intuitively, any low switching cost algorithm can be made into a concurrent algorithm, as its execution can be parallelized in between two consecutive policy switches. Indeed, we can design concurrent versions of our low switching Q-Learning algorithms and achieve a nearly linear speedup.

Theorem 5 (Concurrent Q-Learning achieves nearly linear speedup). There exist concurrent versions of Q-Learning with {UCB2H, UCB2B} exploration such that, given a budget of $M$ parallel machines, they return an $\varepsilon$ near-optimal policy in
$$\tilde{O}\left( H^3 SA + \frac{H^{\{5,4\}} SA}{M \varepsilon^2} \right)$$
rounds of execution.

Theorem 5 shows that Concurrent Q-Learning has a linear speedup so long as $M = \tilde{O}(H^{\{2,1\}} / \varepsilon^2)$. In particular, in high-accuracy (small $\varepsilon$) regimes, the constant overhead term $H^3 SA$ is negligible and we essentially have a linear speedup over a wide range of $M$. The proof of Theorem 5 is deferred to Appendix D.
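To make the speedup regime explicit, the following short calculation (assuming the round bound of Theorem 5 as stated above) shows when the $M$-dependent term dominates and the parallelization is effectively linear.

```latex
\[
  R(M) \;=\; \tilde{O}\!\left( H^3 SA \;+\; \frac{H^{\{5,4\}} SA}{M \varepsilon^2} \right)
  \qquad \text{(rounds of Concurrent Q-Learning with $M$ machines).}
\]
\[
  \frac{H^{\{5,4\}} SA}{M \varepsilon^2} \;\gtrsim\; H^3 SA
  \quad\Longleftrightarrow\quad
  M \;\lesssim\; \frac{H^{\{2,1\}}}{\varepsilon^2}.
\]
% In this regime the second term dominates, so R(M) scales as 1/M (near-linear speedup),
% recovering the condition M = \tilde{O}(H^{\{2,1\}}/\varepsilon^2) stated after Theorem 5;
% beyond it, the fixed \tilde{O}(H^3 SA) overhead dominates and extra machines stop helping.
```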
Comparison with existing concurrent algorithms. Theorem 5 implies a PAC mistake bound as well: there exist concurrent algorithms on $M$ machines, Concurrent Q-Learning with {UCB2H, UCB2B}, that take an $\varepsilon$ near-optimal action on all but
$$\tilde{O}\left( H^4 SAM + \frac{H^{\{6,5\}} SA}{\varepsilon^2} \right) =: N^{\mathrm{CQL}}_\varepsilon$$
actions with high probability (detailed argument in Appendix D.2). We compare ourselves with the Concurrent MBIE (CMBIE) algorithm of [13], which considers discounted, infinite-horizon MDPs and has a mistake bound⁵ of
$$\tilde{O}\left( \frac{S' A' M}{\varepsilon (1-\gamma')^2} + \frac{(S')^2 A'}{\varepsilon^3 (1-\gamma')^6} \right) =: N^{\mathrm{CMBIE}}_\varepsilon.$$
Our Concurrent Q-Learning compares favorably against CMBIE in terms of the mistake bound:
- Dependence on $\varepsilon$: CMBIE achieves $N^{\mathrm{CMBIE}}_\varepsilon = \tilde{O}(\varepsilon^{-3} + \varepsilon^{-1} M)$, whereas our algorithm achieves $N^{\mathrm{CQL}}_\varepsilon = \tilde{O}(\varepsilon^{-2} + M)$, better by a factor of $\varepsilon^{-1}$.
- Dependence on $(H, S, A)$: these are not directly comparable in general, but under the typical correspondence⁶ $S' \leftarrow HS$, $A' \leftarrow A$, $(1-\gamma')^{-1} \leftarrow H$, we get $N^{\mathrm{CMBIE}}_\varepsilon = \tilde{O}(H^3 SAM \varepsilon^{-1} + H^8 S^2 A \varepsilon^{-3})$. Compared to $N^{\mathrm{CQL}}_\varepsilon$, CMBIE has a higher dependence on $H$ as well as an $S^2$ term due to its model-based nature.

⁵$(S', A', \gamma')$ are the number of states, number of actions, and discount factor of the discounted infinite-horizon MDP.
⁶One can transform an episodic MDP with $S$ states into an infinite-horizon MDP with $HS$ states. Also note that the effective horizon of a discounted MDP is $(1-\gamma)^{-1}$.

6 Proof overview of Theorem 2

The proof of Theorem 2 involves two parts: the switching cost bound and the regret bound. The switching cost bound follows directly from the UCB2 switching schedule, similar to the bandit case (cf. Section 3). However, such a switching schedule results in delayed policy updates, which makes establishing the regret bound technically challenging.

The key to the $\tilde{O}(\sqrt{\mathrm{poly}(H)\,SAT})$ regret bound for vanilla Q-Learning in [16] is a propagation of error argument, which shows that the regret⁷ from the $h$-th step onward (henceforth the $h$-regret), defined as
$$\sum_{k=1}^{K} \tilde{\delta}^k_h := \sum_{k=1}^{K} \left[ \tilde{V}^k_h - V^{\pi_k}_h \right](x^k_h),$$
is bounded by $1 + 1/H$ times the $(h+1)$-regret, plus a bounded error term. As $(1 + 1/H)^H = O(1)$, this fact can be applied recursively for $h = H, \ldots, 1$, resulting in a total regret bound that is not exponential in $H$. The control of the (excess) error propagation factor by $1/H$ and the ability to converge are achieved simultaneously via the stepsize choice $\alpha_t = \frac{H+1}{H+t}$.

⁷Technically it is an upper bound on the regret.

In contrast, our low-switching version of Q-Learning updates the exploration policy in a delayed fashion according to the UCB2 schedule. Specifically, the policy at episode $k$ does not correspond to the argmax of the running estimate $\tilde{Q}^k$, but rather to a previous version $\bar{Q}^k = \tilde{Q}^{k'}$ for some $k' \le k$. This introduces a mismatch between the Q used for exploration and the Q being updated, and it is a priori unclear whether such a mismatch will blow up the propagation of error. We resolve this issue via a novel error analysis, which at a high level consists of the following steps:

(i) We show that the quantity $\tilde{\delta}^k_h$ is upper bounded by a max error,
$$\tilde{\delta}^k_h \le \left[ \max\left\{ \tilde{Q}^k_h, \tilde{Q}^{k'}_h \right\} - Q^{\pi_k}_h \right](x^k_h, a^k_h) = \left[ \tilde{Q}^{k'}_h - Q^{\pi_k}_h \right](x^k_h, a^k_h) + \left[ \tilde{Q}^k_h - \tilde{Q}^{k'}_h \right]_+(x^k_h, a^k_h)$$
(Lemma A.3). On the right-hand side, the first term $\tilde{Q}^{k'}_h - Q^{\pi_k}_h$ does not have a mismatch (as $\pi_k$ depends on $\tilde{Q}^{k'}$) and can be bounded similarly as in [16]. The second term $[\tilde{Q}^k_h - \tilde{Q}^{k'}_h]_+$ is a perturbation term, which we bound in a precise way that relates the stepsizes between episodes $k'$ and $k$ to the $(h+1)$-regret (Lemma A.4).
(ii) We show that, under the UCB2 scheduling, the combined error above results in only a mild blowup in the relation between the $h$-regret and the $(h+1)$-regret: the multiplicative factor can now be bounded by $(1 + 1/H)(1 + O(\eta H))$ (Lemma A.5). Choosing $\eta = O(1/H^2)$ makes the multiplicative factor $1 + O(1/H)$, and the propagation of error argument goes through.

We hope that the above analysis can be applied more broadly in analyzing exploration problems with delayed updates or asynchronous parallelization.

7 Lower bound on switching cost

Theorem 6. Let $A \ge 4$ and let $\mathcal{M}$ be the set of episodic MDPs satisfying the conditions in Section 2. For any RL algorithm $\mathcal{A}$ satisfying $N_{\mathrm{switch}} \le HSA/2$, we have
$$\sup_{M \in \mathcal{M}} \; \mathbb{E}_{x_1, M}\left[ \sum_{k=1}^{K} \left( V^\star_1(x_1) - V^{\pi_k}_1(x_1) \right) \right] \ge \Omega(K),$$
i.e. the worst-case regret is linear in $K$.

Theorem 6 implies that the switching cost of any no-regret algorithm is lower bounded by $\Omega(HSA)$, which is quite intuitive as one would like to play each action at least once at every $(h, x)$. Compared with this lower bound, the $O(H^3 SA \log K)$ switching cost we achieve through UCB2 scheduling is off by at most a factor of $O(H^2 \log K)$. We believe that the $\log K$ factor is not necessary, as there exist algorithms achieving doubly logarithmic switching cost in bandits [8], and we leave the tightening of the $H^2$ factor as future work as well. The proof of Theorem 6 is deferred to Appendix E.

8 Conclusion

In this paper, we take steps toward studying limited adaptivity RL. We propose a notion of local switching cost to account for the adaptivity of RL algorithms. We design a Q-Learning algorithm with infrequent policy switching that achieves $\tilde{O}(\sqrt{H^{\{4,3\}} SAT})$ regret while switching its policy at most $O(\log T)$ times. Our algorithm works in the concurrent setting through parallelization, achieving a nearly linear speedup and favorable sample complexity. Our proof involves a novel perturbation analysis for exploration algorithms with delayed updates, which could be of broader interest. There are many interesting future directions, including (1) low switching cost algorithms with tighter regret bounds, most likely via model-based approaches; (2) algorithms with even lower switching cost; and (3) investigating the connection to other settings such as off-policy RL.

Acknowledgments

The authors would like to thank Emma Brunskill, Ramtin Keramati, Andrea Zanette, and the staff of CS234 at Stanford for valuable feedback on an earlier version of this work, and Chao Tao for very insightful feedback and discussions on the concurrent Q-Learning algorithm. YW was supported by a start-up grant from the UCSB CS department, NSF-OAC 1934641, and a gift from the AWS ML Research Award.

References

[1] S. Agrawal and R. Jia. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. In Advances in Neural Information Processing Systems, pages 1184-1194, 2017.
[2] D. Almirall, S. N. Compton, M. Gunlicks-Stoessel, N. Duan, and S. A. Murphy. Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Statistics in Medicine, 31(17):1887-1902, 2012.
[3] D. Almirall, I. Nahum-Shani, N. E. Sherwood, and S. A. Murphy. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Translational Behavioral Medicine, 4(3):260-274, 2014.
[4] A. H. Ashouri, W. Killian, J. Cavazos, G. Palermo, and C. Silvano. A survey on compiler autotuning using machine learning. ACM Computing Surveys (CSUR), 51(5):96, 2018.
[5] P. Auer, N. Cesa-Bianchi, and P. Fischer.
Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
[6] M. G. Azar, I. Osband, and R. Munos. Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 263-272. JMLR.org, 2017.
[7] R. I. Brafman and M. Tennenholtz. R-max: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213-231, 2002.
[8] N. Cesa-Bianchi, O. Dekel, and O. Shamir. Online learning with switching costs and other adaptive adversaries. In Advances in Neural Information Processing Systems, pages 1160-1168, 2013.
[9] Y. Chow, O. Nachum, E. Duenez-Guzman, and M. Ghavamzadeh. A Lyapunov-based approach to safe reinforcement learning. In Advances in Neural Information Processing Systems, pages 8092-8101, 2018.
[10] C. Dann, L. Li, W. Wei, and E. Brunskill. Policy certificates: Towards accountable reinforcement learning. arXiv preprint arXiv:1811.03056, 2018.
[11] J. Duchi, F. Ruan, and C. Yun. Minimax bounds on stochastic batched convex optimization. In Conference On Learning Theory, pages 3065-3162, 2018.
[12] Z. Gao, Y. Han, Z. Ren, and Z. Zhou. Batched multi-armed bandits problem. arXiv preprint arXiv:1904.01763, 2019.
[13] Z. Guo and E. Brunskill. Concurrent PAC RL. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[14] J. P. Hanna, P. S. Thomas, P. Stone, and S. Niekum. Data-efficient policy evaluation through behavior policy search. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1394-1403. JMLR.org, 2017.
[15] T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
[16] C. Jin, Z. Allen-Zhu, S. Bubeck, and M. I. Jordan. Is Q-learning provably efficient? In Advances in Neural Information Processing Systems, pages 4868-4878, 2018.
[17] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
[18] S. Krishnan, Z. Yang, K. Goldberg, J. Hellerstein, and I. Stoica. Learning to optimize join queries with deep reinforcement learning. arXiv preprint arXiv:1808.03196, 2018.
[19] H. Lei, I. Nahum-Shani, K. Lynch, D. Oslin, and S. A. Murphy. A "SMART" design for building individualized treatment sequences. Annual Review of Clinical Psychology, 8:21-48, 2012.
[20] A. Mirhoseini, H. Pham, Q. V. Le, B. Steiner, R. Larsen, Y. Zhou, N. Kumar, M. Norouzi, S. Bengio, and J. Dean. Device placement optimization with reinforcement learning. In International Conference on Machine Learning (ICML-17), pages 2430-2439. JMLR.org, 2017.
[21] P. Nguyen, T. Tran, S. Gupta, S. Rana, M. Barnett, and S. Venkatesh. Incomplete conditional density estimation for fast materials discovery. In Proceedings of the 2019 SIAM International Conference on Data Mining, pages 549-557. SIAM, 2019.
[22] I. Osband, D. Russo, and B. Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pages 3003-3011, 2013.
[23] V. Perchet, P. Rigollet, S. Chassang, E. Snowberg, et al. Batched bandit problems. The Annals of Statistics, 44(2):660-681, 2016.
[24] P. Raccuglia, K. C. Elbert, P. D. Adler, C. Falk, M. B. Wenny, A. Mollo, M. Zeller, S. A. Friedler, J. Schrier, and A. J. Norquist. Machine-learning-assisted materials discovery using failed experiments. Nature, 533(7601):73, 2016.
[25] G.
Theocharous, P. S. Thomas, and M. Ghavamzadeh. Personalized ad recommendation systems for life-time value optimization with guarantees. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015. [26] M. Yu, Z. Yang, M. Kolar, and Z. Wang. Convergent policy optimization for safe reinforcement learning. In Advances in Neural Information Processing Systems, 2019.