# Boltzmann Exploration Done Right

Nicolò Cesa-Bianchi (Università degli Studi di Milano, Milan, Italy) — nicolo.cesa-bianchi@unimi.it
Claudio Gentile (INRIA Lille – Nord Europe, Villeneuve-d'Ascq, France) — cla.gentile@gmail.com
Gábor Lugosi (ICREA & Universitat Pompeu Fabra, Barcelona, Spain) — gabor.lugosi@gmail.com
Gergely Neu (Universitat Pompeu Fabra, Barcelona, Spain) — gergely.neu@gmail.com

*31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.*

**Abstract.** Boltzmann exploration is a classic strategy for sequential decision-making under uncertainty, and is one of the most standard tools in Reinforcement Learning (RL). Despite its widespread use, there is virtually no theoretical understanding about the limitations or the actual benefits of this exploration scheme. Does it drive exploration in a meaningful way? Is it prone to misidentifying the optimal actions or spending too much time exploring the suboptimal ones? What is the right tuning for the learning rate? In this paper, we address several of these questions for the classic setup of stochastic multi-armed bandits. One of our main results is showing that the Boltzmann exploration strategy with any monotone learning-rate sequence will induce suboptimal behavior. As a remedy, we offer a simple non-monotone schedule that guarantees near-optimal performance, albeit only when given prior access to key problem parameters that are typically not available in practical situations (like the time horizon $T$ and the suboptimality gap $\Delta$). More importantly, we propose a novel variant that uses different learning rates for different arms, and achieves a distribution-dependent regret bound of order $\frac{K\log^2 T}{\Delta}$ and a distribution-independent bound of order $\sqrt{KT}\log K$ without requiring such prior knowledge. To demonstrate the flexibility of our technique, we also propose a variant that guarantees the same performance bounds even if the rewards are heavy-tailed.

## 1 Introduction

Exponential weighting strategies are fundamental tools in a variety of areas, including Machine Learning, Optimization, Theoretical Computer Science, and Decision Theory [3]. Within Reinforcement Learning [23, 25], exponential weighting schemes are broadly used for balancing exploration and exploitation, and are equivalently referred to as Boltzmann, Gibbs, or softmax exploration policies [22, 14, 24, 19]. In the most common version of Boltzmann exploration, the probability of choosing an arm is proportional to an exponential function of the empirical mean of the reward of that arm. Despite the popularity of this policy, very little is known about its theoretical performance, even in the simplest reinforcement learning setting of stochastic bandit problems. The variant of Boltzmann exploration we focus on in this paper is defined by

$$p_{t,i} \propto e^{\eta_t \widehat{\mu}_{t,i}}, \qquad (1)$$

where $p_{t,i}$ is the probability of choosing arm $i$ in round $t$, $\widehat{\mu}_{t,i}$ is the empirical average of the rewards obtained from arm $i$ up until round $t$, and $\eta_t > 0$ is the learning rate. This variant is broadly used in reinforcement learning [23, 25, 14, 26, 16, 18].
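To make the sampling rule (1) concrete, here is a minimal Python sketch (our own illustration, not the authors' code); the three empirical means and the schedule $\eta_t = \log t$ used in the example are arbitrary choices.

```python
import numpy as np

def boltzmann_probs(mu_hat, eta):
    """Boltzmann / softmax distribution over arms: p_i proportional to exp(eta * mu_hat_i)."""
    logits = eta * np.asarray(mu_hat, dtype=float)
    logits -= logits.max()          # subtract the max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def sample_arm(mu_hat, eta, rng):
    """Draw an arm index according to the Boltzmann exploration rule (1)."""
    return int(rng.choice(len(mu_hat), p=boltzmann_probs(mu_hat, eta)))

rng = np.random.default_rng(0)
mu_hat = [0.40, 0.55, 0.50]          # empirical means of three arms (made-up values)
for t in (10, 100, 10_000):
    print(t, boltzmann_probs(mu_hat, eta=np.log(t)).round(3), sample_arm(mu_hat, np.log(t), rng))
```

As $\eta_t$ grows, the distribution concentrates on the empirically best arm; the rest of the paper studies how fast $\eta_t$ may safely grow.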
In the multi-armed bandit literature, exponential-weights algorithms are also widespread, but they typically use importance-weighted estimators for the rewards; see, e.g., [6, 8] (for the nonstochastic setting), [12] (for the stochastic setting), and [20] (for both stochastic and nonstochastic regimes). The theoretical behavior of these algorithms is generally well understood. For example, in the stochastic bandit setting Seldin and Slivkins [20] show a regret bound of order $\frac{K\log^2 T}{\Delta}$, where $\Delta$ is the suboptimality gap (i.e., the smallest difference between the mean reward of the optimal arm and the mean reward of any other arm).

In this paper, we aim to achieve a better theoretical understanding of the basic variant of the Boltzmann exploration policy that relies on the empirical mean rewards. We first show that any monotone learning-rate schedule will inevitably force the policy to either spend too much time drawing suboptimal arms or completely fail to identify the optimal arm. Then, we show that a specific non-monotone schedule of the learning rates can lead to a regret bound of order $\frac{K\log T}{\Delta^2}$. However, the learning schedule has to rely on full knowledge of the gap $\Delta$ and the number of rounds $T$. Moreover, our negative result helps us to identify a crucial shortcoming of the Boltzmann exploration policy: it does not reason about the uncertainty of the empirical reward estimates. To alleviate this issue, we propose a variant that takes this uncertainty into account by using separate learning rates for each arm, where the learning rates account for the uncertainty of each reward estimate. We show that the resulting algorithm guarantees a distribution-dependent regret bound of order $\frac{K\log^2 T}{\Delta}$, and a distribution-independent bound of order $\sqrt{KT}\log K$. Our algorithm and analysis are based on the so-called Gumbel-softmax trick that connects the exponential-weights distribution with the maximum of independent random variables from the Gumbel distribution.

## 2 The stochastic multi-armed bandit problem

Consider the setting of stochastic multi-armed bandits: each arm $i \in [K] \stackrel{\text{def}}{=} \{1, 2, \dots, K\}$ yields a reward with distribution $\nu_i$ and mean $\mu_i$, with the optimal mean reward being $\mu^* = \max_i \mu_i$. Without loss of generality, we will assume that the optimal arm is unique and has index 1. The gap of arm $i$ is defined as $\Delta_i = \mu^* - \mu_i$. We consider a repeated game between the learner and the environment, where in each round $t = 1, 2, \dots$, the following steps are repeated:

1. The learner chooses an arm $I_t \in [K]$,
2. the environment draws a reward $X_{t,I_t} \sim \nu_{I_t}$ independently of the past,
3. the learner receives and observes the reward $X_{t,I_t}$.

The performance of the learner is measured in terms of the pseudo-regret defined as

$$R_T = \mu^* T - \sum_{t=1}^{T} \mathbb{E}\left[X_{t,I_t}\right] = \sum_{i=1}^{K} \Delta_i\, \mathbb{E}\left[N_{T,i}\right], \qquad (2)$$

where we defined $N_{t,i} = \sum_{s=1}^{t} \mathbb{I}_{\{I_s = i\}}$, that is, the number of times that arm $i$ has been chosen until the end of round $t$. We aim at constructing algorithms that guarantee that the regret grows sublinearly.

We will consider the above problem under various assumptions on the distribution of the rewards. For most of our results, we will assume that each $\nu_i$ is $\sigma$-subgaussian with a known parameter $\sigma > 0$, that is, that

$$\mathbb{E}\left[e^{y\left(X_{1,i} - \mathbb{E}[X_{1,i}]\right)}\right] \le e^{\sigma^2 y^2/2}$$

holds for all $y \in \mathbb{R}$ and $i \in [K]$. It is easy to see that any random variable bounded in an interval of length $B$ is $B^2/4$-subgaussian. Under this assumption, it is well known that any algorithm will suffer a regret of at least $\Omega\!\left(\sum_{i>1} \frac{\sigma^2 \log T}{\Delta_i}\right)$, as shown in the classic paper of Lai and Robbins [17]. There exist several algorithms guaranteeing matching upper bounds, even for finite horizons [7, 10, 15]. We refer to the survey of Bubeck and Cesa-Bianchi [9] for an exhaustive treatment of the topic.
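To make the interaction protocol and the pseudo-regret (2) concrete, the following sketch simulates the game; the Gaussian reward distributions, the horizon, and the uniformly random baseline policy are illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, 0.4, 0.3])           # true means; arm 0 is optimal
sigma = 0.5                              # subgaussian parameter of the rewards
T = 10_000
K = len(mu)

pulls = np.zeros(K, dtype=int)           # N_{T,i}
for t in range(1, T + 1):
    arm = int(rng.integers(K))           # baseline policy: pick an arm uniformly at random
    reward = rng.normal(mu[arm], sigma)  # environment draws X_{t,I_t} ~ nu_{I_t}
    pulls[arm] += 1                      # (the reward would be fed to a real learner here)

gaps = mu.max() - mu                     # Delta_i = mu* - mu_i
pseudo_regret = float(np.dot(gaps, pulls))   # R_T = sum_i Delta_i N_{T,i} (one realization)
print(pseudo_regret)                     # grows linearly in T for uniform play
```

A uniformly random policy incurs regret linear in $T$; the algorithms studied below aim for logarithmic growth.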
## 3 Boltzmann exploration done wrong

We now formally describe the heuristic form of Boltzmann exploration that is commonly used in the reinforcement learning literature [23, 25, 14]. This strategy works by maintaining the empirical estimates of each $\mu_i$ defined as

$$\widehat{\mu}_{t,i} = \frac{1}{N_{t,i}} \sum_{s=1}^{t} X_{s,i}\, \mathbb{I}_{\{I_s = i\}} \qquad (3)$$

and computing the exponential-weights distribution (1) for an appropriately tuned sequence of learning-rate parameters $\eta_t > 0$ (which are often referred to as the inverse temperature). As noted on several occasions in the literature, finding the right schedule for $\eta_t$ can be very difficult in practice [14, 26]. Below, we quantify this difficulty by showing that natural learning-rate schedules may fail to achieve near-optimal regret guarantees. More precisely, they may draw suboptimal arms too often even after having estimated all the means correctly, or commit too early to a suboptimal arm and never recover afterwards. We partially circumvent this issue by proposing an admittedly artificial learning-rate schedule that actually guarantees near-optimal performance. However, a serious limitation of this schedule is that it relies on prior knowledge of the problem parameters $\Delta$ and $T$ that are typically unknown at the beginning of the learning procedure. These observations lead us to the conclusion that the Boltzmann exploration policy as described by Equations (1) and (3) is no more effective for regret minimization than the simplest alternative of $\varepsilon$-greedy exploration [23, 7].

Before we present our own technical results, we mention that Singh et al. [21] propose a learning-rate schedule $\eta_t$ for Boltzmann exploration that simultaneously guarantees that all arms will be drawn infinitely often as $T$ goes to infinity, and that the policy becomes greedy in the limit. This property is proven by choosing the learning-rate schedule adaptively so as to ensure that in each round $t$, each arm gets drawn with probability at least $1/t$, making it similar in spirit to $\varepsilon$-greedy exploration. While this strategy clearly leads to sublinear regret, it is easy to construct examples on which it suffers a regret of at least $\Omega\!\left(T^{1-\alpha}\right)$ for any small $\alpha > 0$. In this paper, we pursue a more ambitious goal: we aim to find out whether Boltzmann exploration can actually guarantee polylogarithmic regret. In the rest of this section, we present both negative and positive results concerning the standard variant of Boltzmann exploration, and then move on to providing an efficient generalization that achieves consistency in a more universal sense.

### 3.1 Boltzmann exploration with monotone learning rates is suboptimal

In this section, we study the most natural variant of Boltzmann exploration, which uses a monotone learning-rate schedule. It is easy to see that in order to achieve sublinear regret, the learning rate $\eta_t$ needs to increase with $t$ so that the suboptimal arms are drawn with less and less probability as time progresses. For the sake of clarity, we study the simplest possible setting with two arms whose means are separated by a gap $\Delta$. We first show that, in order to guarantee near-optimal (logarithmic) regret, the learning rate has to increase at least at a rate $\frac{\log t}{\Delta}$ even when the mean rewards are perfectly known, and that any learning-rate sequence that increases at a slower logarithmic rate will lead to polynomial regret. In other words, $\frac{\log t}{\Delta}$ is the minimal affordable learning rate.

**Proposition 1.** Let us assume that $\widehat{\mu}_{t,i} = \mu_i$ for all $t$ and $i = 1, 2$ with $\mu_1 > \mu_2$. Assume that for some constants $k \ge 1$, $\alpha \ge 0$ and $\varepsilon \le \frac{1}{\Delta}$, the learning rate satisfies $\eta_t \le \frac{\log(t\Delta^2)}{(1+\alpha)\Delta} + \varepsilon$ for all $t \ge k$. Then, the regret grows as $R_T = \Omega\!\left(\frac{\log T}{\Delta}\right)$ if $\alpha = 0$, and $R_T = \Omega\!\left(\frac{T^{\alpha/(1+\alpha)}}{\Delta^{1/(1+\alpha)}}\right)$ if $\alpha > 0$.
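Before the proof, here is a quick numerical illustration of the claim (a sketch under the stated idealization that the means are known exactly; the gap, horizon, and the choice $\varepsilon = 1/\Delta$ are arbitrary):

```python
import numpy as np

Delta, T, k = 0.1, 10**6, 10
t = np.arange(k, T + 1, dtype=float)

def regret_from_suboptimal_pulls(alpha):
    # With perfectly known means, P[I_t = 2] = 1 / (1 + exp(eta_t * Delta)).
    eta = np.log(t * Delta**2) / ((1 + alpha) * Delta) + 1.0 / Delta  # schedule at the boundary of Proposition 1
    eta = np.maximum(eta, 0.0)                                        # keep the learning rate nonnegative
    p_subopt = 1.0 / (1.0 + np.exp(eta * Delta))
    return Delta * p_subopt.sum()                                     # contribution to the pseudo-regret

print(regret_from_suboptimal_pulls(alpha=0.0))   # grows like log(T) / Delta
print(regret_from_suboptimal_pulls(alpha=1.0))   # grows polynomially in T
```

With $\alpha = 0$ the accumulated contribution grows logarithmically in $T$, while any $\alpha > 0$ yields polynomial growth, in line with the proposition.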
*Proof.* For $t \ge k$, the probability of pulling the suboptimal arm can be bounded as

$$\mathbb{P}\left[I_t = 2\right] = \frac{1}{1 + e^{\eta_t \Delta}} \ge \frac{e^{-\eta_t \Delta}}{2} = \Omega\!\left(\left(\Delta^2 t\right)^{-\frac{1}{1+\alpha}}\right)$$

by our assumption on $\eta_t$. Summing up over all $t$, we get that the regret is at least

$$\Delta \sum_{t=k}^{T} \mathbb{P}\left[I_t = 2\right] = \Omega\!\left(\Delta \sum_{t=k}^{T} \left(\Delta^2 t\right)^{-\frac{1}{1+\alpha}}\right).$$

The proof is concluded by observing that the sum $\sum_{t=k}^{T} t^{-\frac{1}{1+\alpha}}$ is of the order $\Omega(\log T)$ if $\alpha = 0$ and $\Omega\!\left(T^{\frac{\alpha}{1+\alpha}}\right)$ if $\alpha > 0$. $\square$

This simple proposition thus implies an asymptotic lower bound on the schedule of learning rates $\eta_t$ that provide near-optimal guarantees. In contrast, Theorem 1 below shows that all learning-rate sequences that grow faster than $2\log t$ yield a linear regret, provided this schedule is adopted from the beginning of the game. This should be contrasted with Theorem 2, which exhibits a schedule achieving logarithmic regret where $\eta_t$ grows faster than $2\log t$ only after the first $\tau$ rounds.

**Theorem 1.** There exists a 2-armed stochastic bandit problem with rewards bounded in $[0, 1]$ where Boltzmann exploration using any learning-rate sequence $\eta_t$ such that $\eta_t > 2\log t$ for all $t \ge 1$ has regret $R_T = \Omega(T)$.

*Proof.* Consider the case where arm 2 gives a reward deterministically equal to $\frac12$ whereas the optimal arm 1 has a Bernoulli distribution with parameter $p = \frac12 + \Delta$ for some $0 < \Delta < \frac12$. Note that the regret of any algorithm satisfies

$$R_T \ge \Delta\,(T - t_0)\,\mathbb{P}\left[\forall t > t_0 : I_t = 2\right].$$

Without loss of generality, assume that $\widehat{\mu}_{1,1} = 0$ and $\widehat{\mu}_{1,2} = 1/2$. Then for all $t$, independently of the algorithm, $\widehat{\mu}_{t,2} = 1/2$ and

$$p_{t,1} = \frac{e^{\eta_t \widehat{\mu}_{t,1}}}{e^{\eta_t/2} + e^{\eta_t \widehat{\mu}_{t,1}}} \qquad \text{and} \qquad p_{t,2} = \frac{e^{\eta_t/2}}{e^{\eta_t/2} + e^{\eta_t \widehat{\mu}_{t,1}}},$$

where $\widehat{\mu}_{t,1}$ is the empirical mean of the $N_{t-1,1}$ Bernoulli($p$) rewards observed from arm 1 so far. For $t_0 \ge 1$, let $E_{t_0}$ be the event that $\widehat{\mu}_{t_0,1} = 0$, that is, up to time $t_0$, arm 1 gives only zero reward whenever it is sampled. Since arm 1 is sampled at most $t_0$ times up to round $t_0$ and each such sample is zero with probability $1 - p = \frac12 - \Delta$, we have

$$\mathbb{P}\left[\forall t > t_0 : I_t = 2\right] \ge \mathbb{P}\left[E_{t_0}\right]\left(1 - \mathbb{P}\left[\exists t > t_0 : I_t = 1 \,\middle|\, E_{t_0}\right]\right) \ge \left(\tfrac12 - \Delta\right)^{t_0}\left(1 - \mathbb{P}\left[\exists t > t_0 : I_t = 1 \,\middle|\, E_{t_0}\right]\right).$$

For $t > t_0$, let $A_{t,t_0}$ be the event that arm 1 is sampled at time $t$ but not at any of the times $t_0 + 1, t_0 + 2, \dots, t-1$. Then, for any $t_0 \ge 1$,

$$\mathbb{P}\left[\exists t > t_0 : I_t = 1 \,\middle|\, E_{t_0}\right] = \mathbb{P}\!\left[\bigcup_{t > t_0} A_{t,t_0} \,\middle|\, E_{t_0}\right] \le \sum_{t > t_0} \mathbb{P}\left[A_{t,t_0} \,\middle|\, E_{t_0}\right] \le \sum_{t > t_0} \frac{1}{1 + e^{\eta_t/2}} \prod_{s=t_0+1}^{t-1}\left(1 - \frac{1}{1 + e^{\eta_s/2}}\right) \le \sum_{t > t_0} e^{-\eta_t/2}.$$

Therefore,

$$R_T \ge \Delta\,(T - t_0)\left(\tfrac12 - \Delta\right)^{t_0}\left(1 - \sum_{t > t_0} e^{-\eta_t/2}\right).$$

Assume $\eta_t \ge c\log t$ for some $c > 2$ and for all $t \ge t_0$. Then

$$\sum_{t > t_0} e^{-\eta_t/2} \le \sum_{t > t_0} t^{-c/2} \le \frac12$$

whenever $t_0 \ge (2/a)^{1/a}$, where $a = \frac{c}{2} - 1$. This implies $R_T = \Omega(T)$. $\square$

### 3.2 A learning-rate schedule with near-optimal guarantees

The above negative result relies heavily on the assumption that $\eta_t > 2\log t$ holds from the very beginning. If we instead start off from a constant learning rate which we keep for a logarithmic number of rounds, then a logarithmic regret bound can be shown. Arguably, this results in a rather simplistic exploration scheme, which can be essentially seen as an explore-then-commit strategy (e.g., [13]). Despite its simplicity, this strategy can be shown to achieve near-optimal performance if the parameters are tuned as a function of the suboptimality gap $\Delta$ (although its regret scales at the suboptimal rate of $1/\Delta^2$ with this parameter). The following theorem (proved in Appendix A.1) states this performance guarantee.

**Theorem 2.** Assume the rewards of each arm are in $[0, 1]$ and let $\tau = \frac{16 e K \log T}{\Delta^2}$. Then the regret of Boltzmann exploration with learning rate $\eta_t = \mathbb{I}_{\{t < \tau\}} + \frac{\log(t\Delta^2)}{\Delta}\, \mathbb{I}_{\{t \ge \tau\}}$ satisfies $R_T \le \frac{16 e K \log T}{\Delta^2}$.
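A minimal sketch of this two-phase schedule is given below. It follows our reading of $\tau$ and $\eta_t$ as stated in Theorem 2, and the Bernoulli test problem and the concrete values of $K$, $T$, and $\Delta$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T, Delta = 2, 200_000, 0.3
mu = np.full(K, 0.5); mu[0] += Delta                 # arm 0 is optimal
tau = 16 * np.e * K * np.log(T) / Delta**2           # length of the constant-learning-rate phase

def eta(t):
    # Constant learning rate for t < tau, then the log(t * Delta^2) / Delta growth of Theorem 2.
    return 1.0 if t < tau else np.log(t * Delta**2) / Delta

mu_hat, n, regret = np.zeros(K), np.zeros(K), 0.0
for t in range(1, T + 1):
    logits = eta(t) * mu_hat
    p = np.exp(logits - logits.max()); p /= p.sum()
    arm = int(rng.choice(K, p=p))
    x = float(rng.random() < mu[arm])                # Bernoulli reward
    n[arm] += 1; mu_hat[arm] += (x - mu_hat[arm]) / n[arm]
    regret += mu.max() - mu[arm]
print(tau, regret)
```

The schedule is non-monotone in spirit: it deliberately stays flat for the first $\tau$ rounds and only then lets $\eta_t$ grow, which is exactly what Theorem 1 forbids from round one.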
## 4 Boltzmann exploration done right

We now turn to a variant of Boltzmann exploration that achieves near-optimal guarantees without prior knowledge of either $\Delta$ or $T$. Our approach is based on the observation that the distribution $p_{t,i} \propto \exp(\eta_t \widehat{\mu}_{t,i})$ can be equivalently specified by the rule $I_t = \arg\max_j \{\eta_t \widehat{\mu}_{t,j} + Z_{t,j}\}$, where $Z_{t,j}$ is a standard Gumbel random variable¹ drawn independently for each arm $j$ (see, e.g., Abernethy et al. [1] and the references therein). As we saw in the previous section, this scheme fails to guarantee consistency in general, as it does not capture the uncertainty of the reward estimates. We now propose a variant that takes this uncertainty into account by choosing different scaling factors for each perturbation. In particular, we will use the simple choice

$$\beta_{t,i} = \sqrt{\frac{C^2}{N_{t,i}}}$$

with some constant $C > 0$ that will be specified later. Our algorithm operates by independently drawing perturbations $Z_{t,i}$ from a standard Gumbel distribution for each arm $i$, then choosing action

$$I_{t+1} = \arg\max_i \left\{\widehat{\mu}_{t,i} + \beta_{t,i} Z_{t,i}\right\}. \qquad (4)$$

We refer to this algorithm as Boltzmann–Gumbel exploration, or, in short, BGE. Unfortunately, the probabilities $p_{t,i}$ no longer have a simple closed form; nevertheless, the algorithm is very straightforward to implement. Our main positive result is the following performance guarantee for the algorithm.²

**Theorem 3.** Assume that the rewards of each arm are $\sigma^2$-subgaussian and let $c > 0$ be arbitrary. Then, the regret of Boltzmann–Gumbel exploration satisfies

$$R_T \le \sum_{i>1} \frac{9C^2 \log_+^2\!\left(T\Delta_i^2/c^2\right)}{\Delta_i} + \sum_{i>1} \frac{c^2 e^{\gamma} + 18 C^2 e^{\sigma^2/2C^2}\left(1 + e^{-\gamma}\right)}{\Delta_i}.$$

In particular, choosing $C = \sigma$ and $c = \sigma$ guarantees a regret bound of

$$R_T = O\!\left(\sum_{i>1} \frac{\sigma^2 \log_+^2\!\left(T\Delta_i^2/\sigma^2\right)}{\Delta_i}\right).$$

Notice that, unlike any other algorithm that we are aware of, Boltzmann–Gumbel exploration continues to guarantee meaningful regret bounds even if the subgaussianity constant $\sigma$ is underestimated, although such misspecification is penalized exponentially in the true $\sigma^2$. A downside of our bound is that it shows a suboptimal dependence on the number of rounds $T$: it grows asymptotically as $\sum_{i>1} \log^2\!\left(T\Delta_i^2\right)/\Delta_i$, in contrast to the standard regret bounds for the UCB algorithm of Auer et al. [7] that grow as $\sum_{i>1} (\log T)/\Delta_i$. However, our guarantee improves on the distribution-independent regret bounds of UCB, which are of order $\sqrt{KT\log T}$. This is shown in the following corollary.

**Corollary 1.** Assume that the rewards of each arm are $\sigma^2$-subgaussian. Then, the regret of Boltzmann–Gumbel exploration with $C = \sigma$ satisfies $R_T \le 200\,\sigma\sqrt{KT}\log K$.

Notably, this bound shows optimal dependence on the number of rounds $T$, but is suboptimal in terms of the number of arms. To complement this upper bound, we also show that these bounds are tight in the sense that the $\log K$ factor cannot be removed.

**Theorem 4.** For any $K$ and $T$ such that $\sqrt{K/T}\,\log K \le 1$, there exists a bandit problem with rewards bounded in $[0, 1]$ where the regret of Boltzmann–Gumbel exploration with $C = 1$ is at least $R_T = \Omega\!\left(\sqrt{KT}\log K\right)$.

¹ The cumulative distribution function of a standard Gumbel random variable is $F(x) = \exp\!\left(-e^{-x+\gamma}\right)$, where $\gamma$ is the Euler–Mascheroni constant.
² We use the notation $\log_+(x) = \max\{0, \log x\}$.

The proofs can be found in Appendices A.5 and A.6. Note that more sophisticated policies are known to have better distribution-free bounds. The algorithm MOSS [4] achieves minimax-optimal $\sqrt{KT}$ distribution-free bounds, but distribution-dependent bounds of the form $(K/\Delta)\log(T\Delta^2)$, where $\Delta$ is the suboptimality gap. A variant of UCB using action elimination, due to Auer and Ortner [5], has regret $\sum_{i>1}\log\!\left(T\Delta_i^2\right)/\Delta_i$, corresponding to a $\sqrt{KT\log K}$ distribution-free bound. The same bounds are achieved by the Gaussian Thompson sampling algorithm of Agrawal and Goyal [2], given that the rewards are subgaussian.
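To make the algorithm concrete, here is a minimal sketch of Boltzmann–Gumbel exploration as defined in (4); the Bernoulli test problem, the initialization by one pull per arm, and the parameter values are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
K, T = 10, 50_000
mu = np.full(K, 0.5); mu[0] = 0.6                    # arm 0 optimal, gap 0.1
C = 0.5                                              # C = sigma; Bernoulli rewards are 1/4-subgaussian

mu_hat, n, regret = np.zeros(K), np.zeros(K), 0.0
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                                  # pull each arm once to initialize the estimates
    else:
        beta = np.sqrt(C**2 / n)                     # beta_{t,i} = sqrt(C^2 / N_{t,i})
        z = rng.gumbel(size=K)                       # NumPy's standard Gumbel; the paper's convention
                                                     # differs only by a constant shift of gamma
        arm = int(np.argmax(mu_hat + beta * z))      # rule (4)
    x = float(rng.random() < mu[arm])                # Bernoulli reward
    n[arm] += 1; mu_hat[arm] += (x - mu_hat[arm]) / n[arm]
    regret += mu.max() - mu[arm]
print(regret)
```

The Gumbel draws play the role of the softmax randomization in (1), but each arm's perturbation is scaled by its own uncertainty-dependent factor $\beta_{t,i}$.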
Finally, we provide a simple variant of our algorithm that can handle heavy-tailed rewards, intended here as reward distributions that are not subgaussian. We propose to use a technique due to Catoni [11] based on the influence function

$$\psi(x) = \begin{cases} \log\!\left(1 + x + x^2/2\right), & \text{for } x \ge 0, \\ -\log\!\left(1 - x + x^2/2\right), & \text{for } x \le 0. \end{cases}$$

Using this function, we define our estimates as

$$\widehat{\mu}_{t,i} = \beta_{t,i} \sum_{s=1}^{t} \mathbb{I}_{\{I_s = i\}}\, \psi\!\left(\frac{X_{s,i}}{\beta_{t,i} N_{t,i}}\right).$$

We prove the following result regarding Boltzmann–Gumbel exploration run with the above estimates.

**Theorem 5.** Assume that the second moment of the rewards of each arm is bounded uniformly as $\mathbb{E}\left[X_i^2\right] \le V$ and let $c > 0$ be arbitrary. Then, the regret of Boltzmann–Gumbel exploration satisfies

$$R_T \le \sum_{i>1} \frac{9C^2 \log_+^2\!\left(T\Delta_i^2/c^2\right)}{\Delta_i} + \sum_{i>1} \frac{c^2 e^{\gamma} + 18 C^2 e^{V/2C^2}\left(1 + e^{-\gamma}\right)}{\Delta_i}.$$

Notably, this bound coincides with that of Theorem 3, except that $\sigma^2$ is replaced by $V$. Thus, by following the proof of Corollary 1, we can show a distribution-independent regret bound of order $\sqrt{KT}\log K$.

## 5 Analysis

Let us now present the proofs of our main results concerning Boltzmann–Gumbel exploration, Theorems 3 and 5. Our analysis builds on several ideas from Agrawal and Goyal [2]. We first provide generic tools that are independent of the reward estimator and then move on to the specifics of each estimator.

We start by introducing some notation. We define $\widetilde{\mu}_{t,i} = \widehat{\mu}_{t,i} + \beta_{t,i} Z_{t,i}$, so that the algorithm can be simply written as $I_{t+1} = \arg\max_i \widetilde{\mu}_{t,i}$. Let $\mathcal{F}_{t-1}$ be the sigma-algebra generated by the actions taken by the learner and the realized rewards up to the beginning of round $t$. Let us fix thresholds $x_i, y_i$ satisfying $\mu_i \le x_i \le y_i \le \mu_1$ and define $q_{t,i} = \mathbb{P}\left[\widetilde{\mu}_{t,1} > y_i \,\middle|\, \mathcal{F}_{t-1}\right]$. Furthermore, we define the events $E^{\widehat{\mu}}_{t,i} = \{\widehat{\mu}_{t,i} \le x_i\}$ and $E^{\widetilde{\mu}}_{t,i} = \{\widetilde{\mu}_{t,i} \le y_i\}$. With this notation at hand, we can decompose the number of draws of any suboptimal arm $i$ as follows:

$$\mathbb{E}\left[N_{T,i}\right] \le \sum_{t=1}^{T} \mathbb{P}\left[I_t = i,\, E^{\widetilde{\mu}}_{t,i},\, E^{\widehat{\mu}}_{t,i}\right] + \sum_{t=1}^{T} \mathbb{P}\left[I_t = i,\, \overline{E^{\widetilde{\mu}}_{t,i}},\, E^{\widehat{\mu}}_{t,i}\right] + \sum_{t=1}^{T} \mathbb{P}\left[I_t = i,\, \overline{E^{\widehat{\mu}}_{t,i}}\right]. \qquad (5)$$

It remains to choose the thresholds $x_i$ and $y_i$ in a meaningful way: we pick $x_i = \mu_i + \frac{\Delta_i}{3}$ and $y_i = \mu_1 - \frac{\Delta_i}{3}$. The rest of the proof is devoted to bounding each term in Eq. (5). Intuitively, the individual terms capture the following events:

- The first term counts the number of times that, even though the estimated mean reward of arm $i$ is well-concentrated and the additional perturbation $Z_{t,i}$ is not too large, arm $i$ was drawn instead of the optimal arm 1. This happens when the optimal arm is poorly estimated or when the perturbation $Z_{t,1}$ is not large enough. Intuitively, this term measures the interaction between the perturbations $Z_{t,1}$ and the random fluctuations of the reward estimate $\widehat{\mu}_{t,1}$ around its true mean, and will be small if the perturbations tend to be large enough and the tail of the reward estimates is light enough.
- The second term counts the number of times that the mean reward of arm $i$ is well-estimated, but it ends up being drawn due to a large perturbation. This term can be bounded independently of the properties of the mean estimator and is small when the tail of the perturbation distribution is not too heavy.
- The last term counts the number of times that the reward estimate of arm $i$ is poorly concentrated. This term is independent of the perturbations and only depends on the properties of the reward estimator.

As we will see, the first and the last terms can be bounded in terms of the moment-generating function of the reward estimates, which makes subgaussian reward estimators particularly easy to treat.
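Before moving on to the lemmas, the following sketch illustrates the Catoni-style estimate defined above and why it helps with heavy tails; the Pareto-type noise and sample size are illustrative assumptions, and within BGE the estimate would simply replace the empirical mean in rule (4).

```python
import numpy as np

def psi(x):
    """Catoni's influence function: log(1 + x + x^2/2) for x >= 0, -log(1 - x + x^2/2) for x <= 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.log1p(x + x * x / 2), -np.log1p(-x + x * x / 2))

def catoni_estimate(rewards, C=1.0):
    """Robust estimate mu_hat = beta * sum_s psi(X_s / (beta * N)), with beta = sqrt(C^2 / N)."""
    rewards = np.asarray(rewards, dtype=float)
    N = len(rewards)
    beta = np.sqrt(C**2 / N)
    return beta * psi(rewards / (beta * N)).sum()

# Heavy-tailed sample with true mean 1.0: Pareto-type noise has finite variance but no subgaussian tail.
rng = np.random.default_rng(3)
x = 1.0 + rng.pareto(2.1, size=500) - 1.0 / 1.1      # the Lomax(2.1) noise has mean 1/1.1
print(np.mean(x), catoni_estimate(x))                # psi damps the contribution of rare, huge draws
```

For well-behaved samples $\psi$ is close to the identity, so the estimate essentially reduces to the empirical mean; its benefit shows up only on the occasional very large observations.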
We begin with the most standard part of our analysis: bounding the third term on the right-hand side of (5) in terms of the moment-generating function of the reward estimates.

**Lemma 1.** Let us fix any $i$ and define $\tau_k$ as the $k$-th time that arm $i$ was drawn. We have

$$\sum_{t=1}^{T} \mathbb{P}\left[I_t = i,\, \overline{E^{\widehat{\mu}}_{t,i}}\right] \le 1 + \sum_{k=1}^{T-1} \mathbb{E}\left[\exp\!\left(\frac{\widehat{\mu}_{\tau_k,i} - \mu_i}{\beta_{\tau_k,i}}\right)\right] e^{-\frac{\Delta_i}{3\beta_{\tau_k,i}}}.$$

Interestingly, our next key result shows that the first term can be bounded by a nearly identical expression.

**Lemma 2.** Let us define $\tau_k$ as the $k$-th time that arm 1 was drawn. For any $i$, we have

$$\sum_{t=1}^{T} \mathbb{P}\left[I_t = i,\, E^{\widetilde{\mu}}_{t,i},\, E^{\widehat{\mu}}_{t,i}\right] \le e^{-\gamma} \sum_{k=0}^{T-1} \mathbb{E}\left[\exp\!\left(\frac{\mu_1 - \widehat{\mu}_{\tau_k,1}}{\beta_{\tau_k,1}}\right)\right] e^{-\frac{\Delta_i}{3\beta_{\tau_k,1}}}.$$

It remains to bound the second term in Equation (5), which we do in the following lemma.

**Lemma 3.** For any $i \neq 1$ and any constant $c > 0$, we have

$$\sum_{t=1}^{T} \mathbb{P}\left[I_t = i,\, \overline{E^{\widetilde{\mu}}_{t,i}},\, E^{\widehat{\mu}}_{t,i}\right] \le \frac{9C^2 \log_+^2\!\left(T\Delta_i^2/c^2\right)}{\Delta_i^2} + \frac{c^2 e^{\gamma}}{\Delta_i^2}.$$

The proofs of these three lemmas are included in the supplementary material.

### 5.1 The proof of Theorem 3

For this section, we assume that the rewards are $\sigma$-subgaussian and that $\widehat{\mu}_{t,i}$ is the empirical-mean estimator. Building on the results of the previous section, observe that we are left with bounding the terms appearing in Lemmas 1 and 2. To this end, let us fix a $k$ and an $i$ and notice that, by the subgaussianity assumption on the rewards, the empirical mean $\widehat{\mu}_{\tau_k,i}$ is $\frac{\sigma}{\sqrt{k}}$-subgaussian (as $N_{\tau_k,i} = k$). In other words,

$$\mathbb{E}\left[e^{\alpha\left(\widehat{\mu}_{\tau_k,i} - \mu_i\right)}\right] \le e^{\alpha^2\sigma^2/2k}$$

holds for any $\alpha$. In particular, using the above formula with $\alpha = 1/\beta_{\tau_k,i} = \sqrt{k/C^2}$, we obtain

$$\mathbb{E}\left[\exp\!\left(\frac{\widehat{\mu}_{\tau_k,i} - \mu_i}{\beta_{\tau_k,i}}\right)\right] \le e^{\sigma^2/2C^2}.$$

Thus, the sum appearing in Lemma 1 can be bounded as

$$\sum_{k=1}^{T-1} \mathbb{E}\left[\exp\!\left(\frac{\widehat{\mu}_{\tau_k,i} - \mu_i}{\beta_{\tau_k,i}}\right)\right] e^{-\frac{\Delta_i\sqrt{k}}{3C}} \le e^{\sigma^2/2C^2} \sum_{k=1}^{T-1} e^{-\frac{\Delta_i\sqrt{k}}{3C}} \le \frac{18 C^2 e^{\sigma^2/2C^2}}{\Delta_i^2},$$

where the last step follows from the fact³ that $\sum_{k=1}^{\infty} e^{-c\sqrt{k}} \le \frac{2}{c^2}$ holds for all $c > 0$. The statement of Theorem 3 now follows from applying the same argument to the bound of Lemma 2, using Lemma 3, and the standard expression for the regret in Equation (2).

³ This can be easily seen by bounding the sum with an integral.

*Figure 1: Empirical performance of Boltzmann exploration variants, Boltzmann–Gumbel exploration, and UCB for (a) i.i.d. initialization and (b) malicious initialization, as a function of $C^2$. The dotted vertical line corresponds to the choice $C^2 = 1/4$ suggested by Theorem 3.*

### 5.2 The proof of Theorem 5

We now drop the subgaussian assumption on the rewards and consider reward distributions that are possibly heavy-tailed, but have bounded variance. The proof of Theorem 5 follows directly from the arguments in the previous subsection, combined with Proposition 2.1 of Catoni [11] (with $\theta = 0$), which guarantees the bound

$$\mathbb{E}\left[\exp\!\left(\frac{\mu_i - \widehat{\mu}_{t,i}}{\beta_{t,i}}\right) \,\middle|\, N_{t,i} = n\right] \le \exp\!\left(\frac{V}{2C^2}\right).$$

## 6 Experiments

This section concludes the paper by illustrating our theoretical results through some experiments, highlighting the limitations of Boltzmann exploration and contrasting it with the performance of Boltzmann–Gumbel exploration. We consider a stochastic multi-armed bandit problem with $K = 10$ arms, each yielding Bernoulli rewards with mean $\mu_i = 1/2$ for all suboptimal arms $i > 1$ and $\mu_1 = 1/2 + \Delta$ for the optimal arm. We set the horizon to $T = 10^6$ and the gap parameter to $\Delta = 0.01$. We compare three variants of Boltzmann exploration with inverse learning-rate parameters $\beta_t = C^2$ (BE-const), $\beta_t = C^2/\log t$ (BE-log), and $\beta_t = C^2/\sqrt{t}$ (BE-sqrt) for all $t$, and compare them with Boltzmann–Gumbel exploration (BGE) and UCB with exploration bonus $\sqrt{C^2\log(t)/N_{t,i}}$. We study two different scenarios: (a) all rewards drawn i.i.d. from the Bernoulli distributions with the means given above, and (b) the first $T_0 = 5{,}000$ rewards of arm 1 set to 0.
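A minimal sketch of this experimental setup is given below; it follows the description above, but the horizon is reduced for speed, and the initialization, tie-breaking, and the single value of $C^2$ shown are our own assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
K, Delta, T0 = 10, 0.01, 5_000
T = 10**5                                            # reduced from the paper's 10^6 to keep the sketch fast
mu = np.full(K, 0.5); mu[0] += Delta
C2 = 0.25                                            # C^2 = 1/4, the value suggested by Theorem 3

def run(policy, malicious=False):
    mu_hat, n, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, T + 1):
        if t <= K:
            arm = t - 1                              # initialization: one pull per arm
        elif policy == "BGE":
            arm = int(np.argmax(mu_hat + np.sqrt(C2 / n) * rng.gumbel(size=K)))
        elif policy == "UCB":
            arm = int(np.argmax(mu_hat + np.sqrt(C2 * np.log(t) / n)))
        else:                                        # Boltzmann exploration with inverse learning rate beta_t
            beta_t = {"BE-const": C2, "BE-log": C2 / np.log(t), "BE-sqrt": C2 / np.sqrt(t)}[policy]
            logits = mu_hat / beta_t
            p = np.exp(logits - logits.max()); p /= p.sum()
            arm = int(rng.choice(K, p=p))
        # Malicious initialization: arm 1 (index 0) returns zero reward during the first T0 rounds.
        x = 0.0 if (malicious and arm == 0 and t <= T0) else float(rng.random() < mu[arm])
        n[arm] += 1; mu_hat[arm] += (x - mu_hat[arm]) / n[arm]
        regret += mu.max() - mu[arm]
    return regret

for policy in ("BE-const", "BE-log", "BE-sqrt", "BGE", "UCB"):
    print(policy, run(policy, malicious=True))
```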
The latter scenario simulates the situation described in the proof of Theorem 1, and in particular exposes the weakness of Boltzmann exploration with increasing learning-rate parameters. The results shown in Figure 1 (a) and (b) indicate that while some variants of Boltzmann exploration may perform reasonably well when initial rewards take typical values and the parameters are chosen luckily, all standard versions fail to identify the optimal arm when the initial draws are not representative of the true mean (which happens with a small constant probability). On the other hand, UCB and Boltzmann–Gumbel exploration continue to perform well even under this unlikely event, as predicted by their respective theoretical guarantees. Notably, Boltzmann–Gumbel exploration performs comparably to UCB in this example (even slightly outperforming its competitor here), and performs notably well for the recommended parameter setting of $C^2 = \sigma^2 = 1/4$ (noting that Bernoulli random variables are $1/4$-subgaussian).

## Acknowledgements

Gábor Lugosi was supported by the Spanish Ministry of Economy and Competitiveness, Grant MTM2015-67304-P and FEDER, EU. Gergely Neu was supported by the UPFellows Fellowship (Marie Curie COFUND program n° 600387).

## References

[1] J. Abernethy, C. Lee, A. Sinha, and A. Tewari. Online linear optimization via smoothing. In M.-F. Balcan and Cs. Szepesvári, editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of JMLR Proceedings, pages 807–823. JMLR.org, 2014.
[2] S. Agrawal and N. Goyal. Further optimal regret bounds for Thompson sampling. In AISTATS, pages 99–107, 2013.
[3] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: A meta-algorithm and applications. Theory of Computing, 8:121–164, 2012.
[4] J.-Y. Audibert and S. Bubeck. Minimax policies for bandits games. In S. Dasgupta and A. Klivans, editors, Proceedings of the 22nd Annual Conference on Learning Theory. Omnipress, 2009.
[5] P. Auer and R. Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61:55–65, 2010. ISSN 0031-5303.
[6] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings of the 36th Annual Symposium on, pages 322–331. IEEE, 1995.
[7] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, May 2002. ISSN 0885-6125. doi: 10.1023/A:1013689704352. URL http://dx.doi.org/10.1023/A:1013689704352.
[8] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002. ISSN 0097-5397.
[9] S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Now Publishers Inc, 2012.
[10] O. Cappé, A. Garivier, O.-A. Maillard, R. Munos, G. Stoltz, et al. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013.
[11] O. Catoni. Challenging the empirical mean and empirical variance: A deviation study. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 48(4):1148–1185, 2012.
[12] N. Cesa-Bianchi and P. Fischer. Finite-time regret bounds for the multiarmed bandit problem. In ICML, pages 100–108, 1998.
[13] A. Garivier, E. Kaufmann, and T. Lattimore. On explore-then-commit strategies. In NIPS, 2016.
[14] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[15] E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In ALT'12, pages 199–213, 2012.
[16] V. Kuleshov and D. Precup. Algorithms for multi-armed bandit problems. arXiv preprint arXiv:1402.6028, 2014.
[17] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[18] I. Osband, B. Van Roy, and Z. Wen. Generalization and exploration via randomized value functions. 2016.
[19] T. Perkins and D. Precup. A convergent form of approximate policy iteration. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 1595–1602, Cambridge, MA, USA, 2003. MIT Press.
[20] Y. Seldin and A. Slivkins. One practical algorithm for both stochastic and adversarial bandits. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), pages 1287–1295, 2014.
[21] S. P. Singh, T. Jaakkola, M. L. Littman, and Cs. Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287–308, 2000. URL citeseer.ist.psu.edu/article/singh98convergence.html.
[22] R. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, pages 216–224, San Mateo, CA, 1990.
[23] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[24] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. Solla, T. Leen, and K. Müller, editors, Advances in Neural Information Processing Systems 12, pages 1057–1063, Cambridge, MA, USA, 1999. MIT Press.
[25] Cs. Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
[26] J. Vermorel and M. Mohri. Multi-armed bandit algorithms and empirical evaluation. In European Conference on Machine Learning, pages 437–448. Springer, 2005.