# Adaptive Monte Carlo via Bandit Allocation

James Neufeld (JNEUFELD@UALBERTA.CA), András György (GYORGY@UALBERTA.CA), Dale Schuurmans (DAES@UALBERTA.CA), Csaba Szepesvári (CSABA.SZEPESVARI@UALBERTA.CA)
Department of Computing Science, University of Alberta, Edmonton, AB, Canada T6G 2E8

**Abstract.** We consider the problem of sequentially choosing between a set of unbiased Monte Carlo estimators to minimize the mean-squared error (MSE) of a final combined estimate. By reducing this task to a stochastic multi-armed bandit problem, we show that well-developed allocation strategies can be used to achieve an MSE that approaches that of the best estimator chosen in retrospect. We then extend these developments to a scenario where alternative estimators have different, possibly stochastic, costs. The outcome is a new set of adaptive Monte Carlo strategies that provide stronger guarantees than previous approaches while offering practical advantages.

## 1. Introduction

Monte Carlo methods are a pervasive approach to approximating complex integrals and are widely deployed in all areas of science. Their widespread adoption has led to the development of dozens of specialized Monte Carlo methods for any given task, each with its own tunable parameters. Consequently, it is usually difficult for a practitioner to know which approach, and which parameter setting, will be most effective for a given problem.

In this paper we develop algorithms for sequentially allocating calls between a set of unbiased estimators to minimize the expected squared error (MSE) of a combined estimate. In particular, we formalize a new class of adaptive estimation problem: learning to combine Monte Carlo estimators. In this scenario, one is given a set of Monte Carlo estimators that can each approximate the expectation of some function of interest. We assume initially that each estimator is unbiased but has unknown variance. In practice, such estimators could include any unbiased method and/or variance reduction technique, such as unique instantiations of importance, stratified, or rejection sampling; antithetic variates; or control variates (Robert & Casella, 2005). The problem is to design a sequential allocation procedure that can interleave calls to the estimators and combine their outputs to produce a combined estimate whose MSE decreases as quickly as possible. To analyze the performance of such a meta-strategy we formalize the notion of MSE-regret: the time-normalized excess MSE of the combined estimate compared to the best estimator selected in hindsight, i.e., with knowledge of the distribution of estimates produced by each base estimator.

Our first main contribution is to show that this meta-task can be reduced to a stochastic multi-armed bandit problem, where bandit arms are identified with base estimators and the payoff of an arm is given by the negative square of its sampled estimate. In particular, we show that the MSE-regret of any meta-strategy is equal to its bandit-regret when the procedure is used to play in the corresponding bandit problem. As a consequence, we conclude that existing bandit algorithms, as well as their bounds on bandit-regret, can be immediately applied to achieve new results for adaptive Monte Carlo estimation.
Although the underlying reduction is quite simple, the resulting adaptive allocation strategies provide novel alternatives to traditional adaptive Monte Carlo strategies, while providing strong finite-sample performance guarantees.

Second, we consider a more general case where the alternative estimators require different (possibly random) costs to produce their sampled estimates. Here we develop a suitably designed bandit formulation that yields bounds on the MSE-regret for cost-aware estimation. We develop new algorithms for this generalized form of adaptive Monte Carlo, provide explicit bounds on their MSE-regret, and compare their performance to a state-of-the-art adaptive Monte Carlo method. By instantiating a set of viable base estimators and selecting from them algorithmically, rather than tuning parameters manually, we find that both computation and experimentation time can be reduced.

This work is closely related, and complementary, to work on adaptive stratified sampling (Carpentier & Munos, 2011), where a strategy is designed to allocate samples between fixed strata to achieve MSE-regret bounds relative to the best allocation proportion chosen in hindsight. Such work has since been extended to optimizing the number (Carpentier & Munos, 2012a) and structure (Carpentier & Munos, 2012b) of strata for differentiable functions. The method proposed in this paper, however, can be applied more broadly to any set of base estimation strategies, and potentially even in combination with these approaches.

## 2. Background on Bandit Problems

The multi-armed bandit (MAB) problem is a sequential allocation task in which an agent must choose an action at each step to maximize its long-term payoff, when only the payoff of the selected action can be observed (Cesa-Bianchi & Lugosi, 2006; Bubeck & Cesa-Bianchi, 2012). In the stochastic MAB problem (Robbins, 1952) the payoff for each action $k \in \{1, 2, \ldots, K\}$ is assumed to be generated independently and identically distributed (i.i.d.) from a fixed but unknown distribution $\nu_k$. The performance of an allocation policy can then be analyzed by defining the cumulative regret for any sequence of $n$ actions, given by

$$R_n := \max_{1 \le k \le K} \mathbb{E}\Big[\sum_{t=1}^{n} X_{k,t}\Big] - \mathbb{E}\Big[\sum_{t=1}^{n} X_{I_t, T_{I_t}(t)}\Big], \tag{1}$$

where $X_{k,t} \in \mathbb{R}$ is the random variable giving the $t$th payoff of action $k$, $I_t \in \{1, \ldots, K\}$ denotes the action taken by the policy at time step $t$, and $T_k(t) := \sum_{s=1}^{t} \mathbb{I}\{I_s = k\}$ denotes the number of times action $k$ is chosen by the policy up to time $t$. Here, $\mathbb{I}\{p\}$ is the indicator function, equal to 1 if the predicate $p$ is true and 0 otherwise. The objective of the agent is to maximize the total payoff, or equivalently to minimize the cumulative regret. By rearranging (1) and conditioning, the regret can be rewritten as

$$R_n = \sum_{k=1}^{K} \mathbb{E}[T_k(n)]\,(\mu^* - \mu_k), \tag{2}$$

where $\mu_k := \mathbb{E}[X_{k,t}]$ and $\mu^* := \max_{j=1,\ldots,K} \mu_j$. The analysis of the stochastic MAB problem was pioneered by Lai & Robbins (1985), who showed that, when the payoff distributions are defined by a single parameter, the asymptotic regret of any sub-polynomially consistent policy (i.e., a policy that selects non-optimal actions only sub-polynomially many times in the time horizon) is lower bounded by $\Omega(\log n)$. In particular, for Bernoulli payoffs,

$$\liminf_{n \to \infty} \frac{R_n}{\log n} \ge \sum_{k: \Delta_k > 0} \frac{\Delta_k}{\mathrm{KL}(\mu_k, \mu^*)}, \tag{3}$$

where $\Delta_k := \mu^* - \mu_k$ and $\mathrm{KL}(p, q) := p \log(p/q) + (1-p) \log\big(\tfrac{1-p}{1-q}\big)$ for $p, q \in [0, 1]$.
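For a concrete feel for (3), the following small Python sketch (not from the paper) computes the Bernoulli KL divergence and the coefficient of $\log n$ in the lower bound for a made-up set of arm means.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL(p, q) between Bernoulli distributions, as used in the lower bound (3)."""
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def lai_robbins_constant(means):
    """Coefficient of log(n) in the asymptotic lower bound (3) for Bernoulli arms."""
    mu_star = max(means)
    return sum((mu_star - mu) / bernoulli_kl(mu, mu_star)
               for mu in means if mu < mu_star)

# Example with three hypothetical Bernoulli arms: the closer a suboptimal mean is
# to the best mean, the more that arm contributes to the unavoidable regret.
print(lai_robbins_constant([0.5, 0.4, 0.1]))
```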
Lai & Robbins (1985) also presented an algorithm based on upper confidence bounds (UCB), which achieves a regret asymptotically matching the lower bound (for certain parametric distributions). Later, Auer et al. (2002a) proposed UCB1 (Algorithm 1), which broadens the practical use of UCB by dropping the requirement that payoff distributions fit a particular parametric form. Instead, one need only make the much weaker assumption that the rewards are bounded; in particular, we let $X_{k,t} \in [0,1]$. Auer et al. (2002a) proved that, for any finite number of plays $n$, UCB1's regret is bounded by

$$R_n \le \sum_{k:\Delta_k>0} \frac{8\log n}{\Delta_k} + \Big(1 + \frac{\pi^2}{3}\Big)\sum_{k=1}^{K}\Delta_k. \tag{4}$$

**Algorithm 1** UCB1 (Auer et al., 2002a)
1: for $k \in \{1, \ldots, K\}$
2: &nbsp;&nbsp;Play $k$, observe $X_{k,1}$, set $\hat\mu_{k,1} := X_{k,1}$ and $T_k(1) := 1$.
3: end for
4: for $t \in \{K+1, K+2, \ldots\}$
5: &nbsp;&nbsp;Play the action $k$ that maximizes $\hat\mu_{j,t-1} + \sqrt{\tfrac{2\log t}{T_j(t-1)}}$; set $T_k(t) = T_k(t-1)+1$ and $T_j(t) = T_j(t-1)$ for $j \ne k$; observe the payoff $X_{k,T_k(t)}$; compute $\hat\mu_{k,t} = (1 - 1/T_k(t))\,\hat\mu_{k,t-1} + X_{k,T_k(t)}/T_k(t)$.
6: end for

Various improvements of the UCB1 algorithm have since been proposed. One approach of particular interest is the UCB-V algorithm (Audibert et al., 2009), which takes the empirical variances into account when constructing confidence bounds. Specifically, UCB-V uses the bound

$$\hat\mu_{k,T_k(t-1)} + \sqrt{\frac{2\,V_{k,T_k(t-1)}\,\mathcal{E}_{T_k(t-1),t}}{T_k(t-1)}} + c\,\frac{3\,\mathcal{E}_{T_k(t-1),t}}{T_k(t-1)},$$

where $V_{k,s}$ denotes the empirical variance of arm $k$'s payoffs after $s$ pulls, $c > 0$, and $\mathcal{E}_{s,t}$ is an exploration function required to be non-decreasing in $s$ or $t$ (typically $\mathcal{E}_{s,t} = \zeta\log t$ for a fixed constant $\zeta > 0$). The UCB-V procedure can then be constructed by substituting the above confidence bound into Algorithm 1, which yields a regret bound that scales with the true variance of each arm:

$$R_n \le c_\zeta \sum_{k:\Delta_k>0}\Big(\frac{\mathbb{V}(X_{k,1})}{\Delta_k} + 2\Big)\log n. \tag{5}$$

Here $c_\zeta$ is a constant relating to $\zeta$ and $c$. In the worst case, when $\mathbb{V}(X_{k,1}) = 1/4$ and $\Delta_k = 1/2$, this bound is slightly worse than UCB1's bound; however, it is usually better in practice, particularly if some arm $k$ has small $\Delta_k$ and $\mathbb{V}(X_{k,1})$.

A more recent algorithm is KL-UCB (Cappé et al., 2013), where the confidence bound for arm $k$ is obtained by solving

$$\sup\big\{\mu : T_k(t)\,\mathrm{KL}\big(\hat\mu_{k,t}, \mu\big) \le f(t)\big\}$$

for a chosen increasing function $f(\cdot)$, which can be done efficiently since $\mathrm{KL}(p,\cdot)$ is smooth and increasing on $[p,1]$. By choosing $f(t) = \log t + 3\log\log t$ for $t \ge 3$ (and $f(1) = f(2) = f(3)$), KL-UCB achieves, for $n \ge 3$, the regret bound

$$R_n \le \sum_{k:\Delta_k>0}\frac{\Delta_k}{\mathrm{KL}(\mu_k,\mu^*)}\log n + O\big(\sqrt{\log n}\big), \tag{6}$$

with explicit constants for the higher-order terms (Cappé et al., 2013, Corollary 1). Apart from the higher-order terms, this bound matches the lower bound (3). In general, KL-UCB is expected to be better than UCB-V except for large sample sizes and small variances. Note that, given any set of UCB algorithms, one can apply the tightest upper confidence bound from the set, via the union bound, at the price of a small additional constant in the regret.

Another approach that has received significant recent interest is Thompson sampling (TS) (Thompson, 1933): a Bayesian method where actions are chosen randomly in proportion to the posterior probability that their mean payoff is optimal. TS is known to outperform UCB variants when payoffs are Bernoulli distributed (Chapelle & Li, 2011; May & Leslie, 2011).

**Algorithm 2** Thompson Sampling (Agrawal & Goyal, 2012)
Require: prior parameters $\alpha$ and $\beta$
Initialize: $S_{1:K}(0) := 0$, $F_{1:K}(0) := 0$
for $t \in \{1, 2, \ldots\}$
&nbsp;&nbsp;Sample $\theta_{t,k} \sim \mathrm{Beta}(S_k(t-1)+\alpha,\ F_k(t-1)+\beta)$ for $k = 1, \ldots, K$
&nbsp;&nbsp;Play $k = \arg\max_{j=1,\ldots,K}\theta_{t,j}$; observe $X_t \in [0,1]$
&nbsp;&nbsp;Sample $\hat X_t \sim \mathrm{Bernoulli}(X_t)$
&nbsp;&nbsp;if $\hat X_t = 1$ then set $S_k(t) = S_k(t-1)+1$ else set $F_k(t) = F_k(t-1)+1$
end for
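As an illustration of the two algorithm boxes above, here is a minimal Python sketch of UCB1 (Algorithm 1) and of Thompson sampling with Bernoulli resampling (Algorithm 2). The `pull(k)` callback and the toy Bernoulli arms at the end are assumptions introduced for the example, not part of the paper.

```python
import math
import random

def ucb1(pull, K, n):
    """Algorithm 1 (UCB1): payoffs returned by pull(k) are assumed to lie in [0, 1]."""
    counts = [0] * K      # T_k(t), number of pulls of each arm
    means = [0.0] * K     # empirical mean payoff of each arm
    for k in range(K):    # rounds 1..K: play each arm once
        counts[k], means[k] = 1, pull(k)
    for t in range(K + 1, n + 1):
        # Pick the arm maximizing empirical mean + sqrt(2 log t / T_k(t-1)).
        k = max(range(K), key=lambda j: means[j] + math.sqrt(2.0 * math.log(t) / counts[j]))
        x = pull(k)
        counts[k] += 1
        means[k] += (x - means[k]) / counts[k]   # incremental mean update
    return counts, means

def thompson(pull, K, n, alpha=1.0, beta=1.0):
    """Algorithm 2: Thompson sampling with Bernoulli resampling of [0, 1] payoffs."""
    S, F = [0] * K, [0] * K                      # pseudo success/failure counts
    for _ in range(n):
        theta = [random.betavariate(S[k] + alpha, F[k] + beta) for k in range(K)]
        k = max(range(K), key=lambda j: theta[j])
        x = pull(k)                              # real-valued payoff in [0, 1]
        if random.random() < x:                  # resample a Bernoulli(x) outcome
            S[k] += 1
        else:
            F[k] += 1
    return S, F

# Toy usage: two Bernoulli arms with made-up means 0.4 and 0.6.
arms = [0.4, 0.6]
pull = lambda k: float(random.random() < arms[k])
print(ucb1(pull, K=2, n=5000)[0])    # pull counts should concentrate on the second arm
print(thompson(pull, K=2, n=5000))
```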
Indeed, the finite-time regret of TS under Bernoulli payoff distributions closely matches the lower bound (3) (Kaufmann et al., 2012):

$$R_n \le (1+\varepsilon)\sum_{k:\Delta_k>0}\frac{\Delta_k\big(\log n + \log\log n\big)}{\mathrm{KL}(\mu_k,\mu^*)} + C(\varepsilon,\mu_{1:K}),$$

for every $\varepsilon > 0$, where $C(\varepsilon,\mu_{1:K})$ is a problem-dependent constant. However, since it is not possible to have Bernoulli-distributed payoffs with the same mean and different variances, this analysis is not directly applicable to our setting. Instead, we consider a more general version of Thompson sampling (Agrawal & Goyal, 2012) that converts real-valued payoffs to Bernoulli-distributed ones through a resampling step (Algorithm 2), and which has been shown to obtain

$$R_n \le O\bigg(\Big(\sum_{k:\Delta_k>0}\frac{1}{\Delta_k^2}\Big)^{2}\log n\bigg). \tag{7}$$

## 3. Combining Monte Carlo Estimators

We now formalize the main problem we consider in this paper. Assume we are given a finite number of Monte Carlo estimators, $k = 1, \ldots, K$, where base estimator $k$ produces a sequence of real-valued random variables $(X_{k,t})_{t=1,2,\ldots}$ whose mean converges to the unknown target quantity $\mu \in \mathbb{R}$. Observations from the different estimators are assumed to be independent of each other. We assume, initially, that drawing a sample from each estimator takes constant time, hence the estimators differ only in how fast their respective sample means $\bar X_{k,n} = \frac{1}{n}\sum_{t=1}^{n} X_{k,t}$ converge to $\mu$.

The goal is to design a sequential estimation procedure that works in discrete time steps: in each round $t = 1, 2, \ldots$, based on the previous observations, the procedure selects one estimator $I_t \in \{1, \ldots, K\}$, whose observation is used by an outer procedure to update the estimate $\hat\mu_t \in \mathbb{R}$ based on the values observed so far. As is common in the Monte Carlo literature, we evaluate accuracy by the mean-squared error (MSE). That is, we define the loss of the sequential method $A$ at the end of round $n$ by $L_n(A) = \mathbb{E}\big[(\hat\mu_n - \mu)^2\big]$. A reasonable goal is then to compare the loss $L_{k,n} = \mathbb{E}\big[(\bar X_{k,n} - \mu)^2\big]$ of each base estimator to the loss of $A$. In particular, we propose to evaluate the performance of $A$ by the (normalized) regret

$$R_n(A) = n^2\Big(L_n(A) - \min_{1\le k\le K} L_{k,n}\Big),$$

which measures the excess loss of $A$ due to its initial ignorance of estimator quality. Implicit in this definition is the assumption that $A$'s time to select the next estimator is negligible compared to the time to draw an observation. Note also that the excess loss is multiplied by $n^2$, which ensures that, in standard settings where $L_{k,n} \sim 1/n$, a sublinear regret (i.e., $|R_n(A)|/n \to 0$ as $n \to \infty$) implies that the loss of $A$ asymptotically matches that of the best estimator.

In the next two sections we adopt a simple strategy for combining the values returned from the base estimators: $A$ simply returns their (unweighted) average as the estimate $\hat\mu_n$ of $\mu$. A more sophisticated approach would be to weight each of these samples inversely proportionally to their respective (sample) variances. However, if the adaptive procedure can quickly identify and ignore highly suboptimal arms, the savings from the weighted estimator diminish rapidly. Interestingly, this argument does not immediately translate to the nonuniform-cost case considered in Section 5, as will be shown empirically in Section 6.

## 4. Combining Unbiased I.I.D. Estimators

Our main assumption in this section will be the following:

**Assumption 4.1.** Each estimator produces a sequence of i.i.d. random observations with common mean $\mu$ and finite variance; values from different estimators are independent.

Let $\psi_k$ denote the distribution of samples from estimator $k$. Note that $\Psi = (\psi_k)_{1\le k\le K}$ completely determines the sequential estimation problem.
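For concreteness, here is a toy family of base estimators satisfying Assumption 4.1 (my own illustration, not from the paper): importance samplers for the same Gaussian expectation $\mu = \mathbb{E}[Z^2] = 1$ with $Z \sim N(0,1)$, which differ only in the width of their proposal, so they share the mean $\mu$ but have different variances.

```python
import math
import random

# Toy base estimators for mu = E[f(Z)], Z ~ N(0, 1), here with f(z) = z^2 (so mu = 1).
# Estimator k is an importance sampler with proposal N(0, scale_k^2): all are unbiased
# for mu but have different, finite variances, so Assumption 4.1 holds.
def make_importance_sampler(scale, f=lambda z: z * z):
    def draw():
        z = random.gauss(0.0, scale)          # sample from the proposal N(0, scale^2)
        # log importance weight = log N(z; 0, 1) - log N(z; 0, scale^2)
        log_w = -0.5 * z * z + 0.5 * (z / scale) ** 2 + math.log(scale)
        return f(z) * math.exp(log_w)         # unbiased estimate of mu
    return draw

# Made-up proposal widths; all give finite second moments for this f.
estimators = [make_importance_sampler(s) for s in (0.8, 1.0, 2.0, 4.0)]
```

Any of the bandit strategies from Section 2 can then be used to allocate samples among such estimators, as sketched after Theorem 4 below.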
Since the samples coming from estimator $k$ are i.i.d., we have $\mathbb{V}(X_{k,1}) = \mathbb{V}(X_{k,t})$. Let $V_k = \mathbb{V}(X_{k,1})$ and $V^* = \min_{1\le k\le K} V_k$. Furthermore, $L_{k,t} = V_k/t$, hence $\min_{1\le k\le K} L_{k,t} = V^*/t$. We then have the first main result of this section.

**Theorem 1** (Regret Identity). Consider $K$ estimators for which Assumption 4.1 holds, and let $A$ be an arbitrary allocation procedure. Then, for any $n \ge 1$, the MSE-regret of the estimation procedure $A_{\mathrm{avg}}$, which estimates $\mu$ using the sample mean of the observations obtained by $A$, satisfies

$$R_n(A_{\mathrm{avg}}) = \sum_{k=1}^{K}\mathbb{E}[T_k(n)]\,(V_k - V^*). \tag{8}$$

The proof follows from a simple calculation given in Appendix A. Essentially, one can rewrite the loss as $L_n(A) = \frac{1}{n^2}\,\mathbb{E}\big[\sum_{k=1}^{K} S_{k,n}^2 + 2\sum_{k\ne j} S_{k,n}S_{j,n}\big]$, where $S_{k,n}$ is the centered sum of observations for arm $k$. The cross terms can be shown to cancel by independence, and Wald's second identity with some algebra gives the result.

The tight connection between sequential estimation and bandit problems revealed by (8) allows one to reduce sequential estimation to the design of bandit strategies and vice versa. Furthermore, regret bounds transfer both ways.

**Theorem 2** (Reduction). Let Assumption 4.1 hold for $\Psi$. Define a corresponding bandit problem $(\nu_k)$ by assigning $\nu_k$ to be the distribution of $-X_{k,1}^2$. Given an arbitrary allocation strategy $A$, let $\mathrm{Bandit}(A)$ be the bandit strategy that consults $A$ to select the next arm after obtaining the reward $Y_t$ (assumed nonpositive), based on feeding the observations $(-Y_t)^{1/2}$ to $A$ and copying $A$'s choices. Then the bandit-regret of $\mathrm{Bandit}(A)$ in the bandit problem $(\nu_k)$ is the same as the MSE-regret of $A$ in the estimation problem $\Psi$. Conversely, given an arbitrary bandit strategy $B$, let $\mathrm{MC}(B)$ be the allocation strategy that consults $B$ to select the next estimator after observing $Y_t$, based on feeding the rewards $-Y_t^2$ to $B$ and copying $B$'s choices. Then the MSE-regret of $\mathrm{MC}(B)$ in the estimation problem $\Psi$ is the same as the bandit-regret of $B$ in the bandit problem $(\nu_k)$ (where $\mathrm{MC}(B)$ uses the average of the observations as its estimate).

*Proof of Theorem 2.* The result follows from Theorem 1 since $V_k = \mathbb{E}[X_{k,1}^2] - \mu^2$ and $V^* = \mathbb{E}[X_{k^*,1}^2] - \mu^2$, where $k^*$ is the lowest-variance estimator, hence $V_k - V^* = \mathbb{E}[X_{k,1}^2] - \min_{1\le k'\le K}\mathbb{E}[X_{k',1}^2]$. Furthermore, in the bandit problem $(\nu_k)$ the regret of a procedure that chooses arm $k$ a total of $T_k(n)$ times is $\sum_{k=1}^{K}\mathbb{E}[T_k(n)]\,\Delta_k$, where $\Delta_k = \max_{1\le k'\le K}\mathbb{E}[-X_{k',1}^2] - \mathbb{E}[-X_{k,1}^2] = V_k - V^*$.

From this theorem one can also derive a lower bound. First, let $\mathbb{V}(\psi)$ denote the variance of $X \sim \psi$ and let $V^*(\Psi) = \min_{1\le k\le K}\mathbb{V}(\psi_k)$. For a family $\mathcal F$ of distributions over the reals, let $D_{\inf}(\psi, v, \mathcal F) = \inf_{\psi'\in\mathcal F:\,\mathbb{V}(\psi') < v}\mathrm{KL}(\psi, \psi')$.

**Theorem 3** (Lower Bound). Let $A$ be an allocation strategy that is consistent in the sense that $\mathbb{E}[T_k(n)] = o(n^a)$ for every $a > 0$ and every $k$ such that $\mathbb{V}(\psi_k) > V^*(\Psi)$. Then, for any $\Psi \in \mathcal F^K$ where not all variances are equal and $0 < D_{\inf}(\psi_k, V^*(\Psi), \mathcal F) < \infty$ holds whenever $\mathbb{V}(\psi_k) > V^*(\Psi)$, we have

$$\liminf_{n\to\infty}\frac{R_n(A,\Psi)}{\log n} \ge \sum_{k:\,\mathbb{V}(\psi_k)>V^*(\Psi)}\frac{\mathbb{V}(\psi_k) - V^*(\Psi)}{D_{\inf}(\psi_k, V^*(\Psi), \mathcal F)}.$$

*Proof.* The result follows from Theorem 2 and (Burnetas & Katehakis, 1996, Proposition 1).

Using Theorem 2 we can also establish bounds on the MSE-regret for the algorithms mentioned in Section 2.

**Theorem 4** (MSE-Regret Upper Bounds). Let Assumption 4.1 hold for $\Psi = (\psi_k)$, where (for simplicity) we assume each $\psi_k$ is supported on $[0,1]$. Then, after $n$ rounds, $\mathrm{MC}(B)$ achieves the MSE-regret bound (4) when using $B =$ UCB1; (5) when using $B =$ UCB-V with $c_\zeta = 10$; (6) when using $B =$ KL-UCB; and (7) when using $B =$ TS, where now $\Delta_k = V_k - V^*$. (To apply the regret bounds from Section 2, one feeds the bandit algorithms the rewards $1 - Y_t^2$ instead of $-Y_t^2$ in Theorem 2; this modification has no effect on the regret.)

Additionally, due to Theorem 2, one can also obtain bounds on the minimax MSE-regret by exploiting the lower bound for bandits in (Auer et al., 2002b).
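The reduction $\mathrm{MC}(B)$ of Theorem 2 is straightforward to implement. The sketch below (my own illustration) uses a compact UCB1 in the role of $B$, feeds it the reward $1 - Y_t^2$ as noted after Theorem 4, and returns the unweighted running average; the two clipped-Gaussian base estimators at the end are made up for the example.

```python
import math
import random

class UCB1:
    """Compact UCB1 (cf. Algorithm 1), included only to keep this sketch self-contained."""
    def __init__(self, K):
        self.K, self.t = K, 0
        self.counts, self.means = [0] * K, [0.0] * K
    def select(self):
        self.t += 1
        for k in range(self.K):
            if self.counts[k] == 0:      # play every arm once first
                return k
        return max(range(self.K),
                   key=lambda k: self.means[k] + math.sqrt(2.0 * math.log(self.t) / self.counts[k]))
    def update(self, k, reward):
        self.counts[k] += 1
        self.means[k] += (reward - self.means[k]) / self.counts[k]

def mc(bandit, draw, n):
    """MC(B): after observing Y_t from the chosen estimator, feed B the reward
    1 - Y_t^2 (valid for observations in [0, 1]) and return the unweighted average."""
    total = 0.0
    for _ in range(n):
        k = bandit.select()              # B picks the next base estimator
        y = draw(k)                      # one observation from estimator k
        bandit.update(k, 1.0 - y * y)    # bandit reward derived from the observation
        total += y
    return total / n

# Two made-up unbiased estimators of mu = 0.5 with different variances; clipping to
# [0, 1] is symmetric around 0.5 here, so the mean is preserved.
draw = lambda k: min(max(random.gauss(0.5, (0.05, 0.3)[k]), 0.0), 1.0)
print(mc(UCB1(K=2), draw, n=50_000))
```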
In particular, the UCB-based bandit algorithms above can all be shown to achieve the minimax rate $O(\sqrt{Kn})$ up to logarithmic factors, immediately implying that the minimax MSE-regret of $\mathrm{MC}(B)$ for $B \in \{\text{UCB1}, \text{UCB-V}, \text{KL-UCB}\}$ is of order $\tilde O(\sqrt{Kn})$, i.e.,

$$L_n(\mathrm{MC}(B)) - L_n^* = \tilde O\big(K^{1/2}\, n^{-(1+1/2)}\big),$$

where $L_n^* = \min_{1\le k\le K} L_{k,n}$ and $\tilde O(\cdot)$ denotes the order up to logarithmic factors; such factors can be removed by exploiting MOSS (Audibert & Bubeck, 2010). Appendix B provides a discussion of how alternative ranges of the observations can be handled, and how the above methods can still be applied when the payoff distribution is unbounded but satisfies moment conditions.

## 5. Non-uniform Estimator Costs

Next, we consider the case when the base estimators can take different amounts of time to generate observations. A consequence of non-uniform estimator times, which we refer to as non-uniform costs, is that the definitions of the loss and regret must be modified accordingly. Intuitively, if an estimator takes more time to produce an observation, it is less useful than another estimator that produces observations with (say) identical variance but in less time.

To develop an appropriate notion of regret for this case, we introduce some additional notation. Let $D_{k,m}$ denote the time needed by estimator $k$ to produce its $m$th observation, $X_{k,m}$. As before, we let $I_m \in \{1,\ldots,K\}$ denote the index of the estimator that $A$ chooses in round $m$. Let $J_m$ denote the time when $A$ observes the $m$th sample $Y_m = X_{I_m, T_{I_m}(m)}$; thus $J_1 = D_{I_1,1}$, $J_2 = J_1 + D_{I_2,T_{I_2}(2)}$, and in general $J_m = J_{m-1} + D_{I_m,T_{I_m}(m)} = \sum_{s=1}^{m} D_{I_s,T_{I_s}(s)}$. For convenience, define $J_0 = 0$. Note that round $m$ starts at time $J_{m-1}$ with $A$ choosing an estimator, and finishes at time $J_m$ when the observation is received and $A$ (instantaneously) updates its estimate. Thus, at time $J_m$ a new estimate $\hat\mu_m$ becomes available: the estimate is "renewed". Let $\hat\mu(t)$ denote the estimate available at time $t \ge 0$. Assuming $A$ produces a default estimate $\hat\mu_0$ before the first observation, we have $\hat\mu(t) = \hat\mu_0$ on $[0, J_1)$, $\hat\mu(t) = \hat\mu_1$ on $[J_1, J_2)$, and so on. If $N(t)$ denotes the round index at time $t$ (i.e., $N(t)=1$ on $[0,J_1)$, $N(t)=2$ on $[J_1,J_2)$, etc.), then $\hat\mu(t) = \hat\mu_{N(t)-1}$. The MSE of $A$ at time $t \ge 0$ is $L(A,t) = \mathbb{E}\big[(\hat\mu(t)-\mu)^2\big]$.

By comparison, the estimate for a single estimator $k$ at time $t$ is

$$\hat\mu_k(t) = \mathbb{I}\{N_k(t) > 1\}\,\frac{\sum_{m=1}^{N_k(t)-1} X_{k,m}}{N_k(t)-1},$$

where $N_k(t) = 1$ on $[0, D_{k,1})$, $N_k(t) = 2$ on $[D_{k,1}, D_{k,1}+D_{k,2})$, and so on. We set $\hat\mu_k(t) = 0$ on $[0, D_{k,1})$ so that $\hat\mu_k(t)$ is well defined there. Thus, at time $t \ge 0$ the MSE of estimator $k$ is $L_k(t) = \mathbb{E}\big[(\hat\mu_k(t)-\mu)^2\big]$. Given these definitions, it is natural to define the regret as

$$R(A,t) = t^2\Big(L(A,t) - \min_{1\le k\le K} L_k(t)\Big).$$

As before, the $t^2$ scaling is chosen so that, under the condition that $L_k(t) \sim 1/t$, a sublinear regret implies that $A$ is "learning". Note that this definition generalizes the previous one: if $D_{k,m} = 1$ for all $k, m$, then $R_n(A) = R(A, n)$. In this section we make the following assumption.

**Assumption 5.1.** For each $k$, $(X_{k,m}, D_{k,m})_{m=1,2,\ldots}$ is an i.i.d. sequence such that $\mathbb{E}[X_{k,1}] = \mu$, $V_k := \mathbb{V}(X_{k,1}) < \infty$, $P(D_{k,m} > 0) = 1$, and $\delta_k := \mathbb{E}[D_{k,1}] = \mathbb{E}[D_{k,m}] < \infty$. Furthermore, we assume that the sequences for different $k$ are independent of each other.

Note that Assumption 5.1 allows $D_{k,m}$ to be a deterministic value, a case that holds when the estimators use deterministic algorithms to produce observations.
Another situation arises when $D_{k,m}$ is stochastic (i.e., the estimator uses a randomized algorithm) and $(X_{k,m}, D_{k,m})$ are correlated, in which case $\hat\mu_k(t)$ may be a biased estimate of $\mu$. However, if $(X_{k,m})_m$ and $(D_{k,m})_m$ are independent and $P(N_k(t) > 1) = 1$, then $\hat\mu_k(t)$ is unbiased. Indeed, in such a case $(N_k(t))_t$ is independent of the partial sums $\big(\sum_{m=1}^{n} X_{k,m}\big)_n$, hence

$$\mathbb{E}[\hat\mu_k(t)] = \mathbb{E}\bigg[\frac{\sum_{m=1}^{N_k(t)-1} X_{k,m}}{N_k(t)-1}\bigg] = \sum_{n=2}^{\infty} P(N_k(t)=n)\,\mathbb{E}\bigg[\frac{\sum_{m=1}^{n-1} X_{k,m}}{n-1}\,\bigg|\,N_k(t)=n\bigg] = P(N_k(t)>1)\,\mu = \mu,$$

because $\hat\mu_k(t) = 0$ when $N_k(t) \le 1$.

Using Assumption 5.1, a standard argument for renewal reward processes gives $L_k(t) \sim V_k/(t/\delta_k) = V_k\delta_k/t$, where $f(t) \sim g(t)$ means $\lim_{t\to\infty} f(t)/g(t) = 1$. Intuitively, estimator $k$ will produce approximately $t/\delta_k$ independent observations during $[0,t)$; hence, the variance of their average is approximately $V_k/(t/\delta_k)$ (made more precise in the proof of Theorem 5). This implies $\min_{1\le k\le K} L_k(t) \sim \min_{1\le k\le K} \delta_k V_k/t$. Thus, any allocation strategy $A$ competing with the best estimator must draw most of its observations from arms $k$ satisfying $\delta_k V_k = \delta^* V^* := \min_{1\le k\le K}\delta_k V_k$. For simplicity, we assume $k^* = \arg\min_{1\le k\le K}\delta_k V_k$ is unique, with $\delta^* = \delta_{k^*}$ and $V^* = V_{k^*}$.

As before, we consider adaptive strategies that estimate $\mu$ using the mean of the observations: $\hat\mu_m = S_m/m$, where $S_m := \sum_{s=1}^{m} Y_s$. Hence, the estimate at time $t \ge J_1$ is

$$\hat\mu(t) = \frac{S(t)}{N(t)-1}, \quad\text{where } S(t) = S_{N(t)-1}. \tag{9}$$

Our aim is to bound the regret of the overall algorithm by bounding the number of times the allocation strategy chooses suboptimal estimators. We do so by generalizing (2) to the nonuniform-cost case, but unlike the equality obtained in Theorem 1, here we provide an upper bound.

**Theorem 5.** Let Assumption 5.1 hold and assume that the $(X_{k,m})$ are bounded and $k^*$ is unique. Let the estimate of $A$ at time $t$ be defined by the sample mean $\hat\mu(t)$. Let $t > 0$ be such that $\mathbb{E}[N(t)-1] > 0$ and $\mathbb{E}[N_{k^*}(t)] > 0$, and assume that for any $k \ne k^*$, $\mathbb{E}[T_k(N(t))] \le f(t)$ for some $f : (0,\infty) \to [1,\infty)$ such that $f(t) \le c_f t$ for some $c_f > 0$ and any $t > 0$. Assume furthermore that $P(D_{k^*,1} > t) \le C_D t^{-2}$ and $\mathbb{E}[N_{k^*}(t)^2] \le C_N t^2$ for all $t > 0$. Then, for any $c < \sqrt{t/(8\delta_{\max})}$, where $\delta_{\max} = \max_k \delta_k$, the regret of $A$ at time $t$ is bounded by

$$R(A,t) \le (c + C)\big(f(t) + \sqrt{t}\,\big) + C\,t^2\,P\Big(N_{k^*}(t) > \mathbb{E}[N_{k^*}(t)] + c\sqrt{\mathbb{E}[N_{k^*}(t)-1]}\Big) + C\,t^2\,P\Big(N(t) < \mathbb{E}[N(t)] - c\sqrt{\mathbb{E}[N(t)-1]}\Big) \tag{10}$$

for constants $C > 0$ that depend on the problem parameters $\delta_k$, $V_k$, the upper bound on $|X_{k,m}|$, and the constants $c_f$, $C_D$ and $C_N$.

The proof of the theorem is given in Appendix C. Several comments are in order. First, recall that the optimal regret rate is of order $\sqrt{t}$ in this setting, up to logarithmic factors. To obtain such a rate, one need only achieve $f(t) = O(\sqrt{t})$, which can be attained by stochastic or even adversarial bandit algorithms (Bubeck & Cesa-Bianchi, 2012) receiving rewards with expectation $-\delta_k V_k$ and a well-concentrated number of samples. The moment condition on $N_{k^*}(t)$ is also not restrictive; for example, if the estimators are rejection samplers, their sampling times have a geometric distribution, which satisfies the polynomial tail condition. Furthermore, if $D_{k,m} \ge \delta$ for some $\delta > 0$, then $N(t) - 1 \le t/\delta$; hence, if the underlying bandit algorithm guarantees $\mathbb{E}[T_k(n)] \le g_k(n)$ after $n$ rounds, then $f_k(t) = g_k(t/\delta)$ can be used. Finally, we need to ensure that the last two terms in (10) remain small, which follows if $N(t)$ and $N_{k^*}(t)$ concentrate around their means. In general, $P\big(N < \mathbb{E}[N] - C\sqrt{\mathbb{E}[N]\log(1/\delta)}\big) \le \delta$ for some constant $C$, therefore $c = C\sqrt{\log(1/\delta)}$ can be chosen to achieve $t^2\,P\big(N(t) < \mathbb{E}[N(t)] - c\sqrt{\mathbb{E}[N(t)-1]}\big) \le t^2\delta$.

### 6.2. Option Pricing under the Cox-Ingersoll-Ross Model

In this experiment we consider estimating the price of a European caplet option in a setting where the short-term interest rate $r(t)$, $t \ge 0$, follows a square-root diffusion model (Cox et al., 1985). The price of a European caplet option with strike price $K > 0$, nominal amount $M > 0$, and maturity $T > 0$ is then given by $P = M\exp\big(-\int_0^T r(t)\,dt\big)\max(r(T) - K, 0)$. The problem is to determine the expected value of $P$.
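Before turning to the estimators compared in this experiment, here is a minimal Python sketch (not from the paper) of the plain Monte Carlo baseline for $\mathbb{E}[P]$. It uses the parameter values reported below ($\nu = 0.016$, $\kappa = 0.2$, $r_0 = 0.08$, $\sigma = 0.02$, $T = 1$, $M = 1000$, $n = 100$ steps) and assumes an Euler discretization with drift term $\nu - \kappa r$ (a common square-root-diffusion parameterization consistent with those values, since $\nu/\kappa = 0.08 = r_0$) and reflection at zero; the exact simulation and importance-weight details used in the paper are in its Appendix F.

```python
import math
import random

# Parameters as reported in the experiment; the SDE form dr = (nu - kappa*r) dt + sigma*sqrt(r) dW
# and the reflected Euler scheme are assumptions made for this illustration.
nu, kappa, sigma = 0.016, 0.2, 0.02
r0, T, M, n_steps = 0.08, 1.0, 1000.0, 100
K_strike = 0.07          # one of the three strike prices considered below

def simulate_caplet_payoff(rng=random):
    """One plain Monte Carlo draw of the discounted caplet payoff P."""
    dt = T / n_steps
    r, integral = r0, 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        r = abs(r + (nu - kappa * r) * dt + sigma * math.sqrt(max(r, 0.0)) * dw)  # Euler step, reflected at 0
        integral += r * dt                       # approximates the integral of r(t) dt
    return M * math.exp(-integral) * max(r - K_strike, 0.0)

# Naive estimate of E[P]: draws with r(T) < K contribute zero payoff, which is what
# motivates the drift-based importance sampling described next.
est = sum(simulate_caplet_payoff() for _ in range(20_000)) / 20_000
print(est)
```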
A naive approach to estimating $\mathbb{E}[P]$ is to simulate independent realizations of $r(t)$ for $0 < t \le T$. However, any simulated path whose terminal interest rate $r(T)$ lands below the strike price contributes zero payoff, so much of the simulation effort is wasted. Therefore, a common estimation strategy is to use importance sampling, introducing a drift parameter $\theta > 0$ into the proposal density for $r(t)$ (with $\theta = 0$ meaning no drift); this encourages simulations with higher interest rates. The importance weights for these simulations can then be efficiently calculated as a function of $\theta$ (see Appendix F).

Importantly, the task of adaptively allocating trials between different importance sampling estimators has been previously studied on this problem, using an unrelated technique known as D-kernel Population Monte Carlo (PMC) (Douc et al., 2007). Space restrictions prevent us from providing a full description of the PMC method, but, roughly speaking, the method defines the proposal density as a mixture over the set $\{\theta_k\}$ of drift parameters considered. At each time step, PMC samples a new drift value according to this mixture and then simulates an interest rate. After a fixed number of samples, say $G$ (the population size), the mixture coefficient $\alpha_k$ of each drift parameter $\theta_k$ is adjusted by setting it proportional to the sum of importance weights of the samples drawn from that parameter:

$$\alpha_k = \frac{\sum_{t=1}^{G} w_t\,\mathbb{I}\{I_t = k\}}{\sum_{t=1}^{G} w_t}.$$

The new proposal is then used to generate the next population.

We approximated the option prices under the same parameter settings as Douc et al. (2007), namely $\nu = 0.016$, $\kappa = 0.2$, $r_0 = 0.08$, $T = 1$, $M = 1000$, $\sigma = 0.02$, and $n = 100$, for different strike prices $K \in \{0.06, 0.07, 0.08\}$ (see Appendix F). However, we consider a wider set of proposals, given by $\theta_k = k/10$ for $k \in \{0, 1, \ldots, 15\}$. The results, averaged over 1000 simulations, are given in Figure 2.

*Figure 2. Normalized MSE of different adaptive strategies for estimating the price of a European caplet option under the Cox-Ingersoll-Ross interest rate model, for different strike prices ($K$); the x-axis shows actions/samples (millions). All results are statistically significant up to visual resolution.*

These results generally indicate that the more effective bandit approaches are significantly better suited to this allocation task than the PMC approach, particularly in the longer term. Among the bandit-based strategies, TS is the clear winner, which, given the conclusions from the previous experiment, is likely due to the high level of variance introduced by the option pricing formula. Despite this strong showing for the bandit methods, PMC remains surprisingly competitive at this task, doing uniformly better than UCB, and better than all other bandit allocation strategies early on for $K = 0.06$. However, we believe this advantage stems from the fact that PMC explores the entire space of mixture distributions, rather than committing to a single $\theta_k$. Extending bandit-based allocation strategies to such continuously parameterized settings remains an interesting direction for future work.
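To make the PMC mixture update above concrete, here is a small Python sketch (my own illustration, not the paper's implementation) of one D-kernel PMC generation. The per-sample simulation and importance weighting are abstracted behind the hypothetical `sample_with_weight(k)` callback, since the exact CIR importance weights are derived in the paper's Appendix F.

```python
import random

def pmc_update(alphas, sample_with_weight, G):
    """One D-kernel PMC generation: sample G component indices from the current mixture,
    draw and weight each sample via sample_with_weight(k), then re-fit the mixture
    coefficients alpha_k proportionally to the importance-weight mass of component k."""
    K = len(alphas)
    weight_mass = [0.0] * K
    for _ in range(G):
        # Sample a component index I_t ~ alpha (the current mixture over drift parameters).
        k = random.choices(range(K), weights=alphas, k=1)[0]
        _, w = sample_with_weight(k)      # simulate one path under drift theta_k, get weight w_t
        weight_mass[k] += w
    total = sum(weight_mass)
    return [m / total for m in weight_mass] if total > 0 else alphas

# Hypothetical usage with a dummy simulator: component k gets a random weight whose
# mean grows with k, so the mixture should drift toward larger component indices.
dummy = lambda k: (None, random.random() * (k + 1))
alphas = [0.25] * 4
for _ in range(5):
    alphas = pmc_update(alphas, dummy, G=1000)
print(alphas)
```

In the experiments, the returned coefficients would simply replace the current mixture before the next population of size $G$ is generated.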
### 6.3. Adaptive Annealed Importance Sampling

Many important applications of Monte Carlo estimation occur in Bayesian inference, where a particularly challenging problem is evaluating the model evidence of a latent variable model. Evaluating such quantities is useful for a variety of purposes, such as Bayesian model comparison and testing/training set evaluation (Robert, 2012). However, the desired quantities are notoriously difficult to estimate in many important settings, due in no small part to the fact that popular high-dimensional Monte Carlo strategies, such as Markov chain Monte Carlo (MCMC) methods, cannot be directly applied (Neal, 2005). Nevertheless, a popular approach for approximating such values is annealed importance sampling (AIS) (Neal, 2001), or more generally sequential Monte Carlo samplers (Del Moral et al., 2006). In a nutshell, AIS combines the advantages of importance sampling with MCMC by defining a proposal density through a sequence of MCMC transitions applied to a sequence of annealed distributions, which slowly blend between the proposal (prior) and the target (unnormalized posterior). While such a technique can offer impressive practical advantages, it often requires considerable effort to set parameters; in particular, the practitioner must specify the number of annealing steps, the annealing rate or "schedule", the underlying MCMC method (and its parameters), and the number of MCMC transitions to execute at each annealing step. Even when these parameters have been appropriately tuned on preliminary data, there is no assurance that these choices will remain effective when deployed on larger or slightly different data sets.

Here we consider the problem of approximating the normalization constant for a Bayesian logistic regression model on different sized subsets of the 8-dimensional Pima Indian diabetes UCI data set (Bache & Lichman, 2013). We consider the problem of allocating resources between three AIS estimators that differ only in the number of annealing steps they use, namely 400, 2000, and 8000 steps. In each case, we fix the annealing schedule using the "power of 4" heuristic suggested by Kuss & Rasmussen (2005), with a single slice sampling MCMC transition used at each step (Neal, 2003) (this entails one step in each of the 8 dimensions); see Appendix G for further details.

A key challenge in this scenario is that the computational costs associated with each arm differ substantially, and, because slice sampling uses an internal rejection sampler, these costs are stochastic. To account for these costs we directly use the elapsed CPU time when drawing a sample from each estimator, as reported by the Java VM. This choice reflects the true underlying cost and is particularly convenient since it does not require the practitioner to implement special accounting functionality. Since we do not expect this cost to correlate with the sample returns, we use the independent-costs payoff formulation from Section 5, feeding the bandit algorithm the payoff $-\tfrac{1}{4}(D_{k,2m} + D_{k,2m+1})(X_{k,2m} - X_{k,2m+1})^2$ constructed from consecutive pairs of observations.

The results for the different allocation strategies, for training sets of size 5, 10, and 50, are shown in Figure 3.

*Figure 3. Average regret (log scale) over 20 independent runs of the bandit allocators on the logistic regression model, plotted against CPU time (hours). T is the training sample size; Arm 1/2/3 indicates each fixed estimator, and the arm missing from a panel is the best for that setting. Solid lines: observations combined uniformly; dashed lines: observations combined using inverse-variance weights.*
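The independent-costs payoff used above can be computed with a few lines of bookkeeping; the sketch below is my own illustration (the buffer scheme is an implementation choice, not from the paper).

```python
# Buffers holding the latest unpaired (value, cpu_time) observation for each arm.
pending = {}

def cost_aware_reward(k, x, d):
    """Return the bandit payoff -(1/4)(D1 + D2)(X1 - X2)^2 once arm k has produced a
    pair of observations, else None. Its expectation equals -delta_k * V_k when values
    and costs are independent, which is exactly the quantity the allocator should minimize."""
    if k not in pending:
        pending[k] = (x, d)
        return None
    x_prev, d_prev = pending.pop(k)
    return -0.25 * (d_prev + d) * (x_prev - x) ** 2

# Hypothetical usage: feed pairs of (value, cpu_seconds) observations for arm 0.
print(cost_aware_reward(0, 0.52, 1.3))   # None: first observation of the pair is buffered
print(cost_aware_reward(0, 0.47, 1.1))   # -(1/4)(1.3 + 1.1)(0.52 - 0.47)^2
```

Depending on the bandit algorithm, this payoff would likely need to be shifted and rescaled into its assumed reward range, analogously to the $1 - Y_t^2$ transformation used in the uniform-cost case.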
Perhaps the most striking result is the performance improvement achieved by the nonuniformly combined estimators, indicated by the dashed lines. These estimators do not change the underlying allocation; instead, they improve the final combined estimate by weighting each observation inversely proportionally to the sample variance of the estimator that produced it. This improvement is an artifact of the nonuniform-cost setting, since arms that are very close in terms of $V_k\delta_k$ can still have considerably different variances, which is especially true for AIS. Also observe that no single arm is optimal for all three training set sizes; consequently, bandit allocation (Thompson sampling in particular) is able to outperform any static strategy. In practice, this implies that even after exhaustive parameter tuning, automated adaptation can still offer considerable benefits simply due to changes in the data.

## 7. Conclusion

In this paper we have introduced a new sequential decision-making strategy for competing with the best consistent Monte Carlo estimator in a finite pool. When each base estimator produces unbiased values at the same cost, we have shown that the sequential estimation problem maps to a corresponding bandit problem, allowing future improvements in bandit algorithms to be transferred to the problem of combining unbiased estimators. We have also shown a weaker reduction for problems where the different estimators take different (possibly random) amounts of time to produce an observation. We expect this work to inspire further research in the area. For example, one may consider combining not only finitely many but infinitely many estimators using appropriate bandit techniques (Bubeck et al., 2011), and/or exploit the fact that the observation from one estimator may reveal information about the variance of others. This is the case, for example, when the samplers use importance sampling, leading to a (new) stochastic variant of the problem known as bandits with side observations (Mannor & Shamir, 2011; Alon et al., 2013). However, much work remains to be done, such as studying in detail the use of variance-weighted estimators, dealing with continuous families of estimators, or carrying out a more thorough empirical investigation of the available alternatives.

## Acknowledgements

This work was supported by Alberta Innovates Technology Futures and NSERC. Part of this work was done while Cs. Szepesvári was visiting the Technion, Haifa, and MSR, Redmond, whose support and hospitality are gratefully acknowledged.

## References

Agrawal, S. and Goyal, N. Analysis of Thompson sampling for the multi-armed bandit problem. In Conference on Learning Theory, pp. 39.1-39.26, 2012.

Alon, N., Cesa-Bianchi, N., Gentile, C., and Mansour, Y. From bandits to experts: A tale of domination and independence. In NIPS, pp. 1610-1618, 2013.

Arouna, B. Adaptive Monte Carlo technique, a variance reduction technique. Monte Carlo Methods and Applications, 2004.

Audibert, J. and Bubeck, S. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, 11:2785-2836, 2010.

Audibert, J., Munos, R., and Szepesvári, Cs. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science, 410(19):1876-1902, 2009.

Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002a.

Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. The nonstochastic multiarmed bandit problem.
SIAM Journal on Computing, 32:48-77, 2002b.

Bache, K. and Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

Bubeck, S. and Cesa-Bianchi, N. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.

Bubeck, S., Munos, R., Stoltz, G., and Szepesvári, Cs. X-armed bandits. Journal of Machine Learning Research, 12:1655-1695, June 2011.

Burnetas, A. and Katehakis, M. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17:122-142, 1996.

Cappé, O., Garivier, A., Maillard, O., Munos, R., and Stoltz, G. Kullback-Leibler upper confidence bounds for optimal sequential decision making. Annals of Statistics, 41(3):1516-1541, 2013.

Carpentier, A. and Munos, R. Finite time analysis of stratified sampling for Monte Carlo. In NIPS-24, pp. 1278-1286, 2011.

Carpentier, A. and Munos, R. Minimax number of strata for online stratified sampling given noisy samples. In Algorithmic Learning Theory, pp. 229-244, 2012a.

Carpentier, A. and Munos, R. Adaptive stratified sampling for Monte-Carlo integration of differentiable functions. In NIPS, pp. 251-259, 2012b.

Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.

Chapelle, O. and Li, L. An empirical evaluation of Thompson sampling. In NIPS-24, pp. 2249-2257, 2011.

Cox, J., Ingersoll Jr., J., and Ross, S. A theory of the term structure of interest rates. Econometrica: Journal of the Econometric Society, pp. 385-407, 1985.

Del Moral, P., Doucet, A., and Jasra, A. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68:411-436, 2006.

Douc, R., Guillin, A., Marin, J., and Robert, C. Minimum variance importance sampling via population Monte Carlo. ESAIM: Probability and Statistics, 11:427-447, 2007.

Kaufmann, E., Korda, N., and Munos, R. Thompson sampling: An asymptotically optimal finite time analysis. In Algorithmic Learning Theory, pp. 199-213, 2012.

Kuss, M. and Rasmussen, C. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679-1704, 2005.

Lai, T. and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22, 1985.

Mannor, S. and Shamir, O. From bandits to experts: On the value of side-observations. In NIPS, pp. 684-692, 2011.

May, B. and Leslie, D. Simulation studies in optimistic Bayesian sampling in contextual-bandit problems. Technical Report 11:02, Statistics Group, Department of Mathematics, University of Bristol, 2011.

Neal, R. Annealed importance sampling. Technical report, University of Toronto, 2001.

Neal, R. Slice sampling. Annals of Statistics, pp. 705-741, 2003.

Neal, R. Estimating ratios of normalizing constants using linked importance sampling. Technical report, University of Toronto, 2005.

Robbins, H. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527-535, 1952.

Robert, C. Bayesian computational methods. Springer, 2012.

Robert, C. and Casella, G. Monte Carlo Statistical Methods. Springer-Verlag New York, 2005.

Thompson, W. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.