# Strongly Adaptive Online Learning

Amit Daniely (AMIT.DANIELY@MAIL.HUJI.AC.IL), Alon Gonen (ALONGNN@CS.HUJI.AC.IL), Shai Shalev-Shwartz (SHAIS@CS.HUJI.AC.IL), The Hebrew University

Strongly adaptive algorithms are algorithms whose performance on every time interval is close to optimal. We present a reduction that transforms standard low-regret algorithms into strongly adaptive ones. As a consequence, we derive simple, yet efficient, strongly adaptive algorithms for a handful of problems.

## 1. Introduction

Coping with changing environments and rapidly adapting to changes is a key component in many tasks. A broker is highly rewarded for rapidly adjusting to new trends. A reliable routing algorithm must respond quickly to congestion. A web advertiser should adjust to new ads and to changes in the tastes of its users. A politician can also benefit from quickly adjusting to changes in public opinion. And the list goes on.

Most current algorithms and theoretical analyses focus on relatively stationary environments. In statistical learning, an algorithm should perform well on the training distribution. Even in online learning, an algorithm should usually compete with the best strategy (from a pool) that is fixed and does not change over time. Our main focus is to investigate to what extent such algorithms can be modified to cope with changing environments.

We consider a general online learning framework that encompasses various online learning problems, including prediction with expert advice, online classification, online convex optimization, and more. In this framework, a learning scenario is defined by a decision set $D$, a context space $C$, and a set $\mathcal{L}$ of real-valued loss functions defined over $D$. The learner sequentially observes a context $c_t \in C$ and then picks a decision $x_t \in D$. Next, a loss function $\ell_t \in \mathcal{L}$ is revealed and the learner suffers the loss $\ell_t(x_t)$.

Often, algorithms in such scenarios are evaluated by comparing their performance to the performance of the best strategy from a pool of strategies (usually, this pool simply consists of all strategies that play the same action all the time). Concretely, the regret, $R_A(T)$, of an algorithm $A$ is defined as its cumulative loss minus the cumulative loss of the best strategy in the pool. The rationale behind this evaluation metric is that one of the strategies in the pool is reasonably good during the entire course of the game. However, when the environment is changing, different strategies will be good in different periods. As we do not want to make any assumption on the duration of each of these periods, we would like to guarantee that our algorithm performs well on every interval $I = [q, s] \subseteq [T]$. Clearly, we cannot hope for a regret bound that is better than what we have for algorithms that are tested only on $I$. If this barrier is met, we say that the corresponding algorithm is strongly adaptive (a precise definition is given in Section 1.1; Section 1.3 discusses a weaker notion of adaptive algorithms that was studied in (Hazan & Seshadhri, 2007)).

Perhaps surprisingly, our main result shows that for many learning problems strongly adaptive algorithms exist. Concretely, we show a simple meta-algorithm that can use any online algorithm (possibly designed to have just small standard regret) as a black box, and produces a new algorithm that is designed to have small regret on every interval.
We show that if the original algorithm has a regret bound of $R(T)$, then the produced algorithm has, on every interval $I = [q, s]$ of size $\tau := |I|$, regret that is very close to $R(\tau)$ (see a precise statement in Section 1.2). Moreover, the running time of the new algorithm at round $t$ is just $O(\log t)$ times larger than that of the original algorithm. As an immediate corollary we obtain strongly adaptive algorithms for a handful of online problems, including prediction with expert advice, online convex optimization, and more.

Furthermore, we show that strong adaptivity is stronger than previously suggested adaptivity properties, including the adaptivity notion of (Hazan & Seshadhri, 2007) and the tracking notion of (Herbster & Warmuth, 1998). Namely, strongly adaptive algorithms are also adaptive (in the sense of (Hazan & Seshadhri, 2007)), and have near-optimal tracking regret (in the sense of (Herbster & Warmuth, 1998)). We conclude our discussion by showing that strong adaptivity cannot be achieved with bandit feedback.

### 1.1. Problem Setting

#### A Framework for Online Learning

Many learning problems can be described as a repeated game between the learner and the environment, which we describe below. A learning scenario is determined by a triplet $(D, C, \mathcal{L})$, where $D$ is a decision space, $C$ is a set of contexts, and $\mathcal{L}$ is a set of loss functions from $D$ to $[0, 1]$ (extending the results to general bounded losses is straightforward). The number of rounds, denoted $T$, is unknown to the learner. At each time $t \in [T]$, the learner sees a context $c_t \in C$ and then chooses an action $x_t \in D$. Simultaneously, the environment chooses a loss function $\ell_t \in \mathcal{L}$. Then, the action $x_t$ is revealed to the environment, and the loss function $\ell_t$ is revealed to the learner, which suffers the loss $\ell_t(x_t)$. We list below some examples of families of learning scenarios.

- **Learning with expert advice** (Cesa-Bianchi et al., 1997). Here, there is no context (formally, $C$ consists of a single element), $D$ is a finite set of size $N$ (each element corresponds to an expert), and $\mathcal{L}$ consists of all functions from $D$ to $[0, 1]$.
- **Online convex optimization** (Zinkevich, 2003). Here, there is no context as well, $D$ is a convex set, and $\mathcal{L}$ is a collection of convex functions from $D$ to $[0, 1]$.
- **Classification.** Here, $C$ is some set, $D$ is a finite set, and $\mathcal{L}$ consists of all functions from $D$ to $\{0, 1\}$ that are indicators of a single element.
- **Regression.** Here, $C$ is a subset of a Euclidean space, $D = [0, 1]$, and $\mathcal{L}$ consists of all functions of the form $\ell(\hat{y}) = (y - \hat{y})^2$ for $y \in [0, 1]$.

A learning problem is a quadruple $P = (D, C, \mathcal{L}, W)$, where $W$ is a benchmark of strategies that is used to evaluate the performance of algorithms. Each strategy $w \in W$ makes a prediction $x_t(w) \in D$ based on some rule. We assume that the prediction $x_t(w)$ of each strategy is fully determined by the game's history at the time of the prediction, i.e., by $(c_1, \ell_1), \ldots, (c_{t-1}, \ell_{t-1}), c_t$. Usually, $W$ consists of very simple strategies. For example, in context-less scenarios (like learning with expert advice and online convex optimization), $W$ is often identified with $D$, and the strategy corresponding to $x \in D$ simply predicts $x$ at each step; a minimal sketch of this protocol is given below.
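To make the repeated game concrete, the following is a minimal sketch of the context-less protocol with fixed-action strategies, as just described. The environment, the number of experts, and the placeholder learner below are illustrative choices, not part of the paper.

```python
import random

# Toy context-less scenario (learning with expert advice): D = {0, ..., N-1},
# L = all functions from D to [0, 1].  Strategies are identified with D: the
# strategy corresponding to expert x plays x at every round.
N, T = 5, 100
random.seed(0)

def environment(t):
    """Illustrative adversary: a loss in [0, 1] for each of the N experts."""
    return [random.random() for _ in range(N)]

learner_loss = 0.0
strategy_loss = [0.0] * N            # cumulative loss of each fixed-action strategy
for t in range(1, T + 1):
    x_t = random.randrange(N)        # placeholder learner; any online algorithm fits here
    loss_t = environment(t)          # l_t is revealed only after the learner commits to x_t
    learner_loss += loss_t[x_t]
    for x in range(N):
        strategy_loss[x] += loss_t[x]

# R_A(T) = L_A(T) - min_{w in W} L_w(T)
print(f"regret after T={T} rounds: {learner_loss - min(strategy_loss):.2f}")
```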
In contextual problems (such as classification and regression), $W$ is often a collection of functions from $C$ to $D$ (a hypothesis class), and the prediction of the strategy corresponding to $h : C \to D$ at time $t$ is simply $h(c_t)$.

The cumulative loss of $w \in W$ at time $T$ is $L_w(T) = \sum_{t=1}^{T} \ell_t(x_t(w))$, and the cumulative loss of an algorithm $A$ is $L_A(T) = \sum_{t=1}^{T} \ell_t(x_t)$. The cumulative regret of $A$ is $R_A(T) = L_A(T) - \inf_{w \in W} L_w(T)$. We define the regret, $R_P(T)$, of the learning problem $P$ as the minimax regret bound; namely, $R_P(T)$ is the minimal number for which there exists an algorithm $A$ such that, for every environment, $R_A(T) \le R_P(T)$. We say that an algorithm $A$ has low regret if $R_A(T) = O\left(\mathrm{poly}(\log T) \cdot R_P(T)\right)$ for every environment. We note that both the learner and the environment can make random decisions; in that case, the quantities defined above refer to the expected values of the corresponding terms.

#### Strongly Adaptive Regret

Let $I = [q, s] := \{q, q+1, \ldots, s\} \subseteq [T]$. The loss of $w \in W$ during the interval $I$ is $L_w(I) = \sum_{t=q}^{s} \ell_t(x_t(w))$, and the loss of an algorithm $A$ during the interval $I$ is $L_A(I) = \sum_{t=q}^{s} \ell_t(x_t)$. The regret of $A$ during the interval $I$ is $R_A(I) = L_A(I) - \inf_{w \in W} L_w(I)$. The strongly adaptive regret of $A$ at time $T$ is the function

$$\mathrm{SA\text{-}Regret}_A^T(\tau) = \max_{I = [q,\, q+\tau-1] \subseteq [T]} R_A(I).$$

We say that $A$ is strongly adaptive if, for every environment, $\mathrm{SA\text{-}Regret}_A^T(\tau) = O\left(\mathrm{poly}(\log T) \cdot R_P(\tau)\right)$.

### 1.2. Our Results

#### A Strongly Adaptive Meta-Algorithm

Achieving strongly adaptive regret seems more challenging than ensuring low regret. Nevertheless, we show that often, low-regret algorithms can be transformed into strongly adaptive algorithms at little extra computational cost. Concretely, fix a learning scenario $(D, C, \mathcal{L})$. We derive a strongly adaptive meta-algorithm that can use any algorithm $B$ (which presumably has low regret w.r.t. some learning problem) as a black box. We call our meta-algorithm Strongly Adaptive Online Learner (SAOL). The specific instantiation of SAOL that uses $B$ as the black box is denoted $\mathrm{SAOL}^B$.

Fix a set $W$ of strategies and an algorithm $B$ whose regret w.r.t. $W$ satisfies

$$R_B(T) \le C\, T^{\alpha}, \tag{1}$$

where $\alpha \in (0, 1)$ and $C > 0$ is some scalar. The properties of $\mathrm{SAOL}^B$ are summarized in the theorem below. The description of the algorithm and the proof of Theorem 1 are given in Section 2.

**Theorem 1**
1. For every interval $I = [q, s] \subseteq \mathbb{N}$,
   $$R_{\mathrm{SAOL}^B}(I) \le \frac{4}{2^{\alpha} - 1}\, C\, |I|^{\alpha} + 40 \log(s+1)\, |I|^{\frac{1}{2}}.$$
2. In particular, if $\alpha \ge \frac{1}{2}$ and $B$ has low regret, then $\mathrm{SAOL}^B$ is strongly adaptive.
3. The runtime of SAOL at time $t$ is at most $\log(t+1)$ times the per-iteration runtime of $B$.

From part 2, we can derive strongly adaptive algorithms for many online problems. Two examples are outlined below, and a small helper that evaluates the strongly adaptive regret directly is sketched after them.

- **Prediction with $N$ experts' advice.** The Multiplicative Weights (MW) algorithm has regret $2\sqrt{\ln(N)\, T}$. Hence, for every $I = [q, s] \subseteq [T]$,
  $$R_{\mathrm{SAOL}^{\mathrm{MW}}}(I) = O\left(\left(\sqrt{\log N} + \log(s+1)\right) \sqrt{|I|}\right).$$
- **Online convex optimization with $G$-Lipschitz loss functions over a convex set $D \subseteq \mathbb{R}^d$ of diameter $B$.** Online Gradient Descent (OGD) has regret $3BG\sqrt{T}$. Hence, for every $I = [q, s] \subseteq [T]$,
  $$R_{\mathrm{SAOL}^{\mathrm{OGD}}}(I) = O\left(\left(BG + \log(s+1)\right) \sqrt{|I|}\right).$$
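To connect Theorem 1 to the definition of strongly adaptive regret, the helper promised above scans all intervals of a given length and reports the worst interval regret against fixed-action strategies. It is an illustrative utility, not part of the paper's algorithm.

```python
def sa_regret(learner_losses, strategy_losses, tau):
    """SA-Regret_A^T(tau): max over intervals I = [q, q+tau-1] of the interval regret.

    learner_losses:  length-T list with the learner's per-round losses l_t(x_t).
    strategy_losses: one length-T list per strategy w in W, giving l_t(x_t(w)).
    """
    T = len(learner_losses)
    worst = float("-inf")
    for q in range(T - tau + 1):                     # interval [q, q + tau - 1], 0-based here
        alg = sum(learner_losses[q:q + tau])
        best = min(sum(w[q:q + tau]) for w in strategy_losses)
        worst = max(worst, alg - best)               # R_A(I) = L_A(I) - min_w L_w(I)
    return worst
```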
#### Comparison to (Weak) Adaptivity and Tracking

Several alternative measures for coping with changing environments have been proposed in the literature. The two most related to our work are tracking regret (Herbster & Warmuth, 1998) and adaptive regret (Hazan & Seshadhri, 2007); other notions are briefly discussed in Section 1.3. Adaptivity, as defined in (Hazan & Seshadhri, 2007), is a weaker requirement than strong adaptivity. The adaptive regret of a learner $A$ at time $T$ is $\max_{I \subseteq [T]} R_A(I)$. An algorithm is called adaptive if its adaptive regret is $O\left(\mathrm{poly}(\log T)\, R_P(T)\right)$. For online convex optimization problems for which there exists an algorithm with regret bound $R(T)$, (Hazan & Seshadhri, 2007) derived an efficient algorithm whose adaptive regret is at most $R(T)\log(T) + O\left(\sqrt{T \log^3(T)}\right)$, thus establishing adaptive algorithms for many online convex optimization problems. For the case where the loss functions are $\alpha$-exp-concave, they showed an algorithm with adaptive regret $O\left(\frac{1}{\alpha}\log^2(T)\right)$ (we note that, according to our definition, this algorithm is in fact strongly adaptive).

A main difference between adaptivity and strong adaptivity is that in many problems, adaptive algorithms are not guaranteed to perform well on small intervals. For example, for many problems, including online convex optimization and learning with expert advice, the best possible adaptive regret is $\Omega(\sqrt{T})$. Such a bound is meaningless for intervals of size $O(\sqrt{T})$. We note that in many scenarios (e.g., routing, paging, news headline promotion) it is highly desirable to perform well even on very small intervals.

The problem of tracking the best expert was studied in (Herbster & Warmuth, 1998) (see also (Bousquet & Warmuth, 2003)). In that problem, originally formulated for the learning with expert advice setting, learning algorithms are compared to all strategies that shift from one expert to another a bounded number of times. They derived an efficient algorithm, named Fixed-Share, which attains a near-optimal regret bound of $\sqrt{T m (\log(T) + \log(N))}$ versus the best strategy that shifts between $m$ experts. (Interestingly, a recent work (Cesa-Bianchi et al., 2012) showed that the Fixed-Share algorithm is in fact (weakly) adaptive.) As we show in Section 3, strongly adaptive algorithms enjoy near-optimal tracking regret in the experts problem, and in fact in many other problems (e.g., online convex optimization). We note that, as with (weakly) adaptive algorithms, algorithms with optimal tracking regret are not guaranteed to perform well on small intervals.

#### Strong Adaptivity with Bandit Feedback

In the so-called bandit setting, the loss function $\ell_t$ is not exposed to the learner. Rather, the learner only gets to see the loss, $\ell_t(x_t)$, that he has suffered. In Section 4 we prove that there are no strongly adaptive algorithms that can cope with bandit feedback. Even in the simple experts problem, we show that for every $\epsilon > 0$, there is no algorithm whose strongly adaptive regret is $O\left(|I|^{1-\epsilon}\, \mathrm{poly}(\log T)\right)$. Investigating possible alternative notions and/or weaker guarantees in the bandit setting is mostly left for future work.

### 1.3. Related Work

Maybe the most relevant previous work, from which we borrow many of our techniques, is (Blum & Mansour, 2007). They focused on the experts setting and proposed a strengthened notion of regret using time selection functions, which are functions from the time interval $[T]$ to $[0, 1]$. The regret of a learner $A$ with respect to a time selection function $I$ is defined by

$$R^{I}_A(T) = \max_{i \in [N]} \left( \sum_{t=1}^{T} I(t)\, \ell_t(x_t) - \sum_{t=1}^{T} I(t)\, \ell_t(i) \right),$$

where $\ell_t(i)$ is the loss of expert $i$ at time $t$. This setting can be viewed as a generalization of the sleeping experts setting (Freund et al., 1997). For a fixed set $\mathcal{I}$ consisting of $M$ time selection functions, they proved a regret bound of $O\left(\sqrt{L_{\min,I}\log(NM)} + \log(NM)\right)$ with respect to each time selection function $I \in \mathcal{I}$, where $L_{\min,I} = \min_i \sum_{t=1}^{T} I(t)\,\ell_t(i)$.
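As a small illustration of this regret notion, the sketch below evaluates the regret with respect to a single time selection function in the experts setting; the indicator of an interval $[q, s]$ is one particular such function, which is exactly the special case used in the observation that follows. The function and argument names are illustrative, not from the paper.

```python
def time_selection_regret(I, learner_losses, expert_losses):
    """R^I_A(T) = max_i  sum_t I(t) * (l_t(x_t) - l_t(i))   (Blum & Mansour-style regret).

    I:              a time selection function, mapping a 1-based round t to [0, 1].
    learner_losses: learner_losses[t-1] is the loss the learner suffered at round t.
    expert_losses:  expert_losses[i][t-1] is the loss of expert i at round t.
    """
    T = len(learner_losses)
    weighted_alg = sum(I(t) * learner_losses[t - 1] for t in range(1, T + 1))
    return max(weighted_alg - sum(I(t) * losses_i[t - 1] for t in range(1, T + 1))
               for losses_i in expert_losses)

# The indicator of an interval [q, s] is one particular time selection function:
def interval_indicator(q, s):
    return lambda t: 1.0 if q <= t <= s else 0.0
```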
We observe that if we let $\mathcal{I}$ be the set of all indicator functions of intervals (note that $|\mathcal{I}| = \Theta(T^2)$), we obtain a strongly adaptive algorithm for learning with expert advice. However, the (multiplicative) computational overhead of our algorithm (w.r.t. the standard MW algorithm) at time $t$ is $\Theta(\log t)$, whereas the computational overhead of their algorithm is $\Theta(T^2)$. Furthermore, our setting is much more general than the experts setting.

Another related, but somewhat orthogonal, line of work (Zinkevich, 2003; Hall & Willett, 2013; Rakhlin & Sridharan, 2013; Jadbabaie et al., 2015) studies drifting environments. The focus of those papers is on scenarios where the environment changes slowly over time.

## 2. Reducing Adaptive Regret to Standard Regret

In this section we present our strongly adaptive meta-algorithm, named Strongly Adaptive Online Learner (SAOL). For the rest of this section we fix a learning scenario $(D, C, \mathcal{L})$ and an algorithm $B$ that operates in this scenario (think of $B$ as a low-regret algorithm).

We first give a high-level description of SAOL. The basic idea is to run an instance of $B$ on each interval $I$ from an appropriately chosen set of intervals, denoted $\mathcal{I}$. The instance corresponding to $I$ is denoted $B_I$, and can be thought of as an expert that gives its advice for the best action at each time slot in $I$. The algorithm weights the various $B_I$'s according to their past performance, in a way that instances with better performance get more weight. The exact weighting is a variant of the multiplicative weights rule. At each step, SAOL picks one of the $B_I$'s at random and follows its advice, where the probability of choosing each $B_I$ is proportional to its weight. Next, we give more details.

**The choice of $\mathcal{I}$.** As in the MW algorithm, the weighting procedure is used to ensure that SAOL performs optimally on every $I \in \mathcal{I}$. Therefore, the choice of $\mathcal{I}$ exhibits the following tradeoff. On one hand, $\mathcal{I}$ should be large, since we want optimal performance on the intervals in $\mathcal{I}$ to imply optimal performance on every interval. On the other hand, we would like to keep $\mathcal{I}$ small, since running many instances of $B$ in parallel results in a large computational cost. To balance these desires, we let

$$\mathcal{I} = \bigcup_{k \in \mathbb{N} \cup \{0\}} \mathcal{I}_k, \quad \text{where for all } k \in \mathbb{N} \cup \{0\}, \quad \mathcal{I}_k = \left\{ [i \cdot 2^k,\, (i+1) \cdot 2^k - 1] : i \in \mathbb{N} \right\}.$$

That is, each $\mathcal{I}_k$ is a partition of $\mathbb{N} \setminus \{1, \ldots, 2^k - 1\}$ into consecutive intervals of length $2^k$. We denote by

$$\mathrm{ACTIVE}(t) := \{ I \in \mathcal{I} : t \in I \}$$

the set of active intervals at time $t$. By the definition of $\mathcal{I}_k$, for every $t < 2^k$ no interval in $\mathcal{I}_k$ contains $t$, while for every $t \ge 2^k$ exactly one interval in $\mathcal{I}_k$ contains $t$. Therefore, $|\mathrm{ACTIVE}(t)| = \lfloor \log(t) \rfloor + 1$. It follows that the running time of SAOL at time $t$ is at most $\lfloor \log(t) \rfloor + 1$ times larger than the running time of $B$. On the other hand, as we show in the proof, we can cover every interval by intervals from $\mathcal{I}$ in a way that guarantees small regret on the covered interval, provided that we have small regret on the covering intervals.
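The interval sets $\mathcal{I}_k$ and $\mathrm{ACTIVE}(t)$ are easy to compute directly. The short sketch below (illustrative code, not the authors') enumerates the active intervals at time $t$ and checks that there are exactly $\lfloor \log_2 t \rfloor + 1$ of them.

```python
def active_intervals(t):
    """All intervals [i*2^k, (i+1)*2^k - 1] of I_k, over k >= 0, that contain t."""
    intervals = []
    k = 0
    while 2 ** k <= t:                 # for 2^k > t, no interval of I_k contains t
        i = t // (2 ** k)              # the unique i >= 1 with i*2^k <= t < (i+1)*2^k
        intervals.append((i * 2 ** k, (i + 1) * 2 ** k - 1))
        k += 1
    return intervals

for t in (1, 7, 12, 100):
    acts = active_intervals(t)
    assert len(acts) == t.bit_length()       # bit_length(t) = floor(log2(t)) + 1
    print(t, acts)
```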
**The weighting method.** Let $x_t(I)$ be the action taken by $B_I$ at time $t$. The instantaneous regret of SAOL w.r.t. $B_I$ at time $t$ is $r_t(I) = \ell_t(x_t) - \ell_t(x_t(I))$. As explained above, SAOL maintains weights over the $B_I$'s. For $I = [q, s]$, the weight of $B_I$ at time $t$ is denoted $w_t(I)$. For $t < q$, $B_I$ is not active yet, so we let $w_t(I) = 0$. At the entry time, $t = q$, we set $w_t(I) = \eta_I$, where

$$\eta_I := \min\left\{ \frac{1}{2},\ \frac{1}{\sqrt{|I|}} \right\}.$$

The weight at time $t \in (q, s]$ is the previous weight times $(1 + \eta_I\, r_{t-1}(I))$. Overall, we have

$$w_t(I) = \begin{cases} 0 & t \notin I \\ \eta_I & t = q \\ w_{t-1}(I)\left(1 + \eta_I\, r_{t-1}(I)\right) & t \in (q, s] \end{cases} \tag{2}$$

Note that the instantaneous regret always lies in $[-1, 1]$ and $\eta_I \in (0, 1)$; therefore the weights remain positive throughout the lifetime of the corresponding expert. Also, the weight of $B_I$ decreases (increases) if its loss is higher (lower) than SAOL's loss. The overall weight at time $t$ is

$$W_t = \sum_{I \in \mathcal{I}} w_t(I) = \sum_{I \in \mathrm{ACTIVE}(t)} w_t(I).$$

Finally, a probability distribution over the experts at time $t$ is defined by

$$p_t(I) = \frac{w_t(I)}{W_t}.$$

Note that the probability mass assigned to any inactive instance is zero. The probability distribution $p_t$ determines the action of SAOL at time $t$; namely, $x_t = x_t(I)$ with probability $p_t(I)$. A pseudo-code of SAOL is detailed in Algorithm 1, and a runnable sketch is given at the end of this section.

**Algorithm 1** Strongly Adaptive Online Learner (with black-box algorithm $B$)

- **Initialize:** $w_1(I) = 1/2$ if $I = [1, 1]$, and $w_1(I) = 0$ otherwise.
- **For** $t = 1$ to $T$:
  - Let $W_t = \sum_{I \in \mathrm{ACTIVE}(t)} w_t(I)$.
  - Choose $I \in \mathrm{ACTIVE}(t)$ with probability $p_t(I) = w_t(I) / W_t$.
  - Predict $x_t(I)$.
  - Update the weights according to Equation (2).

### 2.1. Proof Sketch of Theorem 1

In this section we sketch the proof of Theorem 1; a full proof is detailed in Appendix 1. The analysis of SAOL is divided into two parts. The first challenge is to prove the theorem for the intervals in $\mathcal{I}$ (see Lemma 2). Then, the theorem is extended to arbitrary intervals (end of Appendix 1).

Let us start with the first task. Our first observation is that, for every interval $I \in \mathcal{I}$, the regret of SAOL during $I$ is equal to

$$\text{(SAOL's regret relative to } B_I \text{ during } I\text{)} \;+\; \text{(the regret of } B_I \text{ during } I\text{)}. \tag{3}$$

Since the regret of $B_I$ during the interval $I$ is already guaranteed to be small (Equation (1)), the problem of ensuring low regret during each of the intervals in $\mathcal{I}$ reduces to the problem of ensuring low regret with respect to each of the $B_I$'s.

We next prove that the regret of SAOL with respect to the $B_I$'s is small. Our analysis is similar to the proof of Theorem 16 in (Blum & Mansour, 2007); both proofs are similar to the analysis of the Multiplicative Weights (MW) method. The main idea is to define a potential function and relate it both to the loss of the learner and to the loss of the best expert. To this end, we start by defining pseudo-weights over the experts (the $B_I$'s). With a slight abuse of notation, we define $I(t) = \mathbb{1}[t \in I]$. For any $I = [q, s] \in \mathcal{I}$, the pseudo-weight of $B_I$ is defined by

$$\tilde{w}_t(I) = \begin{cases} 0 & t < q \\ 1 & t = q \\ \tilde{w}_{t-1}(I)\left(1 + \eta_I\, r_{t-1}(I)\right) & q < t \le s + 1 \\ \tilde{w}_{s+1}(I) & t > s + 1 \end{cases}$$

Note that $w_t(I) = \eta_I\, I(t)\, \tilde{w}_t(I)$. The potential function we consider is the overall pseudo-weight at time $t$, $\tilde{W}_t = \sum_{I \in \mathcal{I}} \tilde{w}_t(I)$. The following lemma, whose proof is given in the appendix, is a useful consequence of our definitions.

**Lemma 1** For every $t \ge 1$, $\tilde{W}_t \le t\left(\lfloor \log(t) \rfloor + 1\right)$.

Through straightforward calculations, we conclude the proof of Theorem 1 for any interval in $\mathcal{I}$.

**Lemma 2** For every $I = [q, s] \in \mathcal{I}$,

$$\sum_{t=q}^{s} r_t(I) \le 5 \log(s+1)\, \sqrt{|I|}.$$

Hence, according to Equation (3),

$$R_{\mathrm{SAOL}^B}(I) \le C\, |I|^{\alpha} + 5 \log(s+1)\, \sqrt{|I|}.$$

The proof is given in the appendix. The extension of the theorem to arbitrary intervals relies on some useful properties of the set $\mathcal{I}$ (see Lemma 1.1 in the appendix). Roughly speaking, any interval $I \subseteq [T]$ can be partitioned into two sequences of intervals from $\mathcal{I}$, such that the lengths of the intervals in each sequence decay at an exponential rate (Lemma 1.2 in the appendix). The theorem now follows by bounding the regret during $I$ by the sum of the regrets during the intervals in these two sequences, and by using the fact that the lengths decay exponentially.
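Here is the runnable sketch of Algorithm 1 promised above. The black-box interface (a `make_blackbox` factory returning objects with `act()` and `update(loss_fn)` methods) and the `get_loss_fn` callback are assumptions made for this illustration; they are not interfaces defined in the paper.

```python
import math, random

def saol(T, make_blackbox, get_loss_fn):
    """Sketch of SAOL (Algorithm 1).  make_blackbox() returns a fresh copy of the
    black-box B, exposing act() -> action and update(loss_fn);  get_loss_fn(t, x_t)
    returns the full loss function l_t once the learner has committed to x_t."""
    experts = {}                                     # interval (q, s) -> [B_I, weight, eta_I]
    for t in range(1, T + 1):
        k = 0
        while t % (2 ** k) == 0:                     # intervals of I_k start at multiples of 2^k
            q, s = t, t + 2 ** k - 1
            eta = min(0.5, 1.0 / math.sqrt(s - q + 1))
            experts[(q, s)] = [make_blackbox(), eta, eta]      # w_q(I) = eta_I
            k += 1
        experts = {I: e for I, e in experts.items() if I[1] >= t}   # keep only ACTIVE(t)
        intervals = list(experts)
        weights = [experts[I][1] for I in intervals]
        I_t = random.choices(intervals, weights=weights)[0]         # p_t(I) = w_t(I) / W_t
        actions = {I: experts[I][0].act() for I in intervals}
        x_t = actions[I_t]                                          # SAOL follows B_{I_t}
        loss_fn = get_loss_fn(t, x_t)
        for I in intervals:                                         # Equation (2)
            B_I, w, eta = experts[I]
            r_t = loss_fn(x_t) - loss_fn(actions[I])                # instantaneous regret
            experts[I][1] = w * (1 + eta * r_t)
            B_I.update(loss_fn)                                     # each active copy keeps learning
```

The per-round work touches only the $|\mathrm{ACTIVE}(t)| = \lfloor \log t \rfloor + 1$ active instances, matching part 3 of Theorem 1.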
## 3. Strongly Adaptive Regret Is Stronger Than Tracking Regret

In this section we relate the notion of strong adaptivity to that of tracking regret, and show that algorithms with small strongly adaptive regret also have small tracking regret.

Let us briefly review the problem of tracking. For simplicity, we focus on context-less learning problems, and on the case where the set of strategies coincides with the decision space (though the result can be straightforwardly generalized). Fix a decision space $D$ and a family $\mathcal{L}$ of loss functions. A compound action is a sequence $\sigma = (\sigma_1, \ldots, \sigma_T) \in D^T$. Since there is no hope of competing with all sequences (it is easy to prove a lower bound of order $T$ for this problem), a typical restriction is to bound the number of switches in each sequence. For a positive integer $m$, the class of compound actions with at most $m$ switches is defined by

$$\mathcal{B}_m = \left\{ \sigma \in D^T : s(\sigma) := \sum_{t=1}^{T-1} \mathbb{1}[\sigma_{t+1} \neq \sigma_t] \le m \right\}.$$

The notions of loss and regret naturally extend to this setting. For example, the cumulative loss of a compound action $\sigma \in \mathcal{B}_m$ is $L_\sigma(T) = \sum_{t=1}^{T} \ell_t(\sigma_t)$. The tracking regret of an algorithm $A$ w.r.t. the class $\mathcal{B}_m$ is defined by

$$\mathrm{Tracking\text{-}Regret}^m_A(T) = L_A(T) - \inf_{\sigma \in \mathcal{B}_m} L_\sigma(T).$$

The following theorem bounds the tracking regret of algorithms with bounds on the strongly adaptive regret, in particular of SAOL.

**Theorem 2** Let $A$ be a learning algorithm with $\mathrm{SA\text{-}Regret}_A(\tau) \le C \tau^{\alpha}$. Then,

$$\mathrm{Tracking\text{-}Regret}^m_A(T) \le C\, T^{\alpha}\, m^{1-\alpha}.$$

**Proof** Let $\sigma \in \mathcal{B}_m$, and let $I_1, \ldots, I_m$ be the intervals that correspond to $\sigma$. Clearly, the tracking regret w.r.t. $\sigma$ is bounded by the sum of the regrets during the intervals $I_1, \ldots, I_m$. Hence, using Hölder's inequality,

$$L_A(T) - L_\sigma(T) \le \sum_{i=1}^{m} C\, |I_i|^{\alpha} \le C\, m^{1-\alpha} \left( \sum_{i=1}^{m} |I_i| \right)^{\alpha} = C\, m^{1-\alpha}\, T^{\alpha}. \qquad \blacksquare$$

Recall that for the problem of prediction with expert advice, the strongly adaptive regret of SAOL (with, say, Multiplicative Weights as the black box) is $O\left(\left(\sqrt{\ln N} + \log T\right)\sqrt{\tau}\right)$. Hence, we obtain a tracking bound of $O\left(\left(\sqrt{\ln N} + \log T\right)\sqrt{m T}\right)$. Up to a $\sqrt{\log T}$ factor, this bound is asymptotically equivalent to the bound of the Fixed-Share algorithm of (Herbster & Warmuth, 1998) (for the comparison, we rely on a simplified form of the Fixed-Share bound, which can be found, for example, in http://web.eecs.umich.edu/~jabernet/eecs598course/web/notes/lec5_091813.pdf). Also, up to a $\log(T)$ factor, the bound is optimal. One advantage of SAOL over Fixed-Share is that SAOL is parameter-free; in particular, SAOL does not need to know $m$, whereas the parameters of Fixed-Share do depend on $m$.
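To see the decomposition in the proof of Theorem 2 at work, the sketch below (with made-up constants $C$, $\alpha$ and arbitrary segment lengths) sums the per-interval bounds of a compound action's segments and checks them against the closed-form bound $C\, m^{1-\alpha} T^{\alpha}$:

```python
def tracking_bound_from_sa_regret(C, alpha, segment_lengths):
    """Bound the tracking regret of an algorithm with SA-Regret_A(tau) <= C * tau**alpha
    against a compound action whose constant segments have the given lengths (sum = T)."""
    per_interval = sum(C * L ** alpha for L in segment_lengths)   # sum of interval regrets
    m, T = len(segment_lengths), sum(segment_lengths)
    closed_form = C * m ** (1 - alpha) * T ** alpha               # Theorem 2, via Hoelder
    assert per_interval <= closed_form + 1e-9
    return closed_form

# Illustration with alpha = 1/2 (the experts / online convex optimization rate):
print(tracking_bound_from_sa_regret(C=2.0, alpha=0.5, segment_lengths=[3, 1000, 17, 980]))
```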
## 4. Strongly Adaptive Regret in the Bandit Setting

In this section we consider the challenge of achieving adaptivity in the bandit setting. Following our notation, in the bandit setting only the loss incurred by the learner, $\ell_t(x_t)$, is revealed at the end of each round (rather than the full loss function $\ell_t$). For many online learning problems for which there exists an efficient low-regret algorithm in the full-information model, a simple reduction from the bandit setting to the full-information setting (see, e.g., Theorem 4.1 in (Shalev-Shwartz, 2011)) yields an efficient low-regret bandit algorithm. Furthermore, it is often the case that the dependence of the regret on $T$ is not affected by the lack of information. For example, for the Multi-armed bandit (MAB) problem (Auer et al., 2002), which is the bandit version of the problem of prediction with expert advice, the above reduction yields an algorithm with a near-optimal regret bound of $2\sqrt{T N \log N}$.

A natural question is whether adaptivity can be achieved with bandit feedback. Few positive results are known. For example, applying the aforementioned reduction to the Fixed-Share algorithm results in an efficient bandit learner whose tracking regret is $O\left(\sqrt{T m (\ln N + \ln T)\, N}\right)$. The next theorem shows that with bandit feedback there are no algorithms with non-trivial bounds on the strongly adaptive regret. We focus on the MAB problem with two arms (experts), but it is easy to generalize the result to any non-degenerate online problem. Recall that for this problem we have no context, $W = D = \{e_1, e_2\}$, and $\mathcal{L} = [0, 1]^D$.

**Theorem 3** For all $\epsilon > 0$, there is no algorithm for MAB with strongly adaptive regret of $O\left(\tau^{1-\epsilon}\, \mathrm{poly}(\log T)\right)$.

The idea of the proof is simple. Suppose toward a contradiction that $A$ is an algorithm with strongly adaptive regret of $O\left(\tau^{1-\epsilon}\, \mathrm{poly}(\log T)\right)$. This means that the regret of $A$ on every interval $I$ of length $T^{\epsilon/2}$ is non-trivial (i.e., $o(|I|)$). Intuitively, this means that both arms must be inspected at least once during $I$. Suppose now that one of the arms is always superior to the other (say, has loss zero while the other has loss one). By the above argument, the algorithm will still inspect the bad arm at least once in every $T^{\epsilon/2}$ time slots. Those inspections will result in a regret of order $\frac{T}{T^{\epsilon/2}} = T^{1-\epsilon/2}$. This, however, is a contradiction, since the strongly adaptive regret bound implies that the standard regret of $A$ is $o\left(T^{1-\epsilon/2}\right)$.

This idea is formalized in the following lemma. It implies Theorem 3, since for $A$ with strongly adaptive regret of $O\left(\tau^{1-\epsilon}\, \mathrm{poly}(\log T)\right)$ we can take $k = O\left(T^{1-\epsilon/2}\right)$ and reach a contradiction: the lemma implies that on some segment $I$ of size $\frac{T}{k} = \Omega\left(T^{\epsilon/2}\right)$, the regret of $A$ is $\Omega\left(T^{\epsilon/2}\right)$, which grows faster than $|I|^{1-\epsilon}\, \mathrm{poly}(\log T)$.

**Lemma 3** Let $A$ be an algorithm with regret bounded by $R_A(T) \le k = k(T)$. Then there exists an interval $I \subseteq [T]$ of size $\Omega(T/k)$ with $R_A(I) = \Omega(|I|)$.

**Proof** Assume for simplicity that $4k$ divides $T$. Consider the environment $\mathcal{E}_0$ in which, for all $t$, $\ell_t(e_1) = 0.5$ and $\ell_t(e_2) = 1$. Let $U \subseteq [T]$ be the (possibly random) set of time slots in which the algorithm chooses $e_2$ when the environment is $\mathcal{E}_0$. Since the regret is at most $k$, we have $\mathbb{E}[|U|] \le 2k$. It follows that for some segment $I \subseteq [T]$ of size $\frac{T}{4k}$ we have $\mathbb{E}[|U \cap I|] \le \frac{1}{2}$. Indeed, otherwise, if $[T] = I_1 \cup \cdots \cup I_{4k}$ is the partition of $[T]$ into $4k$ disjoint, consecutive intervals of size $\frac{T}{4k}$, we would have $\mathbb{E}[|U|] = \sum_{j=1}^{4k} \mathbb{E}[|U \cap I_j|] > 2k$. Now, since $|U \cap I|$ is a non-negative integer, with probability at least $\frac{1}{2}$ we have $|U \cap I| = 0$; namely, with probability at least $\frac{1}{2}$, $A$ does not inspect $e_2$ during the interval $I$ when it runs against $\mathcal{E}_0$. Consider now the environment $\mathcal{E}$ that is identical to $\mathcal{E}_0$, except that for $t \in I$, $\ell_t(e_2) = 0$. By the argument above, with probability at least $\frac{1}{2}$ the operation of $A$ on $\mathcal{E}$ is identical to its operation on $\mathcal{E}_0$. In particular, with probability at least $\frac{1}{2}$, the regret on $I$ when $A$ plays against $\mathcal{E}$ is $\frac{|I|}{2}$, and in total the regret on $I$ is at least $\frac{|I|}{4} = \Omega(|I|)$. $\blacksquare$

## Acknowledgments

We thank Yishay Mansour and Sergiu Hart for helpful discussions. This work is supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI). A. Daniely is supported by the Google Europe Fellowship in Learning Theory.

## References

Auer, Peter, Cesa-Bianchi, Nicolò, Freund, Yoav, and Schapire, Robert E. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.

Blum, Avrim and Mansour, Yishay. From external to internal regret. Journal of Machine Learning Research, 2007.
Bousquet, Olivier and Warmuth, Manfred K. Tracking a small set of experts by mixing past posteriors. The Journal of Machine Learning Research, 3:363-396, 2003.

Cesa-Bianchi, Nicolò, Freund, Yoav, Haussler, David, Helmbold, David P., Schapire, Robert E., and Warmuth, Manfred K. How to use expert advice. Journal of the ACM (JACM), 44(3):427-485, 1997.

Cesa-Bianchi, Nicolò, Gaillard, Pierre, Lugosi, Gábor, and Stoltz, Gilles. A new look at shifting regret. CoRR, abs/1202.3323, 2012.

Freund, Yoav, Schapire, Robert E., Singer, Yoram, and Warmuth, Manfred K. Using and combining predictors that specialize. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pp. 334-343. ACM, 1997.

Hall, Eric C. and Willett, Rebecca M. Online optimization in dynamic environments. arXiv preprint arXiv:1307.5944, 2013.

Hazan, Elad and Seshadhri, C. Adaptive algorithms for online decision problems. In Electronic Colloquium on Computational Complexity (ECCC), volume 14, 2007.

Herbster, Mark and Warmuth, Manfred K. Tracking the best expert. Machine Learning, 32(2):151-178, 1998.

Jadbabaie, Ali, Rakhlin, Alexander, Shahrampour, Shahin, and Sridharan, Karthik. Online optimization: Competing with dynamic comparators. arXiv preprint arXiv:1501.06225, 2015.

Rakhlin, Sasha and Sridharan, Karthik. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pp. 3066-3074, 2013.

Shalev-Shwartz, Shai. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.

Zinkevich, Martin. Online convex programming and generalized infinitesimal gradient ascent. 2003.