# Adaptive Online Learning

Dylan J. Foster (Department of Computer Science, Cornell University) · Alexander Rakhlin (Department of Statistics, University of Pennsylvania) · Karthik Sridharan (Department of Computer Science, Cornell University)

**Abstract.** We propose a general framework for studying adaptive regret bounds in the online learning setting, subsuming model selection bounds and data-dependent bounds. Given a data- or model-dependent bound, we ask: does there exist some algorithm achieving this bound? We show that modifications to recently introduced sequential complexity measures can be used to answer this question by providing sufficient conditions under which adaptive rates can be achieved. In particular, each adaptive rate induces a set of so-called offset complexity measures, and obtaining small upper bounds on these quantities is sufficient to demonstrate achievability. A cornerstone of our analysis technique is the use of one-sided tail inequalities to bound suprema of offset random processes. Our framework recovers and improves a wide variety of adaptive bounds, including quantile bounds, second-order data-dependent bounds, and small-loss bounds. In addition, we derive a new type of adaptive bound for online linear optimization based on the spectral norm, as well as a new online PAC-Bayes theorem.

## 1 Introduction

Some of the recent progress on the theoretical foundations of online learning has been motivated by parallel developments in the realm of statistical learning. In particular, this motivation has led to martingale extensions of empirical process theory, which were shown to be the right notions for online learnability. Two topics, however, have remained elusive thus far: obtaining data-dependent bounds and establishing model selection (or, oracle-type) inequalities for online learning problems. In this paper we develop new techniques for addressing both questions.

Oracle inequalities and model selection have been topics of intense research in statistics over the last two decades [1, 2, 3]. Given a sequence of models $M_1, M_2, \ldots$ whose union is $M$, one aims to derive a procedure that, given an i.i.d. sample of size $n$, selects an estimator $\hat f$ from a model $M_{\hat m}$ that trades off bias and variance. Roughly speaking, the desired oracle bound takes the form

$$\mathrm{err}(\hat f) \le \inf_{m}\left\{ \inf_{f \in M_m} \mathrm{err}(f) + \mathrm{pen}_n(m) \right\},$$

where $\mathrm{pen}_n(m)$ is a penalty for the model $m$. Such oracle inequalities are attractive because they can be shown to hold even if the overall model $M$ is too large. A central idea in the proofs of such statements (and an idea that will appear throughout the present paper) is that $\mathrm{pen}_n(m)$ should be slightly larger than the fluctuations of the empirical process for the model $m$. It is therefore not surprising that concentration inequalities, and in particular Talagrand's celebrated inequality for the supremum of the empirical process, have played an important role in attaining oracle bounds. In order to select a good model in a data-driven manner, one establishes non-asymptotic data-dependent bounds on the fluctuations of an empirical process indexed by elements in each model [4].

Lifting the ideas of oracle inequalities and data-dependent bounds from statistical learning to online learning is not an obvious task. For one, there is no concentration inequality available, even for the simple case of sequential Rademacher complexity.
(For the reader already familiar with this complexity: a change in the value of one Rademacher variable results in a change of the remaining path, and hence any attempt to use a version of the bounded difference inequality fails grossly.) Luckily, as we show in this paper, the full concentration machinery is not needed, and one only requires a one-sided tail inequality. This realization is motivated by the recent work of [5, 6, 7]. At a high level, our approach will be to develop one-sided inequalities for the suprema of certain offset processes [7], where the offset is chosen to be slightly larger than the complexity of the corresponding model. We then show that these offset processes determine which data-dependent adaptive rates are achievable for online learning problems, drawing strong connections to the ideas of statistical learning described earlier.

### 1.1 Framework

Let $\mathcal{X}$ be the set of observations, $\mathcal{D}$ the space of decisions, and $\mathcal{Y}$ the set of outcomes. Let $\Delta(S)$ denote the set of distributions on a set $S$, and let $\ell : \mathcal{D} \times \mathcal{Y} \to \mathbb{R}$ be a loss function. The online learning framework is defined by the following process. For $t = 1, \ldots, n$:

- Nature provides an input instance $x_t \in \mathcal{X}$;
- the Learner selects a prediction distribution $q_t \in \Delta(\mathcal{D})$;
- Nature provides a label $y_t \in \mathcal{Y}$, while the Learner draws a prediction $\hat y_t \sim q_t$ and suffers the loss $\ell(\hat y_t, y_t)$.

Two important settings are supervised learning ($\mathcal{Y} \subseteq \mathbb{R}$, $\mathcal{D} \subseteq \mathbb{R}$) and online linear optimization ($\mathcal{X} = \{0\}$ is a singleton set, $\mathcal{Y}$ and $\mathcal{D}$ are balls in dual Banach spaces, and $\ell(\hat y, y) = \langle \hat y, y \rangle$). For a class $\mathcal{F} \subseteq \mathcal{D}^{\mathcal{X}}$, we define the Learner's cumulative regret to $\mathcal{F}$ as

$$\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f \in \mathcal{F}} \sum_{t=1}^n \ell(f(x_t), y_t).$$

A uniform regret bound $B_n$ is achievable if there exists a randomized algorithm for selecting $\hat y_t$ such that

$$\mathbb{E}\left[\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f \in \mathcal{F}} \sum_{t=1}^n \ell(f(x_t), y_t)\right] \le B_n \qquad \forall x_{1:n}, y_{1:n}, \tag{1}$$

where $a_{1:n}$ stands for $\{a_1, \ldots, a_n\}$. Achievable rates $B_n$ depend on the complexity of the function class $\mathcal{F}$. For example, the sequential Rademacher complexity of $\mathcal{F}$ is one of the tightest achievable uniform rates for a variety of loss functions [8, 7].

An adaptive regret bound has the form $B_n(f; x_{1:n}, y_{1:n})$ and is said to be achievable if there exists a randomized algorithm for selecting $\hat y_t$ such that

$$\mathbb{E}\left[\sum_{t=1}^n \ell(\hat y_t, y_t) - \sum_{t=1}^n \ell(f(x_t), y_t)\right] \le B_n(f; x_{1:n}, y_{1:n}) \qquad \forall x_{1:n}, y_{1:n},\ \forall f \in \mathcal{F}. \tag{2}$$

We distinguish three types of adaptive bounds, according to whether $B_n(f; x_{1:n}, y_{1:n})$ depends only on $f$, only on $(x_{1:n}, y_{1:n})$, or on both quantities. Whenever $B_n$ depends on $f$, an adaptive regret bound can be viewed as an oracle inequality that penalizes each $f$ according to a measure of its complexity (e.g., the complexity of the smallest model to which it belongs). As in statistical learning, an oracle inequality (2) may be proved for certain functions $B_n(f; x_{1:n}, y_{1:n})$ even if a uniform bound (1) cannot hold for any nontrivial $B_n$.
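As a concrete illustration of the protocol and of definition (1), the following minimal sketch simulates the supervised instantiation with absolute loss and reports the cumulative regret to a finite class. The follow-the-leader learner and the data source are illustrative placeholders, not algorithms from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(yhat, y):
    """Absolute loss: convex and 1-Lipschitz in its first argument."""
    return abs(yhat - y)

# A finite class F of constant experts f(x) = c (illustrative placeholder).
experts = np.linspace(-1.0, 1.0, 11)
n = 1000
cum_loss = np.zeros(len(experts))   # cumulative loss of each f in F
learner_loss = 0.0

for t in range(n):
    x_t = 0.0                                    # Nature provides x_t
    # Learner's q_t: here a point mass on the leader so far (placeholder rule).
    yhat_t = experts[np.argmin(cum_loss)]
    y_t = np.clip(0.3 + 0.2 * rng.standard_normal(), -1, 1)  # Nature provides y_t
    learner_loss += loss(yhat_t, y_t)
    cum_loss += np.abs(experts - y_t)            # every f suffers loss(f(x_t), y_t)

regret = learner_loss - cum_loss.min()           # cumulative regret to F, as in (1)
print(f"regret after n={n} rounds: {regret:.2f}")
```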
### 1.2 Related Work

The case when $B_n(f; x_{1:n}, y_{1:n}) = B_n(x_{1:n}, y_{1:n})$ does not depend on $f$ has received most of the attention in the literature. The focus is on bounds that can be tighter for nice sequences, yet maintain near-optimal worst-case guarantees. An incomplete list of prior work includes [9, 10, 11, 12], couched in the setting of online linear/convex optimization, and [13] in the experts setting. A bound of the type $B_n(f)$ was studied in [14], which presented an algorithm that competes with all experts simultaneously, but with varied regret with respect to each of them depending on the quantile of the expert. Another bound of this type was given by [15], who consider online linear optimization over an unbounded set and provide oracle inequalities with an appropriately chosen function $B_n(f)$.

Finally, the third category of adaptive bounds comprises those that depend on both the hypothesis $f \in \mathcal{F}$ and the data. Bounds that depend on the loss of the best function (so-called small-loss bounds; [16, Sec. 2.4], [17, 13]) fall into this category trivially, since one may overbound the loss of the best function by the performance of $f$. We draw attention to the recent result of [18], who show an adaptive bound in terms of both the loss of the comparator and the KL divergence between the comparator and some pre-fixed prior distribution over experts. An MDL-style bound in terms of the variance of the loss of the comparator (under the distribution induced by the algorithm) was recently given in [19]. Our study was also partly inspired by Cover [20], who characterized necessary and sufficient conditions for achievable bounds in prediction of binary sequences. The methods in [20], however, rely on the structure of the binary prediction problem and do not readily generalize to other settings.

The framework we propose recovers the vast majority of known adaptive rates in the literature, including variance bounds, quantile bounds, localization-based bounds, and fast rates for small losses. It should be noted that while the existing literature on adaptive online learning has focused on simple hypothesis classes, such as finite sets of experts and finite-dimensional $\ell_p$-norm balls, our results extend to general hypothesis classes, including the large nonparametric ones discussed in [7].

## 2 Adaptive Rates and Achievability: General Setup

The first step in building a general theory for adaptive online learning is to identify which adaptive regret bounds are possible to achieve. Recall that an adaptive regret bound $B_n : \mathcal{F} \times \mathcal{X}^n \times \mathcal{Y}^n \to \mathbb{R}$ is said to be achievable if there exists an online learning algorithm such that (2) holds. In the rest of this work, we use the notation $\langle\!\langle \cdots \rangle\!\rangle_{t=1}^{n}$ to denote the interleaved application of the operators inside the brackets, repeated over $t = 1, \ldots, n$ rounds (see [21]). Achievability of an adaptive rate can be formalized by the following minimax quantity.

**Definition 1.** Given an adaptive rate $B_n$, we define the offset minimax value:

$$\mathcal{A}_n(\mathcal{F}, B_n) \triangleq \left\langle\!\!\left\langle \sup_{x_t \in \mathcal{X}}\ \inf_{q_t \in \Delta(\mathcal{D})}\ \sup_{y_t \in \mathcal{Y}}\ \mathbb{E}_{\hat y_t \sim q_t} \right\rangle\!\!\right\rangle_{t=1}^{n} \left[\, \sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f \in \mathcal{F}}\left\{ \sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n}) \right\} \right].$$

$\mathcal{A}_n(\mathcal{F}, B_n)$ quantifies how $\sum_{t=1}^n \ell(\hat y_t, y_t) - \inf_{f\in\mathcal{F}}\{\sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n})\}$ behaves when the optimal learning algorithm that minimizes this difference is used against Nature trying to maximize it. Directly from this definition, an adaptive rate $B_n$ is achievable if and only if $\mathcal{A}_n(\mathcal{F}, B_n) \le 0$. If $B_n$ is a uniform rate, i.e., $B_n(f; x_{1:n}, y_{1:n}) = B_n$, achievability reduces to the minimax analysis explored in [8]: the uniform rate $B_n$ is achievable if and only if $B_n \ge \mathcal{V}_n(\mathcal{F})$, where $\mathcal{V}_n(\mathcal{F})$ is the minimax value of the online learning game.

We now focus on understanding the minimax value $\mathcal{A}_n(\mathcal{F}, B_n)$ for general adaptive rates. We first show that the minimax value is bounded by an offset version of the sequential Rademacher complexity studied in [8]. The symmetrization result below (Lemma 1) provides the first step towards a probabilistic analysis of achievable rates. Before stating the lemma, we need to define the notion of a tree and the notion of sequential Rademacher complexity.

Given a set $\mathcal{Z}$, a $\mathcal{Z}$-valued tree $\mathbf{z}$ of depth $n$ is a sequence $(\mathbf{z}_t)_{t=1}^n$ of functions $\mathbf{z}_t : \{\pm 1\}^{t-1} \to \mathcal{Z}$. One may view $\mathbf{z}$ as a complete binary tree decorated by elements of $\mathcal{Z}$. Let $\epsilon = (\epsilon_t)_{t=1}^n$ be a sequence of independent Rademacher random variables. Then $(\mathbf{z}_t(\epsilon))$ may be viewed as a predictable process with respect to the filtration $\mathcal{S}_t = \sigma(\epsilon_1, \ldots, \epsilon_t)$. For a tree $\mathbf{z}$, the sequential Rademacher complexity of a function class $\mathcal{G} \subseteq \mathbb{R}^{\mathcal{Z}}$ on $\mathbf{z}$ is defined as

$$\mathfrak{R}_n(\mathcal{G}, \mathbf{z}) \triangleq \mathbb{E}_{\epsilon} \sup_{g \in \mathcal{G}} \sum_{t=1}^n \epsilon_t\, g(\mathbf{z}_t(\epsilon)) \qquad \text{and} \qquad \mathfrak{R}_n(\mathcal{G}) \triangleq \sup_{\mathbf{z}} \mathfrak{R}_n(\mathcal{G}, \mathbf{z}).$$
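To make the tree notation concrete, here is a minimal sketch (illustrative values only, not from the paper) that builds a real-valued tree of depth 4, samples paths with Rademacher signs, and Monte-Carlo-estimates $\mathfrak{R}_n(\mathcal{G}, \mathbf{z})$ for a toy finite class:

```python
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(1)

# A Z-valued tree of depth n: node z_t depends on the signs (eps_1,...,eps_{t-1}).
# Here Z = R, and each node holds an arbitrary illustrative value.
tree = {prefix: rng.uniform(-1, 1)
        for t in range(1, n + 1)
        for prefix in itertools.product([-1, 1], repeat=t - 1)}

def path(eps):
    """The predictable process z_t(eps): node values along the path chosen by eps."""
    return np.array([tree[tuple(eps[: t - 1])] for t in range(1, n + 1)])

# A small function class G of scalar maps g(z) = a*z (illustrative).
G = [lambda z, a=a: a * z for a in (-1.0, -0.5, 0.5, 1.0)]

# Monte Carlo estimate of R_n(G, z) = E_eps sup_g sum_t eps_t g(z_t(eps)).
total, num_samples = 0.0, 2000
for _ in range(num_samples):
    eps = rng.choice([-1, 1], size=n)
    zs = path(eps)
    total += max(np.dot(eps, g(zs)) for g in G)
print("estimated sequential Rademacher complexity:", total / num_samples)
```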
**Lemma 1.** For any lower semi-continuous loss $\ell$ and any adaptive rate $B_n$ that depends only on the outcomes (i.e., $B_n(f; x_{1:n}, y_{1:n}) = B_n(y_{1:n})$), we have

$$\mathcal{A}_n(\mathcal{F}, B_n) \le \sup_{\mathbf{x},\mathbf{y}}\, \mathbb{E}_{\epsilon} \sup_{f \in \mathcal{F}} \left[ \sum_{t=1}^n \epsilon_t\, \ell(f(\mathbf{x}_t(\epsilon)), \mathbf{y}_t(\epsilon)) - B_n(\mathbf{y}_{1:n}(\epsilon)) \right]. \tag{3}$$

Further, for any general adaptive rate $B_n$,

$$\mathcal{A}_n(\mathcal{F}, B_n) \le \sup_{\mathbf{x},\mathbf{y},\mathbf{y}'}\, \mathbb{E}_{\epsilon} \sup_{f \in \mathcal{F}} \left[ \sum_{t=1}^n \epsilon_t\, \ell(f(\mathbf{x}_t(\epsilon)), \mathbf{y}_t(\epsilon)) - B_n\big(f;\, \mathbf{x}_{1:n}(\epsilon),\, \mathbf{y}'_{2:n+1}(\epsilon)\big) \right]. \tag{4}$$

Finally, if one considers the supervised learning problem where $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$, $\mathcal{Y} \subseteq \mathbb{R}$, and $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a loss that is convex and $L$-Lipschitz in its first argument, then for any adaptive rate $B_n$,

$$\mathcal{A}_n(\mathcal{F}, B_n) \le \sup_{\mathbf{x},\mathbf{y}}\, \mathbb{E}_{\epsilon} \sup_{f \in \mathcal{F}} \left[ L \sum_{t=1}^n \epsilon_t\, f(\mathbf{x}_t(\epsilon)) - B_n\big(f;\, \mathbf{x}_{1:n}(\epsilon),\, \mathbf{y}_{1:n}(\epsilon)\big) \right]. \tag{5}$$

The above lemma tells us that to check whether an adaptive rate is achievable, it suffices to check that the corresponding adaptive sequential complexity measures are non-positive. We remark that if the above complexities are instead bounded by some positive quantity of smaller order, one can form a new achievable rate $B'_n$ by adding this positive quantity to $B_n$.

## 3 Probabilistic Tools

As mentioned in the introduction, our technique rests on certain one-sided probabilistic inequalities. We now state the first building block: a rather straightforward maximal inequality.

**Proposition 2.** Let $I = \{1, \ldots, N\}$, $N \le \infty$, be a set of indices, and let $(X_i)_{i \in I}$ be a sequence of random variables satisfying the following tail condition: for any $\tau > 0$,

$$\mathbb{P}(X_i - B_i > \tau) \le C_1 \exp\left(-\frac{\tau^2}{2\sigma_i^2}\right) + C_2 \exp(-\tau s_i) \tag{6}$$

for some positive sequence $(B_i)$, nonnegative sequences $(\sigma_i)$ and $(s_i)$, and constants $C_1, C_2 \ge 0$. Then for any $\bar\sigma \le \sigma_1$, $\bar s \ge s_1$, and

$$\mu_i \ge \max\left\{ \frac{\sigma_i}{B_i}\sqrt{2\log(\sigma_i/\bar\sigma) + 4\log i},\ \frac{1}{B_i s_i}\log\!\big(i^2\, \bar s / s_i\big) \right\} + 1,$$

it holds that

$$\mathbb{E}\sup_{i \in I}\,\{X_i - B_i\, \mu_i\} \le 3\, C_1\, \bar\sigma + 2\, C_2\, \bar s^{-1}. \tag{7}$$

We remark that $B_i$ need not be the expected value of $X_i$, as we are not interested in two-sided deviations around the mean. One approach to obtaining oracle-type inequalities is to split a large class into smaller ones according to a complexity radius and to control a certain stochastic process separately on each subset (also known as the peeling technique). In the applications below, $X_i$ will often stand for the (random) supremum of this process on subset $i$, and $B_i$ will be an upper bound on its typical size. Given deviation bounds for $X_i$ above $B_i$, the dilated size $B_i \mu_i$ then allows one to pass to the maximal inequality (7) and thus verify achievability via Lemma 1. The same strategy works for obtaining data-dependent bounds, where we first prove tail bounds for the given size of the data-dependent quantity and then appeal to (7).
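To see where the two terms in (7) come from, here is the union-bound calculation behind the subgaussian part, a sketch under tail condition (6) with $C_2 = 0$ (the subexponential part is handled analogously). Since $\mu_i - 1$ dominates the first term in the condition on $\mu_i$,

$$\mathbb{E}\sup_{i \in I}\{X_i - B_i \mu_i\} \le \int_0^\infty \sum_{i \in I} \mathbb{P}\big(X_i - B_i > B_i(\mu_i - 1) + \tau\big)\, d\tau \le \sum_{i \in I} C_1\, e^{-B_i^2(\mu_i - 1)^2/(2\sigma_i^2)} \int_0^\infty e^{-\tau^2/(2\sigma_i^2)}\, d\tau,$$

using $(a + \tau)^2 \ge a^2 + \tau^2$ for $a, \tau \ge 0$. With $B_i(\mu_i - 1) \ge \sigma_i\sqrt{2\log(\sigma_i/\bar\sigma) + 4\log i}$, the exponential factor is at most $(\bar\sigma/\sigma_i)\, i^{-2}$, and the Gaussian integral equals $\sigma_i\sqrt{\pi/2}$, so the sum is bounded by $C_1\, \bar\sigma\, \sqrt{\pi/2}\, \sum_{i \ge 1} i^{-2} \le 3\, C_1\, \bar\sigma$.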
A simple yet powerful example of control of the supremum of a stochastic process is an inequality due to Pinelis [22] for the norm (which is a supremum over the dual ball) of a martingale in a 2-smooth Banach space. Here we state a version of this result that can be found in [23, Appendix A].

**Lemma 3.** Let $\mathcal{Z}$ be the unit ball of a separable $(2, D)$-smooth Banach space $\mathcal{H}$. For any $\mathcal{Z}$-valued tree $\mathbf{z}$ of depth $n$ and any $\tau$ with $\tau^2 > 4 D^2 n$,

$$\mathbb{P}\left(\left\|\sum_{t=1}^n \epsilon_t\, \mathbf{z}_t(\epsilon)\right\| > \tau\right) \le 2\exp\left(-\frac{\tau^2}{8 D^2 n}\right).$$

When the class of functions is not linear, we may no longer appeal to the above lemma. Instead, we make use of a result from [24] that extends Lemma 3 at the price of a poly-logarithmic factor. Before stating this lemma, we briefly define the relevant complexity measures (see [24] for more details). First, a set $V$ of $\mathbb{R}$-valued trees is called an $\alpha$-cover of $\mathcal{G} \subseteq \mathbb{R}^{\mathcal{Z}}$ on $\mathbf{z}$ with respect to $\ell_p$ if

$$\forall g \in \mathcal{G},\ \forall \epsilon \in \{\pm 1\}^n,\ \exists\, \mathbf{v} \in V \ \text{ s.t. } \ \sum_{t=1}^n \big(g(\mathbf{z}_t(\epsilon)) - \mathbf{v}_t(\epsilon)\big)^p \le n\, \alpha^p.$$

The size of the smallest $\alpha$-cover is denoted $\mathcal{N}_p(\mathcal{G}, \alpha, \mathbf{z})$, and $\mathcal{N}_p(\mathcal{G}, \alpha, n) \triangleq \sup_{\mathbf{z}} \mathcal{N}_p(\mathcal{G}, \alpha, \mathbf{z})$. The set $V$ is an $\alpha$-cover of $\mathcal{G}$ on $\mathbf{z}$ with respect to $\ell_\infty$ if

$$\forall g \in \mathcal{G},\ \forall \epsilon \in \{\pm 1\}^n,\ \exists\, \mathbf{v} \in V \ \text{ s.t. } \ \big|g(\mathbf{z}_t(\epsilon)) - \mathbf{v}_t(\epsilon)\big| \le \alpha \quad \forall t \in [n].$$

We let $\mathcal{N}_\infty(\mathcal{G}, \alpha, \mathbf{z})$ denote the size of the smallest such cover and set $\mathcal{N}_\infty(\mathcal{G}, \alpha, n) = \sup_{\mathbf{z}} \mathcal{N}_\infty(\mathcal{G}, \alpha, \mathbf{z})$.

**Lemma 4 ([24]).** Let $\mathcal{G} \subseteq [-1,1]^{\mathcal{Z}}$. Suppose $\mathfrak{R}_n(\mathcal{G})/n \to 0$ as $n \to \infty$, and that the following mild assumptions hold: $\mathfrak{R}_n(\mathcal{G}) \ge 1/n$, $\mathcal{N}_\infty(\mathcal{G}, 2^{-1}, n) \ge 4$, and there exists a constant $\Gamma$ such that $\Gamma \ge \sum_{j=1}^{\infty} \mathcal{N}_\infty(\mathcal{G}, 2^{-j}, n)^{-1}$. Then for any $\tau > 12/\sqrt n$ and any $\mathcal{Z}$-valued tree $\mathbf{z}$ of depth $n$,

$$\mathbb{P}\left(\sup_{g \in \mathcal{G}} \sum_{t=1}^n \epsilon_t\, g(\mathbf{z}_t(\epsilon)) > 8\left(1 + \tau\sqrt{8 n \log^3(e n^2)}\right)\mathfrak{R}_n(\mathcal{G})\right) \le \mathbb{P}\left(\sup_{g \in \mathcal{G}} \sum_{t=1}^n \epsilon_t\, g(\mathbf{z}_t(\epsilon)) > \tau\, n \cdot \inf_{\alpha > 0}\left\{4\alpha + \frac{12}{\sqrt n}\int_\alpha^1 \sqrt{\log \mathcal{N}_\infty(\mathcal{G}, \delta, n)}\, d\delta\right\}\right) \le 2\, \Gamma\, e^{-n\tau^2/2}.$$

The above lemma yields one-sided control of the size of the supremum of the sequential Rademacher process, as required for our oracle-type inequalities. Next, we turn our attention to an offset Rademacher process, where the supremum is taken over a collection of negative-mean random variables. The behavior of this offset process was shown to govern the optimal rates of convergence for online nonparametric regression [7]. Such one-sided control of the supremum will be necessary for some of the data-dependent upper bounds we develop.

**Lemma 5.** Let $\mathbf{z}$ be a $\mathcal{Z}$-valued tree of depth $n$, and let $\mathcal{G} \subseteq \mathbb{R}^{\mathcal{Z}}$. For any $\gamma \ge 1/n$ and $\tau > 0$,

$$\mathbb{P}\left(\sup_{g \in \mathcal{G}}\left[\sum_{t=1}^n \epsilon_t\, g(\mathbf{z}_t(\epsilon)) - 2\sqrt{\Big(\sum_{t=1}^n g^2(\mathbf{z}_t(\epsilon))\Big)\log \mathcal{N}_2(\mathcal{G}, \gamma, \mathbf{z})}\right] - \sqrt n \int_0^\gamma \sqrt{\log \mathcal{N}_2(\mathcal{G}, \delta, \mathbf{z})}\, d\delta - 1 > \tau\right) \le \log_2(2 n \gamma) \sum_{j=1}^{\infty} \mathcal{N}_2(\mathcal{G}, 2^{-j}\gamma, \mathbf{z})^{-2}\left(e^{-\tau^2/(2\sigma^2)} + e^{-\tau/(2\sigma)}\right),$$

where $\sigma = 12\sqrt n \int_0^\gamma \sqrt{\log \mathcal{N}_2(\mathcal{G}, \delta, \mathbf{z})}\, d\delta$. We observe that the probability of deviation has both subgaussian and subexponential components. Using the above result together with Proposition 2 leads to useful bounds on the quantities in Lemma 1 for specific types of adaptive rates. Given a tree $\mathbf{z}$, we obtain a bound on the expected size of the sequential Rademacher process when we subtract off the data-dependent $\ell_2$-norm of the function on the tree $\mathbf{z}$, adjusted by logarithmic terms.

**Corollary 6.** Suppose $\mathcal{G} \subseteq [-1,1]^{\mathcal{Z}}$, and let $\mathbf{z}$ be any $\mathcal{Z}$-valued tree of depth $n$. Assume $\log \mathcal{N}_2(\mathcal{G}, \delta, n) \le \delta^{-p}$ for some $p < 2$. Then, for any $\gamma \ge 1/n$,

$$\mathbb{E}_{\epsilon} \sup_{g \in \mathcal{G}}\left[\sum_{t=1}^n \epsilon_t\, g(\mathbf{z}_t(\epsilon)) - 4\sqrt 2\,(\log n)\sqrt{\log \mathcal{N}_2(\mathcal{G}, \gamma/2, \mathbf{z})\Big(\sum_{t=1}^n g^2(\mathbf{z}_t(\epsilon)) + 1\Big)}\right] \le \sqrt n \int_0^\gamma \sqrt{\log \mathcal{N}_2(\mathcal{G}, \delta, \mathbf{z})}\, d\delta + 7 + 2\log n.$$

The next corollary yields slightly faster rates than Corollary 6 when $|\mathcal{G}| < \infty$.

**Corollary 7.** Suppose $\mathcal{G} \subseteq [-1,1]^{\mathcal{Z}}$ with $|\mathcal{G}| = N$, and let $\mathbf{z}$ be any $\mathcal{Z}$-valued tree of depth $n$. Then

$$\mathbb{E}_{\epsilon} \sup_{g \in \mathcal{G}}\left[\sum_{t=1}^n \epsilon_t\, g(\mathbf{z}_t(\epsilon)) - 2\sqrt{\log N \sum_{t=1}^n g^2(\mathbf{z}_t(\epsilon)) + \log\log\Big(\sum_{t=1}^n g^2(\mathbf{z}_t(\epsilon)) + e\Big)\Big(\sum_{t=1}^n g^2(\mathbf{z}_t(\epsilon)) + e\Big)}\right] \le 2.$$

## 4 Achievable Bounds

In this section we use Lemma 1 along with the probabilistic tools from the previous section to obtain an array of achievable adaptive bounds for various online learning problems. We subdivide the section into one subsection for each category of adaptive bound described in Section 1.1.

### 4.1 Adapting to Data

Here we consider adaptive rates of the form $B_n(x_{1:n}, y_{1:n})$, uniform over $f \in \mathcal{F}$. We demonstrate the power of the developed tools on the following example.

**Example 4.1 (Online Linear Optimization in $\mathbb{R}^d$).** Consider the problem of online linear optimization where $\mathcal{F} = \{f \in \mathbb{R}^d : \|f\|_2 \le 1\}$, $\mathcal{Y} = \{y : \|y\|_2 \le 1\}$, $\mathcal{X} = \{0\}$, and $\ell(\hat y, y) = \langle \hat y, y \rangle$. The following adaptive rate is achievable:

$$B_n(y_{1:n}) = 16\,(\log n)\sqrt{d\,\log(6n)\left(\Big\|\sum_{t=1}^n y_t y_t^\top\Big\|_\sigma + 1\right)},$$

where $\|\cdot\|_\sigma$ is the spectral norm. Let us deduce this result from Corollary 6. First, observe that

$$\Big\|\sum_{t=1}^n y_t y_t^\top\Big\|_\sigma = \sup_{f : \|f\|_2 \le 1}\Big\langle f,\ \Big(\sum_{t=1}^n y_t y_t^\top\Big) f\Big\rangle = \sup_{f : \|f\|_2 \le 1} \sum_{t=1}^n \langle f, y_t\rangle^2.$$

The linear function class $\mathcal{F}$ can be covered pointwise at any scale $\delta$ with $(3/\delta)^d$ balls, and thus $\mathcal{N}_2(\mathcal{F}, 1/(2n), \mathbf{z}) \le (6n)^d$ for any $\mathcal{Y}$-valued tree $\mathbf{z}$. We apply Corollary 6 with $\gamma = 1/n$, for which the integral term in the corollary is negligible, yielding the claimed statement.
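To get a feel for the data-dependent quantity driving Example 4.1, the following self-contained snippet (synthetic data, not from the paper) computes $\|\sum_t y_t y_t^\top\|_\sigma$ and compares it to its worst-case value $n$, which is attained when all outcomes align with a single direction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 10

# Outcome vectors confined to one direction plus small noise: a "nice" sequence
# for which the spectral-norm quantity is much smaller than n.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
Y = 0.3 * np.outer(rng.standard_normal(n), u) + 0.05 * rng.standard_normal((n, d))
Y /= np.maximum(1.0, np.linalg.norm(Y, axis=1, keepdims=True))  # keep ||y_t||_2 <= 1

A = Y.T @ Y                        # sum_t y_t y_t^T
spec = np.linalg.norm(A, ord=2)    # spectral norm = sup_{||f||_2<=1} sum_t <f, y_t>^2
print(f"||sum y_t y_t^T||_sigma = {spec:.1f}  vs worst case n = {n}")
```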
### 4.2 Model Adaptation

In this subsection we focus on achievable rates for oracle inequalities and model selection, without dependence on the data. The rate therefore takes the form $B_n(f)$. Assume we have a class $\mathcal{F} = \bigcup_{R \ge 1} \mathcal{F}(R)$ with the property that $\mathcal{F}(R) \subseteq \mathcal{F}(R')$ whenever $R \le R'$.

If we are told by an oracle that regret will be measured with respect to those hypotheses $f \in \mathcal{F}$ with $R(f) \triangleq \inf\{R : f \in \mathcal{F}(R)\} \le R^*$, then using the minimax algorithm one can guarantee a regret bound of at most the sequential Rademacher complexity $\mathfrak{R}_n(\mathcal{F}(R^*))$. On the other hand, given the optimality of the sequential Rademacher complexity for online learning problems with commonly encountered losses, we can argue that for any $f \in \mathcal{F}$ chosen in hindsight, one cannot expect a regret better than order $\mathfrak{R}_n(\mathcal{F}(R(f)))$. In this section we show that, simultaneously for all $f \in \mathcal{F}$, one can attain an adaptive upper bound of

$$O\left(\mathfrak{R}_n(\mathcal{F}(R(f)))\,\sqrt{\log \mathfrak{R}_n(\mathcal{F}(R(f)))}\ \log^{3/2} n\right).$$

That is, we may predict as if we knew the optimal radius, at the price of a logarithmic factor. This is the price of adaptation.

**Corollary 8.** For any class of predictors $\mathcal{F}$ with $\mathcal{F}(1)$ non-empty, if one considers the supervised learning problem with a 1-Lipschitz loss $\ell$, the following rate is achievable:

$$B_n(f) = \log^{3/2}(n)\, K_1\, \mathfrak{R}_n(\mathcal{F}(2R(f)))\sqrt{\log\big(\log(2R(f))\, \mathfrak{R}_n(\mathcal{F}(2R(f)))\big)} + K_2\, \Gamma\, \mathfrak{R}_n(\mathcal{F}(1))$$

for absolute constants $K_1, K_2$, and $\Gamma$ as defined in Lemma 4. In fact, this statement is true in greater generality than for the particular doubled-radius sets $\mathcal{F}(2R(f))$ used here.

It is tempting to attempt to prove the above statement with the exponential weights algorithm running as an aggregation procedure over the solutions for each $R$. In general, this approach will fail for two reasons. First, if function values grow with $R$, the exponential weights bound scales linearly with this value. Second, an experts bound yields only a slower $\sqrt n$ rate.

As a special case of the above corollary, we obtain an online PAC-Bayesian theorem. We postpone this example to the next subsection, where we derive a data-dependent version of the result. We now provide a bound for online linear optimization in 2-smooth Banach spaces that automatically adapts to the norm of the comparator. To prove it, we use the concentration bound from [22] (Lemma 3) within the proof of the above corollary to remove the extra logarithmic factors.

**Example 4.2 (Unconstrained Linear Optimization).** Consider linear optimization with $\mathcal{Y}$ being the unit ball of some reflexive Banach space with norm $\|\cdot\|$. Let $\mathcal{F} = \mathcal{D}$ be the dual space, and let the loss be $\ell(\hat y, y) = \langle \hat y, y \rangle$ (where $\langle \cdot, \cdot \rangle$ denotes application of the linear functional in the first argument to the second argument). Define $\mathcal{F}(R) = \{f : \|f\|_\star \le R\}$, where $\|\cdot\|_\star$ is the norm dual to $\|\cdot\|$. If the unit ball of $\mathcal{Y}$ is $(2, D)$-smooth, then the following rate is achievable for all $f$ with $\|f\|_\star \ge 1$:

$$B_n(f) = D\sqrt n\left(8\,\|f\|_\star\sqrt{1 + \log(2\|f\|_\star) + \log\log(2\|f\|_\star)} + 12\right).$$

For the case of a Hilbert space, the above bound was achieved by [15].

### 4.3 Adapting to Data and Model Simultaneously

We now study achievable bounds that perform online model selection in a data-adaptive way. Of specific interest is our optimistic online PAC-Bayesian bound. This bound should be compared to [18, 19], with the reader noting that it is independent of the number of experts, is algorithm-independent, and depends quadratically on the expected loss of the expert we compare against.

**Example 4.3 (Generalized Predictable Sequences (Supervised Learning)).** Consider an online supervised learning problem with a convex 1-Lipschitz loss. Let $(M_t)_{t \ge 1}$ be any predictable sequence that the learner can compute at round $t$ based on the information provided so far, including $x_t$. (One can think of the predictable sequence $M_t$ as a prior guess for the prediction of the hypothesis we would compare with in hindsight.) Then the following adaptive rate is achievable:

$$B_n(f; x_{1:n}) = \inf_{\gamma}\left\{ K_1\,(\log n)\sqrt{\log \mathcal{N}_2(\mathcal{F}, \gamma/2, n)\Big(\sum_{t=1}^n (f(x_t) - M_t)^2 + 1\Big)} + \sqrt n \int_0^\gamma \sqrt{\log \mathcal{N}_2(\mathcal{F}, \delta, n)}\, d\delta \right\} + 2\log n + 7,$$

where $K_1 = 4\sqrt 2$ is the constant from Corollary 6. Achievability is a direct consequence of Eq. (5) in Lemma 1, followed by Corollary 6 (one can include any predictable sequence in the Rademacher average part because $\epsilon_t M_t$ is zero-mean). In particular, if we assume that the sequential covering of the class $\mathcal{F}$ grows as $\log \mathcal{N}_2(\mathcal{F}, \delta, n) \le \delta^{-p}$ for some $p < 2$, the bound scales with $\sum_{t=1}^n (f(x_t) - M_t)^2 + 1$ raised to a power determined by $p$. As $p$ gets closer to 0, we get full adaptivity and replace $\sqrt n$ with $\sqrt{\sum_{t=1}^n (f(x_t) - M_t)^2 + 1}$. On the other hand, as $p$ gets closer to 2 (i.e., for more complex function classes), we do not adapt and get a uniform bound in terms of $\sqrt n$. For $p \in (0, 2)$, we attain a natural interpolation.
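The rate in Example 4.3 is established non-constructively. As a concrete algorithmic cousin of the predictable-sequence idea, here is a minimal sketch of optimistic online gradient descent in the style of [11], where the hint $M_t$ is a guess of the next gradient and regret scales with $\sum_t \|g_t - M_t\|^2$. This illustrates the general principle only; it is not the minimax algorithm achieving the rate above.

```python
import numpy as np

def project(w, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else (radius / nrm) * w

def optimistic_ogd(gradients, hints, eta=0.1, d=2):
    """Optimistic OGD: play w_t using the hint M_t, then update with the true g_t."""
    w_primary = np.zeros(d)   # "lazy" iterate updated with true gradients
    plays = []
    for g_t, M_t in zip(gradients, hints):
        w_t = project(w_primary - eta * M_t)         # optimistic step using the hint
        plays.append(w_t)
        w_primary = project(w_primary - eta * g_t)   # true-gradient update
    return plays

# Illustrative run: slowly drifting gradients, with M_t = previous gradient as hint.
rng = np.random.default_rng(3)
gs = [np.array([np.sin(t / 50), np.cos(t / 50)]) + 0.01 * rng.standard_normal(2)
      for t in range(500)]
hints = [np.zeros(2)] + gs[:-1]
plays = optimistic_ogd(gs, hints)
loss = sum(float(w @ g) for w, g in zip(plays, gs))
best = -np.linalg.norm(np.sum(gs, axis=0))   # best fixed point in the unit ball
print(f"regret: {loss - best:.2f}")
```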
**Example 4.4 (Regret to Fixed vs. Regret to Best (Supervised Learning)).** Consider an online supervised learning problem with a convex 1-Lipschitz loss, and let $|\mathcal{F}| = N$. Let $f^* \in \mathcal{F}$ be a fixed expert chosen in advance. The following bound is achievable:

$$B_n(f; x_{1:n}) = 4\sqrt{\log N \sum_{t=1}^n (f(x_t) - f^*(x_t))^2 + \log\log\Big(\sum_{t=1}^n (f(x_t) - f^*(x_t))^2 + e\Big)\Big(\sum_{t=1}^n (f(x_t) - f^*(x_t))^2 + e\Big)} + 2.$$

In particular, against $f^*$ we have $B_n(f^*; x_{1:n}) = O(1)$, and against an arbitrary expert we have $B_n(f; x_{1:n}) = O\big(\sqrt{n \log N \cdot \log(n \log N)}\big)$. This bound follows from Eq. (5) in Lemma 1, followed by Corollary 7 applied to the class $\{f - f^* : f \in \mathcal{F}\}$. This extends the study of [25] to supervised learning and to general classes of experts $\mathcal{F}$.

**Example 4.5 (Optimistic PAC-Bayes).** Assume that we have a countable set of experts and that the loss of each expert on any round is non-negative and bounded by 1. The function class $\mathcal{F}$ is the set of all distributions over these experts, and $\mathcal{X} = \{0\}$. This setting can be formulated as online linear optimization where the loss of a mixture $f$ over experts, given instance $y$, is $\langle f, y \rangle$, the expected loss under the mixture. Fix a prior distribution $\pi$ over experts. The following adaptive bound is achievable:

$$B_n(f; y_{1:n}) = 50\sqrt{\big(\mathrm{KL}(f\,\|\,\pi) + \log n\big)\sum_{t=1}^n \mathbb{E}_{i \sim f}\langle e_i, y_t\rangle^2} + 50\big(\mathrm{KL}(f\,\|\,\pi) + \log n\big) + 10.$$

This adaptive bound is an online PAC-Bayesian bound. The rate adapts not only to the KL divergence of $f$ from the fixed prior $\pi$, but also replaces $n$ with $\sum_{t=1}^n \mathbb{E}_{i \sim f}\langle e_i, y_t\rangle^2$. Note that $\sum_{t=1}^n \mathbb{E}_{i \sim f}\langle e_i, y_t\rangle^2 \le \sum_{t=1}^n \langle f, y_t\rangle$, yielding the small-loss type bound described earlier. This is an improvement over the bound in [18] in that the bound is independent of the number of experts, and so holds even for countably infinite sets of experts. The KL term in our bound may be compared to the MDL-style term in the bound of [19]. If we have a large (but finite) number of experts and take $\pi$ to be uniform, the above bound provides an improvement over both [14]¹ and [18]. Evaluating the above bound with a distribution $f$ that places all its weight on any one expert appears to address the open question posed by [13] of obtaining algorithm-independent oracle-type variance bounds for experts. The proof of achievability of the above rate is given in the appendix, as it requires a slight variation on the symmetrization lemma specific to this problem.

¹ See [18] for a comparison of KL-based bounds and quantile bounds.

## 5 Relaxations for Adaptive Learning

To design algorithms for achievable rates, we extend the framework of online relaxations from [26]. A relaxation $\mathbf{Rel}_n : \bigcup_{t=0}^n \mathcal{X}^t \times \mathcal{Y}^t \to \mathbb{R}$ that satisfies the initial condition

$$\mathbf{Rel}_n(x_{1:n}, y_{1:n}) \ge -\inf_{f \in \mathcal{F}}\left\{\sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n})\right\} \tag{8}$$

and the recursive condition

$$\mathbf{Rel}_n(x_{1:t-1}, y_{1:t-1}) \ge \sup_{x_t}\, \inf_{q_t \in \Delta(\mathcal{D})}\, \sup_{y_t}\, \mathbb{E}_{\hat y_t \sim q_t}\big[\ell(\hat y_t, y_t) + \mathbf{Rel}_n(x_{1:t}, y_{1:t})\big] \tag{9}$$

is said to be admissible for the adaptive rate $B_n$.
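To spell out why admissibility yields the regret bound stated next, let $\hat q_t$ be the strategy defined below. By (9), for every $x_t, y_t$,

$$\mathbb{E}_{\hat y_t \sim \hat q_t}\big[\ell(\hat y_t, y_t)\big] + \mathbf{Rel}_n(x_{1:t}, y_{1:t}) \le \mathbf{Rel}_n(x_{1:t-1}, y_{1:t-1}).$$

Summing over $t = 1, \ldots, n$ telescopes the relaxation terms:

$$\mathbb{E}\sum_{t=1}^n \ell(\hat y_t, y_t) \le \mathbf{Rel}_n(\emptyset) - \mathbf{Rel}_n(x_{1:n}, y_{1:n}) \le \mathbf{Rel}_n(\emptyset) + \inf_{f \in \mathcal{F}}\left\{\sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n})\right\},$$

where the last step uses the initial condition (8). Rearranging gives exactly the adaptive bound below.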
The relaxation's corresponding strategy is

$$\hat q_t = \arg\min_{q_t \in \Delta(\mathcal{D})}\, \sup_{y_t \in \mathcal{Y}}\, \mathbb{E}_{\hat y_t \sim q_t}\big[\ell(\hat y_t, y_t) + \mathbf{Rel}_n(x_{1:t}, y_{1:t})\big],$$

which enjoys the adaptive bound

$$\mathbb{E}\left[\sum_{t=1}^n \ell(\hat y_t, y_t)\right] - \inf_{f \in \mathcal{F}}\left\{\sum_{t=1}^n \ell(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n})\right\} \le \mathbf{Rel}_n(\emptyset) \qquad \forall x_{1:n}, y_{1:n}.$$

It follows immediately that the strategy achieves the rate $B_n(f; x_{1:n}, y_{1:n}) + \mathbf{Rel}_n(\emptyset)$. Our goal is then to find relaxations for which the strategy is computationally tractable and for which $\mathbf{Rel}_n(\emptyset) \le 0$, or at least of smaller order than $B_n$. As in [26], conditional versions of the offset minimax values $\mathcal{A}_n$ yield admissible relaxations, but solving these relaxations may not be computationally tractable.

**Example 5.1 (Online PAC-Bayes).** Consider the experts setting of Example 4.5 with $B_n(f) = \sqrt{2 n \max\{\mathrm{KL}(f\,\|\,\pi), 1\}} + 4\sqrt n$. Let $R_i = 2^{i-1}$, and let $q^{R_i}(y_{1:t-1})$ denote the exponential weights distribution with learning rate $R_i/\sqrt n$:

$$q^{R}(y_{1:t-1})_k \propto \pi_k \exp\left(-\frac{R}{\sqrt n}\sum_{s=1}^{t-1} (y_s)_k\right).$$

The following is an admissible relaxation achieving $B_n$:

$$\mathbf{Rel}_n(y_{1:t}) = -\inf_{i}\left\{\sum_{s=1}^{t}\big\langle q^{R_i}(y_{1:s-1}),\, y_s\big\rangle + \sqrt n\, R_i + 2\lambda(n - t)\right\}, \qquad \lambda = 1/\sqrt n.$$

Let $q^*_t$ be a distribution over the indices $i$ with

$$(q^*_t)_i \propto \exp\left(-\frac{1}{\sqrt n}\left[\sum_{s=1}^{t-1}\big\langle q^{R_i}(y_{1:s-1}),\, y_s\big\rangle + \sqrt n\, R_i\right]\right).$$

We predict by drawing $i$ according to $q^*_t$, then drawing an expert according to $q^{R_i}(y_{1:t-1})$.

While in general the problem of obtaining an efficient adaptive relaxation might be hard, one can ask: if an efficient relaxation $\mathbf{Rel}^R_n$ is available for each $\mathcal{F}(R)$, can one obtain an adaptive model selection algorithm for all of $\mathcal{F}$? To this end, for the supervised learning problem with a convex Lipschitz loss, we delineate a meta-approach that utilizes the existing relaxations for each $\mathcal{F}(R)$.

**Lemma 9.** Let $q^R_t(y_1, \ldots, y_{t-1})$ be the randomized strategy corresponding to $\mathbf{Rel}^R_n$, obtained after observing outcomes $y_1, \ldots, y_{t-1}$, and let $(\mu_R)_R$ be nonnegative. The following relaxation is admissible for the rate $B_n(R) = \mathbf{Rel}^R_n(\emptyset)\, \mu_R$:

$$\mathbf{Ada}_n(x_{1:t}, y_{1:t}) = \sup_{\mathbf{x}, \mathbf{y}, \mathbf{y}'}\, \mathbb{E}_{\epsilon_{t+1:n}}\, \sup_{R}\left[\mathbf{Rel}^R_n(x_{1:t}, y_{1:t}) - \mathbf{Rel}^R_n(\emptyset)\, \mu_R + \sum_{s=t+1}^n \epsilon_s\, \mathbb{E}_{\hat y_s \sim q^R_s(y_{1:t},\, \mathbf{y}'_{t+1:s-1}(\epsilon))}\, \ell(\hat y_s, \mathbf{y}_s(\epsilon))\right].$$

Playing according to the strategy for $\mathbf{Ada}_n$ guarantees a regret bound of $B_n(R) + \mathbf{Ada}_n(\emptyset)$, and $\mathbf{Ada}_n(\emptyset)$ can be bounded using Proposition 2 when $\mu$ has the form given in that proposition.

We remark that the above strategy is not necessarily obtained by running a high-level experts algorithm over the discretized values of $R$. It is an interesting question to determine the cases in which such a strategy is optimal. More generally, when the adaptive rate $B_n$ depends on the data, it is not possible to obtain the rates shown non-constructively in this paper via the exponential weights algorithm with meta-experts, as the required weighting over experts would be data-dependent (and hence is not a prior over experts). Further, the bounds from exponential-weights-type algorithms are akin to having subexponential tails in Proposition 2, whereas for many problems we may have subgaussian tails. Obtaining computationally efficient methods from the proposed framework is an interesting research direction: Proposition 2 provides a useful non-constructive tool for establishing achievable adaptive bounds, and a natural question is whether one can obtain a constructive counterpart to the proposition.
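The two-level sampling scheme of Example 5.1 is straightforward to simulate. The sketch below adopts one reading of the partially garbled constants above (inner learning rates $R_i/\sqrt n$ over the doubling grid, outer penalty $\sqrt n\, R_i$); treat these choices as assumptions illustrating the scheme, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
K, n = 20, 2000                       # number of experts, number of rounds
prior = np.full(K, 1.0 / K)           # prior pi over experts
radii = 2.0 ** np.arange(6)           # grid R_i = 2^{i-1}, i = 1..6

def ew_weights(cum_loss, rate, prior):
    """Exponential weights with the given learning rate and prior."""
    logw = np.log(prior) - rate * cum_loss
    logw -= logw.max()                # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

cum_expert_loss = np.zeros(K)         # sum_s (y_s)_k for each expert k
cum_alg_loss = np.zeros(len(radii))   # loss of the inner algorithm for each R_i
total = 0.0

for t in range(n):
    # Inner exponential-weights distributions q^{R_i}(y_{1:t-1}).
    qs = [ew_weights(cum_expert_loss, R / np.sqrt(n), prior) for R in radii]
    # Outer distribution q*_t over the grid, per the assumed penalty sqrt(n)*R_i.
    scores = -(cum_alg_loss + np.sqrt(n) * radii) / np.sqrt(n)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    i = rng.choice(len(radii), p=p)
    expert = rng.choice(K, p=qs[i])   # draw an expert from q^{R_i}

    y = rng.uniform(0, 1, size=K)     # adversary's losses in [0, 1]
    total += y[expert]
    cum_alg_loss += np.array([q @ y for q in qs])
    cum_expert_loss += y

print(f"algorithm loss {total:.0f} vs best expert {cum_expert_loss.min():.0f}")
```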
## References

[1] Lucien Birgé, Pascal Massart, et al. Minimum contrast estimators on sieves: exponential bounds and rates of convergence. Bernoulli, 4(3):329–375, 1998.
[2] Gábor Lugosi and Andrew B. Nobel. Adaptive model selection using empirical complexities. Annals of Statistics, pages 1830–1864, 1999.
[3] Peter L. Bartlett, Stéphane Boucheron, and Gábor Lugosi. Model selection and error estimation. Machine Learning, 48(1-3):85–113, 2002.
[4] Pascal Massart. Concentration inequalities and model selection, volume 10. Springer, 2007.
[5] Shahar Mendelson. Learning without concentration. In Conference on Learning Theory, 2014.
[6] Tengyuan Liang, Alexander Rakhlin, and Karthik Sridharan. Learning with square loss: Localization through offset Rademacher complexity. In Proceedings of The 28th Conference on Learning Theory, 2015.
[7] Alexander Rakhlin and Karthik Sridharan. Online nonparametric regression. In Proceedings of The 27th Conference on Learning Theory, 2014.
[8] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In Advances in Neural Information Processing Systems 23, 2010.
[9] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2):165–188, 2010.
[10] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In COLT, 2012.
[11] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013.
[12] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[13] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[14] Kamalika Chaudhuri, Yoav Freund, and Daniel J. Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems, pages 297–305, 2009.
[15] H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Proceedings of The 27th Conference on Learning Theory, 2014.
[16] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[17] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems, pages 2199–2207, 2010.
[18] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: Adaptive NormalHedge. CoRR, abs/1502.05934, 2015.
[19] Wouter M. Koolen and Tim van Erven. Second-order quantile methods for experts and combinatorial games. In Proceedings of the 28th Annual Conference on Learning Theory (COLT), pages 1155–1175, 2015.
[20] Thomas M. Cover. Behavior of sequential predictors of binary sequences. In Transactions of the 4th Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pages 263–272. Publishing House of the Czechoslovak Academy of Sciences, 1967.
[21] Alexander Rakhlin and Karthik Sridharan. Statistical learning theory and sequential prediction, 2012. Available at http://stat.wharton.upenn.edu/~rakhlin/book_draft.pdf.
[22] Iosif Pinelis. Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, 22(4):1679–1706, 1994.
[23] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Beyond regret. arXiv preprint arXiv:1011.3168, 2010.
[24] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Sequential complexities and uniform martingale laws of large numbers. Probability Theory and Related Fields, 2014.
[25] Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008.
[26] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. In Advances in Neural Information Processing Systems 25, pages 2150–2158, 2012.