# Improved Dynamic Regret for Non-degenerate Functions

Lijun Zhang, Tianbao Yang, Jinfeng Yi, Rong Jin, Zhi-Hua Zhou

National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Department of Computer Science, The University of Iowa, Iowa City, USA
AI Foundations Lab, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA
Alibaba Group, Seattle, USA

zhanglj@lamda.nju.edu.cn, tianbao-yang@uiowa.edu, jinfengyi@tencent.com, jinrong.jr@alibaba-inc.com, zhouzh@lamda.nju.edu.cn

Recently, there has been a growing research interest in the analysis of dynamic regret, which measures the performance of an online learner against a sequence of local minimizers. By exploiting strong convexity, previous studies have shown that the dynamic regret can be upper bounded by the path-length of the comparator sequence. In this paper, we show that the dynamic regret can be further improved by allowing the learner to query the gradient of the function multiple times, and that the strong convexity can be weakened to other non-degenerate conditions. Specifically, we introduce the squared path-length, which could be much smaller than the path-length, as a new regularity of the comparator sequence. When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. We then extend our theoretical guarantee to functions that are semi-strongly convex or self-concordant. To the best of our knowledge, this is the first time that semi-strong convexity and self-concordance are utilized to tighten the dynamic regret.

1 Introduction

Online convex optimization is a fundamental tool for solving a wide variety of machine learning problems [Shalev-Shwartz, 2011]. It can be formulated as a repeated game between a learner and an adversary. On the $t$-th round of the game, the learner selects a point $\mathbf{x}_t$ from a convex set $\mathcal{X}$ and the adversary chooses a convex function $f_t: \mathcal{X} \mapsto \mathbb{R}$. Then, the function is revealed to the learner, who incurs loss $f_t(\mathbf{x}_t)$. The standard performance measure is the regret, defined as the difference between the learner's cumulative loss and the cumulative loss of the optimal fixed vector in hindsight:

$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=1}^T f_t(\mathbf{x}). \tag{1}$$

Over the past decades, various online algorithms, such as online gradient descent [Zinkevich, 2003], have been proposed to yield sub-linear regret under different scenarios [Hazan et al., 2007, Shalev-Shwartz et al., 2007]. Though equipped with rich theories, the notion of regret fails to illustrate the performance of online algorithms in dynamic settings, as a static comparator is used in (1). To overcome this limitation, there has been a recent surge of interest in analyzing a more stringent metric, the dynamic regret [Hall and Willett, 2013, Besbes et al., 2015, Jadbabaie et al., 2015, Mokhtari et al., 2016, Yang et al., 2016], in which the cumulative loss of the learner is compared against a sequence of local minimizers, i.e.,

$$R_T^* := R(\mathbf{x}_1^*, \ldots, \mathbf{x}_T^*) = \sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) = \sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T \min_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x}) \tag{2}$$

where $\mathbf{x}_t^* \in \operatorname{argmin}_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x})$. A more general definition of dynamic regret is to evaluate the difference of the cumulative loss with respect to any sequence of comparators $\mathbf{u}_1, \ldots, \mathbf{u}_T \in \mathcal{X}$ [Zinkevich, 2003].
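To make the distinction between (1) and (2) concrete, the following sketch computes both quantities for a hypothetical sequence of one-dimensional quadratic losses whose minimizers drift over time. The loss family, the drift model, and the single-step gradient learner are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def static_and_dynamic_regret(losses, minimizers, xs, grid):
    """Compare regret (1) against a fixed comparator with dynamic regret (2)."""
    cum_learner = sum(f(x) for f, x in zip(losses, xs))
    # Static regret: best single point in hindsight, searched over a grid.
    cum_best_fixed = min(sum(f(x) for f in losses) for x in grid)
    # Dynamic regret: compare against the per-round minimizers x_t^*.
    cum_dynamic = sum(f(m) for f, m in zip(losses, minimizers))
    return cum_learner - cum_best_fixed, cum_learner - cum_dynamic

T = 100
rng = np.random.default_rng(0)
centers = np.cumsum(rng.standard_normal(T) * 0.05)   # slowly drifting minimizers
losses = [lambda x, c=c: (x - c) ** 2 for c in centers]
xs, x = [], 0.0
for f, c in zip(losses, centers):
    xs.append(x)
    x = x - 0.5 * 2 * (x - c)                         # one gradient step per round
grid = np.linspace(centers.min(), centers.max(), 200)
print(static_and_dynamic_regret(losses, centers, xs, grid))
```

On such drifting sequences the dynamic regret is typically much larger than the static regret, which is exactly the gap that the regularities introduced below are meant to control.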
It is well known that, in the worst case, it is impossible to achieve a sub-linear dynamic regret bound, due to arbitrary fluctuations in the functions. However, it is possible to upper bound the dynamic regret in terms of certain regularity of the comparator sequence or the function sequence. A natural regularity is the path-length of the comparator sequence, defined as

$$P_T^* := P(\mathbf{x}_1^*, \ldots, \mathbf{x}_T^*) = \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\| \tag{3}$$

which captures the cumulative Euclidean norm of the difference between successive comparators. For convex functions, the dynamic regret of online gradient descent can be upper bounded by $O(\sqrt{T} P_T^*)$ [Zinkevich, 2003]. And when all the functions are strongly convex and smooth, the upper bound can be improved to $O(P_T^*)$ [Mokhtari et al., 2016].

In the aforementioned results, the learner uses the gradient of each function only once, and performs one step of gradient descent to update the intermediate solution. In this paper, we examine an interesting question: is it possible to improve the dynamic regret when the learner is allowed to query the gradient multiple times? Note that the answer to this question is no if one aims to improve the static regret in (1), according to results on the minimax regret bound [Abernethy et al., 2008a]. We show, however, that for the dynamic regret, multiple gradients can reduce the upper bound significantly. To this end, we introduce a new regularity, the squared path-length:

$$S_T^* := S(\mathbf{x}_1^*, \ldots, \mathbf{x}_T^*) = \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|^2 \tag{4}$$

which could be much smaller than $P_T^*$ when the local variations are small. For example, when $\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\| = \Theta(1/\sqrt{T})$ for all $t \in [T]$, we have $P_T^* = \Theta(\sqrt{T})$ but $S_T^* = \Theta(1)$. We advance the analysis of dynamic regret in the following aspects.

- When all the functions are strongly convex and smooth, we propose to apply gradient descent multiple times in each round, and demonstrate that the dynamic regret is reduced from $O(P_T^*)$ to $O(\min(P_T^*, S_T^*))$, provided the gradients at the minimizers are small. We further present a matching lower bound which implies that our result cannot be improved in general.
- When all the functions are semi-strongly convex and smooth, we show that standard online gradient descent still achieves the $O(P_T^*)$ dynamic regret. And if we apply gradient descent multiple times in each round, the upper bound can also be improved to $O(\min(P_T^*, S_T^*))$, under the same condition as for strongly convex functions.
- When all the functions are self-concordant, we establish a similar guarantee if both the gradient and the Hessian of the function can be queried multiple times. Specifically, we propose to apply the damped Newton method multiple times in each round, and prove an $O(\min(P_T^*, S_T^*))$ bound of the dynamic regret under appropriate conditions.¹

Application to Statistical Learning. Most studies of dynamic regret, including this paper, do not make stochastic assumptions on the function sequence. In the following, we discuss how to interpret our results when facing the problem of statistical learning. In this case, the learner receives a sequence of losses $\ell(\mathbf{x}^\top \mathbf{z}_1, y_1), \ell(\mathbf{x}^\top \mathbf{z}_2, y_2), \ldots$, where the $(\mathbf{z}_i, y_i)$'s are instance-label pairs sampled from an unknown distribution, and $\ell(\cdot, \cdot)$ measures the prediction error. To avoid the random fluctuation caused by sampling, we can set $f_t$ as the loss averaged over a mini-batch of instance-label pairs.
As a result, when the underlying distribution is stationary or drifts slowly, successive functions will be close to each other, and thus the path-length and the squared path-length are expected to be small.

¹ $P_T^*$ and $S_T^*$ are modified slightly when the functions are semi-strongly convex or self-concordant.

2 Related Work

The static regret in (1) has been extensively studied in the literature [Shalev-Shwartz, 2011]. It has been established that the static regret can be upper bounded by $O(\sqrt{T})$, $O(\log T)$, and $O(d \log T)$ for convex functions, strongly convex functions, and exponentially concave functions, respectively [Zinkevich, 2003, Hazan et al., 2007]. Furthermore, those upper bounds are known to be minimax optimal [Abernethy et al., 2008a, Hazan and Kale, 2011].

The notion of dynamic regret was introduced by Zinkevich [2003]. If we choose online gradient descent as the learner, the dynamic regret with respect to any comparator sequence $\mathbf{u}_1, \ldots, \mathbf{u}_T$, i.e., $R(\mathbf{u}_1, \ldots, \mathbf{u}_T)$, is on the order of $\sqrt{T} P(\mathbf{u}_1, \ldots, \mathbf{u}_T)$. When prior knowledge of $P_T^*$ is available, the dynamic regret $R_T^*$ can be upper bounded by $O(\sqrt{T P_T^*})$ [Yang et al., 2016]. If all the functions are strongly convex and smooth, the upper bound of $R_T^*$ can be improved to $O(P_T^*)$ [Mokhtari et al., 2016]. The $O(P_T^*)$ rate is also achievable when all the functions are convex and smooth, and all the minimizers $\mathbf{x}_t^*$'s lie in the interior of $\mathcal{X}$ [Yang et al., 2016].

Another regularity of the comparator sequence, which is similar to the path-length, is defined as

$$P'(\mathbf{u}_1, \ldots, \mathbf{u}_T) = \sum_{t=2}^T \|\mathbf{u}_t - \Phi_t(\mathbf{u}_{t-1})\|$$

where $\Phi_t(\cdot)$ is a dynamical model that predicts a reference point for the $t$-th round. The advantage of this measure is that when the comparator sequence follows the dynamical model closely, it can be much smaller than the path-length $P(\mathbf{u}_1, \ldots, \mathbf{u}_T)$. A novel algorithm named dynamic mirror descent is proposed to take $\Phi_t(\mathbf{u}_{t-1})$ into account, and the dynamic regret $R(\mathbf{u}_1, \ldots, \mathbf{u}_T)$ is on the order of $\sqrt{T} P'(\mathbf{u}_1, \ldots, \mathbf{u}_T)$ [Hall and Willett, 2013]. There are also regularities defined in terms of the function sequence, such as the functional variation [Besbes et al., 2015]

$$F_T := F(f_1, \ldots, f_T) = \sum_{t=2}^T \max_{\mathbf{x} \in \mathcal{X}} |f_t(\mathbf{x}) - f_{t-1}(\mathbf{x})| \tag{5}$$

or the gradient variation [Chiang et al., 2012]

$$G_T := G(f_1, \ldots, f_T) = \sum_{t=2}^T \max_{\mathbf{x} \in \mathcal{X}} \|\nabla f_t(\mathbf{x}) - \nabla f_{t-1}(\mathbf{x})\|^2. \tag{6}$$

Under the condition that $F_T \leq \bar{F}_T$ and $\bar{F}_T$ is given beforehand, a restarted online gradient descent is developed by Besbes et al. [2015], and the dynamic regret is upper bounded by $O(T^{2/3} \bar{F}_T^{1/3})$ and $O(\log T \sqrt{T \bar{F}_T})$ for convex functions and strongly convex functions, respectively.

The regularities mentioned above reflect different aspects of the learning problem, and are not directly comparable in general. Thus, it is appealing to develop an algorithm that adapts to the smaller regularity of the problem. Jadbabaie et al. [2015] propose an adaptive algorithm based on optimistic mirror descent [Rakhlin and Sridharan, 2013], such that the dynamic regret is given in terms of all three regularities ($P_T^*$, $F_T$, and $G_T$). However, it relies on the assumption that the learner can calculate each regularity incrementally.

In the setting of prediction with expert advice, the dynamic regret is also referred to as tracking regret or shifting regret [Herbster and Warmuth, 1998, Cesa-Bianchi et al., 2012]. The path-length of the comparator sequence is referred to as the shift, which is just the number of times the expert changes.
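For concreteness, the following sketch evaluates the regularities (3), (4), (5), and (6) on a hypothetical sequence of one-dimensional quadratic losses whose minimizers perform a random walk with steps of size roughly $1/\sqrt{T}$. The loss family, domain, and drift model are illustrative assumptions.

```python
import numpy as np

def regularities(centers, grid):
    """Path-length (3), squared path-length (4), functional variation (5), and
    gradient variation (6) for the toy sequence f_t(x) = (x - c_t)^2 in 1-D."""
    c = np.asarray(centers)
    diffs = np.abs(np.diff(c))
    P_T = diffs.sum()                 # path-length of the minimizers x_t^* = c_t
    S_T = (diffs ** 2).sum()          # squared path-length
    F_T = sum(np.max(np.abs((grid - ct) ** 2 - (grid - cp) ** 2))
              for cp, ct in zip(c[:-1], c[1:]))
    G_T = (4 * diffs ** 2).sum()      # |f_t'(x) - f_{t-1}'(x)| = 2|c_t - c_{t-1}|
    return P_T, S_T, F_T, G_T

T = 1000
rng = np.random.default_rng(0)
centers = np.cumsum(rng.standard_normal(T)) / np.sqrt(T)   # steps of size ~1/sqrt(T)
grid = np.linspace(-3.0, 3.0, 500)                          # stand-in for the domain X
P_T, S_T, F_T, G_T = regularities(centers, grid)
print(f"P_T={P_T:.1f}  S_T={S_T:.2f}  F_T={F_T:.1f}  G_T={G_T:.2f}")
# Typically P_T grows like sqrt(T) while S_T stays O(1), matching the example
# given in the introduction.
```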
Another related performance measure is the adaptive regret, which aims to minimize the static regret over any interval [Hazan and Seshadhri, 2007, Daniely et al., 2015]. Finally, we note that the study of dynamic regret is similar to competitive analysis in the sense that both compete against an optimal offline policy, but with significant differences in their assumptions and techniques [Buchbinder et al., 2012].

3 Online Learning with Multiple Gradients

In this section, we discuss how to improve the dynamic regret by allowing the learner to query the gradient multiple times. We start with strongly convex functions, then proceed to semi-strongly convex functions, and finally investigate self-concordant functions.

Algorithm 1 Online Multiple Gradient Descent (OMGD)
Require: The number of inner iterations $K$ and the step size $\eta$
1: Let $\mathbf{x}_1$ be any point in $\mathcal{X}$
2: for $t = 1, \ldots, T$ do
3:   Submit $\mathbf{x}_t \in \mathcal{X}$ and then receive the loss $f_t: \mathcal{X} \mapsto \mathbb{R}$
4:   $\mathbf{z}_t^1 = \mathbf{x}_t$
5:   for $j = 1, \ldots, K$ do
6:     $\mathbf{z}_t^{j+1} = \Pi_{\mathcal{X}}\left(\mathbf{z}_t^j - \eta \nabla f_t(\mathbf{z}_t^j)\right)$
7:   end for
8:   $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$
9: end for

3.1 Strongly Convex and Smooth Functions

To be self-contained, we provide the definitions of strong convexity and smoothness.

Definition 1. A function $f: \mathcal{X} \mapsto \mathbb{R}$ is $\lambda$-strongly convex if
$$f(\mathbf{y}) \geq f(\mathbf{x}) + \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle + \frac{\lambda}{2}\|\mathbf{y} - \mathbf{x}\|^2, \quad \forall \mathbf{x}, \mathbf{y} \in \mathcal{X}.$$

Definition 2. A function $f: \mathcal{X} \mapsto \mathbb{R}$ is $L$-smooth if
$$f(\mathbf{y}) \leq f(\mathbf{x}) + \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle + \frac{L}{2}\|\mathbf{y} - \mathbf{x}\|^2, \quad \forall \mathbf{x}, \mathbf{y} \in \mathcal{X}.$$

Example 1. The following functions are both strongly convex and smooth.
1. A quadratic form $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x} - 2\mathbf{b}^\top \mathbf{x} + c$ where $aI \preceq A \preceq bI$, $a > 0$ and $b < \infty$;
2. The regularized logistic loss $f(\mathbf{x}) = \log\left(1 + \exp(\mathbf{b}^\top \mathbf{x})\right) + \frac{\lambda}{2}\|\mathbf{x}\|^2$, where $\lambda > 0$.

Following previous studies [Mokhtari et al., 2016], we make the following assumptions.

Assumption 1. Suppose the following conditions hold for each $f_t: \mathcal{X} \mapsto \mathbb{R}$.
1. $f_t$ is $\lambda$-strongly convex and $L$-smooth over $\mathcal{X}$;
2. $\|\nabla f_t(\mathbf{x})\| \leq G$, $\forall \mathbf{x} \in \mathcal{X}$.

When the learner can query the gradient of each function only once, the most popular learning algorithm is online gradient descent:
$$\mathbf{x}_{t+1} = \Pi_{\mathcal{X}}\left(\mathbf{x}_t - \eta \nabla f_t(\mathbf{x}_t)\right)$$
where $\Pi_{\mathcal{X}}(\cdot)$ denotes projection onto the nearest point in $\mathcal{X}$. Mokhtari et al. [2016] have established an $O(P_T^*)$ bound of the dynamic regret, as stated below.

Theorem 1. Suppose Assumption 1 is true. By setting $\eta \leq 1/L$ in online gradient descent, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \leq \frac{1}{1-\gamma} G P_T^* + \frac{1}{1-\gamma} G \|\mathbf{x}_1 - \mathbf{x}_1^*\|$$
where $\gamma = \sqrt{1 - \frac{2\lambda}{1/\eta + \lambda}}$.

We now consider the setting in which the learner can access the gradient of each function multiple times. The algorithm is a natural extension of online gradient descent that performs gradient descent multiple times in each round. Specifically, in the $t$-th round, given the current solution $\mathbf{x}_t$, we generate a sequence of solutions, denoted by $\mathbf{z}_t^1, \ldots, \mathbf{z}_t^{K+1}$, where $K$ is a constant independent of $T$, as follows:
$$\mathbf{z}_t^1 = \mathbf{x}_t, \quad \mathbf{z}_t^{j+1} = \Pi_{\mathcal{X}}\left(\mathbf{z}_t^j - \eta \nabla f_t(\mathbf{z}_t^j)\right), \quad j = 1, \ldots, K.$$
Then, we set $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$. The procedure is named Online Multiple Gradient Descent (OMGD) and is summarized in Algorithm 1. By applying gradient descent multiple times, we are able to extract more information from each function and are therefore more likely to obtain a tight bound for the dynamic regret. The following theorem shows that multiple accesses of the gradient indeed help improve the dynamic regret.

Theorem 2. Suppose Assumption 1 is true. By setting $\eta \leq 1/L$ and $K = \left\lceil \frac{1/\eta + \lambda}{2\lambda} \ln 4 \right\rceil$ in Algorithm 1, for any constant $\alpha > 0$, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \leq \min\left( 2G P_T^* + 2G\|\mathbf{x}_1 - \mathbf{x}_1^*\|, \; \frac{\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2}{2\alpha} + 2(L+\alpha) S_T^* + (L+\alpha)\|\mathbf{x}_1 - \mathbf{x}_1^*\|^2 \right).$$
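Below is a minimal sketch of Algorithm 1 (OMGD). The Euclidean-ball feasible set, the quadratic loss family, and all parameter values are illustrative assumptions rather than choices made in the paper; only the update structure mirrors Algorithm 1.

```python
import numpy as np

def project_ball(x, radius=10.0):
    """Projection onto a Euclidean ball, a simple stand-in for Pi_X."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def omgd(grads, d, eta, K):
    """Online Multiple Gradient Descent (Algorithm 1).
    grads[t](x) returns the gradient of f_t at x."""
    T = len(grads)
    x = np.zeros(d)                        # x_1: any point in X
    iterates = []
    for t in range(T):
        iterates.append(x.copy())          # submit x_t, then receive f_t
        z = x.copy()
        for _ in range(K):                 # K projected gradient steps on f_t
            z = project_ball(z - eta * grads[t](z))
        x = z                              # x_{t+1} = z_t^{K+1}
    return np.array(iterates)

# Toy run: f_t(x) = (lam/2) * ||x - c_t||^2 with slowly moving centers c_t.
T, d, lam = 200, 5, 1.0
rng = np.random.default_rng(0)
centers = np.cumsum(rng.standard_normal((T, d)), axis=0) / np.sqrt(T)
grads = [lambda x, c=c: lam * (x - c) for c in centers]
xs = omgd(grads, d, eta=1.0 / lam, K=3)
print("squared path-length S_T:", np.sum(np.diff(centers, axis=0) ** 2))
print("dynamic regret:", np.sum(0.5 * lam * np.linalg.norm(xs - centers, axis=1) ** 2))
```

With losses of this form, each inner step contracts the distance to the current minimizer, so the cumulative loss gap tracks the squared path-length, in line with Theorem 2.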
When $\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2$ is small, Theorem 2 can be simplified as follows.

Corollary 3. Suppose $\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2 = O(S_T^*)$. Then, from Theorem 2, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) = O\left(\min(P_T^*, S_T^*)\right).$$
In particular, if $\mathbf{x}_t^*$ belongs to the relative interior of $\mathcal{X}$ (i.e., $\nabla f_t(\mathbf{x}_t^*) = 0$) for all $t \in [T]$, Theorem 2, with $\alpha \to 0$, implies
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \leq \min\left( 2G P_T^* + 2G\|\mathbf{x}_1 - \mathbf{x}_1^*\|, \; 2L S_T^* + L\|\mathbf{x}_1 - \mathbf{x}_1^*\|^2 \right).$$

Compared to Theorem 1, the proposed OMGD improves the dynamic regret from $O(P_T^*)$ to $O(\min(P_T^*, S_T^*))$ when the gradients at the minimizers are small. Recall the definitions of $P_T^*$ and $S_T^*$ in (3) and (4), respectively. We can see that $S_T^*$ introduces a square when measuring the difference between $\mathbf{x}_t^*$ and $\mathbf{x}_{t-1}^*$. In this way, if the local variations ($\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|$'s) are small, $S_T^*$ can be significantly smaller than $P_T^*$, as indicated below.

Example 2. Suppose $\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\| = T^{-\tau}$ for all $t \geq 1$ and some $\tau > 0$. We have
$$S_{T+1}^* = T^{1-2\tau} \leq P_{T+1}^* = T^{1-\tau}.$$
In particular, when $\tau = 1/2$, we have $S_{T+1}^* = 1 \ll P_{T+1}^* = \sqrt{T}$.

$S_T^*$ is also closely related to the gradient variation in (6). When all the $\mathbf{x}_t^*$'s belong to the relative interior of $\mathcal{X}$, we have $\nabla f_t(\mathbf{x}_t^*) = 0$ for all $t \in [T]$ and therefore
$$G_T \geq \sum_{t=2}^T \|\nabla f_t(\mathbf{x}_{t-1}^*) - \nabla f_{t-1}(\mathbf{x}_{t-1}^*)\|^2 = \sum_{t=2}^T \|\nabla f_t(\mathbf{x}_{t-1}^*) - \nabla f_t(\mathbf{x}_t^*)\|^2 \geq \lambda^2 S_T^* \tag{7}$$
where the last inequality follows from the property of strongly convex functions [Nesterov, 2004]. The following corollary is an immediate consequence of Theorem 2 and the inequality in (7).

Corollary 4. Suppose Assumption 1 is true, and further assume all the $\mathbf{x}_t^*$'s belong to the relative interior of $\mathcal{X}$. By setting $\eta \leq 1/L$ and $K = \left\lceil \frac{1/\eta + \lambda}{2\lambda} \ln 4 \right\rceil$ in Algorithm 1, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \leq \min\left( 2G P_T^* + 2G\|\mathbf{x}_1 - \mathbf{x}_1^*\|, \; \frac{2L G_T}{\lambda^2} + L\|\mathbf{x}_1 - \mathbf{x}_1^*\|^2 \right).$$

In Theorem 2, the number of gradient accesses $K$ is set to a constant depending on the condition number of the function. One may ask whether we can obtain a tighter bound by using a larger $K$. Unfortunately, according to our analysis, even if we take $K = \infty$, which means $f_t(\cdot)$ is minimized exactly, the upper bound can only be improved by a constant factor and the order remains the same. A related question is whether we can reduce the value of $K$ by adopting more advanced optimization techniques, such as accelerated gradient descent [Nesterov, 2004]. This is an open problem to us, and will be investigated in future work.

Finally, we prove that the $O(S_T^*)$ bound is optimal for strongly convex and smooth functions.

Theorem 5. For any online learning algorithm $A$, there always exists a sequence of strongly convex and smooth functions $f_1, \ldots, f_T$ such that
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) = \Omega(S_T^*)$$
where $\mathbf{x}_1, \ldots, \mathbf{x}_T$ are the solutions generated by $A$. Thus, the upper bound in Theorem 2 cannot be improved in general.

3.2 Semi-strongly Convex and Smooth Functions

During the analysis of Theorems 1 and 2, we realize that the proof is built upon the fact that, when the function is strongly convex and smooth, gradient descent can reduce the distance to the optimal solution by a constant factor [Mokhtari et al., 2016, Proposition 2]. From recent developments in convex optimization, we know that a similar behavior also occurs when the function is semi-strongly convex and smooth [Necoara et al., 2015, Theorem 5.2], which motivates the study in this section.

We first introduce the definition of semi-strong convexity [Gong and Ye, 2014].

Definition 3. A function $f: \mathcal{X} \mapsto \mathbb{R}$ is semi-strongly convex over $\mathcal{X}$ if there exists a constant $\beta > 0$ such that for any $\mathbf{x} \in \mathcal{X}$
$$f(\mathbf{x}) - \min_{\mathbf{x}' \in \mathcal{X}} f(\mathbf{x}') \geq \frac{\beta}{2}\left\|\mathbf{x} - \Pi_{\mathcal{X}^*}(\mathbf{x})\right\|^2 \tag{8}$$
where $\mathcal{X}^* = \{\mathbf{x} \in \mathcal{X} : f(\mathbf{x}) \leq \min_{\mathbf{x}' \in \mathcal{X}} f(\mathbf{x}')\}$ is the set of minimizers of $f$ over $\mathcal{X}$.
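As a numerical illustration of Definition 3, the following sketch checks inequality (8) for a least-squares objective with a rank-deficient design matrix, a function that is semi-strongly convex but not strongly convex. The objective, the value of $\beta$, and the unconstrained domain are hypothetical choices used only for this check.

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x) = ||E x - y||^2 with rank-deficient E: not strongly convex (flat
# directions along the null space of E), but semi-strongly convex on R^d.
d, n, r = 6, 20, 3
E = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))   # rank r < d
y = rng.standard_normal(n)

def f(x):
    return np.linalg.norm(E @ x - y) ** 2

# Minimizer set X* = {x_hat + v : E v = 0}; projection onto it is affine.
x_hat, *_ = np.linalg.lstsq(E, y, rcond=None)        # one particular minimizer
_, _, Vt = np.linalg.svd(E)
null_basis = Vt[r:].T                                  # orthonormal basis of null(E)

def project_to_minimizers(x):
    v = x - x_hat
    return x_hat + null_basis @ (null_basis.T @ v)     # keep only the null-space part

f_min = f(x_hat)
# beta = 2 * (smallest nonzero singular value of E)^2, which makes (8) hold here.
beta = 2 * np.linalg.svd(E, compute_uv=False)[r - 1] ** 2
for _ in range(5):
    x = rng.standard_normal(d)
    lhs = f(x) - f_min
    rhs = 0.5 * beta * np.linalg.norm(x - project_to_minimizers(x)) ** 2
    print(lhs >= rhs - 1e-9)    # inequality (8) holds on random test points
```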
The semi-strong convexity generalizes several non-strongly convex conditions, such as the quadratic approximation property and the error bound property [Wang and Lin, 2014, Necoara et al., 2015]. A class of functions that satisfy semi-strong convexity is provided below [Gong and Ye, 2014].

Example 3. Consider the following constrained optimization problem
$$\min_{\mathbf{x} \in \mathcal{X} \subseteq \mathbb{R}^d} f(\mathbf{x}) = g(E\mathbf{x}) + \mathbf{b}^\top \mathbf{x}$$
where $g(\cdot)$ is strongly convex and smooth, and $\mathcal{X}$ is either $\mathbb{R}^d$ or a polyhedral set. Then, $f: \mathcal{X} \mapsto \mathbb{R}$ is semi-strongly convex over $\mathcal{X}$ with some constant $\beta > 0$.

Based on the semi-strong convexity, we assume the functions satisfy the following conditions.

Assumption 2. Suppose the following conditions hold for each $f_t: \mathcal{X} \mapsto \mathbb{R}$.
1. $f_t$ is semi-strongly convex over $\mathcal{X}$ with parameter $\beta > 0$, and $L$-smooth;
2. $\|\nabla f_t(\mathbf{x})\| \leq G$, $\forall \mathbf{x} \in \mathcal{X}$.

When the function is semi-strongly convex, the optimal solution may not be unique. Thus, we need to redefine $P_T^*$ and $S_T^*$ to account for this freedom. We define
$$P_T^* := \sum_{t=2}^T \max_{\mathbf{x} \in \mathcal{X}} \left\| \Pi_{\mathcal{X}_t^*}(\mathbf{x}) - \Pi_{\mathcal{X}_{t-1}^*}(\mathbf{x}) \right\|, \quad \text{and} \quad S_T^* := \sum_{t=2}^T \max_{\mathbf{x} \in \mathcal{X}} \left\| \Pi_{\mathcal{X}_t^*}(\mathbf{x}) - \Pi_{\mathcal{X}_{t-1}^*}(\mathbf{x}) \right\|^2$$
where $\mathcal{X}_t^* = \{\mathbf{x} \in \mathcal{X} : f_t(\mathbf{x}) \leq \min_{\mathbf{x}' \in \mathcal{X}} f_t(\mathbf{x}')\}$ is the set of minimizers of $f_t$ over $\mathcal{X}$.

In this case, we use standard online gradient descent when the learner can query the gradient only once, and apply the online multiple gradient descent (OMGD) in Algorithm 1 when the learner can access the gradient multiple times. Using analysis similar to that of Theorems 1 and 2, we obtain the following dynamic regret bounds for functions that are semi-strongly convex and smooth.

Theorem 6. Suppose Assumption 2 is true. By setting $\eta \leq 1/L$ in online gradient descent, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x}) \leq \frac{G P_T^*}{1-\gamma} + G \|\mathbf{x}_1 - \bar{\mathbf{x}}_1\|$$
where $\gamma = \sqrt{1 - \frac{\beta}{1/\eta + \beta}}$, and $\bar{\mathbf{x}}_1 = \Pi_{\mathcal{X}_1^*}(\mathbf{x}_1)$.

Thus, online gradient descent still achieves an $O(P_T^*)$ bound of the dynamic regret.

Theorem 7. Suppose Assumption 2 is true. By setting $\eta \leq 1/L$ and $K = \left\lceil \frac{1/\eta + \beta}{\beta} \ln 4 \right\rceil$ in Algorithm 1, for any constant $\alpha > 0$, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} f_t(\mathbf{x}) \leq \min\left( 2G P_T^* + 2G\|\mathbf{x}_1 - \bar{\mathbf{x}}_1\|, \; \frac{\bar{G}_T}{2\alpha} + 2(L+\alpha) S_T^* + (L+\alpha)\|\mathbf{x}_1 - \bar{\mathbf{x}}_1\|^2 \right)$$
where $\bar{G}_T = \max_{\{\mathbf{x}_t^* \in \mathcal{X}_t^*\}_{t=1}^T} \sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2$, and $\bar{\mathbf{x}}_1 = \Pi_{\mathcal{X}_1^*}(\mathbf{x}_1)$.

Again, when the gradients at the minimizers are small, in other words $\bar{G}_T = O(S_T^*)$, the proposed OMGD improves the dynamic regret from $O(P_T^*)$ to $O(\min(P_T^*, S_T^*))$.

3.3 Self-concordant Functions

We extend our previous results to self-concordant functions, which could be non-strongly convex and even non-smooth. Self-concordant functions play an important role in interior-point methods for solving convex optimization problems. We note that in the study of bandit linear optimization [Abernethy et al., 2008b], self-concordant functions have been used as barriers for constraints. However, to the best of our knowledge, this is the first time that the losses themselves are self-concordant. The definition of self-concordant functions is given below [Nemirovski, 2004].

Definition 4. Let $\mathcal{X}$ be a nonempty open convex set in $\mathbb{R}^d$ and $f$ be a $C^3$ convex function defined on $\mathcal{X}$. $f$ is called self-concordant on $\mathcal{X}$ if it possesses the following two properties:
1. $f(\mathbf{x}_i) \to \infty$ along every sequence $\{\mathbf{x}_i \in \mathcal{X}\}$ converging, as $i \to \infty$, to a boundary point of $\mathcal{X}$;
2. $f$ satisfies the differential inequality
$$\left|D^3 f(\mathbf{x})[\mathbf{h}, \mathbf{h}, \mathbf{h}]\right| \leq 2 \left( \mathbf{h}^\top \nabla^2 f(\mathbf{x}) \mathbf{h} \right)^{3/2}$$
for all $\mathbf{x} \in \mathcal{X}$ and all $\mathbf{h} \in \mathbb{R}^d$, where
$$D^3 f(\mathbf{x})[\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3] = \left.\frac{\partial^3}{\partial t_1 \partial t_2 \partial t_3}\right|_{t_1 = t_2 = t_3 = 0} f(\mathbf{x} + t_1 \mathbf{h}_1 + t_2 \mathbf{h}_2 + t_3 \mathbf{h}_3).$$

Example 4. We provide some examples of self-concordant functions below [Boyd and Vandenberghe, 2004, Nemirovski, 2004].
1. The function $f(x) = -\log x$ is self-concordant on $(0, \infty)$.
2. A convex quadratic form $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x} - 2\mathbf{b}^\top \mathbf{x} + c$, where $A \in \mathbb{R}^{d \times d}$, $\mathbf{b} \in \mathbb{R}^d$, and $c \in \mathbb{R}$, is self-concordant on $\mathbb{R}^d$.
3. If $f: \mathbb{R}^d \mapsto \mathbb{R}$ is self-concordant, and $A \in \mathbb{R}^{d \times k}$, $\mathbf{b} \in \mathbb{R}^d$, then $f(A\mathbf{x} + \mathbf{b})$ is self-concordant.
Using the concept of self-concordance, we make the following assumptions.

Assumption 3. Suppose the following conditions hold for each $f_t: \mathcal{X}_t \mapsto \mathbb{R}$.
1. $f_t$ is self-concordant on domain $\mathcal{X}_t$;
2. $f_t$ is non-degenerate on $\mathcal{X}_t$, i.e., $\nabla^2 f_t(\mathbf{x}) \succ 0$, $\forall \mathbf{x} \in \mathcal{X}_t$;
3. $f_t$ attains its minimum on $\mathcal{X}_t$, and we denote $\mathbf{x}_t^* = \operatorname{argmin}_{\mathbf{x} \in \mathcal{X}_t} f_t(\mathbf{x})$.

Our approach is similar to the previous cases except for the updating rule of $\mathbf{x}_t$. Since we do not assume the functions are strongly convex, we need to take the second-order structure into account when updating the current solution $\mathbf{x}_t$. Thus, we assume the learner can query both the gradient and the Hessian of each function multiple times. Specifically, we apply the damped Newton method [Nemirovski, 2004] to update $\mathbf{x}_t$, as follows:
$$\mathbf{z}_t^1 = \mathbf{x}_t, \quad \mathbf{z}_t^{j+1} = \mathbf{z}_t^j - \frac{1}{1 + \lambda_t(\mathbf{z}_t^j)} \left[\nabla^2 f_t(\mathbf{z}_t^j)\right]^{-1} \nabla f_t(\mathbf{z}_t^j), \quad j = 1, \ldots, K$$
where
$$\lambda_t(\mathbf{z}_t^j) = \sqrt{\nabla f_t(\mathbf{z}_t^j)^\top \left[\nabla^2 f_t(\mathbf{z}_t^j)\right]^{-1} \nabla f_t(\mathbf{z}_t^j)}. \tag{9}$$
Then, we set $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$. Since the damped Newton method needs to calculate the inverse of the Hessian matrix, its complexity is higher than that of gradient descent. The procedure is named Online Multiple Newton Update (OMNU) and is summarized in Algorithm 2.

Algorithm 2 Online Multiple Newton Update (OMNU)
Require: The number of inner iterations $K$ in each round
1: Let $\mathbf{x}_1$ be any point in $\mathcal{X}_1$
2: for $t = 1, \ldots, T$ do
3:   Submit $\mathbf{x}_t \in \mathcal{X}$ and then receive the loss $f_t: \mathcal{X} \mapsto \mathbb{R}$
4:   $\mathbf{z}_t^1 = \mathbf{x}_t$
5:   for $j = 1, \ldots, K$ do
6:     $\mathbf{z}_t^{j+1} = \mathbf{z}_t^j - \frac{1}{1 + \lambda_t(\mathbf{z}_t^j)} \left[\nabla^2 f_t(\mathbf{z}_t^j)\right]^{-1} \nabla f_t(\mathbf{z}_t^j)$, where $\lambda_t(\mathbf{z}_t^j)$ is given in (9)
7:   end for
8:   $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$
9: end for

To analyze the dynamic regret of OMNU, we redefine the two regularities $P_T^*$ and $S_T^*$ as follows:
$$P_T^* := \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_t = \sum_{t=2}^T \sqrt{(\mathbf{x}_t^* - \mathbf{x}_{t-1}^*)^\top \nabla^2 f_t(\mathbf{x}_t^*) (\mathbf{x}_t^* - \mathbf{x}_{t-1}^*)},$$
$$S_T^* := \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_t^2 = \sum_{t=2}^T (\mathbf{x}_t^* - \mathbf{x}_{t-1}^*)^\top \nabla^2 f_t(\mathbf{x}_t^*) (\mathbf{x}_t^* - \mathbf{x}_{t-1}^*)$$
where $\|\mathbf{h}\|_t = \sqrt{\mathbf{h}^\top \nabla^2 f_t(\mathbf{x}_t^*) \mathbf{h}}$. Compared to the definitions in (3) and (4), we introduce $\nabla^2 f_t(\mathbf{x}_t^*)$ when measuring the distance between $\mathbf{x}_t^*$ and $\mathbf{x}_{t-1}^*$. When the functions are strongly convex and smooth, these definitions are equivalent up to constant factors. We then define a quantity to compare the second-order structure of consecutive functions:
$$\mu = \max_{t=2,\ldots,T} \left\{ \lambda_{\max}\left( \left[\nabla^2 f_{t-1}(\mathbf{x}_{t-1}^*)\right]^{-1/2} \nabla^2 f_t(\mathbf{x}_t^*) \left[\nabla^2 f_{t-1}(\mathbf{x}_{t-1}^*)\right]^{-1/2} \right) \right\} \tag{10}$$
where $\lambda_{\max}(\cdot)$ computes the maximum eigenvalue of its argument. When all the functions are $\lambda$-strongly convex and $L$-smooth, $\mu \leq L/\lambda$. Then, we have the following theorem regarding the dynamic regret of the proposed OMNU algorithm.

Theorem 8. Suppose Assumption 3 is true, and further assume
$$\|\mathbf{x}_{t-1}^* - \mathbf{x}_t^*\|_t^2 \leq \frac{1}{144}, \quad \forall t \geq 2. \tag{11}$$
When $t = 1$, we choose $K = O(1)\left(f_1(\mathbf{x}_1) - f_1(\mathbf{x}_1^*) + \log\log \mu\right)$ in OMNU such that
$$\|\mathbf{x}_2 - \mathbf{x}_1^*\|_1^2 \leq \frac{1}{144\mu}. \tag{12}$$
For $t \geq 2$, we set $K = \lceil \log_4(16\mu) \rceil$ in OMNU; then
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \leq \min\left( \frac{1}{3} P_T^*, \; 4 S_T^* \right) + f_1(\mathbf{x}_1) - f_1(\mathbf{x}_1^*) + 1.$$

The above theorem again implies that the dynamic regret can be upper bounded by $O(\min(P_T^*, S_T^*))$ when the learner can access the gradient and Hessian multiple times. From the first property of self-concordant functions in Definition 4, we know that $\mathbf{x}_t^*$ must lie in the interior of $\mathcal{X}_t$, and thus $\nabla f_t(\mathbf{x}_t^*) = 0$ for all $t \in [T]$. As a result, we do not need the additional assumption that the gradients at the minimizers are small, which was used before to simplify Theorems 2 and 7.
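The following is a minimal sketch of the damped Newton inner loop used by Algorithm 2 (OMNU), applied to a hypothetical self-concordant loss (a log-barrier term plus a quadratic). The loss, domain, drift model, and parameter values are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def damped_newton_steps(grad, hess, x, K):
    """Inner loop of Algorithm 2 (OMNU): K damped Newton steps on one loss."""
    z = x.copy()
    for _ in range(K):
        g, H = grad(z), hess(z)
        direction = np.linalg.solve(H, g)       # [Hessian]^{-1} gradient
        lam = np.sqrt(g @ direction)            # Newton decrement, cf. eq. (9)
        z = z - direction / (1.0 + lam)         # damped step stays inside the domain
    return z

# Illustrative self-concordant loss on X_t = {x : a^T x < b}:
# f_t(x) = -log(b - a^T x) + 0.5 * ||x - c_t||^2
def make_loss(a, b, c):
    grad = lambda x: a / (b - a @ x) + (x - c)
    hess = lambda x: np.outer(a, a) / (b - a @ x) ** 2 + np.eye(len(c))
    return grad, hess

T, d = 50, 3
rng = np.random.default_rng(1)
a, b = rng.standard_normal(d), 10.0
centers = np.cumsum(rng.standard_normal((T, d)), axis=0) * 0.02   # slow drift
x = np.zeros(d)                                  # x_1, feasible since a^T 0 < b
for t in range(T):                               # online loop of OMNU
    grad, hess = make_loss(a, b, centers[t])
    x = damped_newton_steps(grad, hess, x, K=3)  # x_{t+1} = z_t^{K+1}
print("final iterate:", x)
```

The damping factor $1/(1+\lambda_t)$ is what keeps each iterate within the Dikin ellipsoid of the current point, which is the property invoked in the feasibility discussion below.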
Compared to Theorems 2 and 7, Theorem 8 introduces an additional condition in (11). This condition is required to ensure that $\mathbf{x}_t$ lies in the feasible region of $f_t(\cdot)$; otherwise, $f_t(\mathbf{x}_t)$ can be infinite and it is impossible to bound the dynamic regret. The multiple applications of the damped Newton method enforce $\mathbf{x}_t$ to be close to $\mathbf{x}_{t-1}^*$. Combined with (11), we conclude that $\mathbf{x}_t$ is also close to $\mathbf{x}_t^*$. Then, based on the property of the Dikin ellipsoid of self-concordant functions [Nemirovski, 2004], we can guarantee that $\mathbf{x}_t$ is feasible for $f_t(\cdot)$.

4 Conclusion and Future Work

In this paper, we discuss how to reduce the dynamic regret of online learning by allowing the learner to query the gradient/Hessian of each function multiple times. By applying gradient descent multiple times in each round, we show that the dynamic regret can be upper bounded by the minimum of the path-length and the squared path-length when the functions are strongly convex and smooth. We then extend this theoretical guarantee to functions that are semi-strongly convex and smooth. We finally demonstrate that for self-concordant functions, applying the damped Newton method multiple times achieves a similar result.

In the current study, we upper bound the dynamic regret in terms of the path-length or the squared path-length of the comparator sequence. As mentioned before, there also exist regularities defined in terms of the function sequence, e.g., the functional variation [Besbes et al., 2015]. In the future, we will investigate whether multiple accesses of the gradient/Hessian can improve the dynamic regret when it is measured by regularities of the function sequence. Another direction for future work is to extend our results to the more general dynamic regret
$$R(\mathbf{u}_1, \ldots, \mathbf{u}_T) = \sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{u}_t)$$
where $\mathbf{u}_1, \ldots, \mathbf{u}_T \in \mathcal{X}$ is an arbitrary sequence of comparators [Zinkevich, 2003].

Acknowledgments

This work was partially supported by the NSFC (61603177, 61333014), Jiangsu SF (BK20160658), YESS (2017QNRC001), NSF (IIS-1545995), and the Collaborative Innovation Center of Novel Software Technology and Industrialization. Jinfeng Yi is now at Tencent AI Lab, Bellevue, WA, USA.

References

J. Abernethy, P. L. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, 2008a.

J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory, pages 263-274, 2008b.

O. Besbes, Y. Gur, and A. Zeevi. Non-stationary stochastic optimization. Operations Research, 63(5):1227-1244, 2015.

S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

N. Buchbinder, S. Chen, J. S. Naor, and O. Shamir. Unified algorithms for online learning and competitive analysis. In Proceedings of the 25th Annual Conference on Learning Theory, 2012.

N. Cesa-Bianchi, P. Gaillard, G. Lugosi, and G. Stoltz. Mirror descent meets fixed share (and feels no regret). In Advances in Neural Information Processing Systems 25, pages 980-988, 2012.

C.-K. Chiang, T. Yang, C.-J. Lee, M. Mahdavi, C.-J. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In Proceedings of the 25th Annual Conference on Learning Theory, 2012.

A. Daniely, A. Gonen, and S. Shalev-Shwartz. Strongly adaptive online learning. In Proceedings of the 32nd International Conference on Machine Learning, 2015.

P. Gong and J. Ye. Linear convergence of variance-reduced stochastic gradient without strong convexity. arXiv e-prints, arXiv:1406.1102, 2014.

E. C. Hall and R. M. Willett. Dynamical models and tracking regret in online convex programming. In Proceedings of the 30th International Conference on Machine Learning, pages 579-587, 2013.
E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In Proceedings of the 24th Annual Conference on Learning Theory, pages 421-436, 2011.

E. Hazan and C. Seshadhri. Adaptive algorithms for online decision problems. Electronic Colloquium on Computational Complexity, 88, 2007.

E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.

M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151-178, 1998.

A. Jadbabaie, A. Rakhlin, S. Shahrampour, and K. Sridharan. Online optimization: Competing with dynamic comparators. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.

A. Mokhtari, S. Shahrampour, A. Jadbabaie, and A. Ribeiro. Online optimization in dynamic environments: Improved regret rates for strongly convex problems. arXiv e-prints, arXiv:1603.04954, 2016.

I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. arXiv e-prints, arXiv:1504.06298, 2015.

A. Nemirovski. Interior point polynomial time methods in convex programming. Lecture notes, Technion - Israel Institute of Technology, 2004.

Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Kluwer Academic Publishers, 2004.

S. Rakhlin and K. Sridharan. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems 26, pages 3066-3074, 2013.

S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.

S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, pages 807-814, 2007.

P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. Journal of Machine Learning Research, 15:1523-1548, 2014.

T. Yang, L. Zhang, R. Jin, and J. Yi. Tracking slowly moving clairvoyant: Optimal dynamic regret of online learning with true and noisy gradient. In Proceedings of the 33rd International Conference on Machine Learning, 2016.

M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pages 928-936, 2003.