Non-stationary Online Convex Optimization with Arbitrary Delays

Yuanyu Wan 1 2 3  Chang Yao 1 2 3  Mingli Song 1 3  Lijun Zhang 4

Abstract

Online convex optimization (OCO) with arbitrary delays, in which gradients or other information of functions could be arbitrarily delayed, has received increasing attention recently. Different from previous studies that focus on stationary environments, this paper investigates the delayed OCO in non-stationary environments, and aims to minimize the dynamic regret with respect to any sequence of comparators. To this end, we first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order. Despite its simplicity, our novel analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions, and $O(\sqrt{dT}(P_T+1))$ in the worst case, where $\bar{d}$ and $d$ denote the average and maximum delay respectively, $T$ is the time horizon, and $P_T$ is the path-length of comparators. Furthermore, we develop an improved algorithm, which reduces those dynamic regret bounds achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$ and $O(\sqrt{dT(P_T+1)})$, respectively. The key idea is to run multiple DOGD with different learning rates, and utilize a meta-algorithm to track the best one based on their delayed performance. Finally, we demonstrate that our improved algorithm is optimal in the worst case by deriving a matching lower bound.

1 The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China. 2 School of Software Technology, Zhejiang University, Ningbo, China. 3 Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China. 4 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China. Correspondence to: Yuanyu Wan.

Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

1. Introduction

Online convex optimization (OCO) has become a popular paradigm for solving sequential decision-making problems (Shalev-Shwartz, 2011; Hazan, 2016; Orabona, 2019). In OCO, an online player acts as the decision maker, who chooses a decision $x_t$ from a convex set $\mathcal{K}\subseteq\mathbb{R}^n$ at each round $t\in[T]$. After the decision $x_t$ is committed, the player suffers a loss $f_t(x_t)$, where $f_t(x):\mathcal{K}\mapsto\mathbb{R}$ is a convex function selected by an adversary. To improve the performance in subsequent rounds, the player needs to update the decision by exploiting information about loss functions in previous rounds. Plenty of algorithms and theories have been introduced to guide the player (Zinkevich, 2003; Shalev-Shwartz & Singer, 2007; Hazan et al., 2007). However, most existing studies assume that the information about each function $f_t(x)$ is revealed at the end of round $t$, which is not necessarily satisfied in many real applications. For example, in online advertisement (McMahan et al., 2013; He et al., 2014), each loss function depends on whether a user clicks an ad or not, which may not be decided even when the user has observed the ad for a long period of time.
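To make the protocol above concrete, the following minimal sketch simulates the standard (non-delayed) OCO interaction with a simple gradient-descent player. The quadratic losses, the ball-shaped decision set, and the learning rate are illustrative assumptions of ours and are not prescribed by the paper.

```python
import numpy as np

# Minimal sketch of the OCO protocol described above; losses, decision set,
# and learning rate are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
T, n, D = 50, 5, 2.0                       # horizon, dimension, diameter of the set K
targets = rng.uniform(-0.5, 0.5, (T, n))   # hidden parameters chosen by the adversary

x = np.zeros(n)                            # player's first decision
eta = D / np.sqrt(T)                       # a simple constant learning rate
cumulative_loss = 0.0
for t in range(T):
    cumulative_loss += np.sum((x - targets[t]) ** 2)       # suffer f_t(x_t) = ||x_t - z_t||^2
    grad = 2 * (x - targets[t])                            # feedback revealed at the end of round t
    x = x - eta * grad                                     # gradient descent step
    x *= min(1.0, (D / 2) / (np.linalg.norm(x) + 1e-12))   # project back onto a ball of radius D/2
print("cumulative loss:", round(cumulative_loss, 3))
```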
To tackle this issue, there has been a surge of research interest in OCO with arbitrary delays (Joulani et al., 2013; McMahan & Streeter, 2014; Quanrud & Khashabi, 2015; Joulani et al., 2016; Flaspohler et al., 2021; Wan et al., 2022a;b; 2023a), where the information about $f_t(x)$ is revealed at the end of round $t+d_t-1$, and $d_t\geq 1$ denotes the delay. However, these studies focus on developing algorithms to minimize the static regret of the player, i.e.,
$$R(T)=\sum_{t=1}^T f_t(x_t)-\min_{x\in\mathcal{K}}\sum_{t=1}^T f_t(x),$$
which is only meaningful for stationary environments where at least one fixed decision can minimize the cumulative loss well, and thus cannot handle non-stationary environments where the best decision is drifting over time. To address this limitation, we investigate the delayed OCO with a more suitable performance metric called dynamic regret (Zinkevich, 2003):
$$R(u_1,\dots,u_T)=\sum_{t=1}^T f_t(x_t)-\sum_{t=1}^T f_t(u_t),$$
which compares the player against any sequence of changing comparators $u_1,\dots,u_T\in\mathcal{K}$. It is well known that in the non-delayed setting, online gradient descent (OGD) can attain a dynamic regret bound of $O(\sqrt{T}(P_T+1))$ (Zinkevich, 2003), where $P_T=\sum_{t=2}^T\|u_t-u_{t-1}\|_2$ is the path-length of comparators, and multiple OGD with different learning rates can be combined to achieve an optimal dynamic regret bound of $O(\sqrt{T(P_T+1)})$ by using a meta-algorithm (Zhang et al., 2018a). Thus, it is natural to ask whether these algorithms and dynamic regret bounds can be generalized to the setting with arbitrary delays.

In this paper, we provide an affirmative answer to the above question. Specifically, we first propose delayed online gradient descent (DOGD), and provide a novel analysis of its dynamic regret. In the literature, Quanrud & Khashabi (2015) have developed a delayed variant of OGD for minimizing the static regret, which performs a gradient descent step by using the sum of gradients received in each round. Different from their algorithm, our DOGD performs a gradient descent step for each delayed gradient according to their arrival order, which allows us to exploit an In-Order property (i.e., delays do not change the arrival order of gradients) to reduce the dynamic regret. Let $\bar{d}=\sum_{t=1}^T d_t/T$ and $d=\max\{d_1,\dots,d_T\}$ denote the average and maximum delay, respectively. Our analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions such as the In-Order property, and $O(\sqrt{dT}(P_T+1))$ in the worst case.

Furthermore, inspired by Zhang et al. (2018a), we propose an improved algorithm based on DOGD, namely multiple delayed online gradient descent (Mild-OGD). The essential idea is to run multiple DOGD, each with a different learning rate that enjoys small dynamic regret for a specific path-length, and combine them with a meta-algorithm. Compared with Zhang et al. (2018a), the key challenge is that the performance of each DOGD is required by the meta-algorithm, but it is also arbitrarily delayed. To address this difficulty, our meta-algorithm is built upon the delayed Hedge (Korotin et al., 2020), a technique for prediction with delayed expert advice, which can track the best DOGD based on their delayed performance. We prove that the dynamic regret of Mild-OGD can be automatically bounded by $O(\sqrt{\bar{d}T(P_T+1)})$ under mild assumptions such as the In-Order property, and $O(\sqrt{dT(P_T+1)})$ in the worst case. In the special case without delay, both bounds reduce to the $O(\sqrt{T(P_T+1)})$ bound achieved by Zhang et al.
(2018a). Finally, we demonstrate that our Mild-OGD is optimal in the worst case by deriving a matching lower bound.

2. Related Work

In this section, we briefly review related work on OCO with arbitrary delays and the dynamic regret.

2.1. OCO with Arbitrary Delays

To deal with arbitrary delays, Joulani et al. (2013) first propose a black-box technique, which can extend any non-delayed OCO algorithm into the delayed setting. The main idea is to pool multiple instances of the non-delayed algorithm, each of which runs over a subsequence of rounds that satisfies the non-delayed assumption. Moreover, Joulani et al. (2013) show that if the non-delayed algorithm has a static regret bound of $R(T)$, this technique can attain a static regret bound of $dR(T/d)$. Notice that in the non-delayed setting, there exist plenty of algorithms with an $O(\sqrt{T})$ static regret bound, such as OGD (Zinkevich, 2003). As a result, combined with OGD, this technique can achieve a static regret bound of $O(\sqrt{dT})$. However, despite the generality of this technique, it needs to run multiple instances of the non-delayed algorithm, which could be prohibitively resource-intensive (Quanrud & Khashabi, 2015; Joulani et al., 2016).

For this reason, instead of adopting the technique of Joulani et al. (2013), subsequent studies extend many specific non-delayed OCO algorithms into the delayed setting by only running a single instance of them with the delayed information about all loss functions. Specifically, Quanrud & Khashabi (2015) propose a delayed variant of OGD, and reduce the static regret to $O(\sqrt{\bar{d}T})$, which depends on the average delay $\bar{d}$, instead of the maximum delay $d$. By additionally assuming that the In-Order property holds, McMahan & Streeter (2014) develop a delayed variant of the adaptive gradient (AdaGrad) algorithm (McMahan & Streeter, 2010; Duchi et al., 2011), and establish a data-dependent static regret bound, which could be tighter than $O(\sqrt{\bar{d}T})$ for sparse data. Later, Joulani et al. (2016) propose another delayed variant of AdaGrad, which can attain a data-dependent static regret bound without the In-Order property. Recently, Flaspohler et al. (2021) develop delayed variants of optimistic algorithms (Rakhlin & Sridharan, 2013; Joulani et al., 2017), which can make use of hints about expected future loss functions to improve the $O(\sqrt{\bar{d}T})$ static regret. Wan et al. (2022a) extend the delayed variant of OGD (Quanrud & Khashabi, 2015) to further exploit the strong convexity of functions. Wan et al. (2022b; 2023a) develop a delayed variant of online Frank-Wolfe (Hazan & Kale, 2012), and obtain a static regret bound of $O(T^{3/4}+dT^{1/4})$; their algorithm is projection-free and can be efficiently implemented over complex constraints. We also notice that Korotin et al. (2020) consider the problem of prediction with expert advice, a special case of OCO with linear functions and simplex decision sets, and propose a delayed variant of Hedge (Freund & Schapire, 1997) to achieve the $O(\sqrt{\bar{d}T})$ static regret.

2.2. Dynamic Regret

The dynamic regret of OCO is first introduced by Zinkevich (2003), who demonstrates that OGD can attain a dynamic regret bound of $O(\sqrt{T}(P_T+1))$ by simply utilizing a constant learning rate. Later, Zhang et al. (2018a) establish a lower bound of $\Omega(\sqrt{T(P_T+1)})$ for the dynamic regret. Moreover, to improve the upper bound, Zhang et al.
(2018a) propose a novel algorithm that runs multiple instances of OGD with different learning rates in parallel, and tracks the best one via Hedge (Freund & Schapire, 1997). Although the strategy of maintaining multiple learning rates is originally proposed to adaptively minimize the static regret for multiple types of functions (van Erven & Koolen, 2016; van Erven et al., 2021), Zhang et al. (2018a) extend it to achieve an optimal dynamic regret bound of $O(\sqrt{T(P_T+1)})$. Subsequent studies achieve tighter dynamic regret bounds for special types of data (Cutkosky, 2020) and functions (Zhao et al., 2020; Baby & Wang, 2021; 2022; 2023), and reduce the computational complexity for handling complex constraints (Zhao et al., 2022; Wang et al., 2024). Besides, there also exist plenty of studies (Jadbabaie et al., 2015; Besbes et al., 2015; Yang et al., 2016; Mokhtari et al., 2016; Zhang et al., 2017; 2018b; Baby & Wang, 2019; Wan et al., 2021; 2023b; Zhao & Zhang, 2021; Wang et al., 2021; 2023) that focus on a restricted form of the dynamic regret, in which $u_t=x_t^\ast\in\operatorname{argmin}_{x\in\mathcal{K}}f_t(x)$. However, as discussed by Zhang et al. (2018a), the restricted dynamic regret is too pessimistic and less flexible than the general one.

2.3. Discussions

Although both arbitrary delays and the dynamic regret have attracted much research interest, it is still unclear how arbitrary delays affect the dynamic regret. Recently, Wang et al. (2021; 2023) have demonstrated that under a fixed and knowable delay $d'$, simply performing OGD with a delayed gradient $\nabla f_{t-d'+1}(x_{t-d'+1})$ is able to achieve a restricted dynamic regret bound of $O(\sqrt{d'T(P_T^\ast+1)})$ when $P_T^\ast=\sum_{t=2}^T\|x_t^\ast-x_{t-1}^\ast\|_2$ is also knowable.¹ However, their algorithm and theoretical results do not apply to the general dynamic regret under arbitrary delays. Moreover, one may try to extend existing algorithms with dynamic regret bounds into the delayed setting via the black-box technique of Joulani et al. (2013). However, we want to emphasize that they focus on the static regret, and their analysis cannot directly yield a dynamic regret bound. In addition, since their technique does not achieve the $O(\sqrt{\bar{d}T})$ static regret, it seems also unable to achieve the $O(\sqrt{\bar{d}T(P_T+1)})$ dynamic regret even under the In-Order assumption.

3. Main Results

In this section, we first introduce necessary assumptions, and then present our DOGD and Mild-OGD. Finally, we provide a matching lower bound to demonstrate the optimality of our Mild-OGD in the worst case.

¹Note that Wang et al. (2021; 2023) aim to handle a special decision set with long-term constraints, and thus their algorithm is more complicated than OGD with the delayed gradient. Here, we omit other details of their algorithm because such a decision set is beyond the scope of this paper.

3.1. Assumptions

Assumption 3.1. The gradients of all functions are bounded by $G$, i.e., $\|\nabla f_t(x)\|_2\leq G$ for any $x\in\mathcal{K}$ and $t\in[T]$.

Assumption 3.2. The decision set $\mathcal{K}$ contains the origin $\mathbf{0}$, and its diameter is bounded by $D$, i.e., $\|x-y\|_2\leq D$ for any $x,y\in\mathcal{K}$.

Assumption 3.3. Delays do not change the arrival order of gradients, i.e., the gradient $\nabla f_i(x_i)$ is received before the gradient $\nabla f_j(x_j)$, for any $1\leq i<j\leq T$.

Remark: The first two assumptions have been commonly utilized in previous studies on OCO (Shalev-Shwartz, 2011; Hazan, 2016).
To further justify the rationality of Assumption 3.3, we notice that parallel and distributed optimization (McMahan & Streeter, 2014; Zhou et al., 2018) is also a representative application of delayed OCO. For parallel optimization with many threads, the delay is mainly caused by the computing time of gradients. Thus, as in McMahan & Streeter (2014), it is reasonable to assume that these delays satisfy the In-Order assumption, because the gradient computed first is more likely to be obtained first. Even for general parallel and distributed optimization, polynomially growing delays, which imply $d_i\leq d_j$ for $i<j$ and thus satisfy the In-Order assumption, have received much attention in recent years (Zhou et al., 2018; Ren et al., 2020; Zhou et al., 2022). Moreover, we want to emphasize that Assumption 3.3 is only utilized to achieve the dynamic regret bound depending on the average delay $\bar{d}$, and the case without this assumption is also considered.

3.2. DOGD with Dynamic Regret

In the following, we first introduce detailed procedures of DOGD, and then present its theoretical guarantees.

3.2.1. Detailed Procedures

Recall that in the non-delayed setting, the classical OGD algorithm (Zinkevich, 2003) at each round $t$ updates the decision as
$$x_{t+1}=\operatorname{argmin}_{x\in\mathcal{K}}\|x-(x_t-\eta\nabla f_t(x_t))\|_2^2 \quad (1)$$
where $\eta$ is a learning rate. To handle the setting with arbitrary delays, Quanrud & Khashabi (2015) have proposed a delayed variant of OGD by replacing $\nabla f_t(x_t)$ with the sum of gradients received in round $t$. However, it ignores the arrival order of gradients, and thus cannot benefit from the In-Order property when minimizing the dynamic regret. To address this limitation, we propose a new delayed variant of OGD, which performs a gradient descent step for each delayed gradient according to their arrival order. Specifically, our algorithm is named as delayed online gradient descent (DOGD) and outlined in Algorithm 1, where $\tau$ records the number of generated decisions and $y_\tau$ denotes the $\tau$-th generated decision.

Algorithm 1 DOGD
1: Input: a learning rate $\eta$
2: Initialization: set $y_1=\mathbf{0}$ and $\tau=1$
3: for $t=1,\dots,T$ do
4:   Play $x_t=y_\tau$ and query $\nabla f_t(x_t)$
5:   Receive $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$
6:   for $k\in\mathcal{F}_t$ (in the ascending order) do
7:     Compute $y_{\tau+1}$ as in (2) and set $\tau=\tau+1$
8:   end for
9: end for

Initially, we set $y_1=\mathbf{0}$ and $\tau=1$. At each round $t\in[T]$, we first play the latest decision $x_t=y_\tau$ and query the gradient $\nabla f_t(x_t)$. After that, due to the effect of arbitrary delays, we receive a set of delayed gradients $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$, where $\mathcal{F}_t=\{k\in[T]\,|\,k+d_k-1=t\}$. For each $k\in\mathcal{F}_t$, inspired by (1), we perform the following update
$$y_{\tau+1}=\operatorname{argmin}_{x\in\mathcal{K}}\|x-(y_\tau-\eta\nabla f_k(x_k))\|_2^2 \quad (2)$$
and then set $\tau=\tau+1$. Moreover, to utilize the In-Order property, elements in the set $\mathcal{F}_t$ are sorted and traversed in the ascending order.

3.2.2. Theoretical Guarantees

We notice that due to the effect of delays, there could exist some gradients that arrive after round $T$. Although our DOGD does not need to utilize these gradients, they are useful to facilitate our analysis and discussion. Therefore, in the analysis of DOGD, we virtually set $x_t=y_\tau$ and perform steps 5 to 8 in Algorithm 1 at some additional rounds $t=T+1,\dots,T+d-1$. In this way, all queried gradients are utilized to generate decisions $y_1,\dots,y_{T+1}$. Moreover, we denote the time-stamp of the $\tau$-th utilized gradient by $c_\tau$. To help understanding, one can imagine that DOGD also sets $c_\tau=k$ at the beginning of its step 7.
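To make Algorithm 1 and the bookkeeping above concrete, the following sketch (our illustration, not the authors' code) implements DOGD over a Euclidean ball, including the time-stamps $c_\tau$ used in the analysis; the projection step and the feedback interface are assumptions made for this example.

```python
import numpy as np

def project_ball(x, D):
    # Assumed decision set for this sketch: a Euclidean ball of diameter D centered
    # at the origin, standing in for the projection onto K in update (2).
    norm = np.linalg.norm(x)
    return x if norm <= D / 2 else x * (D / 2) / norm

def dogd(grad_oracle, delays, T, n, eta, D):
    """Sketch of Algorithm 1 (DOGD): grad_oracle(t, x) returns the gradient of f_t at x,
    and the gradient queried at round t arrives at the end of round t + delays[t-1] - 1."""
    y = [np.zeros(n)]        # y_1 = 0
    tau = 0                  # y[tau] is the latest generated decision
    pending = {}             # arrival round -> list of (time-stamp k, gradient)
    timestamps = []          # c_tau: time-stamp of the tau-th utilized gradient
    decisions = []
    for t in range(1, T + 1):
        x_t = y[tau]                                        # play x_t = y_tau
        decisions.append(x_t)
        g_t = grad_oracle(t, x_t)                           # query the gradient of f_t
        pending.setdefault(t + delays[t - 1] - 1, []).append((t, g_t))
        for k, g_k in sorted(pending.pop(t, [])):           # F_t, traversed in ascending order
            y.append(project_ball(y[tau] - eta * g_k, D))   # update (2)
            timestamps.append(k)                            # c_tau = k
            tau += 1
    return decisions, timestamps

# Example usage with linear losses f_t(x) = <g_t, x> and synthetic delays.
rng = np.random.default_rng(1)
gs = rng.normal(size=(8, 3))
xs, cs = dogd(lambda t, x: gs[t - 1], delays=[1, 3, 2, 1, 4, 1, 2, 3], T=8, n=3, eta=0.1, D=2.0)
```

Gradients whose arrival round falls after $T$ are simply never popped, which mirrors the fact that DOGD does not need them, while the virtual rounds in the analysis would process them as well.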
Then, we establish the following theorem with only Assumptions 3.1 and 3.2.

Theorem 3.4. Under Assumptions 3.1 and 3.2, for any comparator sequence $u_1,\dots,u_T\in\mathcal{K}$, Algorithm 1 ensures
$$R(u_1,\dots,u_T)\leq\frac{D^2+DP_T}{\eta}+\eta G^2\sum_{t=1}^T m_t+\sum_{t=1}^T G\|u_t-u_{c_t}\|_2 \quad (3)$$
where $m_t=t-\sum_{i=1}^{t-1}|\mathcal{F}_i|$.

Remark: The value of $m_t-1$ actually counts the number of gradients that have been queried, but still not received, at the end of round $t-1$. Since the gradient $\nabla f_t(x_t)$ will only be counted as an unreceived gradient in $d_t-1$ rounds, it is easy to verify that
$$\sum_{t=1}^T m_t\leq\sum_{t=1}^T d_t=\bar{d}T. \quad (4)$$
Therefore, the first two terms in the right side of (3) are upper bounded by $(2D+P_T)G\sqrt{\bar{d}T}$ so long as
$$\eta=\frac{D}{G\sqrt{\sum_{t=1}^T m_t}}. \quad (5)$$
However, we still need to bound the last term in the right side of (3), which reflects the comparator drift caused by arbitrary delays, and has never appeared in previous studies on the delayed feedback and dynamic regret. To this end, we establish the following lemma regarding the comparator drift.

Lemma 3.5. Under Assumption 3.2, for any comparator sequence $u_1,\dots,u_T\in\mathcal{K}$, Algorithm 1 ensures
$$\sum_{t=1}^T\|u_t-u_{c_t}\|_2\leq\min\{KD,\,2dP_T\}$$
where $K=\sum_{t=1}^T\mathbb{I}(t\neq c_t)$ and $\mathbb{I}(\cdot)$ denotes the indicator function.

Remark: Since Algorithm 1 utilizes the received gradients in the ascending order, the value of $K$ counts the number of delays that are not in order. Therefore, Lemma 3.5 implies that the comparator drift can be upper bounded by $O(\sqrt{dTP_T})$ in the worst case because of $K\leq T$, and vanishes if the In-Order property holds, i.e., $K=0$. To facilitate discussions, we mainly focus on these two extremes, though the comparator drift can be bounded by $O(\sqrt{\bar{d}TP_T})$ in an intermediate case with $K\leq O(T\bar{d}/d)$.

By further combining Theorem 3.4 with (4) and Lemma 3.5, we derive the following corollary.

Corollary 3.6. Under Assumptions 3.1 and 3.2, by setting $\eta$ as in (5), Algorithm 1 ensures
$$R(u_1,\dots,u_T)\leq(2D+P_T)G\sqrt{\bar{d}T}+C$$
for any comparator sequence $u_1,\dots,u_T\in\mathcal{K}$, where
$$C=\begin{cases}0, & \text{if Assumption 3.3 also holds;}\\ \min\{TGD,\,2dGP_T\}, & \text{otherwise.}\end{cases} \quad (6)$$

Remark: From Corollary 3.6, our DOGD enjoys a dynamic regret bound of $O(\sqrt{\bar{d}T}(P_T+1)+C)$, which is adaptive to the upper bound of the comparator drift. First, because of $\min\{TGD,2dGP_T\}\leq G\sqrt{2dTDP_T}$ and $\bar{d}\leq d$, the dynamic regret of DOGD can be bounded by $O(\sqrt{dT}(P_T+1))$ in the worst case, which magnifies the $O(\sqrt{T}(P_T+1))$ dynamic regret of OGD (Zinkevich, 2003) in the non-delayed setting by a coefficient depending on the maximum delay $d$. Second, in case $C\leq O(\sqrt{\bar{d}T}P_T)$, the dynamic regret of DOGD automatically reduces to $O(\sqrt{\bar{d}T}(P_T+1))$, which depends on the average delay. According to (6), this condition can be simply satisfied for all possible $P_T$ when the In-Order property holds or $d\leq\sqrt{\bar{d}T}$. Third, by substituting $u_1=\dots=u_T$ into Corollary 3.6, we find that DOGD can attain a static regret bound of $O(\sqrt{\bar{d}T})$ for arbitrary delays, which matches the best existing result (Quanrud & Khashabi, 2015).

Remark: At first glance, Corollary 3.6 needs to set the learning rate as in (5), which may become a limitation of DOGD, because the value of $\sum_{t=1}^T m_t$ is generally unknown in practice. However, we note that Quanrud & Khashabi (2015) also face this issue when minimizing the static regret of OCO with arbitrary delays, and have introduced a simple solution by utilizing the standard doubling trick (Cesa-Bianchi et al., 1997) to adaptively adjust the learning rate. The main insight behind this solution is that the value of $\sum_{t=1}^T m_t$ can be calculated on the fly.
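As a small illustration of this insight, the sketch below (our own, with synthetic delays and a simplified epoch reset relative to the procedure in the appendix) maintains the running value of $\sum_{j\leq t}m_j$ online and uses it as the doubling-trick test.

```python
# Illustrative sketch (synthetic delays; simplified restart bookkeeping) showing that
# m_t = t - sum_{i<t} |F_i| is known before round t's feedback arrives, so the running
# sum needed by the doubling trick can be maintained on the fly.
T = 20
delays = [1 + (t % 3) for t in range(1, T + 1)]        # some delays d_t >= 1
arrivals = {}                                          # round -> time-stamps arriving then
for t, d in enumerate(delays, start=1):
    arrivals.setdefault(t + d - 1, []).append(t)       # F_t = {k : k + d_k - 1 = t}

received, running_sum, epoch = 0, 0, 1
for t in range(1, T + 1):
    m_t = t - received                                 # gradients queried but not yet received, plus one
    running_sum += m_t
    if running_sum > 2 ** epoch:                       # the current estimate 2^epoch is exceeded
        epoch += 1                                     # here DOGD would restart with a smaller learning rate
        running_sum = m_t                              # simplified reset of the per-epoch counter
    received += len(arrivals.get(t, []))               # |F_t| gradients arrive at the end of round t
print("final epoch:", epoch)
```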
The details about DOGD with the doubling trick are provided in the appendix.

3.3. Mild-OGD with Improved Dynamic Regret

One unsatisfactory point of DOGD is that its dynamic regret depends linearly on the path-length. Notice that if only a specific path-length $P_T$ is considered, from Theorem 3.4, we can tune the learning rate as $\eta=\sqrt{D(D+P_T)}\big/\big(G\sqrt{\sum_{t=1}^T m_t}\big)$ and obtain a dynamic regret sublinear in $P_T$. However, our goal is to minimize the dynamic regret with respect to any possible path-length $P_T$. To address this dilemma, inspired by Zhang et al. (2018a), we develop an algorithm that runs multiple DOGD as experts, each with a different learning rate for a specific path-length, and combines them with a meta-algorithm. It is worth noting that the meta-algorithm of Zhang et al. (2018a) is incompatible with the delayed setting studied here. To this end, we adopt the delayed Hedge (Korotin et al., 2020), an expert-tracking method under arbitrary delays, to design our meta-algorithm. Moreover, there exist two options for the meta-algorithm to maintain these expert-algorithms: running them over the original functions $\{f_t(x)\}_{t\in[T]}$ or the surrogate functions $\{\ell_t(x)\}_{t\in[T]}$, where
$$\ell_t(x)=\langle\nabla f_t(x_t),x-x_t\rangle \quad (7)$$
and $x_t$ is the decision of the meta-algorithm. In this paper, we choose the second option, because the surrogate functions allow expert-algorithms to reuse the gradient of the meta-algorithm, and thus can avoid inconsistent delays between the meta-algorithm and expert-algorithms. Specifically, our algorithm is named as multiple delayed online gradient descent (Mild-OGD), and stated below.

Algorithm 2 Mild-OGD: Meta-algorithm
1: Input: a parameter $\alpha$ and a set $\mathcal{H}$ containing learning rates for experts
2: Activate a set of experts $\{E^\eta\,|\,\eta\in\mathcal{H}\}$ by invoking the expert-algorithm for each learning rate $\eta\in\mathcal{H}$
3: Sort learning rates in the ascending order, i.e., $\eta_1\leq\dots\leq\eta_{|\mathcal{H}|}$, and set $w_1^{\eta_i}=\frac{|\mathcal{H}|+1}{i(i+1)|\mathcal{H}|}$
4: for $t=1,\dots,T$ do
5:   Receive $x_t^\eta$ from each expert $E^\eta$
6:   Play the decision $x_t=\sum_{\eta\in\mathcal{H}}w_t^\eta x_t^\eta$
7:   Query $\nabla f_t(x_t)$ and receive $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$
8:   Update the weight of each expert as in (8)
9:   Send $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$ to each expert $E^\eta$
10: end for

Meta-algorithm. Let $\mathcal{H}$ denote a set of learning rates for experts. We first activate a set of experts $\{E^\eta\,|\,\eta\in\mathcal{H}\}$ by invoking the expert-algorithm for each learning rate $\eta\in\mathcal{H}$. Let $\eta_i$ be the $i$-th smallest learning rate in $\mathcal{H}$. Following Zhang et al. (2018a), the initial weight of each expert $E^{\eta_i}$ is set as
$$w_1^{\eta_i}=\frac{|\mathcal{H}|+1}{i(i+1)|\mathcal{H}|}.$$
In each round $t\in[T]$, our meta-algorithm receives a decision $x_t^\eta$ from each expert $E^\eta$, and then plays the weighted decision
$$x_t=\sum_{\eta\in\mathcal{H}}w_t^\eta x_t^\eta.$$
After that, it queries the gradient $\nabla f_t(x_t)$, but only receives $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$ due to the effect of arbitrary delays. Then, according to the delayed Hedge (Korotin et al., 2020), we update the weight of each expert as
$$w_{t+1}^\eta=\frac{w_t^\eta e^{-\alpha\sum_{k\in\mathcal{F}_t}\ell_k(x_k^\eta)}}{\sum_{\mu\in\mathcal{H}}w_t^\mu e^{-\alpha\sum_{k\in\mathcal{F}_t}\ell_k(x_k^\mu)}} \quad (8)$$
where $\alpha$ is a parameter and $\ell_k(x)$ is defined in (7). This is the critical difference between our meta-algorithm and that in Zhang et al. (2018a), which updates the weight according to the vanilla Hedge (Cesa-Bianchi et al., 1997). Finally, we send the gradients $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$ to each expert $E^\eta$ so that they can update their own decisions without querying additional gradients. The detailed procedures of our meta-algorithm are summarized in Algorithm 2.
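The weight update (8) is the only place where the meta-algorithm differs from the vanilla Hedge, so the following minimal sketch (our illustration, not the authors' code) isolates it: each expert's weight is scaled by the exponential of its delayed surrogate losses that arrived in the current round, and the weights are then renormalized.

```python
import numpy as np

# Minimal sketch of the delayed-Hedge weight update in (8).
def delayed_hedge_update(weights, delayed_losses, alpha):
    """weights: current expert weights (nonnegative, summing to one).
    delayed_losses: for each expert, the value sum_{k in F_t} l_k(x_k^eta).
    alpha: the parameter of the meta-algorithm."""
    new_weights = weights * np.exp(-alpha * delayed_losses)
    return new_weights / new_weights.sum()

# Example with three experts; the surrogate losses may be negative.
w = np.array([0.5, 0.3, 0.2])
losses = np.array([0.4, -0.2, 1.1])
print(delayed_hedge_update(w, losses, alpha=0.1))
```

In a round where no feedback arrives, i.e., $\mathcal{F}_t=\emptyset$, every sum in (8) is zero and the weights remain unchanged, which is how the meta-algorithm copes with rounds that provide no information.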
Algorithm 3 Mild-OGD: Expert-algorithm
1: Input: a learning rate $\eta$
2: Initialization: set $y_1^\eta=\mathbf{0}$ and $\tau=1$
3: for $t=1,\dots,T$ do
4:   Submit $x_t^\eta=y_\tau^\eta$ to the meta-algorithm
5:   Receive gradients $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$ from the meta-algorithm
6:   for $k\in\mathcal{F}_t$ (in the ascending order) do
7:     Compute $y_{\tau+1}^\eta$ as in (9) and set $\tau=\tau+1$
8:   end for
9: end for

Expert-algorithm. The expert-algorithm is instantiated by running DOGD over the surrogate loss functions defined in (7), instead of the real loss functions. To emphasize this difference, we present its procedures in Algorithm 3. The input and initialization are the same as those in DOGD. At each round $t\in[T]$, the expert-algorithm first submits the decision $x_t^\eta=y_\tau^\eta$ to the meta-algorithm, and then receives the gradients $\{\nabla f_k(x_k)\,|\,k\in\mathcal{F}_t\}$ from the meta-algorithm. For each $k\in\mathcal{F}_t$, it updates the decision as
$$y_{\tau+1}^\eta=\operatorname{argmin}_{x\in\mathcal{K}}\|x-(y_\tau^\eta-\eta\nabla f_k(x_k))\|_2^2 \quad (9)$$
and sets $\tau=\tau+1$.

We have the following theoretical guarantee for the dynamic regret of Mild-OGD.

Theorem 3.7. Let $m_t=t-\sum_{i=1}^{t-1}|\mathcal{F}_i|$. Under Assumptions 3.1 and 3.2, by setting
$$\mathcal{H}=\left\{\eta_i=\frac{2^{i-1}D}{G\sqrt{\beta}}\,\Big|\,i=1,\dots,N\right\}\quad\text{and}\quad\alpha=\frac{1}{GD\sqrt{\beta}}$$
where $N=\lceil\tfrac{1}{2}\log_2(T+1)\rceil+1$ and $\beta=\sum_{t=1}^T m_t$, Algorithm 2 ensures
$$R(u_1,\dots,u_T)\leq\left(3\sqrt{D(D+P_T)}+D\right)G\sqrt{\bar{d}T}+C+2GD\sqrt{\bar{d}T}\ln(k+1)=O\left(\sqrt{\bar{d}T(P_T+1)}+C\right)$$
for any comparator sequence $u_1,\dots,u_T\in\mathcal{K}$, where $k=\lfloor\log_2\sqrt{(P_T+D)/D}\rfloor+1$ and $C$ is defined in (6).

Remark: Theorem 3.7 shows that Mild-OGD can attain a dynamic regret bound of $O(\sqrt{\bar{d}T(P_T+1)}+C)$, which is also adaptive to the upper bound of the comparator drift. It is easy to verify that this dynamic regret bound becomes $O(\sqrt{dT(P_T+1)})$ in the worst case. Moreover, it reduces to $O(\sqrt{\bar{d}T(P_T+1)})$ in case $C\leq O(\sqrt{\bar{d}TP_T})$, which can be satisfied for all possible $P_T$ when the In-Order property holds, or for $P_T\leq O(\bar{d}T/d^2)$. Compared with the dynamic regret of DOGD, Mild-OGD reduces the linear dependence on $P_T$ to a sublinear one. Moreover, compared with the optimal $O(\sqrt{T(P_T+1)})$ bound achieved in the non-delayed setting (Zhang et al., 2018a), Mild-OGD magnifies it by a coefficient depending on delays. We also notice that although Theorem 3.7 requires the value of $\sum_{t=1}^T m_t$ to tune parameters, as previously discussed, this requirement can be removed by utilizing the doubling trick. The details about Mild-OGD with the doubling trick are provided in the appendix.

3.4. Lower Bound

Finally, we show that our Mild-OGD is optimal in the worst case by establishing the following lower bound.

Theorem 3.8. Let $L=\lceil TD/\max\{P,D\}\rceil$. Suppose $\mathcal{K}=[-D/(2\sqrt{n}),D/(2\sqrt{n})]^n$, which satisfies Assumption 3.2. For any OCO algorithm, any $P\in[0,TD]$, and any positive integer $d$, there exists a sequence of comparators $u_1,\dots,u_T\in\mathcal{K}$ satisfying $P_T\leq P$, a sequence of functions $f_1(x),\dots,f_T(x)$ satisfying Assumption 3.1, and a sequence of delays $1\leq d_1,\dots,d_T\leq d$ such that
$$R(u_1,\dots,u_T)=\begin{cases}\Omega(DGT), & \text{if } d>L;\\ \Omega\left(G\sqrt{dD\max\{P,D\}T}\right), & \text{otherwise.}\end{cases}$$

Remark: From Theorem 3.8, if $d>L=\Omega(T/(P_T+1))$, there exists an $\Omega(T)$ lower bound on the dynamic regret, which can be trivially matched by any OCO algorithm including our Algorithm 2. As a result, we mainly focus on the case $d\leq L$, and notice that Theorem 3.8 essentially establishes an $\Omega(\sqrt{dT(P_T+1)})$ lower bound, which matches the $O(\sqrt{dT(P_T+1)})$ dynamic regret of our Mild-OGD in the worst case. To the best of our knowledge, this is the first lower bound for the dynamic regret of the delayed OCO.

4. Analysis

In this section, we prove Theorem 3.4, Lemma 3.5, Theorem 3.7, and Theorem 3.8 by introducing some lemmas. The omitted proofs can be found in the appendix.

4.1. Proof of Theorem 3.4

It is easy to verify that R(u1, . . .
, u T ) t=1 ft(xt), xt ut where the inequality is due to the convexity of functions. Non-stationary Online Convex Optimization with Arbitrary Delays Let τt = 1 + Pt 1 i=1 |Fi|. Then, combining the above inequality with the fact that c1, . . . , c T is a permutation of 1, . . . , T, we have R(u1, . . . , u T ) t=1 fct(xct), xct uct fct(xct), yτct uct fct(xct), yt ut + yτct yt t=1 fct(xct), ut uct t=1 fct(xct), yt ut + t=1 G yτct yt 2 t=1 G ut uct 2 where the first equality is due to xt = yτt in Algorithm 1, and the last inequality is due to Assumption 3.1. Let y t+1 = yt η fct(xct). For the first term in the right side of (10), we have t=1 fct(xct), yt ut = yt y t+1, yt ut yt ut 2 2 y t+1 ut 2 2 + yt y t+1 2 2 yt ut 2 2 y t+1 ut 2 2 + η fct(xct) 2 2 1 2η yt ut 2 2 yt+1 ut 2 2 + η2G2 yt 2 2 yt+1 2 2 2η + yt+1 yt, ut η y T +1, u T + 1 η ut 1 ut, yt + ηTG2 η y T +1 2 u T 2 + 1 η ut 1 ut 2 yt 2 + ηTG2 where the first inequality is due to Assumption 3.1, the second inequality is due to y1 = 0 and y T +1 2 2 0, and the last inequality is due to Assumption 3.2. Then, we proceed to bound the second term in the right side of (10). Note that before round ct, Algorithm 1 has received τct 1 gradients, and thus has generated y1, . . . , yτct . Moreover, let q = ct + dct 1. It is easy to verify that q ct, and thus Algorithm 1 has also generated y1, . . . , yτct before round q. Since the gradient fct(xct) is used to update yt in round q, we have τct t. (12) From (12), we have k=τct yk yk+1 2 k=τct η fck(xck) 2 t=1 (t τct) where the last inequality is due to Assumption 3.1. Moreover, because of the definitions of τt and mt, we have t=1 (t τct) = where the second equality is due to the fact that c1, . . . , c T is a permutation of 1, . . . , T. Then, combining (13) with (14), we have t=1 G yτct yt 2 ηG2 T X t=1 (mt 1) . (15) Finally, combining (10) with (11) and (15), we have t=1 (ft(xt) ft(ut)) η + ηG2 T X t=1 G ut uct 2 which completes this proof. Non-stationary Online Convex Optimization with Arbitrary Delays 4.2. Proof of Lemma 3.5 Since fct(xct) is the t-th used gradient and arrives at the end of round ct + dct 1, it is not hard to verify that t ct + dct 1 ct + d 1 (16) for any t [T], and there are at most t 1 arrived gradients before round ct + dct 1. Notice that gradients queried at rounds 1, . . . , t must have arrived at the end of round t + d 1. Therefore, we also have ct + dct 2 < t + d 1, which implies that ct t + d dct t + d 1. (17) If t [T] and ct t, according to (16), we have k=ct uk+1 uk 2 min{ct+d 2,T 1} X k=ct uk+1 uk 2. Otherwise, if t [T] and ct > t, according to (17), we have k=t uk+1 uk 2 min{t+d 2,T 1} X k=t uk+1 uk 2. Therefore, combining (18) and (19), we have t=1 ut uct 2 min{ct+d 2,T 1} X k=ct uk+1 uk 2 min{t+d 2,T 1} X k=t uk+1 uk 2 min{t+d 2,T 1} X k=t uk+1 uk 2 t=1 ut+1 ut 2 = 2d PT where the equality is due to the fact that c1, . . . , c T is a permutation of 1, . . . , T. Then, we complete this proof by further noticing that Assumption 3.2 and the definition of K can ensure t=1 ut uct 2 = t=1 I(t = ct) ut uct 2 t=1 I(t = ct)D = DK. 4.3. Proof of Theorem 3.7 D(D + PT )/(βG2), where β = PT t=1 mt. From Assumption 3.2, we have t=2 ut ut 1 2 TD which implies that η1 = D G β η D T + 1 G β η|H|. Therefore, for any possible value of PT , there must exist a learning rate ηk H such that ηk η 2ηk (20) where k = log2 p (PT + D)/D + 1. Then, the dynamic regret can be upper bounded as follows R(u1, . . . , u T ) t=1 ft(xt), xt ut t=1 ℓt (xt) t=1 ℓt (xηk t ) t=1 ℓt (xηk t ) t=1 ℓt (ut) . 
To bound the first term in the right side of (21), we introduce the following lemma. Lemma 4.1. Let mt = t Pt 1 i=1 |Fi|. Under Assumptions 3.1 and 3.2, for any η H, Algorithm 2 has t=1 ℓt (xt) t=1 ℓt(xη t ) 1 wη 1 + αG2D2 T X Combining Lemma 4.1 with (1/wηk 1 ) (k + 1)2 and α = 1 GD PT t=1 mt , under Assumptions 3.1 and 3.2, we have t=1 ℓt (xt) t=1 ℓt(xηk t ) t=1 mt ln(k + 1) + GD d T ln (k + 1) + GD p where the last inequality is due to (4). Note that each expert Eη actually is equal to running Algorithm 1 with ℓ1(x), . . . , ℓT (x), where each gradient ℓt(xη t ) = ft(xt) is delayed to the end of round t+dt 1. Non-stationary Online Convex Optimization with Arbitrary Delays Therefore, combining Theorem 3.4 with Lemma 3.5 and the definition of C in (6), under Assumptions 3.1 and 3.2, we have t=1 ℓt (xηk t ) t=1 ℓt (ut) ηk + ηk G2 T X 2(D2 + DPT ) η + η G2 T X D(D + PT ) p where the second inequality is due to (20), and the last inequality is due to the definition of η and (4). Finally, we complete this proof by combining (21) with the above two inequalities. 4.4. Proof of Theorem 3.8 Inspired by the proof of the lower bound in the non-delayed setting (Zhang et al., 2018a), we first need to establish a lower bound of static regret in the delayed setting. Although the seminal work of Weinberger & Ordentlich (2002) has already provided such a lower bound, it only holds in the special case that d divides T. To address this limitation, we establish a lower bound of static regret for any d and T, which is presented in the following lemma. Lemma 4.2. Suppose K = [ D/(2 n), D/(2 n)]n which satisfies Assumption 3.2. For any OCO algorithm and any positive integer d, there exists a sequence of functions f1(x), . . . , f T (x) satisfying Assumption 3.1 and a sequence of delays 1 d1, . . . , d T d such that Let Z = T/L . We then divide the total T rounds into Z blocks, where the length of the first Z 1 blocks is L and that of the last block is T (Z 1)L. In this way, we can define the set of rounds in the block z as Tz = {(z 1)L + 1, . . . , min{z L, T}}. Moreover, we define the feasible set of u1, . . . , u T as u1, . . . , u T K t=2 ut ut 1 2 P and construct a subset of C(P) as C (P) = {u1, . . . , u T K |ui = uj, z [Z], i, j Tz } . Notice that the connection C (P) C(P) is derived by the fact that the comparator sequence in C (P) only changes Z 1 P/D times, and thus its path-length does not exceed P. Then, because of C (P) C(P) and Lemma 4.2, there exists a sequence of functions f1(x), . . . , f T (x) satisfying Assumption 3.1 and a sequence of delays 1 d1, . . . , d T d such that t=1 ft(xt) min u1,...,u T C(P ) t=1 ft(xt) min u1,...,u T C (P ) t Tz ft(xt) min x K Finally, we can complete this proof by further noticing that 2 L/d = DGT 2 , if d > L; d D max{P, D}T 2 , otherwise; where the first inequality is due to |Tz| L for any z [Z], and the last inequality is mainly due to L/d 2L/d = 2 TD/ max{P, D} /d 4TD/(max{P, D}d) 5. Conclusion and Future Work In this paper, we study the dynamic regret of OCO with arbitrary delays. To this end, we first propose a simple algorithm called DOGD, the dynamic regret of which can be automatically bounded by O( d T(PT +1)) under mild assumptions such as the In-Order property, and O( d T(PT + 1)) in the worst case. 
Furthermore, based on DOGD, we develop an improved algorithm called Mild-OGD, which can automatically enjoy an $O(\sqrt{\bar{d}T(P_T+1)})$ dynamic regret bound under mild assumptions such as the In-Order property, and an $O(\sqrt{dT(P_T+1)})$ dynamic regret bound in the worst case. Finally, we provide a matching lower bound to show the optimality of our Mild-OGD in the worst case. It is worth noting that there still are several directions for future research, which are discussed in the appendix due to the limitation of space.

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (62306275, 62122037), the Zhejiang Province High-Level Talents Special Support Program Leading Talent of Technological Innovation of Ten Thousands Talents Program (No. 2022R52046), the Key Research and Development Program of Zhejiang Province (No. 2023C03192), and the Open Research Fund of the State Key Laboratory of Blockchain and Data Security, Zhejiang University. The authors would also like to thank the anonymous reviewers for their helpful comments.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are some potential societal consequences of our work, but we feel that none of them must be specifically highlighted here.

References

Abernethy, J. D., Bartlett, P. L., Rakhlin, A., and Tewari, A. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, pp. 415-424, 2008.

Baby, D. and Wang, Y.-X. Online forecasting of total-variation-bounded sequences. In Advances in Neural Information Processing Systems 32, pp. 11071-11081, 2019.

Baby, D. and Wang, Y.-X. Optimal dynamic regret in exp-concave online learning. In Proceedings of the 34th Annual Conference on Learning Theory, pp. 359-409, 2021.

Baby, D. and Wang, Y.-X. Optimal dynamic regret in proper online learning with strongly convex losses and beyond. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, pp. 1805-1845, 2022.

Baby, D. and Wang, Y.-X. Second order path variationals in non-stationary online learning. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, pp. 9024-9075, 2023.

Besbes, O., Gur, Y., and Zeevi, A. Non-stationary stochastic optimization. Operations Research, 63(5):1227-1244, 2015.

Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.

Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D. P., Schapire, R. E., and Warmuth, M. K. How to use expert advice. Journal of the ACM, 44(3):427-485, 1997.

Cutkosky, A. Parameter-free, dynamic, and strongly-adaptive online learning. In Proceedings of the 37th International Conference on Machine Learning, pp. 2250-2259, 2020.

Duchi, J. C., Agarwal, A., and Wainwright, M. J. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):592-606, 2011.

Flaspohler, G. E., Orabona, F., Cohen, J., Mouatadid, S., Oprescu, M., Orenstein, P., and Mackey, L. Online learning with optimism and delay. In Proceedings of the 38th International Conference on Machine Learning, pp. 3363-3373, 2021.

Freund, Y. and Schapire, R. E. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.

Hazan, E.
Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3 4):157 325, 2016. Hazan, E. and Kale, S. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, pp. 1843 1850, 2012. Hazan, E., Agarwal, A., and Kale, S. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2):169 192, 2007. He, X., Pan, J., Jin, O., Xu, T., Liu, B., Xu, T., Shi, Y., Atallah, A., Herbrich, R., Bowers, S., and Candela, J. Q. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the 8th International Workshop on Data Mining for Online Advertising, pp. 1 9, 2014. Hoeffding, W. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13 30, 1963. Jadbabaie, A., Rakhlin, A., Shahrampour, S., and Sridharan, K. Online optimization: Competing with dynamic comparators. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pp. 398 406, 2015. Joulani, P., Gy orgy, A., and Szepesv ari, C. Online learning under delayed feedback. In Proceedings of the 30th International Conference on Machine Learning, pp. 1453 1461, 2013. Joulani, P., Gy orgy, A., and Szepesv ari, C. Delaytolerant online convex optimization: Unified analysis and adaptive-gradient algorithms. Proceedings of the 30th AAAI Conference on Artificial Intelligence, pp. 1744 1750, 2016. Non-stationary Online Convex Optimization with Arbitrary Delays Joulani, P., Gy orgy, A., and Szepesv ari, C. A modular analysis of adaptive (non-)convex optimization: Optimism, composite objectives, and variational bounds. In Proceedings of the 28th International Conference on Algorithmic Learning Theory, pp. 681 720, 2017. Korotin, A., V yugin, V., and Burnaev, E. Adaptive hedging under delayed feedback. Neurocomputing, 397:356 368, 2020. Mc Mahan, H. B. and Streeter, M. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Conference on Learning Theory, pp. 244 256, 2010. Mc Mahan, H. B. and Streeter, M. Delay-tolerant algorithms for asynchronous distributed online learning. In Advances in Neural Information Processing Systems 27, pp. 2915 2923, 2014. Mc Mahan, H. B., Holt, G., Sculley, D., Young, M., Ebner, D., Grady, J., Nie, L., Phillips, T., Davydov, E., Golovin, D., Chikkerur, S., Liu, D., Wattenberg, M., Hrafnkelsson, A. M., Boulos, T., and Kubica, J. Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1222 1230, 2013. Mokhtari, A., Shahrampour, S., Jadbabaie, A., and Ribeiro, A. Online optimization in dynamic environments: Improved regret rates for strongly convex problems. In Proceedings of 55th Conference on Decision and Control, pp. 7195 7201, 2016. Orabona, F. A modern introduction to online learning. ar Xiv:1912.13213, 2019. Quanrud, K. and Khashabi, D. Online learning with adversarial delays. In Advances in Neural Information Processing Systems 28, pp. 1270 1278, 2015. Rakhlin, A. and Sridharan, K. Online learning with predictable sequences. In Proceedings of the 26th Annual Conference on Learning Theory, pp. 993 1019, 2013. Ren, Z., Zhou, Z., Qiu, L., Deshpande, A., and Kalagnanam, J. Delay-adaptive distributed stochastic optimization. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, pp. 5503 5510, 2020. Shalev-Shwartz, S. Online learning and online convex optimization. 
Foundations and Trends in Machine Learning, 4(2):107 194, 2011. Shalev-Shwartz, S. and Singer, Y. A primal-dual perspective of online learning algorithm. Machine Learning, 69(2 3): 115 142, 2007. van Erven, T. and Koolen, W. M. Meta Grad: Multiple learning rates in online learning. In Advances in Neural Information Processing Systems 29, pp. 3666 3674, 2016. van Erven, T., Koolen, W. M., and van der Hoeven, D. Metagrad: Adaptation using multiple learning rates in online learning. Journal of Machine Learning Research, 22(161):1 61, 2021. Wan, Y., Xue, B., and Zhang, L. Projection-free online learning in dynamic environments. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, pp. 10067 10075, 2021. Wan, Y., Tu, W.-W., and Zhang, L. Online strongly convex optimization with unknown delays. Machine Learning, 111(3):871 893, 2022a. Wan, Y., Tu, W.-W., and Zhang, L. Online Frank-Wolfe with arbitrary delays. In Advances in Neural Information Processing Systems 35, 2022b. Wan, Y., Wang, Y., Yao, C., Tu, W.-W., and Zhang, L. Projection-free online learning with arbitrary delays. ar Xiv:2204.04964v2, 2023a. Wan, Y., Zhang, L., and Song, M. Improved dynamic regret for online Frank-Wolfe. In Proceedings of the 36th Annual Conference on Learning Theory, 2023b. Wang, J., Liang, B., Dong, M., Boudreau, G., and Abou Zeid, H. Delay-tolerant constrained OCO with application to network resource allocation. In Proceedings of the 2021 IEEE Conference on Computer Communications, pp. 1 10, 2021. Wang, J., Dong, M., Liang, B., Boudreau, G., and Abou Zeid, H. Delay-tolerant OCO with long-term constraints: Algorithm and its application to network resource allocation. IEEE/ACM Transactions on Networking, 31(1): 147 163, 2023. Wang, Y., Yang, W., Jiang, W., Lu, S., Wang, B., Tang, H., Wan, Y., and Zhang, L. Non-stationary projectionfree online learning with dynamic and adaptive regret guarantees. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, pp. 15671 15679, 2024. Weinberger, M. J. and Ordentlich, E. On delayed prediction of individual sequences. IEEE Transactions on Information Theory, 48(7):1959 1976, 2002. Yang, T., Zhang, L., Jin, R., and Yi, J. Tracking slowly moving clairvoyant: Optimal dynamic regret of online learning with true and noisy gradient. In Proceedings of the 33rd International Conference on Machine Learning, 2016. Non-stationary Online Convex Optimization with Arbitrary Delays Zhang, L., Yang, T., Yi, J., Jin, R., and Zhou, Z.-H. Improved dynamic regret for non-degenerate functions. In Advances in Neural Information Processing Systems 30, pp. 732 741, 2017. Zhang, L., Lu, S., and Zhou, Z.-H. Adaptive online learning in dynamic environments. In Advances in Neural Information Processing Systems 31, pp. 1323 1333, 2018a. Zhang, L., Yang, T., Jin, R., and Zhou, Z.-H. Dynamic regret of strongly adaptive methods. In Proceedings of the 35th International Conference on Machine Learning, pp. 5877 5886, 2018b. Zhao, P. and Zhang, L. Improved analysis for dynamic regret of strongly convex and smooth functions. In Proceedings of the 3rd Conference on Learning for Dynamics and Control, pp. 48 59, 2021. Zhao, P., Zhang, Y.-J., Zhang, L., and Zhou, Z.-H. Dynamic regret of convex and smooth functions. In Advances in Neural Information Processing Systems 33, pp. 12510 12520, 2020. Zhao, P., Xie, Y.-F., Zhang, L., and Zhou, Z.-H. Efficient methods for non-stationary online learning. In Advances in Neural Information Processing Systems 35, pp. 11573 11585, 2022. 
Zhou, Z., Mertikopoulos, P., Bambos, N., Glynn, P., Ye, Y., Li, L.-J., and Fei-Fei, L. Distributed asynchronous optimization with unbounded delays: How slow can you go? In Proceedings of the 35th International Conference on Machine Learning, pp. 5970 5979, 2018. Zhou, Z., Mertikopoulos, P., Bambos, N., Glynn, P. W., and Ye, Y. Distributed stochastic optimization with large delays. Mathematics of Operations Research, 47(3):2082 2111, 2022. Zinkevich, M. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pp. 928 936, 2003. Non-stationary Online Convex Optimization with Arbitrary Delays A. Detailed Discussions on Future Work First, we notice that the O( d T) static regret bound can be achieved under arbitrary delays (Quanrud & Khashabi, 2015). Thus, it is natural to ask whether the O( p d T(PT + 1)) dynamic regret bound can also be achieved without additional assumptions. However, from Theorem 3.4, compared with the static regret, it is more challenging to minimize the dynamic regret in the delayed setting, because delays will further cause a comparator drift, i.e., PT t=1 ut uct 2. It seems highly non-trivial to reduce the comparator drift without additional assumptions, and we leave this question as a future work. Second, we have utilized the doubling trick to avoid tuning the learning rate with the unknown cumulative delay. One potential limitation of this technique is that it needs to repeatedly restart itself, while forgetting all the preceding information. For minimizing the static regret with arbitrary delays, Joulani et al. (2016) have addressed this limitation by continuously adjusting the learning rate according to the norm of received gradients. Thus, it is also appealing to extend this idea for minimizing the dynamic regret with arbitrary delays. Third, our proposed algorithms require the time-stamp of delayed feedback. It is interesting to investigate how to minimize the dynamic regret with anonymous and arbitrary delays. A potential useful property is that under the In-Order assumption, the arrival order of the delayed gradients already ensures the ascending order of their time-stamps. Since our DOGD in Algorithm 1 only utilizes the time-stamp to sort the elements in Ft, it actually can be implemented by simply performing the gradient descent step in (1) whenever a gradient arrives even without the time-stamp. However, in our Mild-OGD, the time-stamp is further utilized to compute the delayed surrogate loss of experts, i.e., ℓk(xη k) in (8), which cannot be discarded. B. Proof of Lemma 4.1 We first define k Fi ℓk(xη k), Lη t = i=1 ℓi(xη i ), and Wt = X η H wη 1e α Lη t . Moreover, we define ct = (Lη t )η H R|H|, ct = ( Lη t )η H R|H|, and wt = (wη t )η H R|H|. According to Algorithm 2, for any t 1, it is easy to verify that wη t+1 = wη t e α P k Ft ℓk(xη k) P µ H wµ t e α P k Ft ℓk(xµ k) = wη 1e αLη t P µ H wµ 1 e αLµ t . Combining with the above definitions, we have wt+1 = argmin w α ln(w1) + ct, w + 1 where = {w 0| w, 1 = 1} and R(w) = P i wi ln wi. Similarly, for any t 1, we define wt+1 = argmin w α ln(w1) + ct, w + 1 In this way, for any t 1, we also have wt+1 = ( wη t+1)η H, where wη t+1 = wη 1e α Lη t P µ H wµ 1 e α Lµ t . Moreover, we define w1 = w1 and xt = X η H wη t xη t . (22) Then, we will bound the distance between xt and xt based on the following lemma. Non-stationary Online Convex Optimization with Arbitrary Delays Lemma B.1. (Lemma 5 in Duchi et al. 
(2011)) Let ΠK(u, α) = argminx K u, x + 1 αR(x). If R(x) is 1-strongly convex with respect to a norm , it holds that ΠK(u, α) ΠK(v, α) α u v for any u and v, where is the dual norm of . Since R(w) = P i wi ln wi is 1-strongly convex with respect to 1, by applying Lemma B.1, for any t > 1, we have η H ( wη t wη t )xη t η H | wη t wη t | xη t 2 D wt wt 1 αD ct 1 ct 1 . Let Ut = [t] \ i [t]Fi. Note that Ut actually records the time-stamp of gradients that are queried, but still not arrive at the end of round t. Then, for t > 1, it is not hard to verify that xt xt 2 αD ct 1 ct 1 αD max η H k Ut 1 ℓk(xη k) GD2 = α (mt 1) GD2 (23) where the last inequality is due to the definition of Ut and the fact that Assumptions 3.1 and 3.2 ensure |ℓk(xη k)| = | fk(xk), xη k xk | fk(xk) 2 xη k xk 2 GD (24) for any k [T] and η H. The above inequality shows that xt is close to xt. In the following, we first focus on the analysis of xt, and then combine with the distance between xt and xt. To this end, we notice that η H wη 1e α Lη T ln max η H wη 1e α Lη T = α min η H Next, for any t 2, we have η H wη 1e α Lη t P η H wη 1e α Lη t 1 η H wη 1e α Lη t 1e αℓt(xη t ) P η H wη 1e α Lη t 1 η H wη t e αℓt(xη t ) Combining (26) and wη 1 = wη 1, we have ln WT = ln W1 + η H wη t e αℓt(xη t ) To proceed, we introduce Hoeffding s inequality (Hoeffding, 1963). Lemma B.2. Let X be a random variable with a X b. Then, for any s R, it holds that ln E[es X] s E[X] + s2(b a)2 From (24) and Lemma B.2, we have η H wη t e αℓt(xη t ) η H wη t ℓt(xη t ) + α2G2D2 2 αℓt ( xt) + α2G2D2 where the second inequality is due to Jensen s inequality and (22). Non-stationary Online Convex Optimization with Arbitrary Delays Combining (27) with (28), we have t=1 ℓt ( xt) + α2G2D2T Then, by further combining with (25), we have t=1 ℓt ( xt) min η H t=1 ℓt(xη t ) + 1 Finally, combining with (23), for any η H, we have t=1 ℓt (xt) t=1 ℓt(xη t ) + 1 t=1 ℓt (xt) t=1 ℓt ( xt) + t=1 ℓt ( xt) t=1 ℓt(xη t ) + 1 t=1 ft(xt), xt xt + αG2D2T t=1 ft(xt) 2 xt xt 2 + αG2D2T t=1 (mt 1) + αG2D2T 2 αG2D2 T X which completes this proof. C. Proof of Lemma 4.2 Let Z = T/d . We first divide the total T rounds into Z blocks, where the length of the first Z 1 blocks is d and that of the last block is T (Z 1)d. In this way, we can define the set of rounds in the block z as Tz = {(z 1)d + 1, . . . , min{zd, T}}. For any z [Z] and t Tz, we construct the delay as dt = min{zd, T} t + 1 which satisfies 1 dt d. These delays ensure that the information of any function in each block z is delayed to the end of the block, which is critical for us to construct loss functions that maximize the impact of delays on the static regret. Note that to establish the lower bound of the static regret in the non-delayed setting, one can utilize a randomized strategy to select loss functions for each round (Abernethy et al., 2008). Here, to maximize the impact of delays, we only select one loss function hz(x) for all rounds in the same block z, i.e., ft(x) = hz(x) for any t Tz. Specifically, we set hz(x) = G n wz, x where the i-th coordinate of wz is 1 with probability 1/2 for any i [n] and will be denoted as wz,i. It is not hard to verify that hz(x) satisfies Assumption 3.1. 
From the above definitions, we have Ew1,...,w Z[R(T)] =Ew1,...,w Z t=1 ft(xt) min x K =Ew1,...,w Z G n wz, xt min x K =Ew1,...,w Z G|Tz| n wz, x Non-stationary Online Convex Optimization with Arbitrary Delays where the third equality is due to Ew1,...,w Z[ wz, xt ] = 0 for any t Tz, which can be derived by the fact that any decision xt in the block z is made before receiving the information of wz, and thus is independent with wz. Since a linear function is minimized at the vertices of the cube, we further have Ew1,...,w Z[R(T)] = Ew1,...,w Z min x { D/(2 n),D/(2 n)}n G|Tz| n wz, x =Ew1,...,w Z wz,i G|Tz| n 2 Ew1,...,w Z z=1 wz,1|Tz| (PZ z=1 |Tz|)2 where the first inequality is due to the Khintchine inequality and the second inequality is due to the Cauchy-Schwarz inequality. The expected lower bound in (30) implies that for any OCO algorithm and any positive integer d, there exists a particular choice of w1, . . . , w Z such that D. DOGD with the Doubling Trick As discussed after Corollary 3.6, our DOGD needs a learning rate depending on the following value However, it may be not available beforehand. Fortunately, the doubling trick (Cesa-Bianchi & Lugosi, 2006) provides a way to adaptively estimate this value. Specifically, it will divide the total T rounds into several epochs, and run a new instance of DOGD per epoch. Let sv and sv+1 1 respectively denote the start round and the end round of the v-th epoch. In this way, to tune the learning rate for the v-th epoch, we only need to know the following value i=sv |Fsv i | where Fsv i = {k [sv, i]|k + dk 1 = i}. According to the doubling trick, we can estimate this value to be 2v at the start round sv of the v-th epoch. Then, for any t > sv, we first judge whether the estimate is still valid, i.e., i=sv |Fsv i | where the left side can be calculated at the beginning of round t. If the answer is positive, the round t is still assigned to the v-th epoch, and the instance of DOGD keeps running. Otherwise, the round t is set as the start round of the (v + 1)-th epoch, and a new instance of DOGD is activated. Notice that in the start round of the (v + 1)-th epoch, the new estimate must be valid, since t = sv+1 and i=sv+1 |Fsv+1 i | Non-stationary Online Convex Optimization with Arbitrary Delays Algorithm 4 DOGD with the Doubling Trick 1: Initialization: set y1 = 0, τ = 1, v = 1, and sv = 1 2: for t = 1, . . . , T do 3: if Pt j=sv j + 1 sv Pj 1 i=sv |Fsv i | > 2v then 4: Set y1 = 0, τ = 1, v = v + 1, and sv = t 5: end if 6: Play xt = yτ and query ft(xt) 7: Receive { fk(xk)|k Fsv t }, where Fsv t = {k [sv, t]|k + dk 1 = t} 8: for k Fsv t (in the ascending order) do 9: Compute yτ+1 = argminx K x (yτ ηv fk(xk)) 2 2, where ηv = D G2v/2 10: Set τ = τ + 1 11: end for 12: end for Moreover, it is natural to set s1 = 1. Then, the detailed procedures of DOGD with the doubling trick are summarized in Algorithm 4. Remark: First, in Algorithm 4, the learning rate ηv is set by replacing PT t=1 mt in the learning rate required by Corollary 3.6 with 2v. Second, in each epoch v, we do not need to utilize gradients queried before this epoch. For this reason, in Algorithm 4, we only receive { fk(xk)|k Fsv t }, instead of { fk(xk)|k Ft}. We have the following theorem, which can recover the dynamic regret bound in Corollary 3.6 up to a constant factor. Theorem D.1. Under Assumptions 3.1 and 3.2, for any comparator sequence u1, . . . , u T K, Algorithm 2 ensures R(u1, . . . , u T ) 2G (2D + PT ) d T where C is defined in (6). Proof. 
For any sv and j sv, we first notice that the value of j sv Pj 1 i=sv |Fsv i | counts the number of gradients that have been queried over interval [sv, j 1], but still not arrive at the end of round j 1. Moreover, the gradient fj(xj) will only be counted as an unreceived gradient in dj 1 rounds. Therefore, for any sv t T, it is easy to verify that i=sv |Fsv i | j=1 dj = d T. For brevity, let V denote the final v of Algorithm 4, and let S = d T. It is easy to verify that V 1 + log2 S. (31) Then, let s V +1 = T +1. We notice that for v [V ], Algorithm 4 actually starts or restarts Algorithm 1 with the learning rate of ηv at round sv, which ends at round sv+1 1. Therefore, combining Theorem 3.4 with Lemma 3.5, under Assumptions 3.1 and 3.2, we have t=sv ft(xt) t=sv ft(ut) D2 + D Psv+1 1 t=sv+1 ut ut 1 2 ηv + ηv G2 sv+1 1 X i=sv |Fsv i | 0, if Assumption 3.3 also holds; (sv+1 sv)GD, 2d G t=sv+1 ut ut 1 2 , otherwise. (33) Non-stationary Online Convex Optimization with Arbitrary Delays Moreover, we notice that Algorithm 4 also ensures that i=sv |Fsv i | By substituting the above inequality into (32), we have t=sv ft(xt) t=sv ft(ut) D2 + D Psv+1 1 t=sv+1 ut ut 1 2 ηv + ηv G22v + Cv t=sv+1 ut ut 1 2 G2v/2 (2D + PT ) + Cv. Then, because of (31), we have R(u1, . . . , u T ) = t=sv ft(xt) t=sv ft(ut) v=1 G2v/2 (2D + PT ) + =G (2D + PT ) v=1 Cv 2G (2D + PT ) Moreover, it is not hard to verify that (sv+1 sv)GD, 2d G t=sv+1 ut ut 1 2 v=1 (sv+1 sv)GD, t=sv+1 ut ut 1 2 min {TGD, 2d GPT } which implies that V X v=1 Cv C. (37) Finally, we complete this proof by substituting (37) and S = d T into (36). E. Mild-OGD with the Doubling Trick Similar to DOGD, Mild-OGD requires the value of PT t=1 mt for setting GD q PT t=1 mt and ηi = 2i 1D G q PT t=1 mt (38) where α is the learning rate for updating the weight, and ηi is the learning rate for the i-th expert. To address this limitation, we can utilize the doubling trick as described in the previous section. The only change is to replace DOGD with Mild-OGD. The detailed procedures of Mild-OGD with the doubling trick are outlined in Algorithms 5 and 6. Remark: We would like to emphasize that since multiple instances of the expert-algorithm run over the surrogate losses defined by the meta-algorithm, these instances and the meta-algorithm will start a new epoch synchronously. Moreover, as shown in step 6 of Algorithm 5, in the start of each epoch, we need to reinitialize the weight of each expert Eη. As shown in step 11, in each epoch v, we update the weight by using the learning rate αv, which replaces PT t=1 mt in (38) with 2v. Non-stationary Online Convex Optimization with Arbitrary Delays Algorithm 5 Mild-OGD with the Doubling Trick: Meta-algorithm 1: Initialization: set v = 1 and sv = 1 2: Activate a set of experts {Eη|η H} by invoking the expert-algorithm for each constant η X, where H = ηi = D2i 1 i = 1, . . . , l log2 T + 1 m + 1 3: Set wηi t = |H|+1 i(i+1)|H| 4: for t = 1, . . . 
, T do 5: if Pt j=sv j + 1 sv Pj 1 i=sv |Fsv i | > 2v then 6: Set v = v + 1, sv = t, and wηi t = |H|+1 i(i+1)|H| 7: end if 8: Receive xη t from each expert Eη 9: Play the decision xt = P η H wη t xη t 10: Query ft(xt) and receive { fk(xk)|k Fsv t }, where Fsv t = {k [sv, t]|k + dk 1 = t} 11: Update the weight of each expert by wη t+1 = wη t e αv P k Fsv t ℓk(xη k) µ H wµ t e αv P k Fsv t ℓk(xµ k) where ℓk(x) = fk(xk), x xk and αv = 1 GD2v/2 12: Send { fk(xk)|k Fsv t } to each expert Eη 13: end for Algorithm 6 Mild-OGD with the Doubling Trick: Expert-algorithm 1: Input: a constant η 2: Initialization: set yη 1 = 0, τ = 1, v = 1, and sv = 1 3: for t = 1, . . . , T do 4: if Pt j=sv j + 1 sv Pj 1 i=sv |Fsv i | > 2v then 5: Set y1 = 0, τ = 1, v = v + 1, and sv = t 6: end if 7: Submit xη t = yη τ to the meta-algorithm 8: Receive gradients { fk(xk)|k Fsv t } from the meta-algorithm 9: for k Ft (in the ascending order) do 10: Compute yη τ+1 = argminx K x yη τ η 2v/2 fk(xk) 2 2 11: Set τ = τ + 1 12: end for 13: end for Additionally, to facilitate presentation, in step 2 of Algorithm 5, each ηi in H only contains the constant part that does not depend on the value of PT t=1 mt. Meanwhile, according to steps 1 and 10 of Algorithm 6, the i-th expert will receive ηi from the meta-algorithm, and combine it with the estimation of PT t=1 mt to compute the learning rate. Furthermore, we have the following theorem, which can recover the dynamic regret bound in Theorem 3.7 up to a constant factor. Theorem E.1. Under Assumptions 3.1 and 3.2, for any comparator sequence u1, . . . , u T K, Algorithm 2 ensures R(u1, . . . , u T ) 2 2 ln j log2 p (D + PT ) /D k + 2 + 1 GD + 3G D2 + DPT d T where C is defined in (6). Non-stationary Online Convex Optimization with Arbitrary Delays Proof. Following the proof of Theorem D.1, we use V to denote the final v of Algorithms 5 and 6 and define s V +1 = T + 1. Moreover, let S = d T. It is easy to verify that (31) also holds. Then, we consider the dynamic regret of Algorithm 5 over the interval [sv, sv+1 1] for each v [V ]. Let t=sv+1 ut ut 1 2 From Assumption 3.2, we have t=sv+1 ut ut 1 2 (sv+1 sv 1)D TD which implies that T + 1 G η|H|. Therefore, for any possible value of Psv+1 1 t=sv+1 ut ut 1 2, there must exist a constant ηkv H such that ηkv ηv 2ηkv (39) t=sv+1 ut ut 1 2 + 1 j log2 p (D + PT ) /D k + 1. Moreover, we notice that each expert Eη over the interval [sv, sv+1 1] actually runs Algorithm 1 with the learning rate η 2v/2 to handle the surrogate losses ℓsv(x), . . . , ℓsv+1 1(x), where each gradient ℓt(xη t ) = ft(xt) is delayed to the end of round t + dt 1 for t [sv, sv+1 1]. Therefore, by combining Theorem 3.4 with Lemma 3.5, under Assumptions 3.1 and 3.2, we have ℓt(xηkv t ) ℓt(ut) 2v/2 D2 + D Psv+1 1 t=sv+1 ut ut 1 2 ηkv + ηkv G2 i=sv |Fsv i | 2v/2 D2 + D Psv+1 1 t=sv+1 ut ut 1 2 ηkv + ηkv G22v/2 + Cv t=sv+1 ut ut 1 2 2v (D2 + DPT ) + Cv where Cv is defined in (33), the second inequality is due to the fact that Algorithm 6 also ensures (34), and the third inequality is due to (39) and the definition of ηv . Moreover, it is also easy to verify that Algorithm 5 actually starts or restarts Algorithm 2 with the learning rate of αv at round sv, which ends at round sv+1 1. 
Then, by using Lemma 4.1 with (1/wηkv sv ) (kv + 1)2, under Assumptions 3.1 and 3.2, we have t=sv ℓt (xt) t=sv ℓt(xηkv t ) 2 αv ln(kv + 1) + αv G2D2 sv+1 1 X i=sv |Fsv i | 2 ln j log2 p (D + PT ) /D k + 2 + 1 2v/2GD Non-stationary Online Convex Optimization with Arbitrary Delays where the second inequality is due to αv = 1 GD2v/2 , the definition of kv, and the fact that Algorithm 5 also ensures (34). By combining (40) and (41), it is not hard to verify that t=sv ft(xt) t=sv ft(ut) t=sv ℓt(xt) t=sv ℓt(ut) t=sv ℓt(xt) t=sv ℓt(xηkv t ) + t=sv ℓt(xηkv t ) t=sv ℓt(ut) 2 ln j log2 p (D + PT ) /D k + 2 + 1 2v/2GD + 3G p 2v (D2 + DPT ) + Cv Then, we have R(u1, . . . , u T ) = t=sv ft(xt) t=sv ft(ut) 2 ln j log2 p (D + PT ) /D k + 2 + 1 GD + 3G p D2 + DPT V X = 2 ln j log2 p (D + PT ) /D k + 2 + 1 GD + 3G p 2 2 ln j log2 p (D + PT ) /D k + 2 + 1 GD + 3G D2 + DPT where the first inequality is due to (42), and the last inequality is due to (31). Finally, by substituting (37) and S = d T into the above inequality, we complete this proof.