# Adversarial Combinatorial Semi-bandits with Graph Feedback

Yuxiao Wen 1

1 Courant Institute of Mathematical Sciences, New York University, New York, USA. Correspondence to: Yuxiao Wen.

Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

Abstract

In combinatorial semi-bandits, a learner repeatedly selects from a combinatorial decision set of arms, receives the realized sum of rewards, and observes the rewards of the individual selected arms as feedback. In this paper, we extend this framework to include graph feedback, where the learner observes the rewards of all neighboring arms of the selected arms in a feedback graph G. We establish that the optimal regret over a time horizon T scales as $\widetilde{\Theta}(S\sqrt{T} + \sqrt{\alpha S T})$, where S is the size of the combinatorial decisions and $\alpha$ is the independence number of G. This result interpolates between the known regrets $\widetilde{\Theta}(S\sqrt{T})$ under full information (i.e., G is complete) and $\widetilde{\Theta}(\sqrt{KST})$ under the semi-bandit feedback (i.e., G has only self-loops), where K is the total number of arms. A key technical ingredient is to realize a convexified action using a random decision vector with negative correlations. We also show that online stochastic mirror descent (OSMD), which only realizes convexified actions in expectation, is suboptimal. In addition, we describe the problem of combinatorial semi-bandits with general capacity and apply our results to derive an improved regret upper bound, which may be of independent interest.

1. Introduction

Combinatorial semi-bandits are a class of online learning problems that generalize the classical multi-armed bandits (Robbins, 1952) and have a wide range of applications, including multi-platform online advertising (Avadhanula et al., 2021), online recommendations (Wang et al., 2017), webpage optimization (Liu & Li, 2021), and the online shortest path problem (György et al., 2007). In these applications, instead of taking an individual action, a set of actions is chosen at each time (Cesa-Bianchi & Lugosi, 2012; Audibert et al., 2014; Chen et al., 2013). Mathematically, over a time horizon of length T and for a fixed combinatorial budget S, a learner repeatedly chooses a (potentially constrained) combination of the K individual arms within the budget, i.e. from a decision set
$$\mathcal{A}_0 \subseteq \mathcal{A} \triangleq \{v \in \{0,1\}^K : \|v\|_1 = S\},$$
and receives a linear payoff $\langle v, r^t \rangle$, where $r^t \in [0,1]^K$ denotes the rewards associated to the arms at time t. After making the decision at time t, the learner observes $\{v_a r^t_a : a \in [K]\}$ as the semi-bandit feedback, or the entire reward vector $r^t$ under full information. When S = 1, this reduces to the multi-armed bandit with either bandit feedback or full information. For S > 1, the learner is allowed to select S arms at each time and collects the cumulative reward.

Under the adversarial setting for bandits (Auer et al., 1995), no statistical assumption is made about the reward vectors $\{r^t\}_{t \in [T]}$. Instead, they are (potentially) generated by an adaptive adversary. The objective is to minimize the expected regret of the learner's algorithm $\pi$ compared to the best fixed decision in hindsight, defined as
$$\mathbb{E}[R(\pi)] = \mathbb{E}\left[\max_{v \in \mathcal{A}_0} \sum_{t=1}^{T} \langle v - v^t, r^t \rangle\right], \qquad (1)$$
where $v^t \in \mathcal{A}_0$ is the decision chosen by $\pi$ at time t. The expectation is taken over any randomness in the learner's algorithm and over the rewards, since the reward $r^t$ is allowed to be generated adaptively and hence can be random.
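To make definition (1) concrete, the following sketch (not from the paper; the array names and the restriction to the full decision set $\mathcal{A}_0 = \mathcal{A}$ are illustrative assumptions) computes the realized regret of a decision sequence against the best fixed size-S subset in hindsight. On the full decision set, the best fixed decision is simply the S arms with the largest cumulative reward.

```python
import numpy as np

def realized_regret(decisions, rewards, S):
    """Regret of a decision sequence vs. the best fixed size-S subset in hindsight.

    decisions: (T, K) binary array, each row has exactly S ones (the chosen v^t).
    rewards:   (T, K) array with entries in [0, 1] (the adversarial r^t).
    """
    T, K = rewards.shape
    assert decisions.shape == (T, K)
    # Payoff actually collected by the learner: sum_t <v^t, r^t>.
    collected = np.sum(decisions * rewards)
    # Best fixed decision on A_0 = A: the S arms with the largest cumulative reward.
    cumulative = rewards.sum(axis=0)
    best_fixed = np.sort(cumulative)[-S:].sum()
    return best_fixed - collected

# Toy usage: K = 3 arms, budget S = 2, a learner that always plays arms {0, 1}.
rng = np.random.default_rng(0)
T, K, S = 1000, 3, 2
rewards = rng.uniform(size=(T, K))
decisions = np.tile(np.array([1, 1, 0]), (T, 1))
print(realized_regret(decisions, rewards, S))
```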
Note that while the adversary can generate the rewards rt adaptively, i.e. based on the learner s past decisions, the regret in (1) is measured against a fixed decision v assuming the adversary would generate the same rewards. While the semi-bandit feedback has been extensively studied, the current literature falls short of capturing additional information structures on the rewards of the individual arms, except for the full information case. As a motivating example, consider the multi-platform online advertising problem, where the arms represent the (discretized) bids. At each round and on each platform, the learner makes a bid and receives zero reward on losing the auction and her surplus on winning the auction. In many ads exchange platforms, Adversarial Combinatorial Semi-bandits with Graph Feedback the winning bid is always announced, and hence the learner can compute the counterfactual reward for any bids higher than her chosen bid (Han et al., 2024). This additional information is not taken into account in the semi-bandit feedback. Another example is the online recommendation problem, where the website plans to present a combination of recommended items to the user. The semi-bandit feedback assumes that the user s behavior on the displayed items will reveal no information about the undisplayed items. However, this assumption often ignores the semantic relationship between the items. For instance, suppose two items i and j are both tissue packs with similar prices. If item i is displayed and the user clicks on it, a click is likely to happen if item j were to be displayed. On the other hand, if item i is a football and item j is a wheelchair, then a click on one probably means a no-click on the other. Information of this kind is beneficial for the website planner and yet overlooked in the semi-bandit feedback. To capture this rich class of combinatorial semi-bandits with additional information, we consider a more general feedback structure described by a directed graph G = ([K], E) among the K arms. We assume G is strongly observable, i.e. for every a [K], either (a, a) E or (b, a) E for all b = a. After making the decision v A0 at each time, the learner now observes the rewards associated to all neighboring arms of the selected arms in v: virt i : a [K] such that va = 1 and (a, i) E . This graph formulation allows us to leverage information that is unexploited in the semi-bandit feedback. Note that when G is complete, the feedback structure corresponds to having full information; when G contains only the self-loops, it becomes the semi-bandit feedback. In the presence of a general G, the exploration-exploitation trade-off becomes more complicated, and the goals of this paper are (1) to fully exploit this additional structure in the regret minimization and (2) to understand the fundamental learning limit in this class of problems. 1.1. Related work The optimal regret of the combinatorial semi-bandits has drawn a lot of attention and has been extensively studied in the bandit literature. With linear payoff, Koolen et al. (2010) shows that the Online Stochastic Mirror Descent (OSMD) algorithm achieves near-optimal regret eΘ(S T) under full information. In the case of the semi-bandit feedback, Audibert et al. 
(2014) shows that OSMD achieves near-optimal regret eΘ( KST) using an unbiased estimator rt a = vt art a/Evt[vt a], where vt is the random decision selected at time t and the expectation denotes the probability of choosing arm a.1 The transition of the optimal regret s dependence from KS to S, as the feedback becomes richer, remains a curious and important open problem. Another type of feedback is the bandit or full-bandit feedback, which assumes only the realized payoff v, rt is revealed (rather than the rewards for individual arms). In this case, the minimax optimal regret is eΘ( KS3T) (Audibert et al., 2014; Cohen et al., 2017; Ito et al., 2019). This additional S factor, compared to the semi-bandit feedback, matches the difference in the observations: in this bandit feedback, the learner obtains a single observation at each time, while in the semi-bandit the learner gains S observations. When the payoff function is nonlinear in v, Han et al. (2021) shows that the optimal regret scales with Kd where d roughly stands for the complexity of the payoff function. More variants of combinatorial semi-bandits include the knapsack constraint (Sankararaman & Slivkins, 2018), the fractional decisions (Wen et al., 2015), and the contextual counterpart (Zierahn et al., 2023). In the multi-armed bandits, multiple attempts have been made to formulate and exploit the feedback structure as feedback graphs since Mannor & Shamir (2011). In particular, the optimal regret is shown to be eΘ( αT) when T α3 (Alon et al., 2015; Eldowa et al., 2024) and is a mixture of T 1/2 and T 2/3 terms when T is small due to the exploration-exploitation trade-off (Koc ak & Carpentier, 2023). When the graph is only weakly observable, i.e. every node a [K] has nonzero in-degree, the optimal regret is eΘ δ1/3T 2/3 (Alon et al., 2015). Here α and δ are the independence and the domination number of the graph G respectively, defined in Section 1.3. Instead of a fixed graph G, Cohen et al. (2016) and Alon et al. (2017) study time-varying graphs {Gt} and show that an upper bound e O q PT t=1 αt can be achieved. Addi- tionally, a recent line of research (Balseiro et al., 2023; Han et al., 2024; Wen et al., 2024) introduces graph feedback to the tabular contextual bandits, in which case the optimal regret depends on a complicated graph quantity that interpolates between α and K as the number of contexts changes. 1.2. Our results In this paper, we present results on combinatorial semibandits with a strongly observable feedback graph G and the full decision set A0 = A, while results on general A0 are discussed in Section 5.1 and 5.2. Our results are summarized in Table 1, and the main contribution of this 1Audibert et al. (2014) only argues there exists a particular decision subset A0 under which the regret is Ω( KST). The lower bound for A is given by Lattimore et al. (2018). Adversarial Combinatorial Semi-bandits with Graph Feedback Table 1. Minimax regret bounds up to polylogarithmic factors. Our results are in bold. Semi-bandit (α = K) General feedback graph G Full information (α = 1) paper is four-fold: 1. We introduce the formulation of a general feedback structure using feedback graphs in combinatorial semibandits. 2. On the full decision set A, we establish a minimax regret lower bound Ω(S αST) that correctly captures the regret dependence on the feedback structure and outlines the transition from eΘ(S KST) as the feedback gets richer. 3. 
We propose a policy OSMD-G (OSMD under graph feedback) that achieves near-optimal regret under general directed feedback graphs and adversarial rewards. Importantly, we identify that sampling with negative correlations is crucial in achieving the near-optimal regret, and that the original OSMD is provably suboptimal. 4. We formulate the problem of combinatorial semibandits with general capacity in Section 4 and provide an improved regret by applying our results under graph feedback. This formulation may be of independent interest. When the feedback graphs {Gt}t [T ] are allowed to be timevarying, we can also obtain a corresponding upper bound. The upper bounds are summarized in the following theorem. Theorem 1.1. Consider the full decision set A. For 1 S K and any strongly observable directed graph G = ([K], E), there exists an algorithm π that achieves regret E[R(π)] = e O S When the feedback graphs {Gt}t [T ] are time-varying, the same algorithm π achieves E[R(π)] = e O where αt = α(Gt) is the independence number of Gt. This algorithm π is OSMD-G proposed in Section 3.1. In OSMD-G, the learner solves for an optimal convexified action x Conv(A) via mirror descent at each time t, using the past observations, and then realizes it (in expectation) via selecting a random decision vector vt. In the extreme cases of full information and semi-bandit feedback, the optimal regret is achieved as long as vt realizes the convexified action x in expectation (Audibert et al., 2014). However, this realization in expectation alone is provably suboptimal under graph feedback, as shown later in Theorem 3.4. Under a general graph G, the regret analysis for a tight bound crucially requires this random decision vector to have negative correlations among the arms, i.e. Cov(vt i, vt j) 0 for i = j, in addition to the realization of x in expectation. Consequently, the following technical lemma is helpful in our upper bound analysis: Lemma 1.2. Fix any 1 S K and x Conv(A). There exists a probability distribution p over A that satisfies: 1. (Mean) i [K], Ev p[vi] = xi. 2. (Negative correlations) i = j, Ev p[vivj] xixj, i.e. any pair of arms (i, j) is negatively correlated. In particular, there is an efficient scheme to sample from p. This lemma is a corollary of Theorem 1.1 in Chekuri et al. (2009), and the sampling scheme is the randomized swap rounding (Algorithm 2). The mean condition guarantees that the convexified action is realized in expectation. The negative correlations essentially allow us to control the variance of the observed rewards in OSMD-G, thereby decoupling the final regret into two terms. Intuitively, the negative correlations imply a more exploratory sampling scheme; a more detailed discussion is in Section 3.1. To show that OSMD-G achieves near-optimal performance, we consider the following minimax regret: R = inf π sup {rt} E[R(π)] (2) where the inf is taken over all possible algorithms and the sup is taken over all potentially adversarial reward sequences. The following lower bound holds: Theorem 1.3. Consider any decision subset A0 A and strongly observable graph G. When T max{S, α3/S} and S K/2, it holds that T log(K/S) + Adversarial Combinatorial Semi-bandits with Graph Feedback Our lower bound construction in the proof is stochastic, as is standard in the literature, and thus stochastic combinatorial semi-bandits will not be easier. 1.3. Notations For n N, denote [n] = {1, 2, . . . , n}. 
The convex hull of A is denoted by Conv(A), and the truncated convex hull is defined by Convϵ(A) = {x Conv(A) : xi ϵ for all i [K]}. We use the standard asymptotic notations Ω, O, Θ to denote the asymptotic behaviors up to constants, and eΩ, e O, eΘ up to polylogarithmic factors respectively. Our results will concern the following graph quantities: α = max{|I| : I [K] is an independent subset in G}, δ = min{|D| : D [K] is a dominating subset in G}. In a graph G, I [K] is an independent subset if for any i, j I, (i, j) / E; and D [K] is a dominating subset if for any u [K], there exists i D such that (i, u) E. For each node a [K], denote its out-neighbors in G by Nout(a) = {i [K] : (a, i) E} and its in-neighbors by Nin(a) = {i [K] : (i, a) E}. Then for a binary vector v A that represents an S-arm subset of [K], we denote its out-neighbors in G by the union Nout(v) = S va=1 Nout(a). Let D Rd be an open convex set, D be its closure, and F : D R be a differentiable, strictly convex function. We denote the Bregman divergence defined by F as DF (x, y) = F(x) F(y) F(y), x y . 2. Regret lower bound In this section, we sketch the proof of the lower bound in Theorem 1.3 and defer the complete proof to Appendix A. The idea is to divide this learning problem into S independent sub-problems and present the exploration-exploitation trade-off under a set of hard instances to arrive at the final minimax lower bound. Under the complete graph G, Koolen et al. (2010) already gives a lower bound Ω(S T log(K/S)) by reducing the full information combinatorial semi-bandits to the full information multi-armed bandits with rewards ranging in [0, S]. This reduction argument, however, does not lead to the other Ω( αST) part of the lower bound. It constructs a multi-armed bandit policy from any given combinatorial semi-bandit policy and shows they share the same expected regret. Thus the lower bound of one translates to that of the other. As soon as the feedback structure is not full information, the observations and thus the behaviors of the two policies no longer align. To prove the second part, note that Ω( αST) only manifests in the lower bound when S < α. In this case, we partition an independent subset I [K] of size α into S subsets I1, . . . , IS of equal size α S and embeds an independent multi-armed bandit hard instance in each Im for m [S]. The other arms J = [K]\I may be more informative but will incur large regret. Thus a good learner cannot leverage arms in J due to the exploration-exploitation trade-off. The learner then needs to learn S independent sub-problems with ST total number of arm pulls. If the learner is balanced in the sense that for each sub-problem m [S], a Im 1[a is pulled] T, then the existing multi-armed bandit lower bound implies that the regret incurred in each sub-problem is Ω( p αT/S), thereby a total regret Ω( αST). While in our case the learner may arbitrarily allocate the arm pulls over the S sub-problems, it turns out to be sufficient to focus on the balanced learners via a stopping time argument proposed in Lattimore et al. (2018). Intuitively, if a learner devotes pulls Tm(T) T for some m, then he/she must suffers regret (Tm(T) T) where is the reward gap in the hard instance, which leads to suboptimal performance. 3. A near-optimal algorithm This section is structured as follows: In Section 3.1, we present our OSMD-G algorithm and highlight the choice of reward estimators and the sampling scheme that allow us to deal with general feedback graphs. 
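As a concrete illustration of Definition 3.1 with the negative-entropy mirror mapping $F(x) = \sum_i (x_i \log x_i - x_i)$ used later, the sketch below (an illustrative sketch under stated assumptions, not the paper's code) performs one mirror-descent step on the full decision set $\mathcal{A}$: a gradient step in the dual space, $w^{t+1} = x^t \exp(\eta\, \tilde{r}^t)$, followed by the Bregman projection onto the truncated convex hull, here assumed to be $\mathrm{Conv}_\epsilon(\mathcal{A}) = \{x : \epsilon \le x_i \le 1, \sum_i x_i = S\}$. For this choice of $F$ the projection reduces to a one-dimensional search over a scaling factor, found by bisection.

```python
import numpy as np

def osmd_step(x, r_tilde, eta, S, eps, tol=1e-12):
    """One OSMD step with the negative-entropy mirror map F(x) = sum_i (x_i log x_i - x_i).

    x:       current point in Conv_eps(A) (full decision set), shape (K,).
    r_tilde: per-arm reward estimates for this round, shape (K,).
    Returns (approximately, up to tol) the Bregman projection of
    w = x * exp(eta * r_tilde) onto {x in [eps, 1]^K : sum_i x_i = S}.
    """
    assert eps * len(x) <= S
    w = x * np.exp(eta * r_tilde)          # dual-space gradient step
    # KL projection: x_i = clip(c * w_i, eps, 1) for a scalar c > 0 chosen so that
    # the coordinates sum to S; the sum is nondecreasing in c, so bisection works.
    def total(c):
        return np.clip(c * w, eps, 1.0).sum()
    lo, hi = 1e-12, 1.0
    while total(hi) < S:
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if total(mid) < S:
            lo = mid
        else:
            hi = mid
    return np.clip(0.5 * (lo + hi) * w, eps, 1.0)

# Toy usage: K = 5 arms, budget S = 2.
K, S, eps, eta = 5, 2, 1e-3, 0.1
x = np.full(K, S / K)
r_tilde = np.array([0.9, 0.1, 0.5, 0.2, 0.7])
x_next = osmd_step(x, r_tilde, eta, S, eps)
print(x_next, x_next.sum())
```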
Then we show that OSMD-G indeed achieves near-optimal regret e O(S αST) in Section 3.2. Finally, we argue in Section 3.3 that if the requirement of negative correlations is removed, OSMD-G would be suboptimal. 3.1. Online stochastic mirror descent with graphs The overall idea of OSMD-G (Algorithm 1) is to perform a gradient descent step at each time t, based on unbiased reward estimators, in a dual space defined by a mirror mapping F that satisfies the following: Definition 3.1. Given an open convex set D Rd, a mirror mapping F : D R satisfies F is strictly convex and differentiable on D; limx D F(x) = + . Adversarial Combinatorial Semi-bandits with Graph Feedback While OSMD-G works with any well-defined mirror mapping, we will prove the desired upper bound in Section 3.2 for OSMD-G with the negative entropy F(x) = PK i=1(xi log(xi) xi) defined on D = RK + . For this choice of F, the dual space D = RK and hence (5) is always valid. In fact, (5) admits the explicit form wt+1 = xt exp(η rt). Recall at each time t, for a selected decision vt A, the learner observes graph feedback {vt irt i : i Nout(vt)}. Based on this, we define the reward estimator for each arm a [K] at time t in (4). As we invoke a sampling scheme to realize xt in expectation, i.e. Evt pt[vt] = xt, our estimator in (4) is unbiased. A crucial step in OSMD-G is to sample a decision vt at each time t that satisfies both the mean condition Evt pt[vt] = xt and the negative correlation Evt pt[vt ivt j] xt ixt j. Thanks to Lemma 1.2, both conditions are guaranteed for all possible target xt Conv(A) when we invoke Algorithm 2 as our sampling subroutine.2 The description and details of Algorithm 2 are deferred to Appendix B. While seemingly intuitive given that vt 1 = S, we emphasize that the negative correlations Evt pt[vt ivt j] xt ixt j do not necessarily hold and can be non-trivial to achieve. Consider the case S = 2. When xt = 2 K 1 is the uniform vector, a uniform distribution over all pairs satisfies the correlation condition, seeming to suggest the choice of p(i, j) xt ixt j. However, when xt = (1, 0.8, 0.2), the only such solution is to sample the combination {1, 2} with probability 0.8 and {1, 3} with probability 0.2, suggesting a zero probability for sampling {2, 3}. A general strategy must be able to generalize both scenarios. From the perspective of linear programming, the correlation condition adds K 2 constraints to the original K constraints (from the mean condition) in finding pt, making it much harder to find a feasible solution. Now we give an intuitive argument for why such distribution p exists under A and how the structure of the latter helps. When S = 1, any distributions possess negative correlations. Inductively, let us suppose such distributions exist for 1, 2, . . . , S 1. Then for a fixed target x Conv(A), we can always find an index i [K] such that Pi 1 j=1 xj +cxi = 1 and PK j=i+1 xj +(1 c)xi = S 1 for some c [0, 1]. Namely, the target of size S is partitioned into two sub-targets with ranges [1, i] and [i, K], each with sizes 1 and S 1, and with an overlap on index i. We can then assign vi = 0 with probability 1 xi, to the first half [1, i] with probability cxi, and to [i, K] with probability (1 c)xi. To obtain a final size S solution, we draw v supported on [1, i 1] with size 0 or 1 and v on [i + 1, K] 2The use of Algorithm 2 is not essential as long as one can guarantee the negative correlations in Lemma 1.2. 
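The following check (illustrative only; arm indices are 1-based to match the text above) verifies the $S = 2$ example with target $x^t = (1, 0.8, 0.2)$: the distribution that plays $\{1,2\}$ with probability $0.8$ and $\{1,3\}$ with probability $0.2$ matches the marginals and has nonpositive pairwise covariances, whereas the naive proposal $p(\{i,j\}) \propto x^t_i x^t_j$ already fails the mean condition.

```python
import itertools
import numpy as np

def check(p, x):
    """p: dict mapping size-S tuples of 1-based arm indices to probabilities."""
    K = len(x)
    pairs, probs = list(p.keys()), np.array(list(p.values()))
    V = np.array([[int(a in v) for a in range(1, K + 1)] for v in pairs])  # indicator vectors
    mean = probs @ V                                   # E[v_i]
    second = V.T @ (V * probs[:, None])                # E[v_i v_j]
    cov = second - np.outer(x, x)
    print("marginals:", np.round(mean, 6), "target:", x)
    print("max off-diagonal covariance:", cov[~np.eye(K, dtype=bool)].max())

x = np.array([1.0, 0.8, 0.2])

# Distribution from the text: {1,2} w.p. 0.8 and {1,3} w.p. 0.2 (never {2,3}).
check({(1, 2): 0.8, (1, 3): 0.2}, x)

# Naive proposal p({i,j}) proportional to x_i * x_j: its marginals do not match x.
pairs = list(itertools.combinations(range(1, 4), 2))
weights = np.array([x[i - 1] * x[j - 1] for i, j in pairs])
check(dict(zip(pairs, weights / weights.sum())), x)
```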
Algorithm 1 Online Stochastic Mirror Descent under Graph Feedback (OSMD-G) Input: time horizon T, decision set A, arms [K], combinatorial budget S, feedback graph G, a truncation rate ϵ (0, 1), a learning rate η > 0, a mirror mapping F defined on a closed convex set D Convϵ(A). Initialize: x1 arg minx Convϵ(A) F(x). for t = 1 to T do Generate a combinatorial decision vt by Algorithm 2 with target xt. Observe the feedback {rt a : a Nout(vt)}. Denote i Nin(a) 1[vt i = 1](1 rt a) P i Nin(a) xt i . (3) Build the reward estimator for each a [K]: rt a = 1 ˆrt a. (4) if S = 1 then Denote Ut = {a [K] : ˆrt a 1 (K 1)ϵ}. Set rt = 1 + P a Ut xt aˆrt a. Set rt a rt a rt for all a [K]. end Find wt+1 D such that F(wt+1) = F(xt) + η rt. (5) Project wt+1 to the truncated convex hull Convϵ(A): xt+1 arg min x Convϵ(A) DF (x, wt+1). (6) with size S 1 or S 2, conditioned on the assignment of vi. For any j1 [1, i 1], j2 [i + 1, K], and i, any two of them are negatively correlated because, at a high level, the presence of one reduces the size budget of the other. The negative correlations among the first half [1, i 1] and [i + 1, K] are guaranteed by the induction hypothesis of the existence of such distributions for solutions with size less than S. Finally, the structure of A ensures that our pieced-together solution is valid, i.e. lies in A. 3.2. Regret upper bound In the following theorem, we show that OSMD-G achieves near-optimal regret for a strongly observable time-invariant feedback graph. The proof for time-varying feedback graphs {Gt}t [T ] only takes a one-line change in (11). It is clear that Theorem 3.2 implies Theorem 1.1. Theorem 3.2. Let the mirror mapping be F(x) = Adversarial Combinatorial Semi-bandits with Graph Feedback PK i=1(xi log xi xi). When the correlation condition for pt is satisfied, the expected regret of Algorithm 1 is upper bounded by E[R(Alg 1)] ϵKT+S log(K/S) η +η(6S+4α log(4KS/(ϵα)))T. In particular, with truncation ϵ = 1 KT and learning rate η = q 5S log(K/S) (6S+4α log(4SK2T/α))T = e O q S (S+α)T , it becomes E[R(Alg 1)] 1 + β S 24 log(K/S) log(4SK2T/α) = e O(1). Proof. We present the proof for the case S 2 here. The proof for S = 1 is similar and is deferred to Appendix C due to space limit. Now fix any v A. Let vϵ = arg min v Convϵ(A) v v 1 which satisfies (v vϵ) rt v vϵ 1 Kϵ since rt [0, 1]K. We can decompose the regret as t=1 (v vϵ) rt + vϵ vt T rt # Standard OSMD analysis applied to the truncated convex hull Convϵ(A) further bounds the second term in (7) as follows (see e.g. Theorem 3 in Audibert et al. (2014)). ϵTK + S log(K/S) a=1 xt a rt a 2 # To bound the last term, we first use the non-negativity of ˆrt a, defined in (3), to further decompose it: a=1 xt a rt a 2 = a=1 xt a 1 ˆrt a 2 a=1 xt a 1 + ˆrt a 2 ST + a=1 xt a ˆrt a 2 Now we proceed to bound term (A). Recall that G is strongly observable, and let U = {a [K] : (a, a) / E} be the set of nodes with no self-loops. On the set U we have a U xt a ˆrt a 2 # i Nin(a) 1[vt i = 1](1 rt a) P i Nin(a) xt i i =a 1[vt i = 1] P a U E[xt a] 4ST. (9) Here (a) is due to rt a [0, 1] and that, if a U, then (i, a) E for all i = a, and (b) uses P i =a xt i = S xt a S 1. On the other hand, by the choice of vt in Algorithm 1, the random variables vt i are negatively correlated. Thus for each a [K], we can upper bound the second moment of the following sum: i Nin(a) vt i i Nin(a) Evt pt vt i !2 i Nin(a) vt i i Nin(a) Evt pt vt i !2 i Nin(a) Var vt i i,j Nin(a) i =j Cov vt i, vt j i Nin(a) xt i i Nin(a) xt i. 
(10) Then on the set U c [K]\U, we have a/ U xt a ˆrt a 2 # i Nin(a) 1[vt i = 1] P i Nin(a) xt i i Nin(a) xt i i/ U:i Nin(a) xt i Adversarial Combinatorial Semi-bandits with Graph Feedback (c) T S + 4α log 4KS Here (c) uses PK a=1 xt a S, Lemma F.2 on the restricted subgraph G|U c, and the fact that α(G|U c) = α(G) = α. Combining (9) and (11) yields a=1 xt a rt a 2 # 6TS + 4Tα log 4KS Finally, combining (12) with (8), we end up with the desired upper bound E[R(Alg 1)] ϵKT + S log(K/S) η + η(6S + 4α log(4KS/(ϵα)))T. Note that at each time t and for each arm a [K], the total number of arms that observe a is a random variable due to the random decision vt. In (10) in the proof above, one can naively bound the second moment of this random variable by i Nin(a) vt i i Nin(a) vt i since vt 1 S, which leads to an upper bound e O(S αT). We will see that this rate is sometimes not improvable for certain proper decision subsets A0 A later in Section 5.1. To improve on this bound for A, we need to further exploit the structures of the full decision set A and the sampling distribution pt of vt, which motivates Lemma 1.2. The negative correlations therein allow us to decompose this second moment into the squared mean and a sum of the individual variances, as in (10). By saving on the O(K2) correlation terms, this decomposition shaves the factor in (10) from Sα to S+α, yielding the desired result e O(S αST). Remark 3.3. It turns out that when S 2 and G is strongly observable, the presence of the nodes with no self-loop can be easily handled in this upper bound analysis, whereas the case S = 1 proved in Appendix C requires more care. This matches the intuition that, when S 2, the learner always observes the entire subset U at every time t. Therefore, the extension from U = to |U| 1 does not add to the difficulty in learning. 3.3. The necessity of negative correlations The previous section shows an improved performance for OSMD-G when vt has negative correlations, which is a requirement never seen in either the semi-bandit feedback or the full feedback in previous literature. In either of the two cases, OSMD with the mean condition (in Lemma 1.2) alone is sufficient to achieve the near-optimal regret. Then, one may naturally ask if the vanilla OSMD-G with only the mean condition still achieves this improved rate, i.e. when it only guarantees Evt pt[vt] = xt. The answer is negative. Theorem 3.4. Fix any problem parameters (K, S, α, T) with Sα K, S K 2 , and T max{S, α3}, and consider the full decision set A. There exists a feedback graph G = ([K], E) and a sampling scheme pt that satisfies Evt pt[vt] = xt, such that sup {rt} E[R(π0)] = Ω S where π0 denotes OSMD-G equipped with this pt and mirror mapping F(x) = PK i=1(xi log xi xi). Proof. The core idea of this proof is that, for some G and pt, running the vanilla OSMD-G on this problem instance is equivalent to running OSMD on a multi-armed bandit with rewards ranging in [0, S]. Without loss of generality, assume K = n S for n N.3 By assumption α n. First, we construct the graph G. Let V1, . . . , Vn partition the nodes [K] each with size S, and let H = ([n], En) be an arbitrary graph on n nodes with independence number α(H) = α. Then we let (a, b) E iff either a, b Vi or a Vi, b Vj, and (Vi, Vj) En, i.e. each Vi is a clique and H is a graph over the cliques. For clarity, we denote the mean condition as Evt pt[vt] = xt (M) and for vector q RK, we say q aligns with the cliques if qa = qb q(Vi), a, b Vi i [n]. 
(AC) Now we consider a sampling scheme pt as follows: (1) if xt satisfies (AC), then let vt = Vi with probability xt(Vi); (2) otherwise, use any distribution pt satisfying (M). Note that (1) gives a valid distribution over the cliques and satisfies (M). We will show via an induction that if rt satisfies (AC) for all t [T], then (2) never happens. As the base case, the OSMD initialization x1 = 1 K 1 satisfies (AC). For the inductive step, when xt satisfies (AC), we have vt = Vi for some i and thereby satisfies (AC). By construction of G, the reward estimator rt also satisfies (AC). Given the negative entropy mapping F, straightforward computation shows that both wt+1 and xt+1 satisfy (AC), completing the induction. Consequently, we have vt = Vit for some 3If S does not divide K, one can put the remainder nodes in one of the cliques and slightly change the sampling pt to draw uniformly within this clique, while maintaining the mean condition. Adversarial Combinatorial Semi-bandits with Graph Feedback it [n] when rt satisfies (AC) for all t [T]. Namely, OSMD-G now reduces to a policy running on an n-armed bandit with feedback graph H, and now the lower bound of the latter can apply. From the lower bound of the multi-armed bandits with feedback graphs (see e.g. Alon et al. (2015)), there exists a set of reward sequences {ht(j)}t [T ],j J with some index set J and ht(j) [0, S]n such that Ej Unif(J )[Rj,MAB(π)] = Ω(S for any policy π, where Rj,MAB(π) denotes the multi-armed bandit regret when the reward sequence is {ht(j)}t, [T ]. Define the clique-averaged reward sequences by rt a(j) = ht i(j) |Vi| [0, 1] for a Vi for each j J . Since (AC) is guaranteed, we have sup {rt} E[R(π0)] Ej Unif(J )[Rj(π0)] = Ω(S where Rj(π0) denotes the regret for this vanilla OSMD-G π0 under reward sequence {rt(j)}t [T ]. We remark that Theorem 3.4 does not directly show the necessity of negative correlations, even though they are sufficient as shown by Theorem 1.1. It only says that the mean condition alone is insufficient when dealing with general graph feedback, despite its success in the existing literature. It is possible that imposing extra conditions other than negative correlations can also lead to the near-optimal regret. 4. Solving semi-bandits with general capacity In this section, we introduce a natural extension of combinatorial semi-bandits and show how we derive a near-optimal regret by applying the graph feedback. Specifically, consider the semi-bandit feedback where the learner observes {vart a : a [K]}, but now each arm a can be selected for at most na 1 times, i.e. the decision set becomes A {v [n1] [n2] [n K] : v 1 = S}.4 Existing results do not directly apply to this extension. Instead, one can consider the equivalent problem with N PK a=1 na arms by having na copies of arm a for each a [K], with the special structure that rt a(i) = rt a(j) for all t [T] and i, j [na], when {a(i) : i [na]} are the copies of a. If we simply take the upper bound for the semi-bandit feedback from Audibert et al. (2014) and ignore this structure, we arrive at regret e O( ST PK a=1 na 4As a motivation, consider dynamic allocations with S units of resource at each time, and the K arms have different capacities for the amount of resource they can consume and transform to utility. Their capacities can even be time-varying, by Theorem 1.1. On the other hand, thanks to this special structure, there is a feedback graph that consists of K cliques, each with na nodes. 
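A minimal sketch (illustrative, with hypothetical names) of the reduction just described: expand each arm $a$ into $n_a$ copies and connect the copies of the same arm into a clique (with self-loops), so that semi-bandit feedback on one copy reveals the shared reward of all copies. The independence number of this graph is $K$ (one copy per clique), which is what the application of Theorem 1.1 in the next paragraph uses.

```python
import numpy as np

def capacity_feedback_graph(capacities):
    """Clique-per-arm feedback graph for the general-capacity reduction.

    capacities: list of n_a, the maximum number of times arm a can be selected.
    Returns (owner, adj): owner[i] is the original arm of copy i, and adj is the
    N x N adjacency matrix in which copies of the same arm observe each other.
    """
    owner = np.concatenate([np.full(n_a, a) for a, n_a in enumerate(capacities)])
    adj = (owner[:, None] == owner[None, :]).astype(int)  # disjoint cliques incl. self-loops
    return owner, adj

# Toy usage: K = 3 arms with capacities (2, 1, 3), so N = 6 copies.
owner, adj = capacity_feedback_graph([2, 1, 3])
print(owner)            # [0 0 1 2 2 2]
print(adj.sum(axis=1))  # each copy observes the n_a copies in its own clique
# The independence number of this graph equals K = 3 (one copy per clique).
```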
Then we can exploit it using Algorithm 1 proposed in this work. Applying Theorem 1.1 leads to the regret $\widetilde{\Theta}(\sqrt{KST})$ when $K \ge S$ (and $\widetilde{\Theta}(S\sqrt{T})$ otherwise), which is near-optimal when $n_a = 1$ for $\Omega(K)$ arms by Theorem 1.3. Remarkably, although each arm has a different consumption capacity, the regret characterization remains the same. This crucially relies on exploiting the feedback structure present in this equivalent formulation.

5. Extension to general decision subsets

5.1. When negative correlations are impossible

So far, we have shown the optimal regret $\widetilde{\Theta}(S\sqrt{T} + \sqrt{\alpha S T})$ on the full decision set $\mathcal{A}$. Our upper bound in Theorem 1.1 fails on general decision subsets $\mathcal{A}_0 \subseteq \mathcal{A}$, because it is not always possible to find a distribution $p^t$ for the decision $v^t$ in OSMD-G that provides the negative correlations in Lemma 1.2. For example, when there is a pair of arms $(a, b)$ with $v_a = v_b$ for all $v \in \mathcal{A}_0$, it is simply impossible to achieve negative correlations.

This failure, however, is not merely an artifact of the analysis. In the following, we present an example where moving from the full set $\mathcal{A}$ to a proper subset $\mathcal{A}_0 \subset \mathcal{A}$ provably increases the optimal regret to $\widetilde{\Theta}(\min\{S\sqrt{\alpha T}, \sqrt{KST}\})$ when $S \le \frac{K}{2}$. The argument is very similar to the proof of Theorem 3.4. We first consider the case $S\alpha \le K$. Assume again that $S \le \frac{K}{2}$ and that $S$ divides $K$. Let $V_1, V_2, \ldots, V_{K/S}$ be a partition of the arms $[K]$ into groups of equal size $S$. For the feedback graph $G$, let each $V_i$ be a clique for $i = 1, \ldots, K/S$. Let $H = (\{V_1, \ldots, V_{K/S}\}, E_H)$ be an arbitrary graph over the cliques such that $(V_i, V_j) \in E_H$ in $H$ iff $(a, b) \in E$ for all $a \in V_i$ and $b \in V_j$ in $G$. The independence numbers $\alpha(G) = \alpha(H)$ are equal. On the full decision set $\mathcal{A}$, Theorems 1.1 and 1.3 tell us the optimal regret is $\widetilde{\Theta}(S\sqrt{T} + \sqrt{\alpha S T})$.

Now consider a proper decision subset
$$\mathcal{A}_{\mathrm{partition}} = \{\mathbf{1}_{1:S}, \mathbf{1}_{S+1:2S}, \ldots, \mathbf{1}_{K-S+1:K}\}, \qquad (13)$$
where $(\mathbf{1}_{i:j})_k = \mathbb{1}[i \le k \le j]$ is one on the coordinates from $i$ to $j$ and zero otherwise. Namely, the only feasible decisions are the first $S$ arms in $V_1$, the next $S$ arms in $V_2$, ..., and the last $S$ arms in $V_{K/S}$. It is straightforward to see that this problem is equivalent to a multi-armed bandit with $K/S$ arms and feedback graph $H$, where the rewards range in $[0, S]$. From the bandit literature (Alon et al., 2015), the optimal regret on this decision subset $\mathcal{A}_{\mathrm{partition}}$ is $\widetilde{\Theta}(S\sqrt{\alpha T})$, which is fundamentally different from the result for the full decision set, even under the same feedback graph.

On the other hand, if $S\alpha > K$, a similar construction follows, except that some of the grouped nodes $V_i$ are no longer cliques in order to satisfy $\alpha(G) = \alpha$, and the graph $H$ has only self-loops. Then $\alpha(H) = \frac{K}{S}$ and the regret is $\widetilde{\Theta}(\sqrt{KST})$. To formalize this statement:

Theorem 5.1. Fix any problem parameters $(K, S, \alpha, T)$ with $S\alpha \le K$, $S \le \frac{K}{2}$, and $T \ge \max\{S, \alpha^3\}$. There exists a decision subset $\mathcal{A}_0 \subseteq \mathcal{A}$ such that
$$R^*(\mathcal{A}_0) = \Omega\left(\min\{S\sqrt{\alpha T}, \sqrt{KST}\}\right),$$
where $R^*(\mathcal{A}_0)$ denotes the minimax regret, as defined in (2), restricted to this subset $\mathcal{A}_0$.

Given this (counter-)example, the following upper bound is of interest:

Theorem 5.2. On a general decision subset $\mathcal{A}_0 \subseteq \mathcal{A}$ where only the mean condition is guaranteed, the algorithm OSMD-G achieves
$$\mathbb{E}[R(\text{Alg 1})] = \widetilde{O}\left(S\sqrt{\alpha T}\right).$$
In particular, when $S\alpha > K$, one can ignore the graph feedback and directly apply OSMD. The combination of OSMD and OSMD-G then guarantees $\widetilde{O}\left(\min\{S\sqrt{\alpha T}, \sqrt{KST}\}\right)$.

For any target $x^t \in \mathrm{Conv}(\mathcal{A}_0)$, there is always a probability distribution $p^t$ such that $\mathbb{E}_{v^t \sim p^t}[v^t] = x^t$, which is used in earlier works (Koolen et al., 2010; Audibert et al., 2014).
With this choice of pt, OSMD-G achieves the regret in Theorem 5.2. The proof follows from Section 3.2 and is left to Appendix E. Together with the construction of Apartition in (13), it suggests that leveraging the negative correlations, whenever the decision subset A0 allows, is crucial to achieving improved regret e O(S αST). We will see examples of A0 where negative correlations are guaranteed in the next section. Note on general A0, the efficiency of OSMD-G is no longer guaranteed; see discussions in Koolen et al. (2010); Audibert et al. (2014). To compensate, we provide an efficient elimination-based algorithm that is agnostic of the structure of the decision subset A0 and achieves e O(S αT) when the rewards are stochastic. The algorithm and its analysis are left in Appendix D. 5.2. When negative correlations are possible This section aims to extend the upper bound in Theorem 1.1 to some other decision subsets A0 A. First, by Theorem 1.1 in Chekuri et al. (2009), Lemma 1.2 and OSMD-G can be generalized directly to any decision subset A 0 {v {0, 1}K : v 1 S} that forms a matroid. Notably, matroids require that decisions with size less than S are also feasible, hence they are different from the setup A0 A we consider throughout this work. In addition, while Chekuri et al. (2009) focuses on matroids, the proof of their Theorem 1.1 only relies on the following exchange property of a decision set A0: for any v, u A0, there exist i u v and j v u such that u {i} + {j}, v {j} + {i} A0. Lemma 1.2 remains valid for any such A0. Here we provide an example of A0 A that satisfies this property: Consider the problem that the learner operates on S systems in parallel, and on each system s he/she has Ks arms to choose from. Then K = P s [S] Ks and the feasible decisions are A0 = {(v1, . . . , v S) : vs [Ks]}. It is clear that this A0 satisfies the exchange property above, and hence OSMD-G and Theorem 1.1 apply directly to such problems. The independence number α can be small if there is shared information among the S systems. 5.3. Other open problems Weakly observable graphs: The results in this work focus on the strongly observable feedback graphs. A natural extension would be the minimax regret characterization when the feedback graph G = ([K], E) is only weakly observable. Recall that when S = 1, Alon et al. (2015) shows the optimal regret is eΘ(δ1/3T 2/3). To get a taste of it, consider a simple explore-then-commit (ETC) policy under stochastic rewards: the learner first explores the arms in a minimal dominating subset as uniformly as possible for T0 time steps, and then commit to the S empirically best arms for the rest of the time.5 Its performance is characterized by the following result. Theorem 5.3. With high probability, the ETC policy achieves regret e O(ST 2/3 + δ1/3S2/3T 2/3). When S = 1, this policy is near-optimal. We briefly outline the proof here. When δ S, thanks to the stochastic assumption and concentration inequalities, each one of the S empirically best arms contributes only a sub-optimality of order e O( p δ/ST0) with high probability. Trading off T0 in the upper bound ST0 + ST p δ/(ST0) gives the bound e O(δ1/3S2/3T 2/3). When δ < S, a similar analysis yields the bound e O(ST 2/3). Problem-dependent bounds: With the semi-bandit feedback and stochastic rewards, Combes et al. 
(2015) proves a problem-dependent bound e O K where min denotes the mean reward gap between the best decision and the second-best decision, or equivalently the S-th best arm and the (S + 1)-th under the full decision set A. It would be another interesting question to see how the presence of feedback graph G helps the problem-dependent bounds. 5While finding the minimal dominating subset is NP-hard, there is an efficient log(K)-approximate algorithm, which we include in Appendix F for completeness. Adversarial Combinatorial Semi-bandits with Graph Feedback Acknowledgements The author is grateful to Yanjun Han for very helpful discussions and pointing to the matroid literature. The author also thanks anonymous reviewers for pointing out a flaw in an earlier version of the lower bound proof and for the advice to present an impossibility result for OSMD without negative correlations. Impact Statement This paper presents work whose goal is to advance the field of Machine Learning and the theoretical understanding of Online Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. Alon, N., Cesa-Bianchi, N., Dekel, O., and Koren, T. Online learning with feedback graphs: Beyond bandits. In Conference on Learning Theory, pp. 23 35. PMLR, 2015. Alon, N., Cesa-Bianchi, N., Gentile, C., Mannor, S., Mansour, Y., and Shamir, O. Nonstochastic multi-armed bandits with graph-structured feedback. SIAM Journal on Computing, 46(6):1785 1826, 2017. Audibert, J.-Y., Bubeck, S., and Lugosi, G. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31 45, 2014. Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th annual foundations of computer science, pp. 322 331. IEEE, 1995. Avadhanula, V., Colini Baldeschi, R., Leonardi, S., Sankararaman, K. A., and Schrijvers, O. Stochastic bandits for multi-platform budget optimization in online advertising. In Proceedings of the Web Conference 2021, pp. 2805 2817, 2021. Balseiro, S., Golrezaei, N., Mahdian, M., Mirrokni, V., and Schneider, J. Contextual bandits with cross-learning. Mathematics of Operations Research, 48(3):1607 1629, 2023. Cesa-Bianchi, N. and Lugosi, G. Prediction, learning, and games. Cambridge university press, 2006. Cesa-Bianchi, N. and Lugosi, G. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404 1422, 2012. Chekuri, C., Vondr ak, J., and Zenklusen, R. Dependent randomized rounding for matroid polytopes and applications. ar Xiv preprint ar Xiv:0909.4348, 2009. Chen, W., Wang, Y., and Yuan, Y. Combinatorial multiarmed bandit: General framework and applications. In International conference on machine learning, pp. 151 159. PMLR, 2013. Chvatal, V. A greedy heuristic for the set-covering problem. Mathematics of operations research, 4(3):233 235, 1979. Cohen, A., Hazan, T., and Koren, T. Online learning with feedback graphs without the graphs. In International Conference on Machine Learning, pp. 811 819. PMLR, 2016. Cohen, A., Hazan, T., and Koren, T. Tight bounds for bandit combinatorial optimization. In Conference on Learning Theory, pp. 629 642. PMLR, 2017. Combes, R., Talebi Mazraeh Shahi, M. S., Proutiere, A., et al. Combinatorial bandits revisited. Advances in neural information processing systems, 28, 2015. Eldowa, K., Esposito, E., Cesari, T., and Cesa-Bianchi, N. 
On the minimax regret for online learning with feedback graphs. Advances in Neural Information Processing Systems, 36, 2024. Gy orgy, A., Linder, T., Lugosi, G., and Ottucs ak, G. The on-line shortest path problem under partial monitoring. Journal of Machine Learning Research, 8(10), 2007. Han, Y., Wang, Y., and Chen, X. Adversarial combinatorial bandits with general non-linear reward functions. In International Conference on Machine Learning, pp. 4030 4039. PMLR, 2021. Han, Y., Weissman, T., and Zhou, Z. Optimal no-regret learning in repeated first-price auctions. Operations Research, 2024. Ito, S., Hatano, D., Sumita, H., Takemura, K., Fukunaga, T., Kakimura, N., and Kawarabayashi, K.-I. Improved regret bounds for bandit combinatorial optimization. Advances in Neural Information Processing Systems, 32, 2019. Koc ak, T. and Carpentier, A. Online learning with feedback graphs: The true shape of regret. In International Conference on Machine Learning, pp. 17260 17282. PMLR, 2023. Koolen, W. M., Warmuth, M. K., Kivinen, J., et al. Hedging structured concepts. In COLT, pp. 93 105. Citeseer, 2010. Adversarial Combinatorial Semi-bandits with Graph Feedback Lattimore, T., Kveton, B., Li, S., and Szepesvari, C. Toprank: A practical algorithm for online stochastic ranking. Advances in Neural Information Processing Systems, 31, 2018. Liu, Y. and Li, L. A map of bandits for e-commerce. ar Xiv preprint ar Xiv:2107.00680, 2021. Mannor, S. and Shamir, O. From bandits to experts: On the value of side-observations. Advances in Neural Information Processing Systems, 24, 2011. Robbins, H. E. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527 535, 1952. Sankararaman, K. A. and Slivkins, A. Combinatorial semibandits with knapsacks. In International Conference on Artificial Intelligence and Statistics, pp. 1760 1770. PMLR, 2018. Wang, Y., Ouyang, H., Wang, C., Chen, J., Asamov, T., and Chang, Y. Efficient ordered combinatorial semi-bandits for whole-page recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017. Wen, Y., Han, Y., and Zhou, Z. Stochastic contextual bandits with graph feedback: from independence number to mas number. Advances in Neural Information Processing Systems, 2024. Wen, Z., Kveton, B., and Ashkan, A. Efficient learning in large-scale combinatorial semi-bandits. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1113 1122, Lille, France, 07 09 Jul 2015. PMLR. Zierahn, L., van der Hoeven, D., Cesa-Bianchi, N., and Neu, G. Nonstochastic contextual combinatorial bandits. In International conference on artificial intelligence and statistics, pp. 8771 8813. PMLR, 2023. Adversarial Combinatorial Semi-bandits with Graph Feedback A. Proof of Theorem 1.3 Under the full information setup (i.e. when G is a complete graph), a lower bound Ω(S p T log(K/S)) was given by (Koolen et al., 2010), which implies that R (G) = Ω(S p T log(K/S)) for any general graph G. Note the assumption S K/2 is used in their proof to reduce the K arms into an instance of multi-armed bandits with full information and K S arms, which then gives the desired lower bound. To show the second part of the lower bound, without loss of generality, we may assume α = n S for some n N 4. Consider a maximal independent set I [K] and partition it into I1, . . . , IS such that |Im| = n = α S for m [S]. 
Index each subset by Im = {am,1, . . . , am,n}. To construct a hard instance, let u [n]S be a parameter and the product reward distribution be P u = Q a [K] Bern(µa) where 1 4 + if a = am,um Im for m [S]; 1 4 if a I\{am,um}m [S]; 0 if a I. The reward gap (0, 1/4) will be specified later. Also let P u m differ from P u at µa = 1 4 for all a Im, where u m = (u1, . . . , um 1, 0, um+1, . . . , u S) denotes the parameter u with m-th entry replaced by 0. Then the following observations hold: 1. For each u [n]S, the optimal combinatorial decision is v (u) = {am,um}m [S], and any other v A suffers an instantaneous regret at least |v\v (u)|; 2. For each u or u m, a decision v A suffers an instantaneous regret at least 1 Fix any policy π and denote by vt the arms pulled by π at time t. Let Nm,j(t) be the number of times am,j is pulled at the end of time t and Nm(t) = Pn j=1 Nm,j(t), and N0(t) be the total number of pulls outside I at the end of time t. Let u be uniformly distributed over [n]S, E(u)[ ] denote the expectation under environment P u, and Eu[ ] denote the expectation over u Unif([n]S). Define the stopping time by τm = min{T, min{t : Tm(t) T}}. Note that T Nm(τm) T + S since at each round the learner can pull at most S arms in Im. Under any u, the regret is lower bounded by: E(u)[R(π)] E(u) " m=1 Nm(T) Nm,um(T) = E(u) " S X m=1 T Nm,um(T) E(u)[R(π)] E(u) j=1 1[am,j vt] j=1 1[am,j vt] = E(u) " S X m=1 Nm(τm) Nm,um(τm) Together with x + y max{x, y}, we have m=1 E(u)[max{T Nm,um(T), Nm(τm) Nm,um(τm)}] m=1 E(u)[T Nm,um(τm)] Adversarial Combinatorial Semi-bandits with Graph Feedback where the second line follows from the definition of τm. Next, we lower bound the worst-case regret by the Bayes regret: max u [n]S E(u)[R(π)] Eu E(u)[R(π)] m=1 Eu E(u)[T Nm,um(τm)] um=1 E(u)[T Nm,um(τm)] um=1 E(u)[Nm,um(τm)] For any fixed m, u m, and um [n], let Pm denote the law of Nm,um(τm) under environment u, and P m denote the law of Nm,um(τm) under environment u m. Then E(u)[Nm,um(τm)] E(u m)[Nm,um(τm)] (a) T 1 2KL(P m Pm) 3 E(u m)[N0(τm) + Nm,um(τm)] E(u m)[N0(T)] + E(u m)[Nm,um(τm)]. Here (a) uses Pinsker s inequality, and (b) uses the chain rule of the KL divergence, the inequality KL(Bern(p) Bern(q)) (p q)2 q(1 q) and (0, 1/4), and the important fact that Tm,um(τm) is Fτm-measurable. The last fact crucially allows us to look at the KL divergence only up to time τm. Note that E(u m)[R(π)] 1 4E(u m)[N0(T)]. So if E(u m)[N0(T)] αST for any m [S], the policy incurs too large regret under this environment u m and we are done. Now suppose E(u m)[N0(T)] < αST for every m. By Cauchy-Schwartz inequality and the definition of τm, um=1 E(u)[Nm,um(τm)] um=1 E(u m)[Nm,um(τm)] + 4 T um=1 E(u m)[Nm,um(τm)] T + S + 4 T q αST + n(T + S). (15) Plugging (15) into (14) leads to max u [n]S E(u)[R(π)] αST + T + S αST + T + S where (c) uses the assumptions that T S, n 4, and 2T αST when T α3 S . Plugging in = 1 64 p n T and recalling n = α S yield the desired bound max u [n]S E(u)[R(π)] 1 1024 Note that the constants in this proof are not optimized. B. Randomized Swap Rounding This section introduces the randomized swap rounding scheme by Chekuri et al. (2009) that is invoked in Algorithm 1. Note that randomized swap rounding is not always valid for any decision set A: its validity crucially relies on the exchange Adversarial Combinatorial Semi-bandits with Graph Feedback property that for any u, c A, there exist a u\c and a c\u such that u {a} + {a } A and c {a } + {a} A. 
This property is satisfied by the full decision set A as well as any subset A {v {0, 1}K : v 1 S} that forms a matroid. However, for general A this can be violated, and as discussed in Section 5.1, no sampling scheme can guarantee the negative correlations and the learner must suffer a eΘ(S αT) regret. Algorithm 2 Randomized Swap Rounding Input: decision set A, arms [K], target x = PN i=1 wivi where N = |A|. Initialize: u v1. for i = 1 to N 1 do Denote c vi+1 and βi Pi j=1 wj. while u = c do Pick a u\c and a c\u such that u {a} + {a } A and c {a } + {a} A. With probability βi βi+wi+1 , set c c {a } + {a}; Otherwise, set u u {a} + {a }. end end Output u. C. Case S = 1 in the proof of Theorem 3.2 In this section, we present the proof of Theorem 3.2 for the special case S = 1. The overall idea is the same as in Section 3.2 but requires an adaptation of Lemma 4 in Alon et al. (2015) to our reward setting. Proof. Let U = {a [K] : (a, a) / E}. For the clarity of notation, let rt a be defined as in (4) and recall rt = 1 + P a U xt aˆrt a 0. Fix any v A and let vϵ = arg minv Convϵ(A) v v 1. The regret becomes vϵ vt rt ct1 # for any ct R when S = 1, where 1 RK denotes the all-one vector. Recall that rt a is an unbiased estimator of rt a and plug in ct = rt, we get vϵ vt rt rt1 # Following the same lines in the proof of Theorem 3.2, we arrive at a similar decomposition as (8): vϵ vt rt rt1 # a=1 xt a rt a rt 2 # Now for any time t, it holds that a Ut xt a rt a rt 2 = a Ut xt a ˆrt a + rt 1 2 a Ut xt a ˆrt a 2 a Ut xt aˆrt a a U xt a ˆrt a 2 xt a 2 ˆrt a 2 a U xt a(1 xt a) ˆrt a 2 Adversarial Combinatorial Semi-bandits with Graph Feedback where the inequality is due to the non-negativity of xt a and ˆrt a. On the other hand, by definition of Ut = {a [K] : ˆrt a 1 (K 1)ϵ}, it holds that rt 1 + 1 (K 1)ϵ. Then a/ Ut xt a rt a rt 2 a/ U xt a rt a 2 since rt a rt ˆrt a 1 (K 1)ϵ 0 for each a / Ut and rt 0. Finally, for every a U, it holds that ˆrt a 1 (K 1)ϵ since xt Convϵ(A), and so U Ut for all time t. Substituting back in (16), we get vϵ vt rt rt1 # a U xt a(1 xt a) ˆrt a 2 a Ut\U xt a(1 xt a) ˆrt a 2 a/ Ut xt a rt a 2 First, we bound the expectation of term (A) as follows: a U xt a(1 xt a) ˆrt a 2 # i =a 1[vt i = 1](1 rt a) P i =a 1[vt i = 1] Note (C) can be decomposed as follows: a/ Ut xt a rt a 2 a/ Ut xt a 1 ˆrt a 2 a/ Ut xt a 1 + ˆrt a 2 a/ Ut xt a ˆrt a 2 Since 1 xt a [0, 1] in term (B), we can plug the above bounds back in (17) and get vϵ vt rt rt1 # a/ U xt a ˆrt a 2 # η + ST + T S + 4α log 4KS where the last inequality follows from (11). D. Arm elimination algorithm for stochastic rewards As promised in Section 5.1, we present an elimination-based algorithm, called Combinatorial Arm Elimination, that is agnostic to the decision subset A0 and achieves regret e O(S αT). We assume the reward rt i [0, 1] for each arm i [K] Adversarial Combinatorial Semi-bandits with Graph Feedback Algorithm 3 Combinatorial Arm Elimination Input: time horizon T, decision subset A0 A, arm set [K], combinatorial budget S, feedback graph G, and failure probability ϵ (0, 1). Initialize: Active set Aact A0, minimum count N 0. Let ( rt a, nt a) be the empirical reward and the observation count of arm a [K] at time t. For each combinatorial decision v Aact, let rt v = P a v rt a be the empirical reward and nt v = mina v nt a be the minimum observation count. for t = 1 to T do Let AN {v Aact : nt v = N} be the decisions that have been observed least. 
Let GN be the graph G restricted to the set Ut = {a [K] : v AN with a v} = S v AN v. Let at Ut be the arm with the largest out-degree (break tie arbitrarily). Pull any decision vt AN with at vt. Observe the feedback {rt a : a Nout(vt)} and update ( rt a, nt a) accordingly. if minv AN nt v > N then Update the minimum count N minv Aact nt v. Let rt max maxv Aact rt v be the maximum empirical reward in the active set. Update the active set as follows: v Aact : rt v rt max 6S log(2T) log(KT/ϵ) is i.i.d. with a time-invariant mean µi. The algorithm maintains an active set of the decisions and successively eliminates decisions that are statistically suboptimal. It crucially leverages a structured exploration within the active set Aact. In the proof below and in Algorithm 3, for ease of notation, we let v A0 denote both the binary vector and the subset of [K] it represents. So a v [K] if va = 1. Theorem D.1. Fix any failure probability ϵ (0, 1). For any decision subset A0 A, with probability at least 1 ϵ, Algorithm 3 achieves expected regret E[R(Alg 3)] = e O Sα + S p log(KT/ϵ)αT . Proof. Fix any ϵ (0, 1). For any n 0, denote n = 3 p log(2T) log(KT/ϵ)/n (let 0 = 1 for simplicity). During the period of N = n, by Lemma F.6, with probability at least 1 ϵ, we have | rt a µa| n for any individual arm a Ut at any time t. In the remaining proof, we assume this event holds. Then the optimal combinatorial decision v is not eliminated at the end of this period, since rt v µv S n µmax S n rt max 2S n. In addition, for any v Aact, the elimination step guarantees that µv rt v S n rt max 3S n rt v 3S n µv 4S n. (18) Let Tn be the duration of N = n. Recall that at Ut has the largest out-degree in the graph G restricted to Ut. By Lemma F.1 and Lemma F.3, we are able to bound Tn: Tn (1 + log(K))δ(GN) 50 log(K)(1 + log(K))α(GN) 50 log(K)(1 + log(K))α M. By (18), the regret incurred during Tn is bounded by 4S n Tn. Thus with probability at least 1 ϵ, the total regret is upper Adversarial Combinatorial Semi-bandits with Graph Feedback E[R(Alg 3)] ST0 + 4S SM + 12SM p log(2T) log(KT/ϵ) p log(2T) log(KT/ϵ) log(K)(1 + log(K)) log(2T) log(KT/ϵ)S E. Proof of Theorem 5.2 The proof of Theorem 5.2 follows that of Theorem 3.2. The only difference is that the correlation condition of pt is no longer guaranteed on general A0. Now we can only bound (10) as E P i Nin(a) vt i 2 SE h P i Nin(a) vt i i . Then (11) E P i Nin(a) vt i 2 i Nin(a) xt i 2 a/ U xt a SE h P i Nin(a) vt i i i Nin(a) xt i 2 a/ U xt a S P i Nin(a) xt i P i Nin(a) xt i 2 a/ U S xt a P i Nin(a) xt i a/ U S xt a P i/ U:i Nin(a) xt i (b) 4SαT log 4K where (a) is by vt 1 S and (b) uses Lemma F.2. Plugging this back to (11) in the proof of Theorem 3.2 yields the first bound. When the feedback graphs are time-varying, one gets instead e O S q PT t=1 αt F. Auxiliary lemmas For any directed graph G = (V, E), one can find a dominating set by recursively picking the node with the largest out-degree (break tie arbitrarily) and removing its neighbors. The size of such dominating set is bounded by the following lemma: Lemma F.1 ((Chvatal, 1979)). For any graph G = (V, E), the above greedy procedure outputs a dominating set D with |D| (1 + log |V |)δ(G). Lemma F.2 (Lemma 5 in (Alon et al., 2015)). Let G = ([K], E) be a directed graph with i Nout(i) for all i [K]. 
Let $w_1, \ldots, w_K$ be positive weights such that $w_i \ge \epsilon \sum_{j \in [K]} w_j$ for all $i \in [K]$, for some constant $\epsilon \in (0, 1/2)$. Then
$$\sum_{i \in [K]} \frac{w_i}{\sum_{j \in N_{in}(i)} w_j} \le 4\alpha \log\frac{4K}{\alpha\epsilon}.$$

Lemma F.3 (Lemma 8 in (Alon et al., 2015)). For any directed graph $G = (V, E)$, one has $\delta(G) \le 50\,\alpha(G)\log|V|$.

Lemma F.4. Let $F : \mathcal{X} \to \mathbb{R}$ be a convex, differentiable function and $D \subseteq \mathbb{R}^d$ be an open convex subset. Let $x^* = \arg\min_{x \in \bar{D}} F(x)$. Then for any $y \in \bar{D}$, $(y - x^*)^\top \nabla F(x^*) \ge 0$.

Proof. We prove this by contradiction. Suppose there is $y \in \bar{D}$ with $(y - x^*)^\top \nabla F(x^*) < 0$. Let $z(t) = F(x^* + t(y - x^*))$ for $t \in [0, 1]$ be the value of $F$ along the line segment from $x^*$ to $y$. We have $z'(t) = (y - x^*)^\top \nabla F(x^* + t(y - x^*))$ and hence $z'(0) = (y - x^*)^\top \nabla F(x^*) < 0$. Since $D$ is open and $\nabla F$ is continuous, there exists $t > 0$ small enough such that $z(t) < z(0) = F(x^*)$, which yields a contradiction.

Lemma F.5 (Chapter 11 in (Cesa-Bianchi & Lugosi, 2006)). Let $F$ be a Legendre function on an open convex set $D \subseteq \mathbb{R}^d$. Then $F^{**} = F$ and $\nabla F^* = (\nabla F)^{-1}$. Also, for any $x, y \in D$, $D_F(x, y) = D_{F^*}(\nabla F(y), \nabla F(x))$.

Lemma F.6 (Lemma 1 in (Han et al., 2024)). Fix any $\epsilon \in (0, 1)$. With probability at least $1 - \epsilon$, it holds that
$$\left|\bar{r}^t_a - \mu_a\right| \le 3\sqrt{\frac{\log(2T)\log(KT/\epsilon)}{\max\{1, n^t_a\}}}$$
for all $a \in [K]$ and all $t \in [T]$, where $\bar{r}^t_a$ and $n^t_a$ are the empirical mean reward and observation count of arm $a$ maintained in Algorithm 3.
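For completeness, here is a short sketch (not the paper's implementation) of the greedy procedure referenced before Lemma F.1: repeatedly pick the node whose out-neighborhood covers the most still-undominated nodes (i.e. the node with the largest out-degree in the graph restricted to them) and remove those nodes. By Chvátal's set-cover argument this yields a dominating set of size at most $(1 + \log|V|)\,\delta(G)$.

```python
def greedy_dominating_set(K, edges):
    """Greedy dominating set (cf. Lemma F.1). Nodes are 0..K-1; edges is a set of
    directed pairs (i, j) meaning node i observes node j.

    Assumes every node has at least one in-neighbor (e.g. a weakly observable
    feedback graph), so that a dominating set exists.
    """
    out = {i: set() for i in range(K)}
    for i, j in edges:
        out[i].add(j)
    undominated = set(range(K))
    D = []
    while undominated:
        # Node covering the most still-undominated nodes.
        i = max(range(K), key=lambda v: len(out[v] & undominated))
        if not out[i] & undominated:
            raise ValueError("some node has no in-neighbor; no dominating set exists")
        D.append(i)
        undominated -= out[i]
    return D

# Toy usage: a 4-node graph where node 0 observes everyone and nodes 1,2,3 observe node 0.
edges = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (2, 0), (3, 0)}
print(greedy_dominating_set(4, edges))  # [0], a minimum dominating set here
```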