# Multi-objective Bandits: Optimizing the Generalized Gini Index

Róbert Busa-Fekete (Yahoo Research, New York, NY, USA), Balázs Szörényi (Research Group on AI, Hungarian Acad. Sci. and Univ. of Szeged, Szeged, Hungary; Technion Institute of Technology, Haifa, Israel), Paul Weng (SYSU-CMU JIE, SEIT, SYSU, Guangzhou, P.R. China; SYSU-CMU JRI, Shunde, P.R. China), Shie Mannor (Technion Institute of Technology, Haifa, Israel). *Equal contribution. Correspondence to: Paul Weng.

*Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).*

**Abstract.** We study the multi-armed bandit (MAB) problem where the agent receives vectorial feedback that encodes many possibly competing objectives to be optimized. The goal of the agent is to find a policy that optimizes these objectives simultaneously in a fair way. This multi-objective online optimization problem is formalized using the Generalized Gini Index (GGI) aggregation function. We propose an online gradient descent algorithm which exploits the convexity of the GGI aggregation function and controls the exploration carefully, achieving a distribution-free regret of $O(T^{-1/2})$ with high probability. We test our algorithm on synthetic data as well as on an electric battery control problem where the goal is to trade off the use of the different cells of a battery in order to balance their respective degradation rates.

## 1. Introduction

The multi-armed bandit (MAB) problem (or bandit problem) refers to an iterative decision-making problem in which an agent repeatedly chooses among $K$ options, metaphorically corresponding to pulling one of $K$ arms of a bandit machine. In each round, the agent receives a random payoff, which is a reward or a cost that depends on the arm being selected. The agent's goal is to optimize an evaluation metric, e.g., the error rate (expected percentage of times a suboptimal arm is played) or the cumulative regret (difference between the sum of payoffs obtained and the (expected) payoffs that could have been obtained by selecting the best arm in each round). In the stochastic multi-armed bandit setup, the payoffs are assumed to obey fixed distributions that can vary with the arms but do not change with time. To achieve the desired goal, the agent has to tackle the classical exploration/exploitation dilemma: it has to properly balance the pulling of arms that were found to yield low costs in earlier rounds and the selection of arms that have not yet been tested often enough (Auer et al., 2002a; Lai & Robbins, 1985).

The bandit setup has become the standard modeling framework for many practical applications, such as online advertisement (Slivkins, 2014) or medical treatment design (Press, 2009), to name a few. In these tasks, the feedback is formulated as a single real value. However, many real-world online learning problems are rather multi-objective, i.e., the feedback consists of a vectorial payoff. For example, in our motivating example, an electric battery control problem, the learner tries to discover the best battery controller, which balances the degradation rates of the battery cells (i.e., components of a battery), among a set of controllers while facing a stochastic power demand. Besides, several recently published studies consider multi-objective sequential decision problems under uncertainty (Drugan & Nowé, 2013; Roijers et al., 2013; Mahdavi et al., 2013).
In this paper, we formalize the multi-objective multi-armed bandit setting where the feedback received by the agent is a $D$-dimensional cost vector. The goal here is to be both efficient, i.e., minimize the cumulative cost for each objective, and fair, i.e., balance the different objectives. One natural way to ensure this is to try to find a cost vector on the Pareto front that is closest to the origin or to some other ideal point. A generalization of this approach (when using the infinity norm) is the Generalized Gini Index (GGI), a well-known inequality measure in economics (Weymark, 1981). GGI is convex, which suggests applying Online Convex Optimization (OCO) techniques (Hazan, 2016; Shalev-Shwartz, 2012). However, a direct application of these techniques may fail to optimize GGI under noise, because the objective can only be observed with a bias that is induced by the randomness of the cost vectors and by the fact that the performance is measured by the function value of the average cost instead of the average of the costs' function values. The solution we propose is an online learning algorithm based on Online Gradient Descent (OGD) (Zinkevich, 2003; Hazan, 2016) with additional exploration that enables us to control the bias of the objective function. We also show that its regret is almost optimal: up to a logarithmic factor, it matches the distribution-free lower bound of the stochastic bandit problem (Auer et al., 2002b), which naturally applies to our setup when the feedback is one-dimensional.

The paper is organized as follows: after we introduce the formal learning setup, we briefly recall the necessary notions from multi-objective optimization in Section 3. Next, in Section 4, GGI is introduced and some of its properties are described. In Sections 5 and 6, we present how to compute the optimal policy for GGI and define the regret notion. Section 7 contains our main results, where we define our OGD-based algorithm and analyze its regret. In Section 8, we test our algorithm and demonstrate its versatility in synthetic and real-world battery-control experiments. In Section 9, we provide a survey of related work, and finally we conclude the paper in Section 10.

## 2. Formal setup

The multi-armed or $K$-armed bandit problem is specified by real-valued random variables $X_1, \ldots, X_K$ associated, respectively, with $K$ arms (that we simply identify by the numbers $1, \ldots, K$). In each time step $t$, the online learner selects one arm and obtains a random sample of the corresponding distribution. These samples, which are called costs, are assumed to be independent of all previous actions and costs. (Our setup is motivated by a practical application where feedback is more naturally formulated in terms of cost. However, the stochastic bandit problem is generally based on rewards, which can easily be turned into costs by using the transformation $x \mapsto 1 - x$, assuming that the rewards are from $[0,1]$.) The goal of the learner can be defined in different ways, such as minimizing the sum of costs over time (Lai & Robbins, 1985; Auer et al., 2002a).

In the multi-objective multi-armed bandit (MO-MAB) problem, costs are not scalar real values but real vectors. More specifically, a $D$-objective $K$-armed bandit problem ($D \ge 2$, $K \ge 2$) is specified by $K$ real-valued multivariate random variables $\mathbf X_1, \ldots, \mathbf X_K$ over $[0,1]^D$. Let $\boldsymbol\mu_k = \mathbb E[\mathbf X_k]$ denote the expected vectorial cost of arm $k$, where $\boldsymbol\mu_k = (\mu_{k,1}, \ldots, \mu_{k,D})$. Furthermore, $\boldsymbol\mu$ denotes the matrix whose rows are the $\boldsymbol\mu_k$'s. In each time step the learner can select one of the arms and obtain a sample, which is a cost vector, from the corresponding distribution. Samples are also assumed to be independent over time and across the arms, but not necessarily across the components of the cost vectors.
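For concreteness, such an MO-MAB instance can be simulated in a few lines; the toy environment below is our own illustration (not code from the paper), using independent Bernoulli components as one convenient special case of cost distributions over $[0,1]^D$.

```python
import numpy as np

class MoMabBernoulli:
    """Toy D-objective, K-armed bandit with independent Bernoulli costs.

    means is a (K, D) matrix; means[k, d] is the expected cost of arm k
    on objective d, so every sample lies in {0, 1}^D, a subset of [0, 1]^D.
    """
    def __init__(self, means, rng=None):
        self.means = np.asarray(means)
        self.rng = rng if rng is not None else np.random.default_rng()

    def pull(self, k):
        # draw one D-dimensional cost vector for arm k
        return self.rng.binomial(1, self.means[k]).astype(float)

# K = 5 arms, D = 3 objectives, mean costs drawn uniformly at random
env = MoMabBernoulli(np.random.default_rng(0).uniform(size=(5, 3)))
print(env.pull(0))
```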
At time step $t$, $k_t$ denotes the index of the arm played by the learner and $\mathbf X^{(t)}_{k_t} = (X^{(t)}_{k_t,1}, \ldots, X^{(t)}_{k_t,D})$ the resulting cost vector. After playing $t$ time steps, the empirical estimate of the expected cost $\boldsymbol\mu_k$ of the $k$th arm is

$$\hat{\boldsymbol\mu}^{(t)}_k = \frac{1}{T_k(t)} \sum_{\tau=1}^{t} \mathbf X^{(\tau)}_{k_\tau}\, \mathbb 1(k_\tau = k) \tag{1}$$

where all operations are meant elementwise, $T_k(t)$ is the number of times the $k$th arm has been played up to time $t$ (i.e., $T_k(t) = \sum_{\tau=1}^{t} \mathbb 1(k_\tau = k)$), and $\mathbb 1(\cdot)$ is the indicator function.

## 3. Multi-objective optimization

In order to complete the MO-MAB setting, we need to introduce the notion of optimality of the arms. First, we introduce the Pareto dominance relation, defined as follows for any $\mathbf v, \mathbf v' \in \mathbb R^D$:

$$\mathbf v \preceq \mathbf v' \iff \forall d = 1, \ldots, D, \;\; v_d \le v'_d.$$

Let $\mathcal O \subseteq \mathbb R^D$ be a set of $D$-dimensional vectors. The Pareto front of $\mathcal O$, denoted $\mathcal O^*$, is the set of vectors $\mathbf v^* \in \mathcal O$ such that

$$\forall \mathbf v \in \mathcal O, \;\; \mathbf v \preceq \mathbf v^* \implies \mathbf v = \mathbf v^*.$$

In multi-objective optimization, one usually wants to compute the Pareto front, or search for a particular element of the Pareto front. In practice, it may be costly (and even infeasible, depending on the size of the solution space) to determine all the solutions of the Pareto front. One may then prefer to directly aim for a particular solution in the Pareto front. This problem is formalized as a single-objective optimization problem, using an aggregation function. An aggregation (or scalarizing) function, which is a non-decreasing function $\phi: \mathbb R^D \to \mathbb R$, allows every vector to receive a scalar value to be optimized. (A multivariate function $f: \mathbb R^D \to \mathbb R$ is said to be monotone (non-decreasing) if $\mathbf x \preceq \mathbf x'$ implies $f(\mathbf x) \le f(\mathbf x')$ for all $\mathbf x, \mathbf x' \in \mathbb R^D$.) The initial multi-objective problem is then rewritten as follows:

$$\min \; \phi(\mathbf v) \quad \text{s.t.} \quad \mathbf v \in \mathcal O. \tag{4}$$

A solution to this problem yields a particular solution on the Pareto front. Note that if $\phi$ is not strictly increasing in every component, some care is needed to ensure that the solution of (4) is on the Pareto front. Different aggregation functions can be used depending on the problem at hand, such as sum, weighted sum, min, max, (augmented) weighted Chebyshev norm (Steuer & Choo, 1983), Ordered Weighted Averages (OWA) (Yager, 1988), or Ordered Weighted Regret (OWR) (Ogryczak et al., 2011) and its weighted version (Ogryczak et al., 2013). In this study, we focus on the Generalized Gini Index (GGI) (Weymark, 1981), a special case of OWA.

## 4. Generalized Gini Index

For a given $n \in \mathbb N$, $[n]$ denotes the set $\{1, 2, \ldots, n\}$. The Generalized Gini Index (GGI) (Weymark, 1981) is defined as follows for a cost vector $\mathbf x = (x_1, \ldots, x_D) \in \mathbb R^D$:

$$G_{\mathbf w}(\mathbf x) = \sum_{d=1}^{D} w_d\, x_{\sigma(d)} = \mathbf w^\top \mathbf x_\sigma$$

where $\sigma \in \mathbb S_D$, which depends on $\mathbf x$, is the permutation that sorts the components of $\mathbf x$ in decreasing order, $\mathbf x_\sigma = (x_{\sigma(1)}, \ldots, x_{\sigma(D)})$ is the sorted vector, and the weights $w_d$ are assumed to be non-increasing, i.e., $w_1 \ge w_2 \ge \ldots \ge w_D$. Given this assumption,

$$G_{\mathbf w}(\mathbf x) = \max_{\pi \in \mathbb S_D} \mathbf w^\top \mathbf x_\pi = \max_{\pi \in \mathbb S_D} \mathbf w_\pi^\top \mathbf x$$

and GGI is therefore a piecewise-linear convex function. Figure 1 illustrates GGI on a bi-objective optimization task.
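Concretely, computing the GGI of a cost vector only requires sorting its components in decreasing order and taking a weighted sum. The sketch below (our own helper, not code from the paper) illustrates this with NumPy.

```python
import numpy as np

def ggi(x, w):
    """Generalized Gini Index of a cost vector x.

    w must be non-increasing (w[0] >= w[1] >= ...), which makes
    G_w a piecewise-linear convex function of x.
    """
    x_sorted = np.sort(x)[::-1]          # components in decreasing order
    return float(np.dot(w, x_sorted))

# Example with w = (1, 1/2): the balanced vector (1, 1) scores 1.5, while
# the unbalanced vector (2, 0) with the same total scores 2.0, so the
# balanced vector is preferred (lower GGI).
w = np.array([1.0, 0.5])
print(ggi(np.array([1.0, 1.0]), w), ggi(np.array([2.0, 0.0]), w))
```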
To better understand GGI, we introduce its formulation in terms of Lorenz vectors. The Lorenz vector of $\mathbf x$ is the vector $L(\mathbf x) = (L_1(\mathbf x), \ldots, L_D(\mathbf x))$ where $L_d(\mathbf x)$ is the sum of the $d$ largest components of $\mathbf x$. Then GGI can be rewritten as follows:

$$G_{\mathbf w}(\mathbf x) = \sum_{d=1}^{D} w'_d\, L_d(\mathbf x) \tag{5}$$

where $\mathbf w' = (w'_1, \ldots, w'_D)$ is the vector defined by $w'_d = w_d - w_{d+1}$ for all $d \in [D]$, with $w_{D+1} = 0$. Note that all the components of $\mathbf w'$ are nonnegative, as we assume that those of $\mathbf w$ are non-increasing.

GGI was originally introduced for quantifying the inequality of income distributions in economics. It is also known in statistics (Buczolich & Székely, 1989) as a special case of Weighted Average Ordered Sample statistics, which does not require that the weights be non-increasing and is therefore not necessarily convex. GGI has been characterized by Weymark (1981). It encodes both efficiency, as it is monotone with Pareto dominance, and fairness, as it is non-increasing with Pigou-Dalton transfers (Pigou, 1912; Dalton, 1920); these are two principles formulating natural requirements, which is an important reason why GGI became a well-established measure of balancedness. Informally, a Pigou-Dalton transfer amounts to increasing a lower-valued objective while decreasing another higher-valued objective by the same quantity, such that the order between the two objectives is not reversed. The effect of such a transfer is to balance a cost vector. Formally, GGI satisfies the following fairness property: for all $\mathbf x$ such that $x_i < x_j$ and all $\epsilon \in (0, x_j - x_i)$,

$$G_{\mathbf w}(\mathbf x + \epsilon \mathbf e_i - \epsilon \mathbf e_j) \le G_{\mathbf w}(\mathbf x)$$

where $\mathbf e_i$ and $\mathbf e_j$ are two vectors of the canonical basis. As a consequence, among vectors of equal sum, the best cost vector (w.r.t. GGI) is the one with equal values in all objectives, if feasible. Note that in this paper GGI is expressed in terms of costs and therefore lower GGI values are preferred.

*Figure 1. Point A is always preferred (w.r.t. GGI) to B due to Pareto dominance; A is always preferred to C due to the Pigou-Dalton transfer principle; depending on the weights of GGI, A may be preferred to D or the other way around; with $\mathbf w = (1, 1/2)$, A is preferred to D (points equivalent to A are on the dashed green line). The optimal mixed solution is the black dot.*

The original Gini index can be recovered as a special case of GGI by setting the weights as follows:

$$\forall d \in [D], \quad w_d = \frac{2(D - d) + 1}{D^2}. \tag{6}$$

This yields a nice graphical interpretation. For a fixed vector $\mathbf x$, the distribution of costs can be represented as a curve connecting the points $(0, 0), (1/D, L_1(\mathbf x)), (2/D, L_2(\mathbf x)), \ldots, (1, L_D(\mathbf x))$. An ideally fair distribution with an identical total cost is given by the straight line connecting $(0, 0), (1/D, L_D(\mathbf x)/D), (2/D, 2L_D(\mathbf x)/D), \ldots, (1, L_D(\mathbf x))$, which equally distributes the total cost over all components. Then $1 - G_{\mathbf w}(\mathbf x)/\bar x$ with the weights (6), where $\bar x = \sum_d x_d / D$, is equal to twice the area between the two curves.

From now on, to simplify the presentation, we focus on GGI with strictly decreasing weights in $[0,1]^D$, i.e., $d < d'$ implies $w_d > w_{d'}$. This means that GGI is strictly decreasing with Pigou-Dalton transfers and that all the components of $\mathbf w'$ are positive. Based on formulation (5), Ogryczak & Sliwinski (2003) showed that the GGI value of a vector $\mathbf x$ can be obtained by solving a linear program. We recall their result and define the linear-program-based formulation of GGI.

**Proposition 1.** *The GGI score $G_{\mathbf w}(\mathbf x)$ of a vector $\mathbf x$ is the optimal value of the following linear program:*

$$\min_{r_d,\, b_{j,d}} \;\; \sum_{d=1}^{D} w'_d \Big( d\, r_d + \sum_{j=1}^{D} b_{j,d} \Big)$$
$$\text{subject to} \quad r_d + b_{j,d} \ge x_j \quad \forall j, d \in [D], \qquad b_{j,d} \ge 0 \quad \forall j, d \in [D].$$
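The linear program of Proposition 1 is small ($D^2 + D$ variables) and can be checked numerically against the direct sorting-based definition. The sketch below is our own minimal illustration using `scipy.optimize.linprog`; the variable ordering and helper names are ours, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ggi_lp(x, w):
    """GGI of x via the linear program of Proposition 1 (sketch).

    Variables: r_1..r_D (free) and b_{j,d} >= 0 for j, d in [D].
    Objective: sum_d w'_d * (d * r_d + sum_j b_{j,d}),
    subject to r_d + b_{j,d} >= x_j for all j, d.
    """
    D = len(x)
    w_prime = w - np.append(w[1:], 0.0)     # w'_d = w_d - w_{d+1}, w_{D+1} = 0
    # decision vector z = [r_1..r_D, b_{1,1}..b_{1,D}, ..., b_{D,1}..b_{D,D}]
    c = np.concatenate([w_prime * np.arange(1, D + 1),   # cost of r_d is d * w'_d
                        np.tile(w_prime, D)])            # cost of b_{j,d} is w'_d
    A_ub, b_ub = [], []
    for j in range(D):
        for d in range(D):
            row = np.zeros(D + D * D)
            row[d] = -1.0                  # -r_d
            row[D + j * D + d] = -1.0      # -b_{j,d}
            A_ub.append(row)               # -r_d - b_{j,d} <= -x_j
            b_ub.append(-x[j])
    bounds = [(None, None)] * D + [(0, None)] * (D * D)   # r free, b >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun

x = np.array([0.2, 0.9, 0.4])
w = np.array([1.0, 0.5, 0.25])
# should match the sorting-based value 1.0*0.9 + 0.5*0.4 + 0.25*0.2 = 1.15
print(ggi_lp(x, w))
```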
## 5. Optimal policy

In the single-objective case, arms are compared in terms of their means, which induce a total order over the arms. In the multi-objective setting, we use the GGI criterion to compare arms. One can compute the GGI score of each arm $k$ as $G_{\mathbf w}(\boldsymbol\mu_k)$ if its vectorial mean $\boldsymbol\mu_k$ is known. Then an optimal arm $k^*$ minimizes the GGI score, i.e., $k^* \in \operatorname{argmin}_{k \in [K]} G_{\mathbf w}(\boldsymbol\mu_k)$.

However, in this work we also consider mixed strategies, defined over $\mathcal A = \Delta_K = \{\boldsymbol\alpha \in \mathbb R^K \mid \sum_{k=1}^K \alpha_k = 1,\; \boldsymbol\alpha \ge 0\}$, because they may allow the learner to reach lower GGI values than any fixed arm (see Figure 1). A policy parameterized by $\boldsymbol\alpha$ chooses arm $k$ with probability $\alpha_k$. An optimal mixed policy can then be obtained as follows:

$$\boldsymbol\alpha^* \in \operatorname*{argmin}_{\boldsymbol\alpha \in \mathcal A} \; G_{\mathbf w}\Big(\sum_{k=1}^{K} \alpha_k \boldsymbol\mu_k\Big). \tag{7}$$

In general, $G_{\mathbf w}\big(\sum_k \alpha^*_k \boldsymbol\mu_k\big) \le G_{\mathbf w}(\boldsymbol\mu_{k^*})$, therefore using mixed strategies is justified in our setting. Based on Proposition 1, if the arms' means were known, $\boldsymbol\alpha^*$ could be computed by solving the following linear program:

$$\min_{\boldsymbol\alpha,\, r_d,\, b_{j,d}} \;\; \sum_{d=1}^{D} w'_d \Big( d\, r_d + \sum_{j=1}^{D} b_{j,d} \Big) \tag{8}$$
$$\text{subject to} \quad r_d + b_{j,d} \ge \sum_{k=1}^{K} \alpha_k \mu_{k,j} \quad \forall j, d \in [D], \qquad \sum_{k=1}^{K} \alpha_k = 1, \quad \boldsymbol\alpha \ge 0, \qquad b_{j,d} \ge 0 \quad \forall j, d \in [D].$$

## 6. Regret

After playing $T$ rounds, the average cost of the learner can be written as

$$\bar{\mathbf X}^{(T)} = \frac{1}{T}\sum_{t=1}^{T} \mathbf X^{(t)}_{k_t}.$$

Our goal is to minimize the GGI of this quantity. Accordingly, we expect the learner to collect costs so that their average, in terms of GGI, i.e., $G_{\mathbf w}(\bar{\mathbf X}^{(T)})$, is as small as possible. As shown in the previous section, for a given bandit instance with arm means $\boldsymbol\mu_1, \ldots, \boldsymbol\mu_K$, the optimal policy achieves $G^*_{\mathbf w} = G_{\mathbf w}\big(\sum_k \alpha^*_k \boldsymbol\mu_k\big)$ if the randomness of the costs is not taken into account. We consider the performance of the optimal policy as a reference value, and define the regret of the learner as the difference between the GGI of its average cost and the GGI of the optimal policy:

$$R^{(T)} = G_{\mathbf w}\big(\bar{\mathbf X}^{(T)}\big) - G^*_{\mathbf w}. \tag{9}$$

Note that GGI is a continuous function, therefore if the learner follows a policy $\boldsymbol\alpha^{(T)}$ that approaches $\boldsymbol\alpha^*$ as $T \to \infty$, then the regret vanishes. We shall also investigate a slightly different regret notion, called the pseudo-regret:

$$\bar R^{(T)} = G_{\mathbf w}\Big(\sum_{k=1}^{K} \bar\alpha^{(T)}_k \boldsymbol\mu_k\Big) - G^*_{\mathbf w} \tag{10}$$

where $\bar{\boldsymbol\alpha}^{(T)} = \frac{1}{T}\sum_{t=1}^{T} \boldsymbol\alpha^{(t)}$. We will show that the difference between the regret and the pseudo-regret of our algorithm is $O(T^{-1/2})$ with high probability; thus a high-probability regret bound of $O(T^{-1/2})$ for one of them implies a regret bound of $O(T^{-1/2})$ for the other.

**Remark 1.** The single-objective stochastic multi-armed bandit problem (Auer et al., 2002a; Bubeck & Cesa-Bianchi, 2012) can be naturally accommodated in our setup with $D = 1$ and $w_1 = 1$. In this case, $\boldsymbol\alpha^*$ implements the pure strategy that always pulls the optimal arm, i.e., the arm with the lowest mean cost, denoted $\mu_{k^*}$. Thus $G^*_{\mathbf w} = \mu_{k^*}$ in this case. Assuming that the learner plays only pure strategies, the pseudo-regret defined in (10) can be written as

$$\bar R^{(T)} = \frac{1}{T}\sum_{t=1}^{T}\big(\mu_{k_t} - \mu_{k^*}\big) = \frac{1}{T}\sum_{k=1}^{K} T_k(T)\,\big(\mu_k - \mu_{k^*}\big),$$

which coincides with the single-objective pseudo-regret (see, for example, Auer et al. (2002a)), apart from the fact that we work with costs instead of rewards. Therefore, our notion of multi-objective pseudo-regret can be seen as a generalization of the single-objective pseudo-regret.

**Remark 2.** A single-objective bandit algorithm can be applied in our multi-objective setting by transforming the multivariate payoff $\mathbf X^{(t)}_{k_t}$ into a single real value $G_{\mathbf w}(\mathbf X^{(t)}_{k_t})$ in every time step $t$. However, in general, this approach fails to optimize GGI as formulated in (7) due to GGI's non-linearity, even if the optimal policy is a pure strategy. Moreover, applying a multi-objective bandit algorithm such as MO-UCB (Drugan & Nowé, 2013) would be inefficient, as such algorithms were developed to find all Pareto-optimal arms and then to sample them uniformly at random. This approach may be reasonable only when $\alpha^*_k = 1/\#\mathcal K^*$ for all $k \in \mathcal K^*$, where $\mathcal K^* = \{k \in [K] : \alpha^*_k > 0\}$ contains all the Pareto-optimal arms, which is clearly not the case for every MO-MAB instance.
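To make these definitions concrete, the toy computation below (our own example, not from the paper) evaluates the GGI of the best pure arm, the GGI of a mixed policy, and the pseudo-regret (10) of an average policy, on a two-arm instance for which the uniform mixture is the optimal mixed policy.

```python
import numpy as np

def ggi(x, w):
    return float(np.dot(w, np.sort(x)[::-1]))

w = np.array([1.0, 0.5])
mu = np.array([[1.0, 0.0],     # arm 1 mean cost vector
               [0.0, 1.0]])    # arm 2 mean cost vector

best_pure = min(ggi(m, w) for m in mu)        # every pure arm scores 1.0
alpha_star = np.array([0.5, 0.5])             # optimal mixed policy for this instance
g_star = ggi(alpha_star @ mu, w)              # 0.75 < best pure arm

alpha_bar = np.array([0.8, 0.2])              # average policy played by some learner
pseudo_regret = ggi(alpha_bar @ mu, w) - g_star   # eq. (10): 0.9 - 0.75 = 0.15
print(best_pure, g_star, pseudo_regret)
```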
## 7. Learning algorithm based on OCO

In this section, we propose an online learning algorithm, called MO-OGDE, to optimize the regret defined in the previous section. Our method exploits the convexity of the GGI operator and formalizes the policy search problem as an online convex optimization problem, which is solved by the Online Gradient Descent (OGD) algorithm (Zinkevich, 2003) with projection onto a gradually expanding truncated probability simplex. We then provide a regret analysis of our method. Due to space limitations, the proofs are deferred to the appendix.

**Algorithm 1** MO-OGDE($\delta$)

1: Pull each arm once
2: Set $\boldsymbol\alpha^{(K+1)} = (1/K, \ldots, 1/K)$
3: **for** rounds $t = K+1, K+2, \ldots$ **do**
4: &nbsp;&nbsp;Choose an arm $k_t$ according to $\boldsymbol\alpha^{(t)}$
5: &nbsp;&nbsp;Observe the sample $\mathbf X^{(t)}_{k_t}$ and compute $f^{(t)}$
6: &nbsp;&nbsp;Set the step-size and exploration parameter $\eta_t$
7: &nbsp;&nbsp;$\boldsymbol\alpha^{(t+1)} = \text{OGDEstep}(\boldsymbol\alpha^{(t)}, \eta_t, \nabla f^{(t)})$
**return** $\bar{\boldsymbol\alpha}^{(T)} = \frac{1}{T}\sum_{t=1}^{T}\boldsymbol\alpha^{(t)}$

### 7.1. MO-OGDE

Our objective function to be minimized can be viewed as a function of $\boldsymbol\alpha$, i.e., $f(\boldsymbol\alpha) = G_{\mathbf w}(\boldsymbol\mu\boldsymbol\alpha)$, where the matrix $\boldsymbol\mu = (\boldsymbol\mu_1, \ldots, \boldsymbol\mu_K)$ contains the means of the arm distributions as its columns. Note that the convexity of GGI implies the convexity of $f(\boldsymbol\alpha)$. Since we play with mixed strategies, the domain of our optimization problem is the $K$-dimensional probability simplex $\Delta_K = \{\boldsymbol\alpha \in \mathbb R^K \mid \sum_{k=1}^K \alpha_k = 1,\; \boldsymbol\alpha \ge 0\}$, which is a convex set. The gradient of $f(\boldsymbol\alpha)$ with respect to $\alpha_k$ can be computed as

$$\frac{\partial f(\boldsymbol\alpha)}{\partial \alpha_k} = \sum_{d=1}^{D} w_d\, \mu_{k,\sigma(d)}$$

where $\sigma$ is the permutation that sorts the components of $\boldsymbol\mu\boldsymbol\alpha$ in decreasing order. The means $\boldsymbol\mu_k$ are not known, but they can be estimated from the costs observed so far, as given in (1). The objective function based on the empirical mean estimates is denoted by $f^{(t)}(\boldsymbol\alpha) = G_{\mathbf w}(\hat{\boldsymbol\mu}^{(t)}\boldsymbol\alpha)$, where $\hat{\boldsymbol\mu}^{(t)} = (\hat{\boldsymbol\mu}^{(t)}_1, \ldots, \hat{\boldsymbol\mu}^{(t)}_K)$ contains the empirical estimates of $\boldsymbol\mu_1, \ldots, \boldsymbol\mu_K$ at time step $t$ as its columns.

Our Multi-Objective Online Gradient Descent algorithm with Exploration, which we shall refer to as MO-OGDE, is defined in Algorithm 1. It is based on the well-known Online Gradient Descent algorithm (Zinkevich, 2003; Hazan, 2016), which carries out a gradient step and a projection back onto the domain in each round. MO-OGDE first pulls each arm once as an initialization step. Then, in each iteration, it chooses arm $k$ with probability $\alpha^{(t)}_k$ and computes $f^{(t)}$ based on the empirical mean estimates. Next, it carries out the gradient step based on $\nabla f^{(t)}(\boldsymbol\alpha^{(t)})$ with a step size $\eta_t$ as defined in line 6, and projects onto the nearest point of the truncated probability simplex

$$\Delta_K^\beta = \Big\{\boldsymbol\alpha \in \mathbb R^K \;\Big|\; \sum_{k=1}^K \alpha_k = 1, \;\; \alpha_k \ge \beta/K \;\; \forall k \in [K]\Big\}$$

with $\beta = \eta_t$. The gradient step and projection are carried out by

$$\text{OGDEstep}(\boldsymbol\alpha, \eta, \mathbf g) = \Pi_{\Delta_K^\eta}\big(\boldsymbol\alpha - \eta\, \mathbf g(\boldsymbol\alpha)\big). \tag{11}$$

The key ingredient of MO-OGDE is the projection step onto the truncated probability simplex $\Delta_K^{\eta_t}$, which ensures that $\alpha^{(t)}_k \ge \eta_t/K$ for every $k \in [K]$ and $t \in [T]$. This forced exploration is indispensable in our setup, since the objective function $f(\boldsymbol\alpha)$ depends on the means of the arm distributions, which are not known. To control the difference between $f(\boldsymbol\alpha)$ and $f^{(t)}(\boldsymbol\alpha)$, we need good estimates of $\boldsymbol\mu_1, \ldots, \boldsymbol\mu_K$, which are obtained via forced exploration. This is why the original OGD algorithm (Hazan, 2016), in general, fails to optimize GGI in our setting. Our analysis, presented in the next section, focuses on the interplay between the forced exploration and the bias of the loss functions $f^{(1)}(\boldsymbol\alpha), \ldots, f^{(T)}(\boldsymbol\alpha)$.
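The OGDE step of (11) only needs a Euclidean projection onto the truncated simplex $\Delta_K^\beta$. Since $\Delta_K^\beta$ is the standard simplex shifted by $(\beta/K)\mathbf 1$ and shrunk by the factor $1-\beta$, this projection reduces to the usual sorting-based simplex projection. The following sketch is our own implementation (not the authors' code), assuming $0 \le \beta < 1$.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the standard probability simplex
    (sorting-based algorithm, O(K log K))."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_truncated_simplex(v, beta):
    """Projection onto {alpha : sum(alpha) = 1, alpha_k >= beta/K}, beta < 1.

    The truncated simplex equals (beta/K)*1 + (1-beta)*Delta_K, and the
    Euclidean projection commutes with this shift-and-scale.
    """
    shift = beta / len(v)
    return shift + (1.0 - beta) * project_simplex((v - shift) / (1.0 - beta))

def ogde_step(alpha, eta, grad):
    """One OGDEstep as in (11): gradient step, then projection onto
    the truncated simplex with beta = eta."""
    return project_truncated_simplex(alpha - eta * grad, eta)

def ggi_gradient(alpha, mu_hat, w):
    """(Sub)gradient of f^(t) at alpha, from the empirical mean matrix
    mu_hat of shape (K, D): d f / d alpha_k = sum_d w_d * mu_hat[k, sigma(d)]."""
    order = np.argsort(alpha @ mu_hat)[::-1]   # components of mu_hat^T alpha, decreasing
    return mu_hat[:, order] @ w
```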
### 7.2. Regret analysis

The technical difficulty in optimizing GGI in an online fashion is that, in general, $f^{(t)}(\boldsymbol\alpha) - f(\boldsymbol\alpha)$ (that is, $G_{\mathbf w}(\hat{\boldsymbol\mu}^{(t)}\boldsymbol\alpha) - G_{\mathbf w}(\boldsymbol\mu\boldsymbol\alpha)$) is of order $\big(\min_k T_k(t)\big)^{-1/2}$, which, unless all the arms are sampled a linear number of times, incurs a regret of the same order of magnitude, which is typically too large. Nevertheless, an optimal policy determines a convex combination of several arms in the form of $\boldsymbol\mu\boldsymbol\alpha^*$, which is the optimal cost vector in terms of GGI given the arm distributions at hand. Let us denote $\mathcal K^* = \{k \in [K] : \alpha^*_k > 0\}$. Note that $\#\mathcal K^* \le D$. Moreover, arms in $[K] \setminus \mathcal K^*$ with an individual GGI lower than arms in $\mathcal K^*$ do not necessarily participate in the optimal combination. Obviously, an algorithm that achieves an $O(T^{-1/2})$ regret needs to pull the arms in $\mathcal K^*$ a linear number of times and, at the same time, estimate the means of the arms in $[K] \setminus \mathcal K^*$ with a certain precision.

The main idea of the approach proposed in this paper is, despite the above remarks, to apply an online convex optimization algorithm to the current estimate $f^{(t)}(\boldsymbol\alpha) = G_{\mathbf w}(\hat{\boldsymbol\mu}^{(t)}\boldsymbol\alpha)$ of the objective function $f(\boldsymbol\alpha) = G_{\mathbf w}(\boldsymbol\mu\boldsymbol\alpha)$, use forced exploration of order $T^{-1/2}$, and finally show that the estimate $f^{(t)}$ of the objective function has error $O(T^{-1/2})$ along the trajectory generated by the online convex optimization algorithm. In particular, we show that

$$f\Big(\frac{1}{T}\sum_{t=1}^{T}\boldsymbol\alpha^{(t)}\Big) \le \frac{1}{T}\sum_{t=1}^{T} f^{(t)}(\boldsymbol\alpha^{(t)}) + O(T^{-1/2}).$$

The intuitive reason for this is that $\big\|\frac{1}{T}\sum_{t=1}^{T}\hat{\boldsymbol\mu}^{(t)}\boldsymbol\alpha^{(t)} - \frac{1}{T}\sum_{t=1}^{T}\boldsymbol\mu\boldsymbol\alpha^{(t)}\big\| = O(T^{-1/2})$, which is based on the following observation: an arm in $\mathcal K^*$ is either pulled often, in which case its mean estimate is accurate enough, or an arm in $[K]\setminus\mathcal K^*$ is pulled only a few times, in which case $\sum_{t=1}^{T}\alpha^{(t)}_k$ is small enough to make the poor accuracy of its mean estimate insignificant. Below we make this argument formal by proving the following theorem.

**Theorem 1.** *With probability at least $1-\delta$, for any large enough $T$,*

$$\bar R^{(T)} = O\Big(L\sqrt{6D\ln^3(8DKT^2/\delta)\,/\,T}\Big),$$

*where $L$ is the Lipschitz constant of $G_{\mathbf w}$.*

Its proof follows four subsequent steps, presented next.

**Step 1.** As a point of departure, we analyze OGDEstep in (11). In particular, we bound the regret that is commonly used in the OCO setup for $f^{(t)}(\boldsymbol\alpha^{(t)})$.

**Lemma 1.** *Define $(\boldsymbol\alpha^{(t)})_{t=1,\ldots,T}$ as $\boldsymbol\alpha^{(1)} = (1/K, \ldots, 1/K)$ and $\boldsymbol\alpha^{(t+1)} = \text{OGDEstep}(\boldsymbol\alpha^{(t)}, \eta_t, \nabla f^{(t)})$ with $\eta_1, \ldots, \eta_T \in [0,1]$. Then, for all $T \ge 1$ and any $\boldsymbol\alpha \in \Delta_K$,*

$$\sum_{t=1}^{T} f^{(t)}(\boldsymbol\alpha^{(t)}) - \sum_{t=1}^{T} f^{(t)}(\boldsymbol\alpha) = O\Big(\frac{1}{\eta_T} + G^2\sum_{t=1}^{T}\eta_t\Big),$$

*where $\sup_{\boldsymbol\alpha \in \Delta_K}\|\nabla f^{(t)}(\boldsymbol\alpha)\| \le G \le \sqrt{KD}$ for all $t \in [T]$.*

The proof of Lemma 1 is presented in Appendix A. If the projection were carried out onto $\Delta_K$ as in the original OGD algorithm instead of the truncated simplex $\Delta_K^{\eta_t}$, the regret bound in Lemma 1 would improve only by a constant factor (see Theorem 3.1 in Hazan (2016)). As a side remark, note that Lemma 1 holds for arbitrary convex functions, since only the convexity of $f^{(t)}(\boldsymbol\alpha)$ is used in the proof.

**Step 2.** Next, we show that $f^{(t)}(\boldsymbol\alpha)$ converges to $f(\boldsymbol\alpha)$ as fast as $O(T^{-1/2})$ along the trajectory $(\boldsymbol\alpha^{(t)})_{t=1,\ldots,T}$ generated by MO-OGDE.

**Proposition 2.** *With probability at least $1 - 2(DT+1)K\delta$, the estimation error of $f^{(t)}$ along the trajectory $(\boldsymbol\alpha^{(t)})_{t=1,\ldots,T}$ is of order $\sqrt{6D(1+\ln^2 T)\ln(2/\delta)}\,/\sqrt{T}$.*

The proof of Proposition 2 is deferred to Appendix B. Proposition 2, combined with the fact that $G_{\mathbf w}(\hat{\boldsymbol\mu}^{(t)}\boldsymbol\alpha^{(t)}) = f^{(t)}(\boldsymbol\alpha^{(t)})$ and the convexity of $G_{\mathbf w}$, implies the following result.

**Corollary 1.** *With probability at least $1 - 2(DT+1)K\delta$,*

$$f\Big(\frac{1}{T}\sum_{t=1}^{T}\boldsymbol\alpha^{(t)}\Big) \le \frac{1}{T}\sum_{t=1}^{T} f^{(t)}(\boldsymbol\alpha^{(t)}) + O\Big(\sqrt{6D(1+\ln^2 T)\ln(2/\delta)}\,/\sqrt{T}\Big).$$

**Step 3.** Next, we provide a bound for the pseudo-regret of MO-OGDE by using Lemma 1 and Corollary 1. To this end, we introduce some further notation. First of all, let $\mathcal A^*$ denote the set of all minimum points of $f$ over $\Delta_K$.
As we show later in Lemma 3, $\mathcal A^*$ is a convex polytope, and thus the set $\mathrm{ext}(\mathcal A^*)$ of its extreme points is finite.

**Proposition 3.** *With probability at least $1 - 4DT^2K\delta$, the pseudo-regret of MO-OGDE is bounded by a term of order $\sqrt{6D(1+\ln^2 T)\ln(2/\delta)}\,/\sqrt{T}$, plus terms involving $\chi_1(t) = \mathbb 1(t \le 1) + \mathbb 1(t > 1)\cdot\big(t\,a_0/(2\,|\mathrm{ext}(\mathcal A^*)|\cdots)\big)$, where $a_0 = \min_{\boldsymbol\alpha \in \mathrm{ext}(\mathcal A^*)}\, \min_{k:\, \alpha_k > 0} \alpha_k$ and $\Delta_g = \inf_{\boldsymbol\alpha \in \Delta_K \setminus \cdots}$ is a gap parameter.*

The proof is deferred to Appendix C. In the proof, we first decompose the regret into various terms that can be upper-bounded based on Lemma 1 and Corollary 1. This implies that the average policy $\frac{1}{T}\sum_{t=1}^{T}\boldsymbol\alpha^{(t)}$ gets arbitrarily close to $\mathcal A^*$ if $T$ is big enough because, as we show later in Lemma 3, $\Delta_g$ is strictly positive (see Appendix D). As a consequence, for a big enough $T > 1$, the difference between $f^{(t)}(\boldsymbol\alpha)$ and $f(\boldsymbol\alpha)$ vanishes as fast as $O(T^{-1/2})$, which is crucial for a regret bound of order $T^{-1/2}$.

**Step 4.** Finally, Theorem 1 follows from Proposition 3 by simplifying the right-hand side. The calculation is presented in Appendix E.

### 7.3. Regret vs. pseudo-regret

Next, we upper-bound the difference between the regret defined in (9) and the pseudo-regret defined in (10). To that end, we first upper-bound the difference between $T_k(t)$ and $\sum_{\tau=1}^{t}\alpha^{(\tau)}_k$.

**Claim 1.** *For any $t = 1, 2, \ldots$ and any $k = 1, \ldots, K$ it holds that*

$$\mathbb P\Big[\Big|T_k(t) - \sum_{\tau=1}^{t}\alpha^{(\tau)}_k\Big| \ge \sqrt{2t\ln(2/\delta)}\Big] \le \delta.$$

*Proof:* Since $\mathbb P[k_\tau = k] = \alpha^{(\tau)}_k$, the process $T_k(t) - \sum_{\tau=1}^{t}\alpha^{(\tau)}_k = \sum_{\tau=1}^{t}\big[\mathbb 1(k_\tau = k) - \alpha^{(\tau)}_k\big]$ is a martingale. Besides, $|\mathbb 1(k_\tau = k) - \alpha^{(\tau)}_k| \le 1$ by construction. The claim then follows from Azuma's inequality. ∎

Based on Claim 1 and Proposition 2, we upper-bound the difference between the pseudo-regret and the regret of MO-OGDE.

**Corollary 2.** *With probability at least $1-\delta$,*

$$\big|R^{(T)} - \bar R^{(T)}\big| = O\Big(L\sqrt{12D\ln(4(DT+1)/\delta)}\,/\sqrt{T}\Big).$$

The proof of Corollary 2 is deferred to Appendix F. According to Corollary 2, the difference between the regret and the pseudo-regret is $O(T^{-1/2})$ with high probability; hence Theorem 1 implies an $O(T^{-1/2})$ bound for the regret of MO-OGDE as well.

## 8. Experiments

To test our algorithm, we carried out two sets of experiments. In the first, we generated synthetic data from multi-objective bandit instances with known parameters. In this way, we could compute the pseudo-regret (10) and thus investigate the empirical performance of the algorithms. In the second set of experiments, we ran our algorithm on a complex multi-objective online optimization problem, namely an electric battery control problem. Before presenting those experiments, we introduce another algorithm that serves as a baseline.

### 8.1. A baseline method

In the previous section, we introduced a gradient-based approach that uses the mean estimates to approximate the gradient of the objective function. Nevertheless, using the mean estimates, the optimal policy can also be directly approximated by solving the linear program given in (8). We use the same exploration as MO-OGDE (see line 6 of Algorithm 1). More concretely, the learner solves the following linear program in each time step $t$:

$$\min_{\boldsymbol\alpha,\, r_d,\, b_{j,d}} \;\; \sum_{d=1}^{D} w'_d \Big( d\, r_d + \sum_{j=1}^{D} b_{j,d} \Big)$$
$$\text{subject to} \quad r_d + b_{j,d} \ge \sum_{k=1}^{K} \alpha_k \hat\mu^{(t)}_{k,j} \quad \forall j, d \in [D], \qquad \sum_{k=1}^{K}\alpha_k = 1, \quad \alpha_k \ge \eta_t/K \;\; \forall k \in [K], \qquad b_{j,d} \ge 0 \quad \forall j, d \in [D].$$

Note that the solution of the learner's program above, in terms of $\boldsymbol\alpha$, is in $\Delta_K^{\eta_t}$. We refer to this algorithm as MO-LP. This approach is computationally expensive, since a linear program needs to be solved in each time step. However, the policy of each step is always optimal with respect to the mean estimates, restricted to the truncated simplex $\Delta_K^{\eta_t}$, unlike with the gradient descent method.
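As a concrete illustration, one MO-LP step can be written as a small linear program over the variables $(\boldsymbol\alpha, r, b)$. The sketch below is our own illustration with `scipy.optimize.linprog` (not the authors' code); it returns the policy $\boldsymbol\alpha \in \Delta_K^{\eta_t}$ that is GGI-optimal for the current mean estimates.

```python
import numpy as np
from scipy.optimize import linprog

def mo_lp_step(mu_hat, w, eta):
    """One step of the MO-LP baseline (sketch).

    mu_hat : (K, D) empirical mean cost matrix at time t
    w      : non-increasing GGI weights, shape (D,)
    eta    : exploration parameter eta_t; enforces alpha_k >= eta / K
    Returns the GGI-optimal mixed policy alpha on the truncated simplex.
    """
    K, D = mu_hat.shape
    w_prime = w - np.append(w[1:], 0.0)
    # decision vector z = [alpha (K), r (D), b (D*D, indexed b[j, d])]
    c = np.concatenate([np.zeros(K),
                        w_prime * np.arange(1, D + 1),
                        np.tile(w_prime, D)])
    A_ub, b_ub = [], []
    for j in range(D):
        for d in range(D):
            row = np.zeros(K + D + D * D)
            row[:K] = mu_hat[:, j]           # sum_k alpha_k * mu_hat[k, j]
            row[K + d] = -1.0                # -r_d
            row[K + D + j * D + d] = -1.0    # -b_{j,d}
            A_ub.append(row)                 # <= 0, i.e. r_d + b_{j,d} >= (mu_hat^T alpha)_j
            b_ub.append(0.0)
    A_eq = np.zeros((1, K + D + D * D))
    A_eq[0, :K] = 1.0                        # sum_k alpha_k = 1
    bounds = ([(eta / K, 1.0)] * K           # forced exploration
              + [(None, None)] * D           # r free
              + [(0.0, None)] * (D * D))     # b >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.array([1.0]), bounds=bounds)
    return res.x[:K]

mu_hat = np.array([[0.9, 0.1],
                   [0.1, 0.9],
                   [0.6, 0.6]])
print(mo_lp_step(mu_hat, np.array([1.0, 0.5]), eta=0.1))
```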
### 8.2. Synthetic experiments

We generated random multi-objective bandit instances for which each component of the multivariate cost distributions obeys a Bernoulli distribution with various parameters. The parameters of each Bernoulli distribution are drawn uniformly at random from $[0,1]$, independently from each other. The number of arms $K$ was set to $\{5, 20\}$ and the dimension of the cost distribution was taken from $D \in \{5, 10\}$. The weight vector $\mathbf w$ of GGI was set to $w_d = 1/2^{d-1}$. Since the parameters of the bandit instance are known, the regret defined in Section 6 can be computed. We ran the MO-OGDE and MO-LP algorithms with 100 repetitions. The multi-objective bandit instance was regenerated after each run. The regrets of the two algorithms, averaged over the repetitions, are plotted in Figure 2 along with error bars.

*Figure 2. The regret of MO-LP and MO-OGDE. The regret and its error are computed based on 100 repetitions and plotted in terms of the number of rounds. The dimension of the arm distributions was set to $D \in \{5, 10\}$, which is indicated in the title of the panels.*

The results reveal some general trends. First, the average regrets of both algorithms converge to zero. Second, the MO-LP algorithm outperforms the gradient descent algorithm for a small number of rounds, typically $T < 5000$, on the more complex bandit instances ($K = 20$). This might be explained by the fact that MO-LP solves a linear program to estimate $\boldsymbol\alpha^*$, whereas MO-OGDE minimizes the same objective using a gradient descent approach with projection, which might achieve slower convergence in terms of regret; nevertheless, its computational time is significantly lower than that of the baseline method.

### 8.3. Battery control task

We also tried our algorithms on a more realistic domain: the cell balancing problem. As the performance profiles of battery cells, the subcomponents of an electric battery, may vary due to small physical and manufacturing differences, efficient balancing of those cells is needed for better performance and longer battery life. We model this problem as a MO-MAB in which the arms are different cell control strategies and the goal is to balance several objectives: state-of-charge (SOC), temperature, and aging. More concretely, the learner loops over the following two steps: (1) she chooses a control strategy for a short duration, and (2) she observes its effect on the objectives (due to stochastic electric consumption). For the technical details of this experiment, see Appendix G. We tackled this problem as a GGI optimization problem. The results (averaged over 100 runs) are presented in Figure 3, where we evaluate MO-OGDE vs. MO-LP. The dashed green lines represent the regrets of playing fixed deterministic arms. Although MO-OGDE and MO-LP both learn to play a mixed policy that is considerably better than any individual arm, MO-OGDE is computationally much more efficient.

*Figure 3. The regret of MO-OGDE and MO-LP on the battery control task. The regret is averaged over 100 repetitions and plotted in terms of the number of rounds. The dimension of the arm distributions was $D = 12$.*

## 9. Related work

The single-objective MAB problem has been intensively studied, especially in recent years (Bubeck & Cesa-Bianchi, 2012); nevertheless, there is only a very limited amount of work concerning the multi-objective setting. To the best of our knowledge, Drugan & Nowé (2013) were the first to consider the multi-objective multi-armed bandit problem in a regret optimization framework with a stochastic assumption.
Their work consists of extending the UCB algorithm (Auer et al., 2002a) so as to handle multi-dimensional feedback vectors with the goal of determining all arms on the Pareto front. Azar et al. (2014) investigated a sequential decision-making problem with vectorial feedback. In their setup, the agent is allowed to choose from a finite set of actions and then observes the vectorial feedback for each action; thus it is a full-information setup, unlike ours. Moreover, the feedback is non-stochastic in their setup, as it is chosen by an adversary. They propose an algorithm that can handle a general class of aggregation functions, such as the set of bounded-domain, continuous, Lipschitz and quasi-concave functions.

In the online convex optimization setup with multiple objectives (Mahdavi et al., 2013), the learner's forecast $\mathbf x^{(t)}$ is evaluated in terms of multiple convex loss functions $f^{(t)}_0(\mathbf x), f^{(t)}_1(\mathbf x), \ldots, f^{(t)}_K(\mathbf x)$ in each time step $t$. The goal of the learner is then to minimize $\sum_{\tau \le t} f^{(\tau)}_0(\mathbf x^{(\tau)})$ while keeping the other objectives below some predefined thresholds, i.e., $\frac{1}{t}\sum_{\tau \le t} f^{(\tau)}_i(\mathbf x^{(\tau)}) \le \gamma_i$ for all $i \in [K]$. Note that, with linear loss functions, the multiple-objective convex optimization setup boils down to linear optimization with stochastic constraints, and thus it could be applied to solve the linear program given in Section 5, whose solution is the optimal policy in our setup. For doing this, however, each linear stochastic constraint needs to be observed, whereas we only assume bandit information.

In the approachability problem (Mannor et al., 2014; 2009; Abernethy et al., 2011), there are two players, say A and B. Players A and B choose actions from compact convex sets $\mathcal X \subseteq \mathbb R^K$ and $\mathcal Y \subseteq \mathbb R^D$, respectively. The feedback to the players is computed by a function $u: \mathcal X \times \mathcal Y \to \mathbb R^p$. A given convex set $S \subseteq \mathbb R^p$ is known to both players. Player A wants to land inside $S$ with the cumulative payoffs, i.e., player A's goal is to minimize $\mathrm{dist}\big(\frac{1}{T}\sum_{t=1}^{T} u(\mathbf x^{(t)}, \mathbf y^{(t)}),\, S\big)$, where $\mathbf x^{(t)}$ and $\mathbf y^{(t)}$ are the actions chosen by players A and B, respectively, and $\mathrm{dist}(s, S) = \inf_{s' \in S} \|s' - s\|$, whereas player B, called the adversary, wants to prevent player A from landing in $S$. In our setup, player B, who generates the cost vectors, is assumed to be stochastic. The set $S$ consists of the single point $\boldsymbol\mu\boldsymbol\alpha^*$, and $u$ corresponds to the mapping $\boldsymbol\alpha \mapsto \boldsymbol\mu\boldsymbol\alpha$; thus $p = D$. What makes our setup essentially different from approachability is that $S$ is not known to either player. That is why player A, i.e., the learner, needs to explore the action space, which is achieved by forced exploration.

## 10. Conclusion and future work

We introduced a new problem in the context of multi-objective multi-armed bandits (MOMAB). Contrary to most previously proposed approaches in MOMAB, we do not search for the Pareto front; instead we aim for a fair solution, which is important, for instance, when each objective corresponds to the payoff of a different agent. To encode fairness, we use the Generalized Gini Index (GGI), a well-known criterion developed in economics. To optimize this criterion, we proposed a gradient-based algorithm that exploits the convexity of GGI. We evaluated our algorithm on two domains and obtained promising experimental results. Several multi-objective reinforcement learning algorithms have been proposed in the literature (Gábor et al., 1998; Roijers et al., 2013). Most of these methods make use of a simple linear aggregation function.
As future work, it would be interesting to extend our approach to the reinforcement learning setting, which would be useful to solve the electric battery control problem even more finely.

### Acknowledgements

The authors would like to thank Vikram Bhattacharjee and Orkun Karabasoglu for providing the battery model. This research was supported in part by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL).

### References

Abernethy, Jacob D., Bartlett, Peter L., and Hazan, Elad. Blackwell approachability and no-regret learning are equivalent. In COLT 2011 - The 24th Annual Conference on Learning Theory, June 9-11, 2011, Budapest, Hungary, pp. 27-46, 2011.

Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002a.

Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R.E. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002b.

Azar, Yossi, Feige, Uriel, Feldman, Michal, and Tennenholtz, Moshe. Sequential decision making with vector outcomes. In ITCS, pp. 195-206, 2014.

Boyd, Stephen and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.

Bubeck, S. and Cesa-Bianchi, N. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.

Buczolich, Zoltán and Székely, Gábor J. When is a weighted average of ordered sample elements a maximum likelihood estimator of the location parameter? Advances in Applied Mathematics, 10(4):439-456, 1989.

Dalton, H. The measurement of inequality of incomes. Economic Journal, 30:348-361, 1920.

Dambrowski, J. Review on methods of state-of-charge estimation with viewpoint to the modern LiFePO4/Li4Ti5O12 lithium-ion systems. In International Telecommunication Energy Conference, 2013.

Drugan, M. M. and Nowé, A. Designing multi-objective multi-armed bandits algorithms: A study. In IJCNN, pp. 1-8, 2013.

Gábor, Zoltán, Kalmár, Zsolt, and Szepesvári, Csaba. Multi-criteria reinforcement learning. In ICML, pp. 197-205, 1998.

Gao, Lijun, Chen, Shenyi, and Dougal, Roger A. Dynamic lithium-ion battery model for system simulation. IEEE Trans. on Components and Packaging Technologies, 25(3):495-505, 2002.

Hazan, Elad. Introduction to Online Convex Optimization. 2016.

Johnson, V.H. Battery performance models in ADVISOR. Journal of Power Sources, 110:312-329, 2002.

Lai, T. L. and Robbins, Herbert. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.

Mahdavi, Mehrdad, Yang, Tianbao, and Jin, Rong. Stochastic convex optimization with multiple objectives. In NIPS, pp. 1115-1123, 2013.

Mannor, Shie, Tsitsiklis, John N., and Yu, Jia Yuan. Online learning with sample path constraints. Journal of Machine Learning Research, 10:569-590, 2009. doi: 10.1145/1577069.1577089.

Mannor, Shie, Perchet, Vianney, and Stoltz, Gilles. Approachability in unknown games: Online learning meets multi-objective optimization. In Proceedings of The 27th Conference on Learning Theory, COLT 2014, Barcelona, Spain, June 13-15, 2014, pp. 339-355, 2014.

Ogryczak, W. and Sliwinski, T. On solving linear programs with the ordered weighted averaging objective. Eur. J. Operational Research, 148:80-91, 2003.

Ogryczak, W., Perny, P., and Weng, P. On minimizing ordered weighted regrets in multiobjective Markov decision processes. In ADT, Lecture Notes in Artificial Intelligence, 2011.
Ogryczak, W., Perny, P., and Weng, P. A compromise programming approach to multiobjective Markov decision processes. International Journal of Information Technology & Decision Making, 12:1021-1053, 2013.

Pigou, A. Wealth and Welfare. Macmillan, 1912.

Press, W.H. Bandit solutions provide unified ethical models for randomized clinical trials and comparative effectiveness research. PNAS, 106(52):22387-22392, 2009.

Roijers, Diederik M., Vamplew, Peter, Whiteson, Shimon, and Dazeley, Richard. A survey of multi-objective sequential decision-making. J. Artif. Intell. Res., 48:67-113, 2013.

Shalev-Shwartz, Shai. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2012.

Slivkins, A. Contextual bandits with similarity information. J. Mach. Learn. Res., 15:2533-2568, 2014.

Steuer, R.E. and Choo, E.-U. An interactive weighted Tchebycheff procedure for multiple objective programming. Mathematical Programming, 26:326-344, 1983.

Tao, Gao. Research on LiMn2O4 battery's state of charge estimation with the consideration of degradation. Technical report, Tsinghua University, 2012.

Weymark, John A. Generalized Gini inequality indices. Mathematical Social Sciences, 1(4):409-430, 1981.

Yager, R.R. On ordered weighted averaging aggregation operators in multi-criteria decision making. IEEE Trans. on Syst., Man and Cyb., 18:183-190, 1988.

Zinkevich, Martin. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.