# Extreme bandits

Alexandra Carpentier, Statistical Laboratory, CMS, University of Cambridge, UK (a.carpentier@statslab.cam.ac.uk)
Michal Valko, SequeL team, INRIA Lille - Nord Europe, France (michal.valko@inria.fr)

## Abstract

In many areas of medicine, security, and life sciences, we want to allocate limited resources to different sources in order to detect extreme values. In this paper, we study an efficient way to allocate these resources sequentially under limited feedback. While sequential design of experiments is well studied in bandit theory, the most commonly optimized property is the regret with respect to the maximum mean reward. However, in other problems such as network intrusion detection, we are interested in detecting the most extreme value output by the sources. Therefore, in our work we study the extreme regret, which measures the efficiency of an algorithm compared to the oracle policy that selects the source with the heaviest tail. We propose the EXTREMEHUNTER algorithm, provide its analysis, and evaluate it empirically on synthetic and real-world experiments.

## 1 Introduction

We consider problems where the goal is to detect outstanding events or extreme values in domains such as outlier detection [1], security [18], or medicine [17]. The detection of extreme values is important in many life sciences, such as epidemiology, astronomy, or hydrology, where, for example, we may want to know the peak water flow. We are also motivated by network intrusion detection, where the objective is to find the network node that was compromised, e.g., by seeking the one creating the largest number of outgoing connections at once.

The search for extreme events is typically studied in the field of anomaly detection, where one seeks to find examples that are far away from the majority, according to some problem-specific distance (cf. the surveys [8, 16]). In anomaly detection research, the concept of anomaly is ambiguous and several definitions exist [16]: point anomalies, structural anomalies, contextual anomalies, etc. These definitions are often followed by heuristic approaches that are seldom analyzed theoretically. Nonetheless, there exist some theoretical characterizations of anomaly detection. For instance, Steinwart et al. [19] consider the level sets of the distribution underlying the data, and rare events corresponding to rare level sets are then identified as anomalies. A very challenging characteristic of many problems in anomaly detection is that the data emitted by the sources tend to be heavy-tailed (e.g., network traffic [2]) and anomalies come from the sources with the heaviest distribution tails. In this case, the rare level sets of [19] correspond to distribution tails and anomalies to extreme values. Therefore, we focus on the kind of anomalies that are characterized by their outburst of events or extreme values, as in the setting of [22] and [17].

Since in many cases the collection of the data samples emitted by the sources is costly, it is important to design adaptive-learning strategies that spend more time sampling sources that have a higher risk of being abnormal. The main objective of our work is the active allocation of the sampling resources for anomaly detection, in the setting where anomalies are defined as extreme values. Specifically, we consider a variation of the common setting of minimal feedback, also known as the bandit setting [14]: the learner searches for the most extreme value that the sources output by probing the sources sequentially.
In this setting, it must carefully decide which sources to observe, because it only receives the observation from the source it chooses to observe. As a consequence, it needs to allocate the sampling time efficiently and should not waste it on sources that do not have an abnormal character. We call this specific setting extreme bandits, but it is also known as the max-k problem [9, 21, 20]. We emphasize that extreme bandits are poles apart from classical bandits, where the objective is to maximize the sum of observations [3]. An effective algorithm for the classical bandit setting should focus on the source with the highest mean, while an effective algorithm for the extreme bandit problem should focus on the source with the heaviest tail. It is often the case that a heavy-tailed source has a small mean, which implies that classical bandit algorithms perform poorly on the extreme bandit problem. The challenging part of our work dwells in the active sampling strategy that detects the heaviest tail under the limited bandit feedback. We propose EXTREMEHUNTER, a theoretically founded algorithm that sequentially allocates the resources in an efficient way and for which we prove performance guarantees. Our algorithm is efficient under a mild semi-parametric assumption common in extreme value theory, while the known results of [9, 21, 20] for the extreme bandit problem only hold in a parametric setting (see Section 4 for a detailed comparison).

## 2 Learning model for extreme bandits

In this section, we formalize the active (bandit) setting and characterize the measure of performance for any algorithm $\pi$. The learning setting is defined as follows. At every time step, each of the $K$ arms (sources) emits a sample $X_{k,t} \sim P_k$, unknown to the learner. The precise characteristics of $P_k$ are defined in Section 3. The learner $\pi$ then chooses some arm $I_t$ and receives only the sample $X_{I_t,t}$. The performance of $\pi$ is evaluated by the most extreme value found and compared to the most extreme value possible. We define the reward of a learner $\pi$ as:

$$G_n^\pi = \max_{t \le n} X_{I_t,t}.$$

The optimal oracle strategy is the one that always chooses the arm with the highest potential of revealing the highest value, i.e., the arm with the heaviest tail. Its expected reward is then:

$$\mathbb{E}[G_n^*] = \max_{k \le K} \mathbb{E}\left[\max_{t \le n} X_{k,t}\right].$$

The goal of the learner $\pi$ is to get as close as possible to the optimal oracle strategy. In other words, the aim of $\pi$ is to minimize the expected extreme regret.

Definition 1. The extreme regret in the bandit setting is defined as:

$$\mathbb{E}[R_n^\pi] = \mathbb{E}[G_n^*] - \mathbb{E}[G_n^\pi] = \max_{k \le K} \mathbb{E}\left[\max_{t \le n} X_{k,t}\right] - \mathbb{E}\left[\max_{t \le n} X_{I_t,t}\right].$$
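To make the protocol concrete, here is a minimal Monte Carlo sketch of the extreme regret from Definition 1; it is our own illustration, assuming exact Pareto arms and a naive uniform-random learner as a baseline (the helper names `pareto_sample` and `extreme_regret` are ours and not from the paper). Note that for very heavy tails the estimates fluctuate considerably across runs.

```python
# A minimal Monte Carlo sketch of the extreme bandit protocol and the extreme
# regret of Definition 1. The exact Pareto arms and the uniform-random baseline
# policy are illustrative choices, not part of the paper.
import numpy as np

rng = np.random.default_rng(0)

def pareto_sample(alpha, size):
    # Standard Pareto with tail index alpha: P(X > x) = x^{-alpha} for x >= 1.
    return (1.0 - rng.random(size)) ** (-1.0 / alpha)

def extreme_regret(alphas, n, policy, n_runs=200):
    """Monte Carlo estimate of E[G*_n] - E[G^pi_n] for a given policy."""
    K = len(alphas)
    oracle, learner = [], []
    for _ in range(n_runs):
        # Oracle: always samples the heaviest-tailed arm (smallest alpha here).
        best = int(np.argmin(alphas))
        oracle.append(pareto_sample(alphas[best], n).max())
        # Learner: at each round it observes only the arm it pulls.
        g = max(pareto_sample(alphas[policy(t, K)], 1)[0] for t in range(n))
        learner.append(g)
    return float(np.mean(oracle) - np.mean(learner))

uniform_policy = lambda t, K: int(rng.integers(K))   # ignores all feedback
print(extreme_regret([5.0, 1.1, 2.0], n=1000, policy=uniform_policy))
```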
## 3 Heavy-tailed distributions

In this section, we formally define our observation model. Let $X_1, \dots, X_n$ be $n$ i.i.d. observations from a distribution $P$. The behavior of the statistic $\max_{i \le n} X_i$ is studied by extreme value theory. One of the main results is the Fisher-Tippett-Gnedenko theorem [11, 12], which characterizes the limiting distribution of this maximum as $n$ converges to infinity. Specifically, it proves that a rescaled version of this maximum converges to one of three possible distributions: Gumbel, Fréchet, or Weibull. The rescaling factor depends on $n$. To be concise, we write "$\max_{i \le n} X_i$ converges to a distribution" to refer to the convergence of the rescaled version to that distribution. The Gumbel distribution corresponds to the limiting distribution of the maximum of not too heavy-tailed distributions, such as sub-Gaussian or sub-exponential distributions. The Weibull distribution coincides with the behaviour of the maximum of some specific bounded random variables. Finally, the Fréchet distribution corresponds to the limiting distribution of the maximum of heavy-tailed random variables. As many interesting problems concern heavy-tailed distributions, we focus on Fréchet distributions in this work. The distribution function of a Fréchet random variable is defined for $x \ge m$ and for two parameters $\alpha, s$ as:

$$P(x) = \exp\left(-\left(\frac{x-m}{s}\right)^{-\alpha}\right).$$

In this work, we consider positive distributions $P : [0, \infty) \to [0, 1]$. For $\alpha > 0$, the Fisher-Tippett-Gnedenko theorem also states that the statement "$P$ converges to an $\alpha$-Fréchet distribution" is equivalent to the statement "$1 - P$ is an $\alpha$-regularly varying function in the tail". These statements are slightly less restrictive than the definition of approximately $\alpha$-Pareto distributions¹, i.e., that there exists $C$ such that $P$ verifies:

$$\lim_{x \to \infty} \frac{\left|1 - P(x) - Cx^{-\alpha}\right|}{x^{-\alpha}} = 0, \qquad (1)$$

or equivalently that $P(x) = 1 - Cx^{-\alpha} + o(x^{-\alpha})$. The limiting distribution of $\max_i X_i$ is an $\alpha$-Fréchet distribution if and only if $1 - P$ is $\alpha$-regularly varying in the tail. The assumption of $\alpha$-regular variation in the tail is thus the weakest possible assumption that ensures that the (properly rescaled) maximum of samples emitted by a heavy-tailed distribution has a limit. Therefore, the closely related assumption of approximate Pareto is almost minimal, but it is (provably) still not restrictive enough to ensure a convergence rate. For this reason, it is natural to introduce an assumption that is slightly stronger than (1). In particular, we assume, as is common in the extreme value literature, a second order Pareto condition, also known as the Hall condition [13].

Definition 2. A distribution $P$ is $(\alpha, \beta, C, C')$-second order Pareto ($\alpha, \beta, C, C' > 0$) if for $x \ge 0$:

$$\left|1 - P(x) - Cx^{-\alpha}\right| \le C' x^{-\alpha(1+\beta)}.$$

By this definition, $P(x) = 1 - Cx^{-\alpha} + O\left(x^{-\alpha(1+\beta)}\right)$, which is stronger than the assumption $P(x) = 1 - Cx^{-\alpha} + o(x^{-\alpha})$, but similar for small $\beta$.

Remark 1. In the definition above, $\beta$ defines the rate of convergence (as $x$ diverges to infinity) of the tail of $P$ to the tail of a Pareto distribution $1 - Cx^{-\alpha}$. The parameter $\alpha$ characterizes the heaviness of the tail: the smaller the $\alpha$, the heavier the tail. In the remainder of the paper, we will therefore be concerned with learning the $\alpha_k$ and identifying the smallest one among the sources.
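As a concrete example of Definition 2 (our own, not from the paper): the standard Fréchet distribution $P(x) = \exp(-x^{-\alpha})$ satisfies the condition with $C = 1$, $\beta = 1$, $C' = 1/2$, because $|1 - e^{-u} - u| \le u^2/2$ for all $u \ge 0$. The short sketch below, assuming only NumPy, checks this numerically on a grid.

```python
# Numerical check (illustrative) that the standard Frechet law
# P(x) = exp(-x^{-alpha}) is (alpha, beta=1, C=1, C'=1/2)-second order Pareto,
# i.e. |1 - P(x) - C x^{-alpha}| <= C' x^{-alpha(1+beta)} for all x > 0.
import numpy as np

alpha, C, beta, C_prime = 1.5, 1.0, 1.0, 0.5
x = np.logspace(-1, 4, 2000)           # grid of positive points
P = np.exp(-x ** (-alpha))             # Frechet CDF with m = 0, s = 1
lhs = np.abs(1.0 - P - C * x ** (-alpha))
rhs = C_prime * x ** (-alpha * (1.0 + beta))
assert np.all(lhs <= rhs + 1e-12)      # the second order Pareto condition holds
print("max slack:", float(np.max(rhs - lhs)))
```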
## 4 Related work

There is a vast body of research in offline anomaly detection, which looks for examples that deviate from the rest of the data, or that are not expected under some underlying model. A comprehensive review of many anomaly detection approaches can be found in [16] or [8]. There has also been some work in active learning for anomaly detection [1], which uses a reduction to classification. In online anomaly detection, most of the research focuses on settings where a set of variables is monitored. A typical example is the monitoring of cold relief medications, where we are interested in detecting an outbreak [17]. Similarly to our focus, these approaches do not look for outliers in a broad sense but rather for an unusual burst of events [22]. In the extreme-value settings above, it is often assumed that we have full information about each variable. This is in contrast to the limited feedback, or bandit, setting that we study in our work. There has recently been some interest in bandit algorithms for heavy-tailed distributions [4]. However, the goal of [4] is radically different from ours, as they maximize the sum of rewards and not the maximal reward.

Bandit algorithms have already been used for network intrusion detection [15], but they typically consider the classical or the restless setting. [9, 21, 20] were the first to consider the extreme bandit problem, where our setting is defined as the max-k problem. [21] and [9] consider a fully parametric setting: the reward distributions are assumed to be exactly generalized extreme value distributions. Specifically, [21] assumes that the distributions are exactly Gumbel, $P(x) = \exp\left(-e^{-(x-m)/s}\right)$, and [9] that the distributions are exactly Gumbel or Fréchet, $P(x) = \exp\left(-\left(\frac{x-m}{s}\right)^{-\alpha}\right)$. Provided that these assumptions hold, they propose an algorithm for which the regret is asymptotically negligible when compared to the optimal oracle reward. These results are interesting since they are the first for extreme bandits, but their parametric assumption is unlikely to hold in practice, and the asymptotic nature of their bounds limits their impact. Interestingly, the objective of [20] is to remove the parametric assumptions of [21, 9] by offering the THRESHOLDASCENT algorithm. However, no analysis of this algorithm for extreme bandits is provided. Nonetheless, to the best of our knowledge, this is the closest competitor to EXTREMEHUNTER, and we empirically compare our algorithm to THRESHOLDASCENT in Section 7.

¹ We recall the definition of the standard Pareto distribution as a distribution $P$ for which, for some constants $\alpha$ and $C$, we have $P(x) = 1 - Cx^{-\alpha}$ for $x \ge C^{1/\alpha}$.

In this paper, we also target the extreme bandit setting, but contrary to [9, 21, 20], we only make a semi-parametric assumption on the distributions: the second order Pareto assumption (Definition 2), which is standard in extreme value theory (see e.g., [13, 10]). This is significantly weaker than the parametric assumptions made in the prior works on extreme bandits. Furthermore, we provide a finite-time regret bound for our more general semi-parametric setting (Theorem 2), while the prior works only offer asymptotic results. In particular, we provide an upper bound on the rate at which the regret becomes negligible when compared to the optimal oracle reward (Definition 1).

## 5 Extreme Hunter

In this section, we present our main results: the algorithm and the main theorem that bounds its extreme regret. Before that, we first provide an initial result on the expectation of the maximum of second order Pareto random variables, which will set the benchmark for the oracle reward. The following theorem states that the expectation of the maximum of i.i.d. second order Pareto samples is equal, up to a negligible term, to the expectation of the maximum of i.i.d. Pareto samples. This result is crucial for assessing the benchmark for the regret, in particular the expected value of the maximal oracle sample. Theorem 1 is based on Lemma 3, both provided in the appendix.

Theorem 1. Let $X_1, \dots, X_n$ be $n$ i.i.d. samples drawn according to an $(\alpha, \beta, C, C')$-second order Pareto distribution $P$ (see Definition 2). If $\alpha > 1$, then:

$$\left|\mathbb{E}\left[\max_{i \le n} X_i\right] - (nC)^{1/\alpha}\,\Gamma\!\left(1 - \frac{1}{\alpha}\right)\right| \le \frac{D_2}{n}\,(nC)^{1/\alpha} + \frac{2C' D_{\beta+1}}{C^{\beta+1} n^{\beta}}\,(nC)^{1/\alpha} + B = o\left((nC)^{1/\alpha}\right),$$

where $D_2, D_{1+\beta} > 0$ are some universal constants, and $B$ is defined in the appendix (Equation 9).
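To see the leading term of Theorem 1 in action, the following sketch (our own illustration) compares a Monte Carlo estimate of $\mathbb{E}[\max_{i\le n} X_i]$ for exact Pareto samples with the closed-form benchmark $(nC)^{1/\alpha}\Gamma(1-1/\alpha)$; for exact Pareto data, the correction terms are of lower order, so the two quantities agree closely for large $n$.

```python
# Illustration (ours) of Theorem 1's leading term: for an exact Pareto
# distribution with C = 1, E[max_{i<=n} X_i] is close to
# (nC)^{1/alpha} * Gamma(1 - 1/alpha) once n is large.
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
alpha, C, n, runs = 3.0, 1.0, 10_000, 2_000

maxima = np.array([((1.0 - rng.random(n)) ** (-1.0 / alpha)).max() for _ in range(runs)])
benchmark = (n * C) ** (1.0 / alpha) * gamma(1.0 - 1.0 / alpha)

print("Monte Carlo E[max_i X_i]       :", float(maxima.mean()))
print("(nC)^{1/alpha} Gamma(1-1/alpha):", benchmark)
```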
Theorem 1 implies that the optimal strategy in hindsight attains the following expected reward:

$$\mathbb{E}[G_n^*] \approx \max_{k \le K}\left[(C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right)\right].$$

    Algorithm 1: EXTREMEHUNTER
    Input: K (number of arms), n (time horizon), b (a constant with b ≤ β_k for all k ≤ K),
           N (minimum number of pulls of each arm)
    Initialize: T_k ← 0 for all k ≤ K; δ ← exp(−log² n)/(2nK)
    Run:
    for t = 1 to n do
        for k = 1 to K do
            if T_k ≥ N then
                estimate ĥ_{k,t} verifying (2); estimate Ĉ_{k,t} using (3);
                update B_{k,t} using (5) with (2) and (4)
            end if
        end for
        play arm k_t ← arg max_k B_{k,t}
        T_{k_t} ← T_{k_t} + 1
    end for

Our objective is therefore to find a learner $\pi$ such that $\mathbb{E}[G_n^*] - \mathbb{E}[G_n^\pi]$ is negligible when compared to $\mathbb{E}[G_n^*]$, i.e., when compared to $(nC^*)^{1/\alpha^*}\,\Gamma(1 - 1/\alpha^*) \sim n^{1/\alpha^*}$, where $*$ denotes the optimal arm. From the discussion above, we know that the minimization of the extreme regret is linked with the identification of the arm with the heaviest tail. Our EXTREMEHUNTER algorithm is based on a classical idea in bandit theory: optimism in the face of uncertainty. Our strategy is to estimate $\mathbb{E}[\max_{t \le n} X_{k,t}]$ for every $k$ and to pull the arm that maximizes an upper confidence bound on this quantity. From Definition 2, the estimation of this quantity relies heavily on an efficient estimation of $\alpha_k$ and $C_k$, and on associated confidence widths. This is a classic problem in extreme value theory, and such estimators exist provided that one knows a lower bound $b$ on $\beta_k$ [10, 6, 7]. From now on, we assume that a constant $b > 0$ such that $b \le \min_k \beta_k$ is known to the learner. As we argue in Remark 2, this assumption is necessary.

Since our main theoretical result is a finite-time upper bound, in the following exposition we carefully describe all the constants and stress which quantities they depend on. Let $T_{k,t}$ be the number of samples drawn from arm $k$ up to time $t$. Define $\delta = \exp(-\log^2 n)/(2nK)$ and consider an estimator $\hat h_{k,t}$ of $1/\alpha_k$ at time $t$ that verifies the following condition with probability $1-\delta$, for $T_{k,t}$ larger than some constant $N_2$ that depends only on $\alpha_k$, $C_k$, $C'$, and $b$:

$$\left|\frac{1}{\alpha_k} - \hat h_{k,t}\right| \le D\sqrt{\log(1/\delta)}\, T_{k,t}^{-b/(2b+1)} = B_1(T_{k,t}), \qquad (2)$$

where $D$ is a constant that also depends only on $\alpha_k$, $C_k$, $C'$, and $b$. For instance, the estimator in [6] (Theorem 3.7) verifies this property and provides $D$ and $N_2$, but other estimators are possible. Consider the associated estimator for $C_k$:

$$\hat C_{k,t} = T_{k,t}^{-1/(2b+1)} \sum_{u=1}^{T_{k,t}} \mathbf{1}\left\{X_{k,u} \ge T_{k,t}^{\hat h_{k,t}/(2b+1)}\right\}. \qquad (3)$$

For this estimator, we know [7] that, with probability $1-\delta$, for $T_{k,t} \ge N_2$:

$$\left|C_k - \hat C_{k,t}\right| \le E\sqrt{\log(T_{k,t}/\delta)}\,\log(T_{k,t})\, T_{k,t}^{-b/(2b+1)} = B_2(T_{k,t}), \qquad (4)$$

where $E$ is derived in [7] in the proof of Theorem 2. Let $N = \max\left(A \log(n)^{2(2b+1)/b},\, N_2\right)$, where $A$ depends on $(\alpha_k, C_k)_k$, $b$, $D$, $E$, and $C'$, and is such that:

$$\max\big(2B_1(N),\, 2B_2(N)/C_k\big) \le 1, \qquad N \ge \left(2D\log^2 n\right)^{(2b+1)/b}, \qquad N^{b/(2b+1)} > \frac{2D}{1 - \max_k 1/\alpha_k}.$$

This inspires Algorithm 1, which first pulls each arm $N$ times and then, at each time $t > KN$, pulls the arm that maximizes $B_{k,t}$, which we define as:

$$B_{k,t} = \left(\big(\hat C_{k,t} + B_2(T_{k,t})\big)\, n\right)^{\hat h_{k,t} + B_1(T_{k,t})}\, \tilde\Gamma\big(\hat h_{k,t},\, B_1(T_{k,t})\big), \qquad (5)$$

where we set $\tilde\Gamma(x, y) = \Gamma(1 - x - y)$ whenever $1 - x - y > 0$ and $\tilde\Gamma(x, y) = +\infty$ otherwise.

Remark 2. A natural question is whether it is possible to learn $\beta_k$ as well. In fact, this is not possible in this model, and a negative result was proved by [7]. The result states that in this setting it is not possible to test between two fixed values of $\beta$ uniformly over the set of distributions. We therefore treat $b$ as a known lower bound on all $\beta_k$. With regard to the Pareto distribution, $\beta = \infty$ corresponds to an exact Pareto distribution, while $\beta = 0$ corresponds to a distribution that is not (asymptotically) Pareto.

We show that this algorithm meets the desired properties.
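Before stating the main theorem, here is a simplified Python sketch of Algorithm 1. It is our own illustration, not the authors' reference implementation: the confidence-width constants D and E of (2) and (4) are problem dependent and unknown in practice, so they are exposed as hand-set knobs, and the tail estimator is a crude Hill-type proxy rather than the adaptive estimator of [6] that the theory requires.

```python
# Simplified sketch of EXTREMEHUNTER (Algorithm 1): initial N pulls of every
# arm, then optimism via the index (5) built from plug-in tail estimates.
import numpy as np
from math import gamma, log, exp, sqrt, inf

def gamma_tilde(x, y):
    # Gamma(1 - x - y) when 1 - x - y > 0, and +infinity otherwise, as in (5).
    return gamma(1.0 - x - y) if 1.0 - x - y > 0 else inf

def extreme_hunter(arms, n, b=1.0, N=100, D=1.0, E=1.0):
    """arms: list of callables; arms[k]() returns one sample from source k."""
    K = len(arms)
    delta = exp(-log(n) ** 2) / (2 * n * K)
    samples = [[] for _ in range(K)]
    best_seen = -inf
    for t in range(n):
        if min(len(s) for s in samples) < N:
            # Initialization phase: pull every arm N times.
            k = min(range(K), key=lambda j: len(samples[j]))
        else:
            index = np.empty(K)
            for a in range(K):
                x = np.asarray(samples[a])
                T = len(x)
                pos = x[x > 0]
                if len(pos) < 2:
                    index[a] = inf        # no tail information yet: stay optimistic
                    continue
                m = max(2, int(T ** (b / (2 * b + 1))))
                top = np.sort(pos)[-m:]
                h_hat = float(np.mean(np.log(top / top[0])))   # crude proxy for 1/alpha_a
                # Exceedance-count estimate of C_a, in the spirit of (3).
                C_hat = T ** (-1.0 / (2 * b + 1)) * float(np.sum(x >= T ** (h_hat / (2 * b + 1))))
                B1 = D * sqrt(log(1.0 / delta)) * T ** (-b / (2 * b + 1))          # width of (2)
                B2 = E * sqrt(log(T / delta)) * log(T) * T ** (-b / (2 * b + 1))   # width of (4)
                index[a] = ((C_hat + B2) * n) ** (h_hat + B1) * gamma_tilde(h_hat, B1)  # (5)
            k = int(np.argmax(index))
        reward = float(arms[k]())
        samples[k].append(reward)
        best_seen = max(best_seen, reward)
    return best_seen
```

As a usage sketch, one could pass three callables drawing from Pareto distributions with tail indices 5, 1.1, and 2, run it for a few thousand rounds, and compare the returned maximum with the one obtained by sampling only the heaviest-tailed arm.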
The following theorem states our main result by upper-bounding the extreme regret of EXTREMEHUNTER.

Theorem 2. Assume that the distributions of the arms are respectively $(\alpha_k, \beta_k, C_k, C')$-second order Pareto (see Definition 2) with $\min_k \alpha_k > 1$. If $n \ge Q$, the expected extreme regret of EXTREMEHUNTER is bounded from above as:

$$\mathbb{E}[R_n] \le L\,(nC^*)^{1/\alpha^*}\left(\frac{K}{n}\log(n)^{(2b+1)/b} + \left(\frac{\log n}{n}\right)^{1-1/\alpha^*} + n^{-b/((b+1)\alpha^*)}\right) = \mathbb{E}[G_n^*]\,o(1),$$

where $L, Q > 0$ are some constants depending only on $(\alpha_k, C_k)_k$, $C'$, and $b$ (Section 6).

Theorem 2 states that the EXTREMEHUNTER strategy performs almost as well as the best (oracle) strategy, up to a term that is negligible when compared to the performance of the oracle strategy. Indeed, the regret is negligible when compared to $(nC^*)^{1/\alpha^*}$, which is the order of magnitude of the performance of the best oracle strategy $\mathbb{E}[G_n^*] = \max_{k \le K}\mathbb{E}[\max_{t \le n} X_{k,t}]$. Our algorithm thus detects the arm that has the heaviest tail. For $n$ large enough (as a function of $(\alpha_k, \beta_k, C_k)_k$, $C'$, and $K$), the first two terms in the regret become negligible when compared to the third one, and the regret is then bounded as:

$$\mathbb{E}[R_n] \le \mathbb{E}[G_n^*]\, O\!\left(n^{-b/((b+1)\alpha^*)}\right).$$

We make two observations. First, the larger the $b$, the tighter this bound is, since the model is then closer to the parametric case. Second, a smaller $\alpha^*$ also tightens the bound, since the best arm is then very heavy-tailed and much easier to recognize.

## 6 Analysis

In this section, we prove the upper bound on the extreme regret of Algorithm 1 stated in Theorem 2. Before providing the detailed proof, we give a high-level overview and the intuitions. In Step 1, we define the (favorable) high-probability event $\xi$ of interest, useful for analyzing the mechanism of the bandit algorithm. In Step 2, given $\xi$, we bound the estimates of $\alpha_k$ and $C_k$, and use them to bound the main upper confidence bound. In Step 3, we upper-bound the number of pulls of each suboptimal arm: we prove that with high probability we do not pull them too often. This enables us to guarantee that, on $\xi$, the number of pulls of the optimal arm is equal to $n$ up to a negligible term. The final Step 4 of the proof is concerned with using this lower bound on the number of pulls of the optimal arm in order to lower-bound the expectation of the maximum of the collected samples. Such a step is typically straightforward in classical (mean-optimizing) bandits by the linearity of expectation. It is not straightforward in our setting. We therefore prove Lemma 2, in which we show that the expected value of the maximum of the samples on the favorable event $\xi$ is not too far from the one obtained without conditioning on $\xi$.

Step 1: High probability event. In this step, we define the favorable event $\xi$. We set $\delta \stackrel{\text{def}}{=} \exp(-\log^2 n)/(2nK)$ and consider the event $\xi$ such that for any $k \le K$ and $N \le T \le n$:

$$\left|\frac{1}{\alpha_k} - \tilde h_k(T)\right| \le D\sqrt{\log(1/\delta)}\,T^{-b/(2b+1)}, \qquad \left|C_k - \tilde C_k(T)\right| \le E\sqrt{\log(T/\delta)}\,\log(T)\,T^{-b/(2b+1)},$$

where $\tilde h_k(T)$ and $\tilde C_k(T)$ are the estimates of $1/\alpha_k$ and $C_k$, respectively, computed from the first $T$ samples of arm $k$. Notice that they are not the same as $\hat h_{k,t}$ and $\hat C_{k,t}$, which are the estimates of the same quantities at time $t$ of the algorithm, and thus based on $T_{k,t}$ samples. The probability of $\xi$ is larger than $1 - 2nK\delta$ by a union bound over (2) and (4).

Step 2: Bound on $B_{k,t}$. The following lemma, which holds on $\xi$, upper- and lower-bounds $B_{k,t}$.

Lemma 1 (proved in the appendix). On $\xi$, we have for any $k \le K$ and $T_{k,t} \ge N$, for a constant $F > 0$:

$$(C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right) \le B_{k,t} \le (C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right)\left(1 + F\log(n)\sqrt{\log(n/\delta)}\,T_{k,t}^{-b/(2b+1)}\right). \qquad (6)$$
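To get a feel for the $T_{k,t}^{-b/(2b+1)}$ rate that drives Lemma 1, the following sketch (ours) evaluates the confidence widths of (2) and (4) and the Lemma 1 inflation factor for a few sample sizes. The constants D, E, F are problem dependent and unknown, so they are set to 1 purely for illustration; only the decay with T is meaningful here, not the absolute magnitudes.

```python
# Decay of the confidence widths B1, B2 and of the Lemma 1 inflation factor
# with the number of pulls T, at rate T^{-b/(2b+1)}. D, E, F are placeholders.
from math import exp, log, sqrt

n, K, b = 10_000, 3, 1.0
D = E = F = 1.0
delta = exp(-log(n) ** 2) / (2 * n * K)

for T in (10**2, 10**3, 10**4):
    B1 = D * sqrt(log(1 / delta)) * T ** (-b / (2 * b + 1))
    B2 = E * sqrt(log(T / delta)) * log(T) * T ** (-b / (2 * b + 1))
    inflation = F * log(n) * sqrt(log(n / delta)) * T ** (-b / (2 * b + 1))
    print(f"T={T:>6}  B1={B1:.3f}  B2={B2:.3f}  Lemma-1 factor = 1 + {inflation:.3f}")
```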
Step 3: Upper bound on the number of pulls of a suboptimal arm. We proceed by using the bounds on $B_{k,t}$ from the previous step to upper-bound the number of suboptimal pulls. Let $*$ denote the best arm. Assume that at round $t$, some arm $k \ne *$ is pulled. Then by definition of the algorithm $B_{*,t} \le B_{k,t}$, which implies by Lemma 1:

$$(C^* n)^{1/\alpha^*}\,\Gamma\!\left(1 - \frac{1}{\alpha^*}\right) \le (C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right)\left(1 + F\log(n)\sqrt{\log(n/\delta)}\,T_{k,t}^{-b/(2b+1)}\right).$$

Rearranging the terms, we get:

$$(C^* n)^{1/\alpha^*}\,\Gamma\!\left(1 - \frac{1}{\alpha^*}\right) - (C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right) \le (C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right) F\log(n)\sqrt{\log(n/\delta)}\,T_{k,t}^{-b/(2b+1)}. \qquad (7)$$

We now define $\Delta_k$, which is analogous to the gap in classical bandits:

$$\Delta_k = (C^* n)^{1/\alpha^*}\,\Gamma\!\left(1 - \frac{1}{\alpha^*}\right) - (C_k n)^{1/\alpha_k}\,\Gamma\!\left(1 - \frac{1}{\alpha_k}\right).$$

Since $T_{k,t} \le n$, (7) implies, for some problem-dependent constant $G$ that depends only on $(\alpha_k, C_k)_k$, $C'$, and $b$, but is independent of $\delta$, that:

$$T_{k,t} \le N + G\left(\log^2 n\,\log(n/\delta)\right)^{(2b+1)/(2b)}.$$

This implies that the number $T^*$ of pulls of the arm $*$ is, with probability at least $1-\delta'$ where $\delta' = 2nK\delta$, at least:

$$T^* \ge n - KG\left(\log^2 n\,\log(2nK/\delta')\right)^{(2b+1)/(2b)} - KN.$$

Since $n$ is larger than $Q \ge 2KN + 2GK\left(\log^2 n\,\log(2nK/\delta')\right)^{(2b+1)/(2b)}$, we have $T^* \ge n/2$ as a corollary.

Step 4: Bound on the expectation. We start by lower-bounding the expected gain:

$$\mathbb{E}[G_n] = \mathbb{E}\left[\max_{t \le n} X_{I_t,t}\right] \ge \mathbb{E}\left[\max_{t \le n} X_{I_t,t}\,\mathbf{1}\{\xi\}\right] \ge \mathbb{E}\left[\max_{t \le n:\, I_t = *} X_{*,t}\,\mathbf{1}\{\xi\}\right] = \mathbb{E}\left[\max_{i \le T^*} X_i\,\mathbf{1}\{\xi\}\right],$$

where in the last expression $X_1, \dots, X_{T^*}$ denote the samples collected from the optimal arm. The next lemma links the expectation of $\max_{t \le T} X_t$ with the expectation of $\max_{t \le T} X_t\,\mathbf{1}\{\xi\}$.

Lemma 2 (proved in the appendix). Let $X_1, \dots, X_T$ be i.i.d. samples from an $(\alpha, \beta, C, C')$-second order Pareto distribution $F$. Let $\xi$ be an event of probability larger than $1-\delta$. Then for $\delta < 1/2$, for $T$ large enough so that $c\max\left(1/T,\, 1/T^{\beta}\right) \le 1/4$ for a given constant $c > 0$ that depends only on $C$, $C'$, and $\beta$, and for $T$ also exceeding a second threshold that depends only on $C$ and $C'$:

$$\mathbb{E}\left[\max_{t \le T} X_t\,\mathbf{1}\{\xi\}\right] \ge (TC)^{1/\alpha}\,\Gamma\!\left(1 - \frac{1}{\alpha}\right) - \left(4 + \frac{8}{\alpha-1}\right)(TC)^{1/\alpha}\,\delta^{1-1/\alpha} - \frac{c}{T}\,(TC)^{1/\alpha} - \frac{2C'D_{1+\beta}}{C^{1+\beta}T^{\beta}}\,(TC)^{1/\alpha} - B.$$

Since $n$ is large enough so that $2n^2K\delta = n\exp(-\log^2 n) \le 1/2$, where $\delta' = 2nK\delta = \exp(-\log^2 n)$, and the probability of $\xi$ is larger than $1-\delta'$, we can apply Lemma 2 to the optimal arm:

$$\mathbb{E}\left[\max_{t \le T^*} X_{*,t}\,\mathbf{1}\{\xi\}\right] \ge (T^*C^*)^{1/\alpha^*}\left(\Gamma\!\left(1 - \frac{1}{\alpha^*}\right) - \left(4 + \frac{8}{\alpha^*-1}\right)\delta'^{\,1-1/\alpha^*} - \frac{c}{T^*} - \frac{4C'D_{\max}}{(C^*)^{1+b}(T^*)^{b}}\right) - 2B,$$

where $D_{\max} \stackrel{\text{def}}{=} \max_i D_{1+\beta_i}$. Using Step 3, we bound the above with a function of $n$. In particular, we lower-bound the last three terms in the brackets using $T^* \ge n/2$, and the $(T^*C^*)^{1/\alpha^*}$ factor as:

$$(T^*C^*)^{1/\alpha^*} \ge (nC^*)^{1/\alpha^*}\left(1 - \frac{GK}{n}\left(\log^2 n\,\log(2n^2K/\delta')\right)^{(2b+1)/(2b)}\right).$$

We are now ready to relate the lower bound on the gain of EXTREMEHUNTER to the upper bound on the gain of the optimal policy (Theorem 1), which gives us the upper bound on the regret:

$$\mathbb{E}[R_n] = \mathbb{E}[G_n^*] - \mathbb{E}[G_n] \le \mathbb{E}[G_n^*] - \mathbb{E}\left[\max_{i \le T^*} X_i\,\mathbf{1}\{\xi\}\right] = \mathbb{E}[G_n^*] - \mathbb{E}\left[\max_{t \le T^*} X_{*,t}\,\mathbf{1}\{\xi\}\right]$$

$$\le H\,(nC^*)^{1/\alpha^*}\left(\frac{1}{n} + \frac{1}{(nC^*)^{b}} + \frac{GK}{n}\left(\log^2 n\,\log(2n^2K/\delta')\right)^{(2b+1)/(2b)} + \delta'^{\,1-1/\alpha^*} + \frac{B}{(nC^*)^{1/\alpha^*}}\right),$$

where $H$ is a constant that depends on $(\alpha_k, C_k)_k$, $C'$, and $b$. To bound the last term, we use the definition of $B$ (Equation 9 in the appendix) to get the $n^{-\beta^*/((\beta^*+1)\alpha^*)}$ term, which is upper-bounded by $n^{-b/((b+1)\alpha^*)}$ since $b \le \beta^*$. Notice that this final term also absorbs the $n^{-1}$ and $n^{-b}$ terms, since $b/((b+1)\alpha^*) \le \min(1, b)$.
We finish by using $\delta' = \exp(-\log^2 n)$ and grouping the problem-dependent constants into $L$ to get the final upper bound:

$$\mathbb{E}[R_n] \le L\,(nC^*)^{1/\alpha^*}\left(\frac{K}{n}\log(n)^{(2b+1)/b} + \left(\frac{\log n}{n}\right)^{1-1/\alpha^*} + n^{-b/((b+1)\alpha^*)}\right).$$

[Figure 1: Extreme regret as a function of time for the exact Pareto distributions (left), the approximate Pareto distributions (middle), and the network traffic data (right). The panels compare EXTREMEHUNTER, UCB, and THRESHOLDASCENT (K = 3 for the synthetic settings, K = 5 for the network data).]

## 7 Experiments

In this section, we empirically evaluate EXTREMEHUNTER on synthetic and real-world data. The measure of our evaluation is the extreme regret from Definition 1. Notice that even though we evaluate the regret as a function of time T, the extreme regret is not cumulative and is more in the spirit of simple regret [5]. We compare our EXTREMEHUNTER with THRESHOLDASCENT [20]. Moreover, we also compare to classical UCB [3], as an example of an algorithm that aims for the arm with the highest mean, as opposed to the heaviest tail. When the distribution of a single arm has both the highest mean and the heaviest tail, both EXTREMEHUNTER and UCB are expected to perform the same with respect to the extreme regret. In light of Remark 2, we set b = 1 to consider a wide class of distributions.

Exact Pareto distributions. In the first experiment, we consider K = 3 arms with the distributions $P_k(x) = 1 - x^{-\alpha_k}$, where $\alpha = [5, 1.1, 2]$. Therefore, the most heavy-tailed distribution is associated with the arm k = 2. Figure 1 (left) displays the averaged result of 1000 simulations with the time horizon $T = 10^4$. We observe that EXTREMEHUNTER eventually keeps allocating most of the pulls to the arm of interest. Since in this case the arm with the heaviest tail is also the arm with the largest mean, UCB also performs well and is even able to detect the best arm earlier. THRESHOLDASCENT, on the other hand, was not always able to allocate the pulls properly within $10^4$ steps. This may be due to the discretization of the rewards that this algorithm uses.

Approximate Pareto distributions. For the exact Pareto distributions, the smaller the tail index, the higher the mean, and even UCB obtains good performance. However, this is no longer necessarily the case for approximate Pareto distributions. For this purpose, we perform a second experiment where we mix an exact Pareto distribution with a Dirac distribution at 0. We consider K = 3 arms. Two of the arms follow the exact Pareto distributions with $\alpha_1 = 1.5$ and $\alpha_3 = 3$. The second arm is a mixture that puts weight 0.2 on the exact Pareto distribution with $\alpha_2 = 1.1$ and weight 0.8 on the Dirac distribution at 0. In this setting, the second arm is the most heavy-tailed, but the first arm has the largest mean. Figure 1 (middle) shows the result. We see that UCB performs worse, since it eventually focuses on the arm with the largest mean. THRESHOLDASCENT performs better than UCB, but not as well as EXTREMEHUNTER.
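The following sketch (ours, an illustrative reconstruction rather than the authors' code) sets up the two synthetic settings and makes the key contrast explicit: in the mixture setting, the arm with the heaviest tail (k = 2) is no longer the arm with the highest mean, which is why a mean-oriented algorithm such as UCB is misled.

```python
# Synthetic settings of Section 7 (illustrative reconstruction). It contrasts
# the mean and the tail of each arm: in the approximate Pareto setting, the
# highest-mean arm and the heaviest-tailed arm differ.
import numpy as np

rng = np.random.default_rng(3)
T = 10_000

def pareto(alpha, size):                 # standard Pareto samples, P(X > x) = x^{-alpha}
    return (1.0 - rng.random(size)) ** (-1.0 / alpha)

def pareto_mean(alpha):                  # E[X] = alpha / (alpha - 1) for alpha > 1
    return alpha / (alpha - 1.0)

# Setting 1: exact Pareto arms with alpha = [5, 1.1, 2].
# Setting 2: arm 2 becomes a mixture 0.2 * Pareto(1.1) + 0.8 * Dirac(0).
means_exact  = [pareto_mean(5.0), pareto_mean(1.1), pareto_mean(2.0)]
means_approx = [pareto_mean(1.5), 0.2 * pareto_mean(1.1), pareto_mean(3.0)]
print("exact Pareto means   :", means_exact)    # arm 2: heaviest tail AND largest mean
print("approx. Pareto means :", means_approx)   # arm 1: largest mean; arm 2: heaviest tail

# Extremes over a horizon of T draws per arm (the heavy tail usually dominates).
draws = [pareto(1.5, T),
         np.where(rng.random(T) < 0.2, pareto(1.1, T), 0.0),
         pareto(3.0, T)]
print("approx. Pareto maxima:", [float(a.max()) for a in draws])
```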
Computer network traffic data. In this experiment, we evaluate EXTREMEHUNTER on heavy-tailed network traffic data collected from user laptops in an enterprise environment [2]. The objective is to allocate the sampling capacity among the computer nodes (arms) in order to find the largest outbursts of network activity. This information then serves an IT department to further investigate the source of the extreme network traffic. For each arm, a sample at time t corresponds to the number of network activity events in 4 consecutive seconds. Specifically, the network events are the starting times of packet flows. In this experiment, we selected K = 5 laptops (arms) for which the recorded sequences were long enough. Figure 1 (right) shows that EXTREMEHUNTER again outperforms both THRESHOLDASCENT and UCB.

## Acknowledgements

We would like to thank John Mark Agosta and Jennifer Healey for the network traffic data. The research presented in this paper was supported by Intel Corporation, by the French Ministry of Higher Education and Research, and by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270327 (CompLACS).

## References

[1] Naoki Abe, Bianca Zadrozny, and John Langford. Outlier Detection by Active Learning. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 504-509, 2006.
[2] John Mark Agosta, Jaideep Chandrashekar, Mark Crovella, Nina Taft, and Daniel Ting. Mixture models of endhost network traffic. In Proceedings of IEEE INFOCOM, pages 225-229.
[3] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 47(2-3):235-256, 2002.
[4] Sébastien Bubeck, Nicolò Cesa-Bianchi, and Gábor Lugosi. Bandits With Heavy Tail. IEEE Transactions on Information Theory, 59(11):7711-7717, 2013.
[5] Sébastien Bubeck, Rémi Munos, and Gilles Stoltz. Pure Exploration in Multi-armed Bandits Problems. In Algorithmic Learning Theory, pages 23-37, 2009.
[6] Alexandra Carpentier and Arlene K. H. Kim. Adaptive and minimax optimal estimation of the tail coefficient. Statistica Sinica, 2014.
[7] Alexandra Carpentier and Arlene K. H. Kim. Honest and adaptive confidence interval for the tail coefficient in the Pareto model. Electronic Journal of Statistics, 2014.
[8] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys, 41(3):15:1-15:58, July 2009.
[9] Vincent A. Cicirello and Stephen F. Smith. The max k-armed bandit: A new model of exploration applied to search heuristic selection. In AAAI Conference on Artificial Intelligence, 2005.
[10] Laurens de Haan and Ana Ferreira. Extreme Value Theory: An Introduction. Springer Series in Operations Research and Financial Engineering. Springer, 2006.
[11] Ronald Aylmer Fisher and Leonard Henry Caleb Tippett. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Mathematical Proceedings of the Cambridge Philosophical Society, 24:180, 1928.
[12] Boris Gnedenko. Sur la distribution limite du terme maximum d'une série aléatoire. The Annals of Mathematics, 44(3):423-453, 1943.
[13] Peter Hall and Alan H. Welsh. Best Attainable Rates of Convergence for Estimates of Parameters of Regular Variation. The Annals of Statistics, 12(3):1079-1084, 1984.
[14] Tze L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[15] Keqin Liu and Qing Zhao. Dynamic Intrusion Detection in Resource-Constrained Cyber Networks. In IEEE International Symposium on Information Theory, 2012.
[16] Markos Markou and Sameer Singh. Novelty detection: a review, part 1: statistical approaches. Signal Processing, 83(12):2481-2497, 2003.
[17] Daniel B. Neill and Gregory F. Cooper. A multivariate Bayesian scan statistic for early event detection and characterization. Machine Learning, 79:261-282, 2010.
[18] Carey E. Priebe, John M. Conroy, David J. Marchette, and Youngser Park. Scan Statistics on Enron Graphs. Computational and Mathematical Organization Theory, 11:229-247, 2005.
[19] Ingo Steinwart, Don Hush, and Clint Scovel. A Classification Framework for Anomaly Detection. Journal of Machine Learning Research, 6:211-232, 2005.
[20] Matthew J. Streeter and Stephen F. Smith. A Simple Distribution-Free Approach to the Max k-Armed Bandit Problem. In Principles and Practice of Constraint Programming, volume 4204, pages 560-574, 2006.
[21] Matthew J. Streeter and Stephen F. Smith. An Asymptotically Optimal Algorithm for the Max k-Armed Bandit Problem. In AAAI Conference on Artificial Intelligence, pages 135-142, 2006.
[22] Ryan Turner, Zoubin Ghahramani, and Steven Bottone. Fast online anomaly detection using scan statistics. In IEEE Workshop on Machine Learning for Signal Processing, 2010.