# Active Learning from Imperfect Labelers

Songbai Yan, University of California, San Diego. yansongbai@eng.ucsd.edu
Kamalika Chaudhuri, University of California, San Diego. kamalika@cs.ucsd.edu
Tara Javidi, University of California, San Diego. tjavidi@eng.ucsd.edu

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Abstract. We study active learning where the labeler can not only return incorrect labels but also abstain from labeling. We consider different noise and abstention conditions of the labeler. We propose an algorithm that utilizes abstention responses, and analyze its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rates of the labeler. This algorithm is adaptive in the sense that it can automatically request fewer queries with a more informed or less noisy labeler. We couple our algorithm with lower bounds to show that under some technical conditions, it achieves nearly optimal query complexity.

1 Introduction

In active learning, the learner is given an input space X, a label space L, and a hypothesis class H such that one of the hypotheses in the class generates ground truth labels. Additionally, the learner has at its disposal a labeler to which it can pose interactive queries about the labels of examples in the input space. Note that the labeler may output a noisy version of the ground truth label (a flipped label). The goal of the learner is to learn a hypothesis in H which is close to the hypothesis that generates the ground truth labels.

There has been a significant amount of literature on active learning, both theoretical and practical. Previous theoretical work on active learning has mostly focused on the above basic setting [2, 4, 7, 10, 25] and has developed algorithms under a number of different models of label noise. A handful of exceptions include [3], which allows class conditional queries, [5], which allows requesting counterexamples to current version spaces, and [23, 26], where the learner has access to a strong labeler and one or more weak labelers.

In this paper, we consider a more general setting where, in addition to providing a possibly noisy label, the labeler can sometimes abstain from labeling. This scenario arises naturally in difficult labeling tasks and has been considered in computer vision by [11, 15]. Our goal in this paper is to investigate this problem from a foundational perspective, and explore what kinds of conditions are needed, and how an abstaining labeler can affect properties such as consistency and query complexity of active learning algorithms.

The setting of active learning with an abstaining noisy labeler was first considered by [24], who looked at learning binary threshold classifiers based on queries to a labeler whose abstention rate is higher closer to the decision boundary. They primarily looked at the case when the abstention rate at distance $\tau$ from the decision boundary is at most $1 - \Theta(\tau^{\alpha})$, and the rate of label flips at the same distance is at most $\frac{1}{2} - \Theta(\tau^{\beta})$; under these conditions, they provided an active learning algorithm that, given parameters $\alpha$ and $\beta$, outputs a classifier with error $\epsilon$ using $\tilde{O}(\epsilon^{-\alpha - 2\beta})$ queries to the labeler.

However, there are several limitations to this work. The primary limitation is that the parameters $\alpha$ and $\beta$ need to be known to the algorithm, which is not usually the case in practice.
A second major limitation is that even if the labeler has nice properties, such as abstention rates that increase sharply close to the boundary, their algorithm is unable to exploit these properties to reduce the number of queries. A third and final limitation is that their analysis only applies to one-dimensional thresholds, and not to more general decision boundaries.

In this work, we provide an algorithm which is able to exploit nice properties of the labeler. Our algorithm is statistically consistent under very mild conditions, namely when the abstention rate is non-decreasing as we get closer to the decision boundary. Under slightly stronger conditions as in [24], our algorithm has the same query complexity. However, if the abstention rate of the labeler increases strictly monotonically close to the decision boundary, then our algorithm adapts and does substantially better. It simply exploits the increasing abstention rate close to the decision boundary, and does not even have to rely on the noisy labels! Specifically, when applied to the case where the noise rate is at most $\frac{1}{2} - \Theta(\tau^{\beta})$ and the abstention rate is $1 - \Theta(\tau^{\alpha})$ at distance $\tau$ from the decision boundary, our algorithm can output a classifier with error $\epsilon$ based on only $\tilde{O}(\epsilon^{-\alpha})$ queries. An important property of our algorithm is that this improvement in query complexity is achieved in a completely adaptive manner; unlike previous work [24], our algorithm needs no information whatsoever on the abstention rates or rates of label noise. Thus our result also strengthens existing results on active learning from (non-abstaining) noisy labelers by providing an adaptive algorithm that achieves the same performance as [6] without knowledge of noise parameters.

We extend our algorithm so that it applies to any smooth d-dimensional decision boundary in a non-parametric setting, not just one-dimensional thresholds, and we complement it with lower bounds on the number of queries that need to be made to any labeler. Our lower bounds generalize the lower bounds in [24], and show that our upper bounds are nearly optimal. We also present an example which shows that at least a relaxed version of the monotonicity property is necessary to achieve this performance gain; if the abstention rate plateaus around the decision boundary, then our algorithm needs to query and rely on the noisy labels (resulting in higher query complexity) in order to find a hypothesis close to the one generating the ground truth labels.

1.1 Related work

There has been a considerable amount of work on active learning, most of which involves labelers that are not allowed to abstain. Theoretical work on this topic largely falls under two categories: the membership query model [6, 13, 18, 19], where the learner can request the label of any example in the instance space, and the PAC model, where the learner is given a large set of unlabeled examples from an underlying unlabeled data distribution, and can request labels of a subset of these examples. Our work, and also that of [24], builds on the membership query model.

There has also been a lot of work on active learning under different noise models. The problem is relatively easy when the labeler always provides the ground truth labels; see [8, 9, 12] for work in this setting in the PAC model, and [13] for the membership query model. Perhaps the simplest setting of label noise is random classification noise, where each label is flipped with a probability that is independent of the unlabeled instance.
[14] shows how to address this kind of noise in the PAC model by repeatedly querying an example until the learner is confident of its label; [18, 19] provide more sophisticated algorithms with better query complexities in the membership query model. A second setting is when the noise rate increases closer to the decision boundary; this setting has been studied under the membership query model by [6] and in the PAC model by [4, 10, 25]. A final setting is agnostic PAC learning, where a fixed but arbitrary fraction of labels may disagree with the label assigned by the optimal hypothesis in the hypothesis class. Active learning is known to be particularly difficult in this setting; however, algorithms and associated label complexity bounds have been provided by [1, 2, 4, 10, 12, 25], among others. Our work expands on the membership query model, and our abstention and noise models are related to a variant of the Tsybakov noise condition.

A setting similar to ours was considered by [6, 24]. [6] considers a non-abstaining labeler, and provides a near-optimal binary search style active learning algorithm; however, their algorithm is non-adaptive. [24] gives nearly matching lower and upper query complexity bounds for active learning with abstention feedback, but they only give a non-adaptive algorithm for learning one-dimensional thresholds, and only study the situation where the abstention rate is upper-bounded by a polynomial function. Besides [24], [11, 15] study active learning with abstention feedback in computer vision applications. However, these works are based on heuristics and do not provide any theoretical guarantees.

2 Settings

Notation. $\mathbb{1}[A]$ is the indicator function: $\mathbb{1}[A] = 1$ if $A$ is true, and 0 otherwise. For $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$ ($d > 1$), denote $(x_1, \ldots, x_{d-1})$ by $\tilde{x}$. Define $\ln x = \log_e x$, $\log x = \log_{4/3} x$, and $[\ln\ln]_+(x) = \ln \ln \max\{x, e^e\}$. We use $\tilde{O}$ and $\tilde{\Theta}$ to hide logarithmic factors in $1/\epsilon$ and $1/\delta$.

Definition. Suppose $\gamma \geq 1$. A function $g : [0,1]^{d-1} \to \mathbb{R}$ is $(K, \gamma)$-Hölder smooth if it is continuously differentiable up to the $\lfloor \gamma \rfloor$-th order, and for any $x, y \in [0,1]^{d-1}$,
$$\left| g(y) - \sum_{m=0}^{\lfloor \gamma \rfloor} \frac{g^{(m)}(x)}{m!} (y - x)^m \right| \leq K \|y - x\|^{\gamma}.$$
We denote this class of functions by $\Sigma(K, \gamma)$.

We consider active learning for binary classification. We are given an instance space $\mathcal{X} = [0,1]^d$ and a label space $\mathcal{L} = \{0, 1\}$. Each instance $x \in \mathcal{X}$ is assigned a label $l \in \{0, 1\}$ by an underlying function $h^* : \mathcal{X} \to \{0, 1\}$, unknown to the learning algorithm, in a hypothesis space $\mathcal{H}$ of interest. The learning algorithm has access to any $x \in \mathcal{X}$, but no access to their labels. Instead, it can only obtain label information through interactions with a labeler, whose relation to $h^*$ is to be specified later. The objective of the algorithm is to sequentially select the instances to query for label information and output a classifier $\hat{h}$ that is close to $h^*$ while making as few queries as possible.

We consider a non-parametric setting as in [6, 17] where the hypothesis space is the smooth boundary fragment class $\mathcal{H} = \{h_g(x) = \mathbb{1}[x_d > g(\tilde{x})] \mid g : [0,1]^{d-1} \to [0,1] \text{ is } (K, \gamma)\text{-Hölder smooth}\}$. In other words, the decision boundaries of classifiers in this class are epigraphs of smooth functions (see Figure 1 for an example). We assume $h^*(x) = \mathbb{1}[x_d > g^*(\tilde{x})] \in \mathcal{H}$. When $d = 1$, $\mathcal{H}$ reduces to the space of threshold functions $\{h_\theta(x) = \mathbb{1}[x > \theta] : \theta \in [0,1]\}$.

The performance of a classifier $h(x) = \mathbb{1}[x_d > g(\tilde{x})]$ is evaluated by the $L_1$ distance between the decision boundaries:
$$\|g - g^*\| = \int_{[0,1]^{d-1}} |g(\tilde{x}) - g^*(\tilde{x})| \, d\tilde{x}.$$
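To make the evaluation metric concrete, here is a minimal sketch (ours, not from the paper) of the boundary fragment classifier and a numerical approximation of the $L_1$ boundary distance for $d = 2$, using the example boundary of Figure 1; `g_hat` is a hypothetical estimate introduced only for illustration.

```python
# A minimal sketch (ours, not from the paper) of the boundary-fragment setup for d = 2.
import numpy as np

def classify(x1, x2, g):
    """The classifier h_g(x) = 1[x_2 > g(x_1)] induced by a boundary function g."""
    return int(x2 > g(x1))

def boundary_l1_distance(g, g_star, n_grid=100_000):
    """Approximate ||g - g*|| = integral over [0, 1] of |g - g*| by a grid average."""
    xs = np.linspace(0.0, 1.0, n_grid)
    return float(np.mean(np.abs(g(xs) - g_star(xs))))

g_star = lambda x1: (x1 - 0.4) ** 2 + 0.1   # the example boundary of Figure 1
g_hat = lambda x1: (x1 - 0.4) ** 2 + 0.12   # a hypothetical estimate of it

print(classify(0.4, 0.5, g_star))           # 1: the point lies above the boundary
print(boundary_l1_distance(g_hat, g_star))  # ~0.02: the L1 boundary distance
```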
The learning algorithm can only obtain label information by querying a labeler, who is allowed to abstain from labeling or return an incorrect label (flipping between 0 and 1). For each query $x \in [0,1]^d$, the labeler $L$ returns $y \in \mathcal{Y} = \{0, 1, \bot\}$ ($\bot$ means that the labeler abstains from providing a 0/1 label) according to some distribution $P_L(Y = y \mid X = x)$. When it is clear from the context, we will drop the subscript from $P_L(Y \mid X)$. Note that while the labeler can declare its indecision by outputting $\bot$, we do not allow classifiers in our hypothesis space to output $\bot$.

In our active learning setting, our goal is to output a boundary $\hat{g}$ that is close to $g^*$ while making as few interactive queries to the labeler as possible. In particular, we want to find an algorithm with low query complexity $\Lambda(\epsilon, \delta, A, L, g^*)$, defined as the minimum number of queries that Algorithm $A$, acting on samples with ground truth $g^*$, should make to a labeler $L$ to ensure that the output classifier $h_{\hat{g}}(x) = \mathbb{1}[x_d > \hat{g}(\tilde{x})]$ satisfies $\|\hat{g} - g^*\| = \int_{[0,1]^{d-1}} |\hat{g}(\tilde{x}) - g^*(\tilde{x})| \, d\tilde{x} \leq \epsilon$ with probability at least $1 - \delta$ over the responses of $L$.

2.1 Conditions

We now introduce three conditions on the response of the labeler, with increasing strictness. Later we will provide an algorithm whose query complexity improves with the increasing strictness of these conditions.

Condition 1. The response distribution of the labeler $P(Y \mid X)$ satisfies:
(abstention) For any $\tilde{x} \in [0,1]^{d-1}$ and $x_d, x'_d \in [0,1]$, if $|x_d - g^*(\tilde{x})| \leq |x'_d - g^*(\tilde{x})|$, then $P(\bot \mid (\tilde{x}, x_d)) \geq P(\bot \mid (\tilde{x}, x'_d))$;
(noise) For any $x \in [0,1]^d$, $P(Y = \mathbb{1}[x_d > g^*(\tilde{x})] \mid x, Y \neq \bot) \geq \frac{1}{2}$.

Condition 1 means that the closer $x$ is to the decision boundary $(\tilde{x}, g^*(\tilde{x}))$, the more likely the labeler is to abstain from labeling. This complies with the intuition that instances closer to the decision boundary are harder to classify. We also allow the 0/1 labels to be flipped with probability as large as $\frac{1}{2}$; in other words, we allow unbounded noise.

Figure 1: A classifier with boundary $g(\tilde{x}) = (x_1 - 0.4)^2 + 0.1$ for $d = 2$. Label 1 is assigned to the region above the boundary, and label 0 to the region below (shaded).

Figure 2: Response distributions $P(Y = \bot \mid x)$, $P(Y = 1 \mid x)$, $P(Y = 0 \mid x)$ that satisfy Conditions 1 and 2, but whose abstention feedback is useless since $P(\bot \mid x)$ is flat between $x = 0.2$ and $x = 0.4$.

Figure 3: Response distributions that satisfy Conditions 1, 2, and 3.

Condition 2. Let $C$ and $\beta$ be non-negative constants, and $f : [0,1] \to [0,1]$ be a nondecreasing function. The response distribution $P(Y \mid X)$ satisfies:
(abstention) $P(\bot \mid x) \leq 1 - f(|x_d - g^*(\tilde{x})|)$;
(noise) $P(Y \neq \mathbb{1}[x_d > g^*(\tilde{x})] \mid x, Y \neq \bot) \leq \frac{1}{2}\left(1 - C\,|x_d - g^*(\tilde{x})|^{\beta}\right)$.

Condition 2 requires the abstention and noise probabilities to be upper-bounded, and these upper bounds decrease as $x$ moves away from the decision boundary. The abstention rate can be 1 at the decision boundary, so the labeler may always abstain there. The condition on the noise follows the popular Tsybakov noise condition [22].

Condition 3. Let $f : [0,1] \to [0,1]$ be a nondecreasing function such that for some constant $0 < c < 1$, $\frac{f(a-b)}{f(a)} \leq 1 - c\,\frac{b}{a}$ for all $0 < a \leq 1$ and $0 \leq b \leq \frac{a}{2}$. The response distribution satisfies: $P(\bot \mid x) = 1 - f(|x_d - g^*(\tilde{x})|)$.

An example where Condition 3 holds is $P(\bot \mid x) = 1 - |x - 0.3|^{\alpha}$ ($\alpha > 0$). Condition 3 requires the abstention rate to increase monotonically close to the decision boundary, as in Condition 1. In addition, it requires the abstention probability $P(\bot \mid (\tilde{x}, x_d))$ not to be too flat with respect to $x_d$.
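For intuition, the following sketch (our own; the parameter choices are illustrative, not the paper's) simulates a $d = 1$ labeler whose abstention rate $1 - \sqrt{|x - \theta^*|}$ matches the Figure 3 example and whose flip rate obeys the bound of Condition 2.

```python
# A hypothetical d = 1 labeler satisfying Conditions 1-3 (illustrative parameters, not the paper's).
import random

ABSTAIN = -1  # encoding of the abstention response

def labeler(x, theta_star=0.3, C=0.8, beta=1.0):
    """Return 1, 0, or ABSTAIN for a query at x in [0, 1]."""
    dist = abs(x - theta_star)
    # Condition 3: P(abstain | x) = 1 - f(dist) with f(a) = sqrt(a); f is nondecreasing
    # and f(a - b)/f(a) = sqrt(1 - b/a) <= 1 - b/(2a), as the condition requires.
    if random.random() < 1.0 - dist ** 0.5:
        return ABSTAIN
    # Condition 2: given no abstention, the label is flipped with probability at most
    # (1/2)(1 - C * dist^beta); the noise is unbounded at theta_star itself.
    truth = int(x > theta_star)
    flip_prob = 0.5 * (1.0 - C * dist ** beta)
    return truth if random.random() >= flip_prob else 1 - truth
```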
For example, when $d = 1$, $P(\bot \mid x) = 0.68$ for $0.2 \leq x \leq 0.4$ (shown in Figure 2) does not satisfy Condition 3, and its abstention responses are not informative, since this abstention rate alone yields no information on the location of the decision boundary. In contrast, $P(\bot \mid x) = 1 - \sqrt{|x - 0.3|}$ (shown in Figure 3) satisfies Condition 3, and the learner can infer that it is getting close to the decision boundary when it starts receiving more abstention responses.

Note that here $c$, $f$, $C$, and $\beta$ are unknown and arbitrary parameters that characterize the complexity of the learning task. We want to design an algorithm that does not require knowledge of these parameters but still achieves nearly optimal query complexity.

3 Learning one-dimensional thresholds

In this section, we start with the one-dimensional case ($d = 1$) to demonstrate the main idea. We will generalize these results to a multidimensional instance space in the next section. When $d = 1$, the decision boundary $g^*$ becomes a point in $[0,1]$, and the corresponding classifier is a threshold function over $[0,1]$. In other words, the hypothesis space becomes $\mathcal{H} = \{h_\theta(x) = \mathbb{1}[x > \theta] : \theta \in [0,1]\}$. We denote the ground truth decision boundary by $\theta^* \in [0,1]$. We want to find a $\hat{\theta} \in [0,1]$ such that $|\hat{\theta} - \theta^*|$ is small while making as few queries as possible.

3.1 Algorithm

The proposed algorithm is a binary search style algorithm, shown as Algorithm 1. (For the sake of simplicity, we assume $\log \frac{1}{2\epsilon}$ is an integer.)

Algorithm 1 The active learning algorithm for learning thresholds
1: Input: $\delta$, $\epsilon$
2: $[L_0, R_0] \leftarrow [0, 1]$
3: for $k = 0, 1, 2, \ldots, \log\frac{1}{2\epsilon} - 1$ do
4: Define three quartiles: $U_k \leftarrow \frac{3L_k + R_k}{4}$, $M_k \leftarrow \frac{L_k + R_k}{2}$, $V_k \leftarrow \frac{L_k + 3R_k}{4}$
5: $A^{(u)}, A^{(m)}, A^{(v)}, B^{(u)}, B^{(v)} \leftarrow$ empty arrays
6: for $n = 1, 2, \ldots$ do
7: Query at $U_k$, $M_k$, $V_k$, and receive responses $X_n^{(u)}, X_n^{(m)}, X_n^{(v)}$
8: for $w \in \{u, m, v\}$ do
9: // Record whether $X^{(w)} \neq \bot$ in $A^{(w)}$, and the 0/1 label (as $-1$/$1$) in $B^{(w)}$ if $X^{(w)} \neq \bot$
10: if $X^{(w)} \neq \bot$ then
11: $A^{(w)}$.append(1), $B^{(w)}$.append($2 \cdot \mathbb{1}[X^{(w)} = 1] - 1$)
12: else
13: $A^{(w)}$.append(0)
14: end if
15: end for
16: // Check whether the differences of abstention responses are statistically significant
17: if CheckSignificant-Var($\{A_i^{(u)} - A_i^{(m)}\}_{i=1}^{n}$, $\frac{\delta}{4\log\frac{1}{2\epsilon}}$) then
18: $[L_{k+1}, R_{k+1}] \leftarrow [U_k, R_k]$; break
19: else if CheckSignificant-Var($\{A_i^{(v)} - A_i^{(m)}\}_{i=1}^{n}$, $\frac{\delta}{4\log\frac{1}{2\epsilon}}$) then
20: $[L_{k+1}, R_{k+1}] \leftarrow [L_k, V_k]$; break
21: end if
22: // Check whether the differences between 0 and 1 labels are statistically significant
23: if CheckSignificant($\{-B_i^{(u)}\}_{i=1}^{B^{(u)}.\mathrm{length}}$, $\frac{\delta}{4\log\frac{1}{2\epsilon}}$) then
24: $[L_{k+1}, R_{k+1}] \leftarrow [U_k, R_k]$; break
25: else if CheckSignificant($\{B_i^{(v)}\}_{i=1}^{B^{(v)}.\mathrm{length}}$, $\frac{\delta}{4\log\frac{1}{2\epsilon}}$) then
26: $[L_{k+1}, R_{k+1}] \leftarrow [L_k, V_k]$; break
27: end if
28: end for
29: end for
30: Output: $\hat{\theta} \leftarrow \left(L_{\log\frac{1}{2\epsilon}} + R_{\log\frac{1}{2\epsilon}}\right)/2$

Algorithm 1 takes a desired precision $\epsilon$ and confidence level $\delta$ as its input, and returns an estimate $\hat{\theta}$ of the decision boundary $\theta^*$. The algorithm maintains an interval $[L_k, R_k]$ in which $\theta^*$ is believed to lie, and shrinks this interval iteratively. To find the subinterval that contains $\theta^*$, Algorithm 1 relies on two auxiliary functions (given in Procedure 2) to conduct adaptive sequential hypothesis tests regarding subintervals of the interval $[L_k, R_k]$.

Suppose $\theta^* \in [L_k, R_k]$. Algorithm 1 tries to shrink this interval to $\frac{3}{4}$ of its length in each iteration by repetitively querying on the quartiles $U_k = \frac{3L_k + R_k}{4}$, $M_k = \frac{L_k + R_k}{2}$, $V_k = \frac{L_k + 3R_k}{4}$. To determine which specific subinterval to choose, the algorithm uses 0/1 labels and abstention responses simultaneously.
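The following sketch (ours; simplified, and not the paper's exact pseudocode) mirrors the structure of Algorithm 1. The two significance tests are passed in as functions; they are sketched under Procedure 2 below, and the labeler can be any function returning 0, 1, or an abstention, such as the simulated one above.

```python
# A sketch of Algorithm 1's interval-shrinking loop (simplified from the pseudocode above).
import math

def learn_threshold(labeler, eps, delta, check_significant, check_significant_var, ABSTAIN=-1):
    L, R = 0.0, 1.0
    n_rounds = math.ceil(math.log(1.0 / (2.0 * eps), 4.0 / 3.0))  # each round shrinks [L, R] to 3/4
    level = delta / (4.0 * n_rounds)
    for _ in range(n_rounds):
        U, M, V = (3 * L + R) / 4, (L + R) / 2, (L + 3 * R) / 4
        A = {w: [] for w in "umv"}  # 1 if a 0/1 label was returned (non-abstention), else 0
        B = {w: [] for w in "umv"}  # returned labels, recoded as -1 / +1
        while True:  # under the conditions, this inner loop ends with high probability
            for w, point in zip("umv", (U, M, V)):
                y = labeler(point)
                A[w].append(0 if y == ABSTAIN else 1)
                if y != ABSTAIN:
                    B[w].append(2 * y - 1)
            # More non-abstentions at U (resp. V) than at M means theta* is closer to M.
            if check_significant_var([a - b for a, b in zip(A["u"], A["m"])], level):
                L = U; break
            if check_significant_var([a - b for a, b in zip(A["v"], A["m"])], level):
                R = V; break
            # Labels at U significantly 0 means theta* > U; labels at V significantly 1 means theta* < V.
            if check_significant([-b for b in B["u"]], level):
                L = U; break
            if check_significant(B["v"], level):
                R = V; break
    return (L + R) / 2
```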
Since the ground truth labels are determined by $\mathbb{1}[x > \theta^*]$, one can infer that if the number of queries that return label 0 at $U_k$ ($V_k$) is statistically significantly more (less) than the number that return label 1, then $\theta^*$ should be on the right (left) side of $U_k$ ($V_k$). Similarly, by Condition 1, if the number of non-abstention responses at $U_k$ ($V_k$) is statistically significantly more than the number of non-abstention responses at $M_k$, then $\theta^*$ should be closer to $M_k$ than to $U_k$ ($V_k$).

Algorithm 1 relies on the ability to shrink the search interval by statistically comparing the numbers of obtained labels at the locations $U_k$, $M_k$, $V_k$. As a result, a main building block of Algorithm 1 is to test whether i.i.d. bounded random variables $Y_i$ are greater in expectation than i.i.d. bounded random variables $Z_i$, with statistical significance. In Procedure 2, we have two test functions, CheckSignificant and CheckSignificant-Var, that take i.i.d. random variables $\{X_i = Y_i - Z_i\}$ ($|X_i| \leq 1$) and a confidence level $\delta$ as input, and output whether it is statistically significant to conclude $\mathbb{E}X_i > 0$.

Procedure 2 Adaptive sequential testing
1: // $D_0$, $D_1$ are absolute constants defined in Proposition 1 and Proposition 2
2: // $\{X_i\}$ are i.i.d. random variables bounded by 1; $\delta$ is the confidence level; detect whether $\mathbb{E}X > 0$
3: function CheckSignificant($\{X_i\}_{i=1}^n$, $\delta$)
4: $p(n, \delta) \leftarrow D_0 \left(1 + \ln\frac{1}{\delta} + \sqrt{4n\left([\ln\ln]_+(4n) + \ln\frac{1}{\delta}\right)}\right)$
5: return $\sum_{i=1}^{n} X_i \geq p(n, \delta)$
6: end function
7: function CheckSignificant-Var($\{X_i\}_{i=1}^n$, $\delta$)
8: Calculate the empirical variance $\widehat{\mathrm{Var}} \leftarrow \frac{n}{n-1}\left(\frac{1}{n}\sum_{i=1}^{n} X_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)^2\right)$
9: $q(n, \widehat{\mathrm{Var}}, \delta) \leftarrow D_1 \left(1 + \ln\frac{1}{\delta} + \sqrt{\left(\widehat{\mathrm{Var}} + \ln\frac{1}{\delta} + 1\right)\left([\ln\ln]_+\left(\widehat{\mathrm{Var}} + \ln\frac{1}{\delta} + 1\right) + \ln\frac{1}{\delta}\right)}\right)$
10: return $n \geq \ln\frac{1}{\delta}$ AND $\sum_{i=1}^{n} X_i \geq q(n, \widehat{\mathrm{Var}}, \delta)$
11: end function

CheckSignificant is based on the following uniform concentration result for the empirical mean:

Proposition 1. Suppose $X_1, X_2, \ldots$ is a sequence of i.i.d. random variables with $X_1 \in [-2, 2]$ and $\mathbb{E}X_1 = 0$. Take any $0 < \delta < 1$. Then there is an absolute constant $D_0$ such that with probability at least $1 - \delta$, for all $n > 0$ simultaneously,
$$\left|\sum_{i=1}^{n} X_i\right| \leq D_0 \left(1 + \ln\frac{1}{\delta} + \sqrt{4n\left([\ln\ln]_+(4n) + \ln\frac{1}{\delta}\right)}\right).$$

In Algorithm 1, we use CheckSignificant to detect whether the expected number of queries that return label 0 at location $U_k$ ($V_k$) exceeds the expected number that return label 1, with statistical significance.

CheckSignificant-Var is based on the following uniform concentration result, which further utilizes the empirical variance $V_n = \frac{n}{n-1}\left(\frac{1}{n}\sum_{i=1}^{n} X_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)^2\right)$:

Proposition 2. There is an absolute constant $D_1$ such that with probability at least $1 - \delta$, for all $n \geq \ln\frac{1}{\delta}$ simultaneously,
$$\left|\sum_{i=1}^{n} X_i\right| \leq D_1 \left(1 + \ln\frac{1}{\delta} + \sqrt{\left(V_n + \ln\frac{1}{\delta} + 1\right)\left([\ln\ln]_+\left(V_n + \ln\frac{1}{\delta} + 1\right) + \ln\frac{1}{\delta}\right)}\right).$$

The use of the variance results in a tighter bound when $\mathrm{Var}(X_i)$ is small. In Algorithm 1, we use CheckSignificant-Var to detect the statistical significance of the relative order of the number of non-abstention responses at $U_k$ ($V_k$) compared to the number of non-abstention responses at $M_k$. This results in a better query complexity than using CheckSignificant under Condition 3, since the variance of the difference in abstention responses approaches 0 as the interval $[L_k, R_k]$ zooms in on $\theta^*$.¹

¹We do not apply CheckSignificant-Var to 0/1 labels because, unlike the difference between the numbers of abstention responses at $U_k$ ($V_k$) and $M_k$, the variance of the difference between the numbers of 0 and 1 labels stays above a positive constant.
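For concreteness, here is a sketch of the two tests in code. The absolute constants $D_0$ and $D_1$ come from Propositions 1 and 2 and are not specified in the text; the values below are illustrative placeholders, so this is a sketch rather than a calibrated implementation.

```python
# A sketch of Procedure 2. D0 and D1 are the absolute constants of Propositions 1 and 2;
# the values below are illustrative placeholders, not the paper's.
import math

D0, D1 = 1.0, 1.0

def lnln_plus(x):
    """[ln ln]_+(x) = ln ln max(x, e^e)."""
    return math.log(math.log(max(x, math.exp(math.e))))

def check_significant(xs, delta):
    """Detect E[X] > 0 from i.i.d. bounded samples, uniformly over n (Proposition 1)."""
    n = len(xs)
    p = D0 * (1 + math.log(1 / delta) + math.sqrt(4 * n * (lnln_plus(4 * n) + math.log(1 / delta))))
    return sum(xs) >= p

def check_significant_var(xs, delta):
    """Variance-adaptive version (Proposition 2); tighter when Var(X) is small."""
    n = len(xs)
    if n < math.log(1 / delta) or n < 2:
        return False
    s, s2 = sum(xs), sum(x * x for x in xs)
    var = (s2 - s * s / n) / (n - 1)  # empirical variance
    v = var + math.log(1 / delta) + 1
    q = D1 * (1 + math.log(1 / delta) + math.sqrt(v * (lnln_plus(v) + math.log(1 / delta))))
    return s >= q
```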
3.2 Analysis

For Algorithm 1 to be statistically consistent, we only need Condition 1.

Theorem 1. Let $\theta^*$ be the ground truth. If the labeler $L$ satisfies Condition 1 and Algorithm 1 stops and outputs $\hat{\theta}$, then $|\hat{\theta} - \theta^*| \leq \epsilon$ with probability at least $1 - \delta$.

Under the additional Conditions 2 and 3, we can derive upper bounds on the query complexity of our algorithm. (Recall that $f$ and $\beta$ are defined in Conditions 2 and 3.)

Theorem 2. Let $\theta^*$ be the ground truth, and $\hat{\theta}$ be the output of Algorithm 1. Under Conditions 1 and 2, with probability at least $1 - \delta$, Algorithm 1 makes at most $\tilde{O}\left(\frac{1}{f(\epsilon/2)\,\epsilon^{2\beta}}\right)$ queries.

Theorem 3. Let $\theta^*$ be the ground truth, and $\hat{\theta}$ be the output of Algorithm 1. Under Conditions 1 and 3, with probability at least $1 - \delta$, Algorithm 1 makes at most $\tilde{O}\left(\frac{1}{f(\epsilon/2)}\right)$ queries.

The query complexity given by Theorem 3 is independent of $\beta$, which determines the flipping rate, and is consequently smaller than the bound in Theorem 2. This improvement is due to the use of abstention responses, which become much more informative under Condition 3.

3.3 Lower Bounds

In this subsection, we give lower bounds on the query complexity in the one-dimensional case and establish the near optimality of Algorithm 1. We will give corresponding lower bounds for the high-dimensional case in the next section. The lower bound in [24] can be easily generalized to Condition 2:

Theorem 4 ([24]). There is a universal constant $\delta_0 \in (0, 1)$ and a labeler $L$ satisfying Conditions 1 and 2, such that for any active learning algorithm $A$, there is a $\theta^* \in [0,1]$ such that for small enough $\epsilon$, $\Lambda(\epsilon, \delta_0, A, L, \theta^*) \geq \Omega\left(\frac{1}{f(\epsilon)\,\epsilon^{2\beta}}\right)$.

Our query complexity bound for the algorithm (Theorem 3) is also almost tight under Conditions 1 and 3 with a polynomial abstention rate.

Theorem 5. There is a universal constant $\delta_0 \in (0, 1)$ and a labeler $L$ satisfying Conditions 1, 2, and 3 with $f(x) = Cx^{\alpha}$ ($C > 0$ and $0 < \alpha \leq 2$ are constants), such that for any active learning algorithm $A$, there is a $\theta^* \in [0,1]$ such that for small enough $\epsilon$, $\Lambda(\epsilon, \delta_0, A, L, \theta^*) \geq \Omega(\epsilon^{-\alpha})$.

3.4 Remarks

Our results confirm the intuition that learning with abstention is easier than learning with noisy labels. This is true because a noisy label might mislead the learning algorithm, but an abstention response never does. Our analysis shows, in particular, that if the labeler never abstains and outputs completely noisy labels with probability $1 - |x - \theta^*|^{\gamma}$ (i.e., $P(Y = \mathbb{1}[x > \theta^*] \mid x) \geq \frac{1}{2}\left(1 + |x - \theta^*|^{\gamma}\right)$), then the near-optimal query complexity of $\tilde{O}(\epsilon^{-2\gamma})$ is significantly larger than the near-optimal $\tilde{O}(\epsilon^{-\gamma})$ query complexity associated with a labeler who only abstains with probability $P(Y = \bot \mid x) = 1 - |x - \theta^*|^{\gamma}$ and never flips a label. More precisely, while in both cases the labeler outputs the same amount of corrupted labels, the query complexity of the abstention-only case is significantly smaller than that of the noise-only case.

Note that the queries made by Algorithm 1 are of two kinds: queries that return 0/1 labels, which are used by the CheckSignificant function, and queries that return abstentions, which are used by the CheckSignificant-Var function. Algorithm 1 stops querying when the responses of either kind become statistically significant. Under Condition 2, our proof actually shows that the optimal number of queries is dominated by the number of queries used by the CheckSignificant function. In other words, a simplified variant of Algorithm 1 which excludes the use of abstention feedback is near optimal. Similarly, under Condition 3, the optimal query complexity is dominated by the number of queries used by the CheckSignificant-Var function; hence the variant of Algorithm 1 which disregards 0/1 labels would be near optimal.
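As a sanity check of the pieces above, one can run the threshold-learner sketch against the simulated labeler (with $\theta^* = 0.3$); this is a toy experiment with our placeholder constants, not an experiment from the paper.

```python
# A hypothetical end-to-end run, combining the sketches above (illustrative constants).
theta_hat = learn_threshold(
    labeler, eps=0.01, delta=0.05,
    check_significant=check_significant,
    check_significant_var=check_significant_var,
)
print(abs(theta_hat - 0.3))  # with high probability, at most eps = 0.01
```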
4 The multidimensional case

We follow [6] to generalize the results from one-dimensional thresholds to the $d$-dimensional ($d > 1$) smooth boundary fragment class $\Sigma(K, \gamma)$.

Algorithm 3 The active learning algorithm for the smooth boundary fragment class
1: Input: $\delta$, $\epsilon$, $\gamma$
2: $M \leftarrow \Theta\left(\epsilon^{-1/\gamma}\right)$, $\mathcal{L} \leftarrow \left\{0, \frac{1}{M}, \ldots, \frac{M-1}{M}\right\}^{d-1}$
3: For each $l \in \mathcal{L}$, apply Algorithm 1 with parameters $(\epsilon, \delta/M^{d-1})$ to learn a threshold $\hat{g}_l$ that approximates $g^*(l)$
4: Partition the instance space into cells $\{I_q\}$ indexed by $q \in \left\{0, 1, \ldots, \frac{M}{\gamma} - 1\right\}^{d-1}$, where $I_q = \left[\frac{q_1 \gamma}{M}, \frac{(q_1 + 1)\gamma}{M}\right] \times \cdots \times \left[\frac{q_{d-1} \gamma}{M}, \frac{(q_{d-1} + 1)\gamma}{M}\right]$
5: For each cell $I_q$, perform a polynomial interpolation: $\hat{g}_q(\tilde{x}) = \sum_{l \in I_q \cap \mathcal{L}} \hat{g}_l \, Q_{q,l}(\tilde{x})$, where $Q_{q,l}(\tilde{x}) = \prod_{i=1}^{d-1} \prod_{j=0,\, j \neq M l_i - \gamma q_i}^{\gamma} \frac{x_i - (\gamma q_i + j)/M}{l_i - (\gamma q_i + j)/M}$
6: Output: $\hat{g}(\tilde{x}) = \sum_{q \in \{0, 1, \ldots, M/\gamma - 1\}^{d-1}} \hat{g}_q(\tilde{x}) \, \mathbb{1}[\tilde{x} \in I_q]$

4.1 Lower bounds

Theorem 6. There are universal constants $\delta_0 \in (0, 1)$ and $c_0 > 0$, and a labeler $L$ satisfying Conditions 1 and 2, such that for any active learning algorithm $A$, there is a $g^* \in \Sigma(K, \gamma)$ such that for small enough $\epsilon$, $\Lambda(\epsilon, \delta_0, A, L, g^*) \geq \Omega\left(\frac{1}{f(c_0\epsilon)\,\epsilon^{2\beta}}\,\epsilon^{-\frac{d-1}{\gamma}}\right)$.

Theorem 7. There is a universal constant $\delta_0 \in (0, 1)$ and a labeler $L$ satisfying Conditions 1, 2, and 3 with $f(x) = Cx^{\alpha}$ ($C > 0$ and $0 < \alpha \leq 2$ are constants), such that for any active learning algorithm $A$, there is a $g^* \in \Sigma(K, \gamma)$ such that for small enough $\epsilon$, $\Lambda(\epsilon, \delta_0, A, L, g^*) \geq \Omega\left(\epsilon^{-\alpha - \frac{d-1}{\gamma}}\right)$.

4.2 Algorithm and Analysis

Recall that the decision boundary of the smooth boundary fragment class can be seen as the epigraph of a smooth function $[0,1]^{d-1} \to [0,1]$. For $d > 1$, we can reduce the problem to the one-dimensional problem by discretizing the first $d - 1$ dimensions of the instance space and then performing a polynomial interpolation. The algorithm is shown as Algorithm 3, and a code sketch of this reduction follows the theorems below. For the sake of simplicity, we assume $\gamma$ and $M/\gamma$ in Algorithm 3 are integers. We have similar consistency guarantees and upper bounds as in the one-dimensional case.

Theorem 8. Let $g^*$ be the ground truth. If the labeler $L$ satisfies Condition 1 and Algorithm 3 stops and outputs $\hat{g}$, then $\|\hat{g} - g^*\| \leq \epsilon$ with probability at least $1 - \delta$.

Theorem 9. Let $g^*$ be the ground truth, and $\hat{g}$ be the output of Algorithm 3. Under Conditions 1 and 2, with probability at least $1 - \delta$, Algorithm 3 makes at most $\tilde{O}\left(\frac{d}{f(\epsilon/2)\,\epsilon^{2\beta}}\,\epsilon^{-\frac{d-1}{\gamma}}\right)$ queries.

Theorem 10. Let $g^*$ be the ground truth, and $\hat{g}$ be the output of Algorithm 3. Under Conditions 1 and 3, with probability at least $1 - \delta$, Algorithm 3 makes at most $\tilde{O}\left(\frac{d}{f(\epsilon/2)}\,\epsilon^{-\frac{d-1}{\gamma}}\right)$ queries.
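To illustrate the reduction, here is a minimal sketch for $d = 2$ (ours, with `labeler_at` and `learn_threshold_1d` as assumed helpers: the former restricts the labeler to a vertical line, and the latter is any one-dimensional learner, such as the Algorithm 1 sketch above).

```python
# A sketch (ours) of Algorithm 3's reduction for d = 2: discretize x_1, learn a threshold
# along each vertical line, then interpolate with the Lagrange basis Q_{q,l}.
import math

def learn_boundary(labeler_at, eps, delta, gamma, learn_threshold_1d):
    """labeler_at(x1) returns a one-dimensional labeler restricted to the line {x_1 = x1}."""
    M = math.ceil(eps ** (-1.0 / gamma))
    M = gamma * math.ceil(M / gamma)            # round up so that M / gamma is an integer
    grid = [l / M for l in range(M + 1)]        # grid of lines (M + 1 points in this sketch)
    g_hat = {l: learn_threshold_1d(labeler_at(l), eps, delta / len(grid)) for l in grid}

    def g(x1):
        # Locate the cell I_q containing x1, then interpolate over its gamma + 1 grid nodes.
        q = min(int(x1 * M / gamma), M // gamma - 1)
        nodes = [(gamma * q + j) / M for j in range(gamma + 1)]
        value = 0.0
        for l in nodes:
            basis = 1.0                          # the Lagrange basis polynomial Q_{q,l}(x1)
            for node in nodes:
                if node != l:
                    basis *= (x1 - node) / (l - node)
            value += g_hat[l] * basis
        return value

    return g
```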
Acknowledgments. We thank NSF for research support under IIS-1162581, CCF-1513883, and CNS-1329819.

References

[1] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
[2] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 65-72. ACM, 2006.
[3] Maria-Florina Balcan and Steve Hanneke. Robust interactive learning. In Proceedings of the 25th Conference on Learning Theory, 2012.
[4] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[5] Alina Beygelzimer, Daniel Hsu, John Langford, and Chicheng Zhang. Search improves label for active learning. arXiv preprint arXiv:1602.07265, 2016.
[6] Rui M. Castro and Robert D. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339-2353, 2008.
[7] Yuxin Chen, S. Hamed Hassani, Amin Karbasi, and Andreas Krause. Sequential information maximization: When is greedy near-optimal? In Proceedings of the 28th Conference on Learning Theory, pages 338-363, 2015.
[8] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning. Machine Learning, 15(2), 1994.
[9] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
[10] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS, 2007.
[11] Meng Fang and Xingquan Zhu. I don't know the label: Active learning with blind knowledge. In Pattern Recognition (ICPR), 2012 21st International Conference on, pages 2238-2241. IEEE, 2012.
[12] Steve Hanneke. Teaching dimension and the complexity of active learning. In Learning Theory, pages 66-81. Springer, 2007.
[13] Tibor Hegedűs. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, pages 108-117. ACM, 1995.
[14] M. Kääriäinen. Active learning in the non-realizable case. In ALT, 2006.
[15] Christoph Käding, Alexander Freytag, Erik Rodner, Paul Bodesheim, and Joachim Denzler. Active learning and discovery of object categories in the presence of unnameable instances. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 4343-4352. IEEE, 2015.
[16] Yuan-Chuan Li and Cheh-Chih Yeh. Some equivalent forms of Bernoulli's inequality: A survey. Applied Mathematics, 4(07):1070, 2013.
[17] Stanislav Minsker. Plug-in approach to active learning. Journal of Machine Learning Research, 13(Jan):67-90, 2012.
[18] Mohammad Naghshvar, Tara Javidi, and Kamalika Chaudhuri. Bayesian active learning with non-persistent noise. IEEE Transactions on Information Theory, 61(7):4080-4098, 2015.
[19] R. D. Nowak. The geometry of generalized binary search. IEEE Transactions on Information Theory, 57(12):7893-7906, 2011.
[20] Maxim Raginsky and Alexander Rakhlin. Lower bounds for passive and active learning. In Advances in Neural Information Processing Systems, pages 1026-1034, 2011.
[21] Aaditya Ramdas and Akshay Balsubramani. Sequential nonparametric testing with the law of the iterated logarithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2016.
[22] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135-166, 2004.
[23] Ruth Urner, Shai Ben-David, and Ohad Shamir. Learning from weak teachers. In International Conference on Artificial Intelligence and Statistics, pages 1252-1260, 2012.
[24] Songbai Yan, Kamalika Chaudhuri, and Tara Javidi. Active learning from noisy and abstention feedback. In Communication, Control, and Computing (Allerton), 2015 53rd Annual Allerton Conference on. IEEE, 2015.
[25] Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442-450, 2014.
[26] Chicheng Zhang and Kamalika Chaudhuri. Active learning from weak and strong labelers. In Advances in Neural Information Processing Systems, pages 703-711, 2015.