# Collaborative PAC Learning

- Avrim Blum, Toyota Technological Institute at Chicago, Chicago, IL 60637, avrim@ttic.edu
- Nika Haghtalab, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, nhaghtal@cs.cmu.edu
- Ariel D. Procaccia, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, arielpro@cs.cmu.edu
- Mingda Qiao, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China 100084, qmd14@mails.tsinghua.edu.cn

## Abstract

We consider a collaborative PAC learning model, in which k players attempt to learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to the ratio between the sample complexity of collaborative PAC learning and its non-collaborative (single-player) counterpart as the overhead. We design learning algorithms with O(ln(k)) and O(ln²(k)) overhead in the personalized and centralized variants of our model. This gives an exponential improvement upon the naïve algorithm that does not share information among players. We complement our upper bounds with an Ω(ln(k)) overhead lower bound, showing that our results are tight up to a logarithmic factor.

## 1 Introduction

According to Wikipedia, collaborative learning is a "situation in which two or more people learn ... something together", e.g., by "capitalizing on one another's resources" and "asking one another for information". Indeed, it seems self-evident that collaboration, and the sharing of information, can make learning more efficient. Our goal is to formalize this intuition and study its implications.

As an example, suppose k branches of a department store, which have sales data for different items in different locations, wish to collaborate on learning which items should be sold at each location. In this case, we would like to use the sales information across different branches to learn a good policy for each branch. Another example is given by k hospitals with different patient demographics, e.g., in terms of racial or socio-economic factors, which want to predict occurrence of a disease in patients. In addition to requiring a classifier that performs well on the population served by each hospital, it is natural to assume that all hospitals deploy a common classifier.

Motivated by these examples, we consider a model of collaborative PAC learning, in which k players attempt to learn the same underlying concept. We then ask how much information is needed for all players to simultaneously succeed in learning a desirable classifier. Specifically, we focus on the classic probably approximately correct (PAC) setting of Valiant [14], where there is an unknown target function f* ∈ F. We consider k players with distributions D_1, ..., D_k that are labeled according to f*. Our goal is to learn f* up to an error of ϵ on each and every player distribution while requiring only a small number of samples overall.

A natural but naïve algorithm that forgoes collaboration between players can achieve our objective by taking, from each player distribution, a number of samples that is sufficient for learning the individual task, and then training a classifier over all samples. Such an algorithm uses k times as many samples as needed for learning an individual task; we say that this algorithm incurs O(k) overhead in sample complexity.
By contrast, we are interested in algorithms that take advantage of the collaborative environment, learn k tasks by sharing information, and incur o(k) overhead in sample complexity.

We study two variants of the aforementioned model: personalized and centralized. In the personalized setting (as in the department store example), we allow the learning algorithm to return different functions for different players. That is, our goal is to return classifiers f_1, ..., f_k that have error of at most ϵ on player distributions D_1, ..., D_k, respectively. In the centralized setting (as in the hospital example), the learning algorithm is required to return a single classifier f that has an error of at most ϵ on all player distributions D_1, ..., D_k. Our results provide upper and lower bounds on the sample complexity overhead required for learning in both settings.

### 1.1 Overview of Results

In Section 3, we provide algorithms for personalized and centralized collaborative learning that obtain exponential improvements over the sample complexity of the naïve approach. In Theorem 3.1, we introduce an algorithm for the personalized setting that has O(ln(k)) overhead in sample complexity. For the centralized setting, in Theorem 3.2, we develop an algorithm that has O(ln²(k)) overhead in sample complexity. At a high level, the latter algorithm first learns a series of functions on adaptively chosen mixtures of player distributions. These mixtures are chosen such that, for any player, a large majority of the functions perform well. This allows us to combine all functions into one classifier that performs well on every player distribution. Our algorithm is an improper learning algorithm, as the combination of these functions may not belong to F.

In Section 4, we present lower bounds on the sample complexity of collaborative PAC learning for the personalized and centralized variants. In particular, in Theorem 4.1 we show that any algorithm that learns in the collaborative setting requires Ω(ln(k)) overhead in sample complexity. This shows that our upper bound for the personalized setting, as stated in Theorem 3.1, is tight. Furthermore, in Theorem 4.5, we show that obtaining uniform convergence across F over all k player distributions requires Ω(k) overhead in sample complexity. Interestingly, our centralized algorithm (Theorem 3.2) bypasses this lower bound by using arguments that do not depend on uniform convergence. Indeed, this can be seen from the fact that it is an improper learning algorithm.

In Appendix D, we discuss the extension of our results to the non-realizable setting. Specifically, we consider a setting where there is a good but not perfect target function f* ∈ F that has a small error with respect to every player distribution, and prove that our upper bounds carry over.

### 1.2 Related Work

Related work in computational and statistical learning has examined some aspects of the general problem of learning multiple related tasks simultaneously. Below we discuss papers on multi-task learning [4, 3, 7, 5, 10, 13], domain adaptation [11, 12, 6], and distributed learning [2, 8, 15], which are most closely related.

Multi-task learning considers the problem of learning multiple tasks in series or in parallel. In this space, Baxter [4] studied the problem of model selection for learning multiple related tasks. In their work, each learning task is itself randomly drawn from a distribution over related tasks, and the learner's goal is to find a hypothesis space that is appropriate for learning all tasks.
Ben-David and Schuller [5] also studied the sample complexity of learning multiple related tasks. However, in their work similarity between two tasks is represented by the existence of transfer functions through which the underlying distributions are related.

Mansour et al. [11, 12] consider a multi-source domain adaptation problem, where the learner is given k distributions and k corresponding predictors that have error at most ϵ on the individual distributions. The goal of the learner is to combine these predictors to obtain error of kϵ on any unknown mixture of the player distributions. Our work is incomparable to this line of work, as our goal is to learn classifiers, rather than combine existing ones, and our benchmark is to obtain error ϵ on each individual distribution. Indeed, in our setting one can learn a hypothesis that has error kϵ on any mixture of the players with no overhead in sample complexity.

Distributed learning [2, 8, 15] also considers the problem of learning from k different distributions simultaneously. However, the main objective in this space is to learn with limited communication between the players, rather than with low sample complexity.

## 2 Model

Let X be an instance space and Y = {0, 1} be the set of labels. A hypothesis is a function f : X → Y that maps any instance x ∈ X to a label y ∈ Y. We consider a hypothesis class F with VC dimension d. Given a distribution D over X × Y, the error of a hypothesis f is defined as err_D(f) = Pr_{(x,y)∼D}[f(x) ≠ y].

In the collaborative learning setting, we consider k players with distributions D_1, ..., D_k over X × Y. We focus on the realizable setting, where all players' distributions are labeled according to a common target function f* ∈ F, i.e., err_{D_i}(f*) = 0 for all i ∈ [k] (but see Appendix D for an extension to the non-realizable setting). We represent an instance of the collaborative PAC learning setting by the 3-tuple (F, f*, {D_i}_{i∈[k]}).

Our goal is to learn a good classifier with respect to every player distribution. We call this (ϵ, δ)-learning in the collaborative PAC setting, and study two variants: the personalized setting and the centralized setting. In the personalized setting, our goal is to learn functions f_1, ..., f_k such that, with probability 1 − δ, err_{D_i}(f_i) ≤ ϵ for all i ∈ [k]. In the centralized setting, we require all the output functions to be identical. Put another way, our goal is to return a single f such that, with probability 1 − δ, err_{D_i}(f) ≤ ϵ for all i ∈ [k]. In both settings, we allow our algorithm to be improper, that is, the learned functions need not belong to F.

We compare the sample complexity of our algorithms to their PAC counterparts in the realizable setting. In the traditional realizable PAC setting, m_{ϵ,δ} denotes the number of samples needed for (ϵ, δ)-learning F. That is, m_{ϵ,δ} is the total number of samples drawn from a realizable distribution D such that, with probability 1 − δ, any classifier f ∈ F that is consistent with the sample set satisfies err_D(f) ≤ ϵ. We denote by O_F(·) the oracle that, for any set S of labeled samples, returns a function f ∈ F that is consistent with S if such a function exists (and outputs "none" otherwise). It is well known that sampling a set S of size m_{ϵ,δ} = O((1/ϵ)(d ln(1/ϵ) + ln(1/δ))), and applying O_F(S), is sufficient for (ϵ, δ)-learning a hypothesis class F of VC dimension d [1].

We refer to the ratio of the sample complexity of an algorithm in the collaborative PAC setting to that of the (non-collaborative) realizable PAC setting as the overhead. For ease of exposition, we only consider the dependence of the overhead on the parameters k, d, and ϵ.
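To make the notation concrete, here is a minimal Python sketch (our own illustration, not code from the paper) of the single-player ingredients that the algorithm sketches below reuse: an empirical-error routine, a brute-force stand-in for the consistency oracle O_F over a finite hypothesis class, and the standard bound on m_{ϵ,δ} with an unspecified constant. A real implementation would replace the brute-force search with an ERM routine for the class F.

```python
import math
from typing import Callable, Hashable, Iterable, Optional, Sequence, Tuple

Example = Tuple[Hashable, int]  # (instance x, label y in {0, 1})


def empirical_error(f: Callable[[Hashable], int], samples: Sequence[Example]) -> float:
    """err_S(f): fraction of labeled samples that f misclassifies."""
    if not samples:
        return 0.0
    return sum(1 for x, y in samples if f(x) != y) / len(samples)


def consistent_oracle(hypotheses: Iterable[Callable[[Hashable], int]],
                      samples: Sequence[Example]) -> Optional[Callable[[Hashable], int]]:
    """Brute-force stand-in for O_F: return any hypothesis consistent with the samples."""
    for f in hypotheses:
        if all(f(x) == y for x, y in samples):
            return f
    return None  # "none": no consistent hypothesis exists


def single_player_sample_size(eps: float, delta: float, d: int, const: float = 4.0) -> int:
    """m_{eps,delta} = O((1/eps)(d ln(1/eps) + ln(1/delta))); `const` is an unspecified constant."""
    return math.ceil(const / eps * (d * math.log(1 / eps) + math.log(1 / delta)))
```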
## 3 Sample Complexity Upper Bounds

In this section, we prove upper bounds on the sample complexity of (ϵ, δ)-learning in the collaborative PAC setting. We begin by providing a simple algorithm with O(ln(k)) overhead (in terms of sample complexity, see Section 2) for the personalized setting. We then design and analyze an algorithm for the centralized setting with O(ln²(k)) overhead, following a discussion of the additional challenges that arise in this setting.

### 3.1 Personalized Setting

The idea underlying the algorithm for the personalized setting is quite intuitive: if we learn a classifier that is good on average over the players, then we have learned a classifier that is good for a large fraction of the players. Therefore, a large fraction of the players can be simultaneously satisfied by a single good global classifier. This process can be repeated until each player receives a good classifier.

In more detail, let us consider an algorithm that pools together a sample set of total size m_{ϵ/4,δ} from the uniform mixture D = (1/k) Σ_{i∈[k]} D_i over the individual player distributions, and finds f ∈ F that is consistent with this set. Clearly, with probability 1 − δ, f has a small error of ϵ/4 with respect to distribution D. However, we would like to understand how well f performs on each individual player's distribution. Since err_D(f) ≤ ϵ/4 is also the average error of f on the player distributions, with probability 1 − δ, f must have error of at most ϵ/2 on at least half of the players. Indeed, one can identify such players by taking additional O((1/ϵ) ln(k/(ϵδ))) samples from each player and asking whether the empirical error of f on these sample sets is at most 3ϵ/4. Using a variant of the VC theorem, it is not hard to see that for any player i such that err_{D_i}(f) ≤ ϵ/2, the empirical error of f is at most 3ϵ/4, and no player with empirical error at most 3ϵ/4 has true error worse than ϵ. Once the players with empirical error at most 3ϵ/4 are identified, one can output f_i = f for every such player, and repeat the procedure for the remaining players. After log(k) rounds, this process terminates with all players having received functions with error of at most ϵ on their respective distributions, with probability at least 1 − O(log(k) · δ).

We formalize the above discussion via Algorithm 1 and Theorem 3.1. For completeness, a more rigorous proof of the theorem is given in Appendix A.

Algorithm 1 PERSONALIZED LEARNING
    N_1 ← [k];  δ' ← δ / (2 log(k));
    for r = 1, ..., log(k) do
        D_r ← (1/|N_r|) Σ_{i∈N_r} D_i;
        let S be a sample of size m_{ϵ/4,δ'} drawn from D_r, and f^(r) ← O_F(S);
        G_r ← TEST(f^(r), N_r, ϵ, δ');
        N_{r+1} ← N_r \ G_r;
        for i ∈ G_r do f_i ← f^(r);
    end
    return f_1, ..., f_k

TEST(f, N, ϵ, δ):
    for i ∈ N do take a sample set T_i of size O((1/ϵ) ln(|N|/(ϵδ))) from D_i;
    return {i ∈ N | err_{T_i}(f) ≤ 3ϵ/4}

Theorem 3.1. For any ϵ, δ > 0 and hypothesis class F of VC dimension d, Algorithm 1 (ϵ, δ)-learns F in the personalized collaborative PAC setting using m samples, where
    m = O( (ln(k)/ϵ) · ( (d + k) ln(1/ϵ) + k ln(k/δ) ) ).

Note that Algorithm 1 has O(ln(k)) overhead when k = O(d).
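The following Python sketch mirrors the structure of Algorithm 1 under simplifying assumptions: each player distribution is represented by a sampler returning labeled examples, and the consistency oracle O_F and the single-player sample size m_{ϵ,δ} are passed in as black boxes (for instance, the stand-ins sketched in Section 2). All names and constants here are our own illustrations, not the paper's.

```python
import math
import random
from typing import Callable, Dict, Hashable, List, Sequence, Tuple

Example = Tuple[Hashable, int]
Sampler = Callable[[int], List[Example]]           # draws n labeled examples from one player
Learner = Callable[[Sequence[Example]], Callable]  # stand-in for the consistency oracle O_F


def test_players(f, players: Dict[int, Sampler], eps: float, delta: float) -> List[int]:
    """TEST(f, N, eps, delta): players whose empirical error on a fresh sample is at most 3*eps/4."""
    n = math.ceil(4 / eps * math.log(len(players) / (eps * delta)))  # O((1/eps) ln(|N|/(eps*delta)))
    good = []
    for i, draw in players.items():
        sample = draw(n)
        err = sum(1 for x, y in sample if f(x) != y) / len(sample)
        if err <= 0.75 * eps:
            good.append(i)
    return good


def personalized_learning(players: Dict[int, Sampler], learn: Learner,
                          eps: float, delta: float, m: Callable[[float, float], int]):
    """Sketch of Algorithm 1: repeatedly learn on the mixture of the remaining players."""
    k = len(players)
    rounds = max(1, math.ceil(math.log2(k)))
    delta_p = delta / (2 * rounds)
    remaining = set(players)
    answers: Dict[int, Callable] = {}
    for _ in range(rounds):
        if not remaining:
            break
        size = m(eps / 4, delta_p)
        # Sampling from the uniform mixture D_r: pick a remaining player uniformly per draw.
        mixture = [ex for _ in range(size)
                   for ex in players[random.choice(sorted(remaining))](1)]
        f_r = learn(mixture)
        good = test_players(f_r, {i: players[i] for i in remaining}, eps, delta_p)
        for i in good:
            answers[i] = f_r
        remaining -= set(good)
    return answers
```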
### 3.2 Centralized Setting

We next present a learning algorithm with O(ln²(k)) overhead in the centralized setting. Recall that our goal is to learn a single function f that has an error of ϵ on every player distribution, as opposed to the personalized setting where players can receive different functions.

A natural first attempt at learning in the centralized setting is to combine the classifiers f_1, ..., f_k that we learned in the personalized setting (Algorithm 1), say, through a weighted majority vote. One challenge with this approach is that, in general, it is possible that many of the functions f_j perform poorly on the distribution of a different player i. The reason is that when Algorithm 1 finds a suitable f^(r) for the players in G_r, it completely removes them from consideration in future rounds; subsequent functions may perform poorly with respect to the distributions of those players. Therefore, this approach may lead to a global classifier with large error on some player distributions. To overcome this problem, we instead design an algorithm that continues to take additional samples from players for whom we have already found suitable classifiers.

The key idea behind the centralized learning algorithm is to group the players at every round based on how many of the functions learned so far have large error on their distributions, and to learn from data sampled from all the groups simultaneously. This ensures that the function learned in each round performs well on a large fraction of the players in each group, thereby reducing the likelihood that in later stages of this process a player appears in a group for which a large fraction of the functions perform poorly.

In more detail, our algorithm learns t = Θ(ln(k)) classifiers f^(1), f^(2), ..., f^(t), such that for any player i ∈ [k], at least 0.6t functions among them achieve an error below ϵ' = ϵ/6 on D_i. The algorithm then returns the classifier maj({f^(r)}_{r=1}^t), where, for a set of hypotheses F', maj(F') denotes the classifier that, given x ∈ X, returns the label that the majority of the hypotheses in F' assign to x. Note that any instance that is mislabeled by this classifier must be mislabeled by at least 0.1t functions among the 0.6t good functions, i.e., 1/6 of the good functions. Hence, maj({f^(r)}_{r=1}^t) has an error of at most 6ϵ' = ϵ on each distribution D_i.

Throughout the algorithm, we keep track of counters α_i^(r) for every round r ∈ [t] and player i ∈ [k], which, roughly speaking, record the number of classifiers among f^(1), f^(2), ..., f^(r) that have an error of more than ϵ' on distribution D_i. To learn f^(r+1), we first group the distributions D_1, ..., D_k based on the values of α_i^(r), draw about m_{ϵ',δ} samples from the mixture of the distributions in each group, and return a function f^(r+1) that is consistent with all of the samples. Similarly to Section 3.1, one can show that f^(r+1) achieves O(ϵ') error with respect to a large fraction of the player distributions in each group. Consequently, the counters are increased, i.e., α_i^(r+1) > α_i^(r), only for a small fraction of the players. Finally, we show that with high probability, α_i^(t) ≤ 0.4t for every player i ∈ [k], i.e., on each distribution D_i, at least 0.6t functions achieve error of at most ϵ'.

The algorithm is formally described in Algorithm 2, and the next theorem states our sample complexity upper bound for the centralized setting.

Algorithm 2 CENTRALIZED LEARNING
    α_i^(0) ← 0 for each i ∈ [k];  t ← ⌈(5/2) log_{8/7}(k)⌉;  ϵ' ← ϵ/6;
    N_0^(0) ← [k];  N_c^(0) ← ∅ for each c ∈ [t];
    for r = 1, 2, ..., t do
        for c = 0, 1, ..., t − 1 do
            if N_c^(r−1) ≠ ∅ then
                draw a sample set S_c^(r) of size m_{ϵ'/16, δ/(2t²)} from D̃_c^(r−1) = (1/|N_c^(r−1)|) Σ_{i∈N_c^(r−1)} D_i;
            else S_c^(r) ← ∅;
        end
        f^(r) ← O_F(∪_{c=0}^{t−1} S_c^(r));
        G_r ← TEST(f^(r), [k], ϵ', δ/(2t));
        for i = 1, ..., k do α_i^(r) ← α_i^(r−1) + I[i ∉ G_r];
        for c = 0, ..., t do N_c^(r) ← {i ∈ [k] : α_i^(r) = c};
    end
    return maj({f^(r)}_{r=1}^t)
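Continuing the same sketch style, a minimal Python rendering of Algorithm 2 is below. It assumes the same sampler and oracle abstractions as the Algorithm 1 sketch (including a `test` routine in the spirit of TEST, e.g. `test_players` above) and keeps only the structure of the algorithm: the counters α_i, the grouping of players by counter value, learning on the pooled group samples, and the final majority vote. Function names and the way mixtures are sampled are our own simplifications, not the paper's.

```python
import math
import random
from collections import defaultdict
from typing import Callable, Dict, Hashable, List, Tuple

Example = Tuple[Hashable, int]
Sampler = Callable[[int], List[Example]]


def centralized_learning(players: Dict[int, Sampler], learn, test,
                         eps: float, delta: float, m: Callable[[float, float], int]):
    """Sketch of Algorithm 2: group players by how many learned functions failed them so far."""
    k = len(players)
    t = math.ceil(2.5 * math.log(k, 8 / 7)) if k > 1 else 1
    eps_p = eps / 6
    alpha = {i: 0 for i in players}          # alpha_i^(r): functions so far with large error on D_i
    learned: List[Callable] = []
    for _ in range(t):
        groups: Dict[int, List[int]] = defaultdict(list)
        for i, a in alpha.items():
            groups[a].append(i)              # N_c^(r-1): players whose counter equals c
        pooled: List[Example] = []
        for members in groups.values():      # draw from the mixture of each non-empty group
            size = m(eps_p / 16, delta / (2 * t * t))
            pooled += [ex for _ in range(size) for ex in players[random.choice(members)](1)]
        f_r = learn(pooled)                  # consistent with the samples from all groups
        learned.append(f_r)
        good = set(test(f_r, players, eps_p, delta / (2 * t)))
        for i in players:
            if i not in good:
                alpha[i] += 1

    def majority_vote(x: Hashable) -> int:
        votes = sum(f(x) for f in learned)
        return 1 if 2 * votes > len(learned) else 0

    return majority_vote
```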
Theorem 3.2. For any ϵ, δ > 0 and hypothesis class F of VC dimension d, Algorithm 2 (ϵ, δ)-learns F in the centralized collaborative PAC setting using m samples, where
    m = O( (ln²(k)/ϵ) · ( (d + k) ln(1/ϵ) + k ln(k/δ) ) ).
In particular, Algorithm 2 has O(ln²(k)) overhead when k = O(d).

Turning to the theorem's proof, note that in Algorithm 2, N_c^(r−1) represents the set of players for whom c out of the r − 1 functions learned so far have a large error, and D̃_c^(r−1) represents the mixture of the distributions of the players in N_c^(r−1). Moreover, G_r is the set of players for whom f^(r) has a small error. The following lemma, whose proof appears in Appendix B.1, shows that with high probability each function f^(r) has a small error on D̃_c^(r−1) for all c. Here and in the following, t stands for ⌈(5/2) log_{8/7}(k)⌉, as in Algorithm 2.

Lemma 3.3. With probability 1 − δ, the following two properties hold for all r ∈ [t]:
1. For any c ∈ {0, ..., t − 1} such that N_c^(r−1) is non-empty, err_{D̃_c^(r−1)}(f^(r)) ≤ ϵ'/16.
2. For any i ∈ G_r, err_{D_i}(f^(r)) ≤ ϵ', and for any i ∉ G_r, err_{D_i}(f^(r)) > ϵ'/2.

The next lemma gives an upper bound on |N_c^(r)|, the number of players for whom c out of the r learned functions have a large error.

Lemma 3.4. With probability 1 − δ, for any r, c ∈ {0, ..., t}, we have |N_c^(r)| ≤ (r choose c) · k/8^c.

Proof. Let n_{r,c} = |N_c^(r)| = |{i ∈ [k] : α_i^(r) = c}| be the number of players for whom exactly c functions among f^(1), ..., f^(r) do not have a small error. We note that n_{0,0} = k and n_{0,c} = 0 for c ∈ {1, ..., t}. The next technical claim, whose proof appears in Appendix B.2, asserts that to prove this lemma, it is sufficient to show that for any r ∈ {1, ..., t} and c ∈ {0, ..., t},
    n_{r,c} ≤ n_{r−1,c} + (1/8) n_{r−1,c−1}.
Here we adopt the convention that n_{r−1,−1} = 0.

Claim 3.5. Suppose that n_{0,0} = k, n_{0,c} = 0 for c ∈ {1, ..., t}, and n_{r,c} ≤ n_{r−1,c} + (1/8) n_{r−1,c−1} holds for any r ∈ {1, ..., t} and c ∈ {0, ..., t}. Then for any r, c ∈ {0, ..., t}, n_{r,c} ≤ (r choose c) · k/8^c.

By the definition of α_i^(r), N_c^(r), and n_{r,c}, we have
    n_{r,c} = |{i ∈ [k] : α_i^(r) = c}|
            ≤ |{i ∈ [k] : α_i^(r−1) = c}| + |{i ∈ [k] : α_i^(r−1) = c − 1 and i ∉ G_r}|
            = n_{r−1,c} + |N_{c−1}^(r−1) \ G_r|.
It remains to show that |N_{c−1}^(r−1) \ G_r| ≤ (1/8) n_{r−1,c−1}. Recall that D̃_{c−1}^(r−1) is the mixture of all the distributions in N_{c−1}^(r−1). By Lemma 3.3, with probability 1 − δ, err_{D̃_{c−1}^(r−1)}(f^(r)) ≤ ϵ'/16. Put another way,
    Σ_{i∈N_{c−1}^(r−1)} err_{D_i}(f^(r)) ≤ (ϵ'/16) · |N_{c−1}^(r−1)|.
Thus, at most (1/8)|N_{c−1}^(r−1)| players i ∈ N_{c−1}^(r−1) can have err_{D_i}(f^(r)) > ϵ'/2. Moreover, by Lemma 3.3, for any i ∉ G_r, we have err_{D_i}(f^(r)) > ϵ'/2. Therefore,
    |N_{c−1}^(r−1) \ G_r| ≤ |{i ∈ N_{c−1}^(r−1) : err_{D_i}(f^(r)) > ϵ'/2}| ≤ (1/8)|N_{c−1}^(r−1)| = (1/8) n_{r−1,c−1}.
This completes the proof.
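For intuition, the induction behind Claim 3.5 (its full proof is in Appendix B.2) only needs Pascal's rule; under the recurrence above:

```latex
\[
n_{r,c} \;\le\; n_{r-1,c} + \tfrac{1}{8}\, n_{r-1,c-1}
\;\le\; \binom{r-1}{c}\frac{k}{8^{c}} + \frac{1}{8}\binom{r-1}{c-1}\frac{k}{8^{c-1}}
\;=\; \left[\binom{r-1}{c} + \binom{r-1}{c-1}\right]\frac{k}{8^{c}}
\;=\; \binom{r}{c}\frac{k}{8^{c}},
\]
where the second inequality is the induction hypothesis for round $r-1$, and the base case holds since
$n_{0,0} = k = \binom{0}{0}\,k/8^{0}$ and $n_{0,c} = 0$ for $c \ge 1$.
```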
We now prove Theorem 3.2 using Lemma 3.4.

Proof of Theorem 3.2. We first show that, with high probability, for any i ∈ [k], at most 0.4t functions among f^(1), ..., f^(t) have error greater than ϵ', i.e., α_i^(t) < 0.4t for all i ∈ [k]. Note that by our choice of t = ⌈(5/2) log_{8/7}(k)⌉, we have (8/7)^{0.4t} ≥ k. By Lemma 3.4 and an upper bound on the binomial coefficients, with probability 1 − δ, for any integer c ∈ [0.4t, t],
    |N_c^(t)| ≤ (t choose c) · k/8^c < k/(8/7)^c ≤ 1,
which implies that N_c^(t) = ∅. Therefore, with probability 1 − δ, α_i^(t) < 0.4t for all i ∈ [k].

Next, we prove that f = maj({f^(r)}_{r=1}^t) has error at most ϵ on every player distribution. Consider the distribution D_i of player i. By definition, t − α_i^(t) functions have error at most ϵ' on D_i; we refer to these functions as good functions. Note that for any instance x that is mislabeled by f, at least 0.5t − α_i^(t) good functions must make a wrong prediction on x. Therefore,
    (t − α_i^(t)) · ϵ' ≥ (0.5t − α_i^(t)) · err_{D_i}(f).
Moreover, with probability 1 − δ, α_i^(t) < 0.4t for all i ∈ [k]. Hence,
    err_{D_i}(f) ≤ (t − α_i^(t)) / (0.5t − α_i^(t)) · ϵ' < 6ϵ' = ϵ
with probability 1 − δ. This proves that Algorithm 2 (ϵ, δ)-learns F in the centralized collaborative PAC setting.

Finally, we bound the sample complexity of Algorithm 2. Recall that t = Θ(ln(k)) and ϵ' = ϵ/6. At each iteration of Algorithm 2, we draw a total of at most t · m_{ϵ'/16, δ/(4t²)} samples from the t mixtures. Therefore, over the t iterations, we draw a total of
    t² · m_{ϵ'/16, δ/(4t²)} = O( (ln²(k)/ϵ) (d ln(1/ϵ) + ln(k/δ)) )
samples for learning f^(1), ..., f^(t). Moreover, the total number of samples requested by the subroutine TEST(f^(r), [k], ϵ', δ/(4t)) for r = 1, ..., t is O( (k ln(k)/ϵ) ln(k/(ϵδ)) ). We conclude that the total sample complexity is
    O( (ln²(k)/ϵ) ( (d + k) ln(1/ϵ) + k ln(k/δ) ) ).

We remark that Algorithm 2 is inspired by the classic boosting scheme. Indeed, an algorithm that is directly adapted from boosting attains a performance guarantee similar to that of Theorem 3.2. The algorithm assigns a uniform weight to each player, and learns a classifier with O(ϵ) error on the mixture distribution. Then, depending on whether the function achieves an O(ϵ) error on each distribution, the algorithm updates the players' weights, and learns the next classifier from the weighted mixture of all distributions. An analysis similar to that of AdaBoost [9] shows that the majority vote of all the classifiers learned over Θ(ln(k)) iterations of the above procedure achieves a small error on every distribution. Similarly to Algorithm 2, this algorithm achieves an O(ln²(k)) overhead for the centralized setting.

## 4 Sample Complexity Lower Bounds

In this section, we present lower bounds on the sample complexity of collaborative PAC learning. In Section 4.1, we show that any learning algorithm for the collaborative PAC setting incurs Ω(log(k)) overhead in terms of sample complexity. In Section 4.2, we consider the sample complexity required for obtaining uniform convergence across F in the collaborative PAC setting, and show that Ω(k) overhead is necessary to obtain such results.

### 4.1 Tight Lower Bound for the Personalized Setting

We now turn to establishing the Ω(log(k)) lower bound mentioned above. This lower bound implies the tightness of the O(log(k)) overhead upper bound obtained by Theorem 3.1 in the personalized setting. Moreover, the O(log²(k)) overhead obtained by Theorem 3.2 in the centralized setting is nearly tight, up to a log(k) multiplicative factor. Formally, we prove the following theorem.

Theorem 4.1. For any k ∈ ℕ, ϵ, δ ∈ (0, 0.1), and (ϵ, δ)-learning algorithm A in the collaborative PAC setting, there exists an instance with k players, and a hypothesis class of VC dimension k, on which A requires at least 3k ln[9k/(10δ)]/(20ϵ) samples in expectation.

Hard instance distribution. We show that for any k ∈ ℕ and ϵ, δ ∈ (0, 0.1), there is a distribution D_{k,ϵ} of hard instances, each with k players and a hypothesis class of VC dimension k, such that any (ϵ, δ)-learning algorithm A requires Ω(k log(k)/ϵ) samples in expectation on a random instance drawn from the distribution, even in the personalized setting. This directly implies Theorem 4.1, since A must take Ω(k log(k)/ϵ) samples on some instance in the support of D_{k,ϵ}. We define D_{k,ϵ} as follows:
- Instance space: X_k = {1, 2, ..., k, ⋆}.
- Hypothesis class: F_k is the collection of all binary functions on X_k that map ⋆ to 0.
- Target function: f* is chosen from F_k uniformly at random.
- Players' distributions: the distribution D_i of player i is either a degenerate distribution that assigns probability 1 to ⋆, or a Bernoulli distribution on {i, ⋆} with D_i(i) = 2ϵ and D_i(⋆) = 1 − 2ϵ. D_i is chosen from these two distributions independently and uniformly at random.
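As a concrete illustration of this construction (our own sketch, not code from the paper), one can sample an instance from D_{k,ϵ} as follows; the special point ⋆ is represented by the string "star", and the target f* is returned as a dictionary of labels compatible with the sampler abstraction used in the earlier sketches.

```python
import random
from typing import Dict, Hashable, List, Optional, Tuple

STAR = "star"  # stands for the special point that every hypothesis in F_k maps to 0


def draw_hard_instance(k: int, eps: float, rng: Optional[random.Random] = None):
    """Sample (target, samplers) from the hard instance distribution D_{k,eps}."""
    rng = rng or random.Random()
    # Target function f*: the label of each point i in [k] is a fair coin; star is always labeled 0.
    target: Dict[Hashable, int] = {STAR: 0}
    for i in range(1, k + 1):
        target[i] = rng.randint(0, 1)

    def make_sampler(i: int, degenerate: bool):
        def draw(n: int) -> List[Tuple[Hashable, int]]:
            out = []
            for _ in range(n):
                if degenerate or rng.random() >= 2 * eps:
                    out.append((STAR, 0))
                else:
                    out.append((i, target[i]))   # point i appears with probability 2*eps
            return out
        return draw

    # Each D_i is degenerate at star, or Bernoulli on {i, star}, chosen by a fair coin.
    samplers = {i: make_sampler(i, degenerate=bool(rng.randint(0, 1))) for i in range(1, k + 1)}
    return target, samplers
```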
Note that the VC dimension of F_k is k. Moreover, on any instance in the support of D_{k,ϵ}, learning in the personalized setting is equivalent to learning in the centralized setting. This is due to the fact that given functions f_1, f_2, ..., f_k for the personalized setting, where f_i is the function assigned to player i, we can combine these functions into a single function f ∈ F_k for the centralized setting by defining f(⋆) = 0 and f(i) = f_i(i) for all i ∈ [k]. Then, err_{D_i}(f) ≤ err_{D_i}(f_i) for all i ∈ [k].¹ Therefore, without loss of generality we focus below on the centralized setting.

¹ In fact, when f_i ∈ F_k, err_{D_i}(f) = err_{D_i}(f_i) for all i ∈ [k].

Lower bound for k = 1. As a building block in our proof of Theorem 4.1, we establish a lower bound for the special case of k = 1. For brevity, let D_ϵ denote the instance distribution D_{1,ϵ}. We say that A is an (ϵ, δ)-learning algorithm for the instance distribution D_ϵ if and only if on any instance in the support of D_ϵ, with probability at least 1 − δ, A outputs a function f with error below ϵ. The following lemma, proved in Appendix C, states that any (ϵ, δ)-learning algorithm for D_ϵ takes Ω(log(1/δ)/ϵ) samples on a random instance drawn from D_ϵ.²

² Here we only assume that A is correct for instances in the support of D_ϵ, rather than being correct on every instance.

Lemma 4.2. For any ϵ, δ ∈ (0, 0.1) and (ϵ, δ)-learning algorithm A for D_ϵ, A takes at least ln(1/δ)/(6ϵ) samples in expectation on a random instance drawn from D_ϵ. Here the expectation is taken over both the randomness in the samples and the randomness in drawing the instance from D_ϵ.

Now we prove Theorem 4.1 using Lemma 4.2 and a reduction from a random instance sampled from D_ϵ to instances sampled from D_{k,ϵ}. Intuitively, a random instance drawn from D_{k,ϵ} is equivalent to k independent instances from D_ϵ. We show that any learning algorithm A that simultaneously solves k tasks (i.e., an instance from D_{k,ϵ}) with probability 1 − δ can be transformed into an algorithm A' that solves a single task (i.e., an instance from D_ϵ) with probability 1 − O(δ/k). Moreover, the expected sample complexity of A' is only an O(1/k) fraction of the complexity of A. This transformation, together with Lemma 4.2, gives a lower bound on the sample complexity of A.

Proof Sketch of Theorem 4.1. We construct an algorithm A' for the instance distribution D_ϵ from an algorithm A that (ϵ, δ)-learns in the centralized setting. Recall that on an instance drawn from D_ϵ, A' has access to a distribution D, i.e., the single player's distribution. A' generates an instance (F_k, f*, {D_i}_{i∈[k]}) from the distribution D_{k,ϵ} (specifically, A' knows the target function f* and the distributions), and then chooses l ∈ [k] uniformly at random. A' simulates A on the instance (F_k, f*, {D_i}_{i∈[k]}), with D_l replaced by the distribution D. Specifically, every time A draws a sample from D_j for some j ≠ l, A' samples D_j and forwards the sample to A. When A asks for a sample from D_l, A' samples the distribution D instead and replies to A accordingly, i.e., A' returns l, together with the label, if the sample is 1 (recall that X_1 = {1, ⋆}), and returns ⋆ otherwise. Finally, when A terminates and returns a function f on X_k, A' checks whether err_{D_j}(f) < ϵ for every j ≠ l. If so, A' returns the function f' defined by f'(1) = f(l) and f'(⋆) = f(⋆). Otherwise, A' repeats the simulation process on a new instance drawn from D_{k,ϵ}.
Let m_i be the expected number of samples drawn from the i-th distribution when A runs on an instance drawn from D_{k,ϵ}. We have the following two claims, whose proofs are relegated to Appendix C.

Claim 4.3. A' is an (ϵ, 10δ/(9k))-learning algorithm for D_ϵ.

Claim 4.4. A' takes at most 10/(9k) · Σ_{i=1}^k m_i samples in expectation.

Applying Lemma 4.2 to A' gives Σ_{i=1}^k m_i ≥ 3k ln[9k/(10δ)]/(20ϵ), which proves Theorem 4.1.

### 4.2 Lower Bound for Uniform Convergence

We next examine the sample complexity required for obtaining uniform convergence across the hypothesis class F in the centralized collaborative PAC setting, and establish an overhead lower bound of Ω(k). Interestingly, our centralized learning algorithm (Algorithm 2) achieves O(log²(k)) overhead; it circumvents the lower bound by not relying on uniform convergence.

To be more formal, we first need to define uniform convergence in the collaborative PAC learning setting. We say that a hypothesis class F has the uniform convergence property with sample size m^(k)_{ϵ,δ} if for any k distributions D_1, ..., D_k, there exist integers m_1, ..., m_k that sum up to m^(k)_{ϵ,δ}, such that when m_i samples are drawn from D_i for each i ∈ [k], with probability 1 − δ, any function in F that is consistent with all the m^(k)_{ϵ,δ} samples achieves error at most ϵ on every distribution D_i. Note that the foregoing definition is a relatively weak adaptation of uniform convergence to the collaborative setting, as the integers m_i are allowed to depend on the distributions D_i. But this observation only strengthens our lower bound, which holds despite the weak requirement.

Theorem 4.5. For any k, d ∈ ℕ and ϵ, δ ∈ (0, 0.1), there exists a hypothesis class F of VC dimension d such that m^(k)_{ϵ,δ} ≥ dk(1 − δ)/(4ϵ).

Proof Sketch of Theorem 4.5. Fix k, d ∈ ℕ and ϵ, δ ∈ (0, 0.1). We define the instance (F, f*, {D_i}_{i=1}^k) as follows. The instance space is X = ([k] × [d]) ∪ {⋆}, and the hypothesis class F contains all binary functions on X that map ⋆ to 0 and take value 1 on at most d points. The target function f* maps every element in X to 0. Finally, the distribution of each player i ∈ [k] is given by D_i((i, j)) = 2ϵ/d for every j ∈ [d] and D_i(⋆) = 1 − 2ϵ.

Note that if a sample set contains strictly fewer than d/2 elements of {(i', 1), (i', 2), ..., (i', d)} for some i', then there is a consistent function in F with error strictly greater than ϵ on D_{i'}, namely, the function that maps (i, j) to 1 if and only if i = i' and (i', j) is not in the sample set. Therefore, to achieve uniform convergence, at least d/2 elements from X \ {⋆} must be drawn from each distribution. Since the probability that each sample is different from ⋆ is 2ϵ, drawing d/2 such samples from each of the k distributions requires Ω(dk/ϵ) samples. A complete proof of Theorem 4.5 appears in Appendix C.
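The counting in the last step can be made explicit; in rough form (ignoring the concentration argument that yields the (1 − δ) factor in the formal bound):

```latex
\[
\Pr_{x \sim D_i}[x \neq \star] = 2\epsilon
\quad\Longrightarrow\quad
\mathbb{E}\bigl[\#\text{samples from } D_i \text{ needed to see } d/2 \text{ non-}\star \text{ points}\bigr]
= \frac{d/2}{2\epsilon} = \frac{d}{4\epsilon},
\]
so summing over the $k$ players gives $\tfrac{dk}{4\epsilon}$ expected samples in total, matching the
$m^{(k)}_{\epsilon,\delta} \ge dk(1-\delta)/(4\epsilon)$ bound of Theorem 4.5 up to the $(1-\delta)$ factor.
```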
## Acknowledgments

We thank the anonymous reviewers for their helpful remarks and for suggesting an alternative boosting-based approach for the centralized setting. This work was partially supported by NSF grants CCF-1525971, CCF-1536967, CCF-1331175, IIS-1350598, IIS-1714140, CCF-1525932, and CCF-1733556, Office of Naval Research grants N00014-16-1-3075 and N00014-17-1-2428, a Sloan Research Fellowship, and a Microsoft Research Ph.D. fellowship. This work was done while Avrim Blum was working at Carnegie Mellon University.

## References

[1] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.

[2] Maria Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed learning, communication complexity and privacy. In Proceedings of the 25th Conference on Computational Learning Theory (COLT), pages 26.1–26.22, 2012.

[3] Jonathan Baxter. A Bayesian/information theoretic model of learning to learn via multiple task sampling. Machine Learning, 28(1):7–39, 1997.

[4] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.

[5] Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. In Proceedings of the 16th Conference on Computational Learning Theory (COLT), pages 567–580, 2003.

[6] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010.

[7] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.

[8] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction. In Proceedings of the 28th International Conference on Machine Learning (ICML), pages 713–720, 2011.

[9] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Proceedings of the 2nd European Conference on Computational Learning Theory (EuroCOLT), pages 23–37, 1995.

[10] Abhishek Kumar and Hal Daumé III. Learning task grouping and overlap in multi-task learning. In Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1103–1110, 2012.

[11] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Proceedings of the 22nd Conference on Computational Learning Theory (COLT), pages 19–30, 2009.

[12] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation with multiple sources. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS), pages 1041–1048, 2009.

[13] Massimiliano Pontil and Andreas Maurer. Excess risk bounds for multitask learning with trace norm regularization. In Proceedings of the 26th Conference on Computational Learning Theory (COLT), pages 55–76, 2013.

[14] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.

[15] Jialei Wang, Mladen Kolar, and Nathan Srebro. Distributed multi-task learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 751–760, 2016.