# Generation from Noisy Examples

Ananth Raman\*¹, Vinod Raman\*²

\*Equal contribution. ¹Bridgewater-Raritan Regional High School. ²University of Michigan, Ann Arbor. Correspondence to: Vinod Raman. Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

## Abstract

We continue to study the learning-theoretic foundations of generation by extending the results from Kleinberg & Mullainathan (2024) and Li et al. (2024) to account for noisy example streams. In the noiseless setting of Kleinberg & Mullainathan (2024) and Li et al. (2024), an adversary picks a hypothesis from a binary hypothesis class and provides a generator with a sequence of its positive examples. The goal of the generator is to eventually output new, unseen positive examples. In the noisy setting, an adversary still picks a hypothesis and a sequence of its positive examples. But, before presenting the stream to the generator, the adversary inserts a finite number of negative examples. Unaware of which examples are noisy, the goal of the generator is to still eventually output new, unseen positive examples. In this paper, we provide necessary and sufficient conditions for when a binary hypothesis class can be noisily generated. We provide such conditions with respect to various constraints on the number of distinct examples that need to be seen before perfect generation of positive examples. Interestingly, for finite and countable classes we show that generatability is largely unaffected by the presence of a finite number of noisy examples.

## 1. Introduction

Generation is an important paradigm in machine learning with promising applications to natural language processing (Wolf et al., 2020), computer vision (Khan et al., 2022), and computational chemistry (Vanhaelen et al., 2020). In contrast to its practical applications, a strong theoretical foundation of generation is largely missing from the literature. Recently, Kleinberg & Mullainathan (2024), inspired by the seminal work of E. Mark Gold on language identification in the limit (Gold, 1967), introduced a theoretical model of language generation called Language Generation in the Limit. In this model, there is a countable set $U$ of strings and a countable language family $C = \{L_1, L_2, \ldots\}$, where $L_i \subseteq U$ for all $i \in \mathbb{N}$. An adversary picks a language $K \in C$ and begins to enumerate its strings one by one to the player in rounds $t = 1, 2, \ldots$. After observing the string $w_t$ in round $t \in \mathbb{N}$, the player guesses a string $\hat{w}_t \in U$ in the hope that $\hat{w}_t \in K \setminus \{w_1, \ldots, w_t\}$. The player has generated from $K$ in the limit if there exists a finite time step $t^* \in \mathbb{N}$ such that for all $s \ge t^*$, we have that $\hat{w}_s \in K \setminus \{w_1, \ldots, w_s\}$. The class $C$ is generatable in the limit if the player can generate from all $K \in C$. Unlike language identification in the limit, Kleinberg & Mullainathan (2024) prove that generation in the limit is possible for every countable language family $C$.

Kleinberg & Mullainathan (2024)'s positive result has spawned a surge of new work, extending their results in various ways (Kalavasis et al., 2024b; Li et al., 2024; Charikar & Pabbaraju, 2024; Kalavasis et al., 2024a). Of particular interest to us is the work by Li et al. (2024). They frame the results of Kleinberg & Mullainathan (2024) through a learning-theoretic lens by taking the language family $C$ to be a binary hypothesis class $H \subseteq \{0,1\}^X$ defined over some countable instance space $X$.
By doing so, Li et al. (2024) identify stronger notions of generatability, termed uniform and non-uniform generatability (uniform generatability was also informally considered by Kleinberg & Mullainathan (2024)), and provide characterizations of which classes are uniformly and non-uniformly generatable in terms of new combinatorial dimensions.

A commonality of Kleinberg & Mullainathan (2024) and its follow-up work is the assumption that the adversary presents the player a noiseless stream of examples: one where every example/string must be contained in the hypothesis/language chosen by the adversary. In practice, such an assumption is unrealistic, as one would still like to generate well even if a few examples in the dataset are imperfect. For example, Large Language Models (LLMs) are often trained using the potentially hallucinatory outputs of other LLMs (Burns et al., 2023; Briesch et al., 2023). More generally, one might want a generative model to be robust to data contamination/poisoning attacks (Shang et al., 2018; Zhang et al., 2023; Jiang et al., 2023).

Motivated by these concerns, we study generation, in the framework posed by Kleinberg & Mullainathan (2024) and Li et al. (2024), under noisy example streams. In particular, we focus on a simple but natural noising process: after selecting a positive example stream, the adversary is allowed to insert a finite number of negative examples in any way it likes, unbeknownst to the player. Such a noising process, as well as others, has been extensively studied in the context of language identification in the limit, or inductive inference (Schäfer, 1985; Fulk & Jain, 1989; Baliga et al., 1992; Jain, 1994; Stephan, 1997; Case et al., 1997; Lange & Grieser, 2002; Mukouchi & Sato, 2003; Tantini et al., 2006). In this paper, we extend this study to generation. To that end, our main contributions are summarized below.

(1) We extend the notions of uniform generatability, non-uniform generatability, and generatability in the limit from Kleinberg & Mullainathan (2024) and Li et al. (2024) to account for finite, noisy example streams. We call these new settings uniform noise-dependent generatability, non-uniform noise-dependent generatability, and noisy generatability in the limit, respectively.

(2) We provide a complete characterization of which classes are uniformly noise-dependent generatable in terms of a new scale-sensitive dimension we call the Noisy Closure dimension.

Theorem (Informal). A class $H \subseteq \{0,1\}^X$ is uniformly noise-dependent generatable if and only if $\mathrm{NC}_n(H) < \infty$ for every $n \in \mathbb{N}$, where $\mathrm{NC}_n(H)$ is the Noisy Closure dimension of $H$ at noise level $n$.

We show that uniform noise-dependent generation is strictly harder than noiseless uniform generation. In fact, we construct a countably infinite class which is trivially uniformly generatable when there is no noise, but not uniformly noise-dependent generatable even when the adversary is only allowed to perturb a single example. Despite this hardness, we show that all finite classes are still uniformly noise-dependent generatable.

(3) We provide a sufficient condition for which classes are non-uniformly noise-dependent generatable.

Lemma (Informal). A class $H \subseteq \{0,1\}^X$ is non-uniformly noise-dependent generatable if there exists a non-decreasing sequence of classes $H_1 \subseteq H_2 \subseteq \ldots$ such that $H = \bigcup_{i=1}^{\infty} H_i$ and $\mathrm{NC}_i(H_i) < \infty$ for all $i \in \mathbb{N}$.
Using the result that all finite classes are uniformly noise-dependent generatable, we prove as a corollary that all countable classes are non-uniformly noise-dependent generatable. Since non-uniform noise-dependent generation implies noisy generation in the limit, this corollary also implies that all countable classes are noisily generatable in the limit. Although not matching, we also provide a necessary condition for non-uniform noise-dependent generation in terms of the Noisy Closure dimension.

Lemma (Informal). A class $H \subseteq \{0,1\}^X$ is non-uniformly noise-dependent generatable only if for every $n \in \mathbb{N}$, there exists a non-decreasing sequence of classes $H_1 \subseteq H_2 \subseteq \ldots$ such that $H = \bigcup_{i=1}^{\infty} H_i$ and $\mathrm{NC}_n(H_i) < \infty$ for all $i \in \mathbb{N}$.

We leave the complete characterization of non-uniform noise-dependent generatability as an open question.

(4) We provide two different sufficiency conditions for noisy generatability in the limit. The first shows that (noiseless) non-uniform generatability is sufficient for noisy generatability in the limit.

Theorem (Informal). If a class $H \subseteq \{0,1\}^X$ is (noiseless) non-uniformly generatable, then it is noisily generatable in the limit.

Since Li et al. (2024) prove that all countable classes are (noiseless) non-uniformly generatable, this result also shows that all countable classes are noisily generatable in the limit. In addition, we give a sufficiency condition in terms of an even stronger notion of noisy generatability called uniform noise-independent generatability.

Theorem (Informal). If there exists a finite sequence of uniformly noise-independent generatable classes $H_1, H_2, \ldots, H_k$ such that $H = \bigcup_{i=1}^{k} H_i$, then $H$ is noisily generatable in the limit.

One can think of this latter sufficiency condition as the analog of Theorem 3.10 in Li et al. (2024), which shows that the ability to write a class as a finite union of (noiseless) uniformly generatable classes is sufficient for (noiseless) generatability in the limit. While our theorem statements look similar to those of Li et al. (2024), our proof techniques are different due to the fact that the Noisy Closure dimension is scale-sensitive. This difference manifests even when characterizing noisy uniform generation. In particular, our uniform generator effectively requires combining different generators for each noise level, whereas the noiseless uniform generator from Li et al. (2024) does not.

### 1.1. Related Work

Language Identification in the Limit. In his seminal 1967 paper, E. Mark Gold introduced the model of Language Identification in the Limit (Gold, 1967). In this model, there is a countable set $U$ of strings and a countable language family $C = \{L_1, L_2, \ldots\}$, where $L_i \subseteq U$ for all $i \in \mathbb{N}$. An adversary picks a language $K \in C$ and begins to enumerate the strings in $K$ one by one to the player in rounds $t = 1, 2, \ldots$. After observing the string $w_t$ in round $t \in \mathbb{N}$, the player guesses an index $i_t \in \mathbb{N}$ with the hope that $L_{i_t} = K$. The player has identified $K$ in the limit if there exists a finite time step $t^* \in \mathbb{N}$ such that for all $s \ge t^*$, we have that $L_{i_s} = K$. The class $C$ is identifiable in the limit if the player can identify all $K \in C$. Gold showed that while all finite language families are identifiable in the limit, there are simple countable language families which are not. In follow-up work, Angluin (1979; 1980) provides a precise characterization of the language families $C$ for which language identification in the limit is possible.
The results of Gold and Angluin emphasized the impossibility of language identification in the limit by ruling out the vast majority of language families. Since Gold's seminal work on language identification in the limit, there has been extensive follow-up work on this model, and we refer the reader to the excellent survey by Lange et al. (2008).

Language Identification from Noisy Examples. In addition to the work by Angluin (1979; 1980), a different line of follow-up work focused on extending Gold's model to account for noisy example streams. There have been several proposed noise models, and we review a few of them here. The most standard noise model for language identification in the limit allows the adversary to first pick an enumeration of its chosen language $K$ and then insert a finite number of strings that do not belong to $K$ (Baliga et al., 1992; Fulk & Jain, 1989; Schäfer, 1985; Case et al., 1997; Stephan, 1997; Lange & Grieser, 2002). We use the term "insert" to emphasize that every $w \in K$ must still be present in the sequence chosen by the adversary. Such noisy streams are often referred to as noisy texts. Jain (1994) goes beyond the finite nature of the noise by considering sequences of strings with an infinite number of incorrect strings, where the amount of noise is measured using certain density notions. In a different direction, Mukouchi & Sato (2003) and Tantini et al. (2006) model noise at the string level by defining a distance metric on the space of all strings and allowing the adversary to insert and delete strings in the original enumeration of $K$ which are at most some distance away from a true string in $K$.

Language Generation in the Limit. Inspired by large language models and recent interest in generative machine learning, Kleinberg & Mullainathan (2024) study the problem of language generation in the limit. In this problem, the adversary also picks a language $K \in C$ and begins to enumerate its strings one by one to the player in rounds $t = 1, 2, \ldots$. However, now, after observing the string $w_t$ in round $t \in \mathbb{N}$, the player guesses a string $\hat{w}_t \in U$ with the hope that $\hat{w}_t \in K \setminus \{w_1, \ldots, w_t\}$. The player has generated from $K$ in the limit if there exists a finite time step $t^* \in \mathbb{N}$ such that for all $s \ge t^*$, we have that $\hat{w}_s \in K \setminus \{w_1, \ldots, w_s\}$. Kleinberg & Mullainathan (2024) prove a strikingly different result than Gold and Angluin: generation in the limit is possible for every countable language family $C$. This positive result has spurred a number of follow-up works, which we briefly review below.

Kalavasis et al. (2024b) study generation in the stochastic setting, where the positive examples revealed to the generator are sampled i.i.d. from some unknown distribution. In this model, they study the trade-offs between generating with breadth and generating with consistency and show that, in general, achieving both is impossible, resolving an open question posed by Kleinberg & Mullainathan (2024) for a large family of language models. More recently, Charikar & Pabbaraju (2024) and Kalavasis et al. (2024a) further formalize this tension between consistency and breadth by defining various notions of breadth and providing complete characterizations of which language families are generatable in the limit with breadth. In a different direction, and most closely related to our work, Li et al. (2024) reinterpret the results of Kleinberg & Mullainathan (2024) through a binary hypothesis class $H \subseteq \{0,1\}^X$ defined over a countable example space $X$.
By doing so, Li et al. (2024) extend the results of Kleinberg & Mullainathan (2024) beyond language generation while also formalizing two stronger settings of generation they term uniform and non-uniform generation. Unlike Kleinberg & Mullainathan (2024) and Kalavasis et al. (2024a), who focus on computable learners and countable language families, Li et al. (2024) place no such restrictions and provide complete, information-theoretic characterizations of which hypothesis classes are uniformly and non-uniformly generatable. Li et al. (2024) leave the characterization of generatability in the limit as an open question (see Section 4).

## 2. Preliminaries

Let $X$ denote a countable example space and $H \subseteq \{0,1\}^X$ denote a binary hypothesis class. Let $X^*$ denote the set of all finite subsets of $X$. Let $[N] := \{1, \ldots, N\}$ and abbreviate a finite sequence $x_1, \ldots, x_n$ as $x_{1:n}$.

For any $h \in H$, an enumeration of $\mathrm{supp}(h)$ is any infinite sequence $x_1, x_2, \ldots$ such that $\bigcup_{i \in \mathbb{N}} \{x_i\} = \mathrm{supp}(h)$. In other words, for every $x \in \mathrm{supp}(h)$, there exists an $i \in \mathbb{N}$ such that $x_i = x$, and for every $i \in \mathbb{N}$, we have that $x_i \in \mathrm{supp}(h)$. For any $h \in H$, a noisy enumeration of $\mathrm{supp}(h)$ is any infinite sequence $x_1, x_2, \ldots$ such that for every $x \in \mathrm{supp}(h)$, there exists an $i \in \mathbb{N}$ such that $x_i = x$, and $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$.

For a finite sequence of examples $x_1, \ldots, x_d$ and $n \in \mathbb{N} \cup \{0\}$, define $H(x_{1:d}; n) := \{h \in H : |\{x_{1:d}\} \cap \mathrm{supp}(h)| \ge d - n\}$. For any class $H$, define its induced closure operator at noise level $n$ as $\langle \cdot \rangle_{H,n}$ such that

$$\langle x_{1:d} \rangle_{H,n} := \begin{cases} \bigcap_{h \in H(x_{1:d};\, n)} \mathrm{supp}(h), & \text{if } |H(x_{1:d};\, n)| \ge 1, \\ \perp, & \text{if } |H(x_{1:d};\, n)| = 0. \end{cases}$$

Note that $\langle \cdot \rangle_{H,0}$ is exactly the closure operator from Section 2.1 of Li et al. (2024). We will make the following assumption about hypothesis classes.

Assumption 2.1 (Uniformly Unbounded Support (UUS) (Li et al., 2024)). A hypothesis class $H \subseteq \{0,1\}^X$ satisfies the Uniformly Unbounded Support (UUS) property if $|\mathrm{supp}(h)| = \infty$ for every $h \in H$.

Remark 2.2. The UUS assumption is mainly needed for bookkeeping purposes: it prevents the adversary from exhausting all positive examples and thus making the task of generating unseen positive examples impossible. This is consistent with the assumption that each language is countably infinite in size in the work of Kleinberg & Mullainathan (2024).

### 2.1. Generatability

In this paper, we adopt the learning-theoretic framework for generation introduced by Li et al. (2024). To that end, we define a generator as a deterministic map from a finite sequence of examples to a new example.

Definition 2.3 (Generator). A generator is a map $G : X^* \to X$ that takes a finite sequence of examples $x_1, x_2, \ldots$ and outputs a new example $x$.

#### 2.1.1. Noiseless Generation

In the noiseless setting, an adversary plays a game against a generator $G$. Before the game begins, the adversary picks a hypothesis $h \in H$ and a sequence of examples $x_1, x_2, \ldots$ such that $\{x_1, x_2, \ldots\} \subseteq \mathrm{supp}(h)$. The adversary reveals the examples as a stream to $G$ one at a time, and the goal of the generator is to eventually output new, unseen positive examples $\hat{x}_t \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_t\}$. Depending on how one quantifies "eventually," one gets various notions of noiseless generatability. For example, if one requires the generator to perfectly generate new unseen examples after observing $d \in \mathbb{N}$ examples regardless of the hypothesis or stream chosen by the adversary, this is called uniform generation. If the number of positive examples required before perfect generation can depend on the hypothesis chosen by the adversary, but not the stream, then this is non-uniform generation. Finally, if the number of positive examples before perfect generation can depend on both the hypothesis and the stream selected by the adversary, this is called generation in the limit. We refer the reader to Appendix A for complete definitions and Appendix B for a summary of results.
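To make the consistency sets $H(x_{1:d}; n)$ and the closure operator $\langle \cdot \rangle_{H,n}$ from the preliminaries concrete, here is a minimal Python sketch. It is purely illustrative: hypotheses are represented by finite truncations of their supports so that intersections are computable (the classes in this paper have infinite supports), and the helper names `consistent_hypotheses` and `noisy_closure` are ours, not from the paper.

```python
def consistent_hypotheses(supports, xs, n):
    """H(x_{1:d}; n): hypotheses inconsistent with at most n of the
    distinct examples in xs. Each element of `supports` is a set of
    examples, standing in for (a finite truncation of) some supp(h)."""
    distinct = set(xs)
    d = len(distinct)
    return [s for s in supports if len(distinct & s) >= d - n]

def noisy_closure(supports, xs, n):
    """<x_{1:d}>_{H,n}: intersection of the supports of every hypothesis
    in H(x_{1:d}; n). None stands in for the undefined value (bot) when
    no hypothesis survives."""
    consistent = consistent_hypotheses(supports, xs, n)
    if not consistent:
        return None
    return set.intersection(*consistent)

# A toy class with two hypotheses, supports truncated to [0, 100):
evens = {x for x in range(100) if x % 2 == 0}
odds = {x for x in range(100) if x % 2 == 1}
print(noisy_closure([evens, odds], [0, 2, 4], n=0) == evens)  # True: only `evens` is consistent
print(noisy_closure([evens, odds], [0, 2, 4], n=3))           # set(): both survive, empty intersection
```

Note that at noise level $n = 3$ the closure is defined but empty: tolerating enough inconsistencies keeps every hypothesis alive, which is exactly the effect the Noisy Closure dimension of Section 3.2 measures.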
#### 2.1.2. Noisy Generation

In the noisy model, we consider the following game. Like before, the adversary picks a hypothesis $h \in H$ and a sequence of positive examples $z_1, z_2, \ldots \in \mathrm{supp}(h)$. But now, the adversary also picks a noise level $n^* \in \mathbb{N}$ and inserts at most $n^*$ negative examples into $z_1, z_2, \ldots$ to obtain a noisy stream $x_1, x_2, \ldots$. The adversary then presents the examples in the noisy stream to the generator $G$ one at a time. Without knowledge of the noise level or the location of the negative examples, the goal of the generator is to eventually output new, positive examples $\hat{x}_t \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_t\}$.

Like the noiseless case, there are several definitions of noisy generatability based on how one quantifies "eventually." The most extreme case is what we term uniform noise-independent generatability.

Definition 2.4 (Uniform Noise-independent Generatability). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is uniformly noise-independent generatable if there exists a generator $G$ and $d^* \in \mathbb{N}$ such that for every $h \in H$ and any sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\}| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Here, the generator must perfectly generate after seeing $d^* \in \mathbb{N}$ examples, where $d^*$ can only depend on the class $H$ itself, and is thus uniform over the noise level $n^*$, the hypothesis $h \in H$, and the stream $x_1, x_2, \ldots$ selected by the adversary. Unsurprisingly, as we show in Section 3.1, uniform noise-independent generatability is impossible even for simple classes with just two hypotheses. In Appendix C, we show this is also the case if we measure sample complexity in terms of just the number of distinct positive examples (as opposed to all examples). These results motivate weakening the definition by allowing $d^*$ to depend on the noise level. We call this setting uniform noise-dependent generatability.

Definition 2.5 (Uniform Noise-dependent Generatability). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is uniformly noise-dependent generatable if there exists a generator $G$ such that for every noise level $n^* \in \mathbb{N}$, there exists a $d^* \in \mathbb{N}$ such that for every $h \in H$ and any sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} \le n^*$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\}| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Uniform noise-dependent generatability does not suffer from the same hardness as uniform noise-independent generatability. In fact, in Section 3.2, we show that all finite classes are uniformly noise-dependent generatable. Nevertheless, we can continue weakening the notion of noisy generatability by allowing $d^*$ to also depend on the hypothesis chosen by the adversary.

Definition 2.6 (Non-uniform Noise-dependent Generatability). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is non-uniformly noise-dependent generatable if there exists a generator $G$ such that for every noise level $n^* \in \mathbb{N}$ and any $h \in H$, there exists $d^* \in \mathbb{N}$ such that for any sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} \le n^*$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\}| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.
The term "non-uniform" here refers to the fact that the number of unique examples needed by the generator before perfect generation can depend on the selected $h \in H$; hence it is non-uniform over the hypothesis class, unlike the previous two definitions. Finally, the weakest form of noisy generation allows $d^*$ to depend on the noise level, hypothesis, and stream selected by the adversary.

Definition 2.7 (Noisy Generatability in the Limit). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is noisily generatable in the limit if there exists a generator $G$ such that for every $h \in H$ and any noisy enumeration $x_1, x_2, \ldots$ of $\mathrm{supp}(h)$, there exists $t^* \in \mathbb{N}$ such that $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Note that in Definition 2.7, we require the noisy stream picked by the adversary to still contain every example in the support of the selected hypothesis. This is consistent with the model of noisy texts from the literature on language identification, where one allows the adversary to insert noisy examples into the stream, as opposed to replacing positive examples (Stephan, 1997).

Remark 2.8. The astute reader might notice that there is actually a fifth setting of noisy generatability that we did not cover. In this fifth setting, which we call non-uniform noise-independent generatability, the number of positive examples needed before perfect generation can depend on the hypothesis chosen by the adversary, but must still be uniform over the noise level and the noisy example stream chosen by the adversary. However, as in the case of uniform noise-independent generatability, non-uniform noise-independent generatability is still too strong: there exists a class of just two hypotheses that is not non-uniformly noise-independent generatable. We refer the reader to Appendix D for more details.

## 3. Towards Characterizations of Noisy Generation

### 3.1. Uniform Noise-independent Generatability

We start by providing a characterization of the strongest form of noisy generation: uniform noise-independent generatability.

Theorem 3.1 (Characterization of Uniform Noise-independent Generatability). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. Then, $H$ is uniformly noise-independent generatable if and only if $|\bigcap_{h \in H} \mathrm{supp}(h)| = \infty$.

Theorem 3.1 is a hardness result: it shows that even finite classes with just two hypotheses may not be uniformly noise-independent generatable. In fact, one way to interpret Theorem 3.1 is that uniform noise-independent generation is only possible for trivial classes where the generator can perfectly generate without observing any examples from the adversary. Indeed, this is what the condition in Theorem 3.1 implies: if the condition holds, the generator can simply compute $\bigcap_{h \in H} \mathrm{supp}(h)$ and always play unseen examples from this infinite set. In Appendix C, we prove a similar statement even if we only measure the sample complexity in terms of the number of distinct positive examples. Since the sufficiency direction of Theorem 3.1 follows from this observation, we only prove the necessity direction.

Proof (of necessity in Theorem 3.1). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. Suppose that $|\bigcap_{h \in H} \mathrm{supp}(h)| =: n < \infty$. We need to show that for every $G$ and sufficiently large $d \in \mathbb{N}$, there exists an $h \in H$ and a sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$, such that for every $t \in \mathbb{N}$ where $|\{x_1, \ldots, x_t\}| = d$, there exists an $s \ge t$ such that $G(x_{1:s}) \notin \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$. To that end, fix a generator $G$ and a number $d \ge n$.
Let $x_1, \ldots, x_n$ be the sequence of $n$ examples in $\bigcap_{h \in H} \mathrm{supp}(h)$ sorted in their natural order. Pick any sequence of distinct examples $x_{n+1}, x_{n+2}, \ldots, x_d$ and concatenate them to the end of $x_{1:n}$. Let $\hat{x} = G(x_{1:d})$ and suppose without loss of generality that $\hat{x} \notin \{x_{1:d}\}$. Then, by construction, there exists an $h \in H$ such that $\hat{x} \notin \mathrm{supp}(h)$. Finally, complete the stream by picking distinct $\{x_{d+1}, x_{d+2}, \ldots\} \subseteq \mathrm{supp}(h) \setminus \{x_1, \ldots, x_d\}$. First, by construction, note that $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} \le d - n < \infty$. Next, note that $t = d$ is the only time point such that $|\{x_{1:t}\}| = d$. Finally, note that when $s = t = d$, we have that $G(x_{1:s}) = \hat{x} \notin \mathrm{supp}(h) \setminus \{x_{1:s}\}$ by definition. If however $d < n$, then as soon as the common intersection $x_{1:n}$ is enumerated, the generator is forced to produce an output that necessarily lies outside of the support of some hypothesis in $H$. This completes the proof.

### 3.2. Uniform Noise-dependent Generatability

Theorem 3.1 shows that obtaining guarantees that are uniform over the noise level is generally hopeless. In this section, we study the easier setting of uniform noise-dependent generation, where the number of examples needed for perfect generation can depend on the noise level. At a high level, the results in this section extend the techniques from Li et al. (2024) to account for noisy example streams. As such, we first define a noisy analog of the Closure dimension (see Appendix B for a definition).

Definition 3.2 ($n$-Noisy Closure dimension). The Noisy Closure dimension of $H$ at noise level $n \in \mathbb{N}$, denoted $\mathrm{NC}_n(H)$, is the largest natural number $d$ for which there exist distinct $x_1, \ldots, x_d \in X$ such that $\langle x_1, \ldots, x_d \rangle_{H,n} \ne \perp$ and $|\langle x_1, \ldots, x_d \rangle_{H,n}| < \infty$. If this is true for arbitrarily large $d \in \mathbb{N}$, then we say that $\mathrm{NC}_n(H) = \infty$. On the other hand, if this is not true for $d = 1$, we say that $\mathrm{NC}_n(H) = 0$.

Unlike the Closure dimension, the Noisy Closure dimension is scale-sensitive, as it is defined with respect to every noise level $n \in \mathbb{N}$. Scale-sensitive dimensions are not new to learning theory: they have been defined and used to characterize other properties of hypothesis classes, like PAC and online learnability for regression problems (Rakhlin et al., 2015; Bartlett et al., 1994). Our main theorem in this section uses the Noisy Closure dimension to provide a complete characterization of which classes are uniformly noise-dependent generatable.

Theorem 3.3 (Characterization of Uniform Noise-dependent Generatability). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. Then, $H$ is uniformly noise-dependent generatable if and only if $\mathrm{NC}_n(H) < \infty$ for all $n \in \mathbb{N}$.

Our proof of Theorem 3.3 is constructive. To prove necessity, we consider the case where there exists an $n \in \mathbb{N}$ such that $\mathrm{NC}_n(H) = \infty$. Given any generator $G$ and any number of distinct elements $d \in \mathbb{N}$, we explicitly pick an $h \in H$ and a valid noisy stream $x_1, x_2, \ldots$ such that $G$ makes a mistake even after observing $d$ distinct examples. In fact, a simple modification of our proof shows that if $\mathrm{NC}_n(H) = d$, then any generator $G$ must observe at least $d$ distinct examples before perfectly generating positive examples when the noise level is $n$. To prove sufficiency, we explicitly construct a generator $G$ which only needs to observe $\mathrm{NC}_n(H) + 1$ distinct examples before being able to perfectly generate when the noise level is $n \in \mathbb{N}$. Crucially, our generator $G$ does not need to know the noise level picked by the adversary. Together, our necessity and sufficiency directions show that the finiteness of $\mathrm{NC}_n(H)$ at every $n \in \mathbb{N}$ provides not only a qualitative characterization of uniform noise-dependent generatability, but also a quantitative one: the sample complexity at noise level $n \in \mathbb{N}$ is $\Theta(\mathrm{NC}_n(H))$, analogous to the mistake bound in online learning, which quantifies learnability. We defer the full proof to Appendix E.
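The following Python sketch mirrors the sufficiency construction (spelled out in Appendix E). It is only a sketch under strong assumptions: `supports` and `noisy_closure` are the finite-truncation helpers from the Section 2 sketch, `nc_dim(n)` is assumed to return $\mathrm{NC}_n(H)$ (precomputed somehow), and the guarantee that the chosen closure is infinite holds for the abstract class, not for truncations.

```python
def arbitrary_example():
    # Placeholder: before any guarantee kicks in, any fixed example is allowed.
    return 0

def noise_dependent_generator(supports, nc_dim, xs):
    """One round of the uniform noise-dependent generator from the
    sufficiency direction of Theorem 3.3: compute the largest noise
    level n_t already 'covered' by the distinct examples seen so far,
    then play an unseen element of the closure at that noise level."""
    t = len(xs)
    d_t = len(set(xs))  # number of distinct examples observed
    # n_t = max{n in [t] : NC_n(H) < d_t}, or 0 if that set is empty
    n_t = max((n for n in range(1, t + 1) if nc_dim(n) < d_t), default=0)
    if n_t == 0:
        return arbitrary_example()
    closure = noisy_closure(supports, xs, n_t)  # never bot once n_t >= n*
    candidates = closure - set(xs)
    return min(candidates) if candidates else arbitrary_example()
```

The design point the proof exploits is visible here: the generator never needs the adversary's noise level $n^*$, because once enough distinct examples arrive, $n_t$ automatically rises to at least $n^*$.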
Our characterization of uniform noise-dependent generatability in terms of the Noisy Closure dimension allows us to show that all finite classes are uniformly noise-dependent generatable. This contrasts with uniform noise-independent generatability, where even simple classes of size two may not be uniformly noise-independent generatable.

Corollary 3.4 (All Finite Classes are Uniformly Noise-dependent Generatable). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. If $H$ is finite, then $H$ is uniformly noise-dependent generatable.

Proof. Let $X$ be countable and $H \subseteq \{0,1\}^X$ be a finite hypothesis class with $q := |H|$ satisfying the UUS property. To show that $H$ is uniformly noise-dependent generatable, it suffices to show that $\mathrm{NC}_n(H) < \infty$ for all $n \in \mathbb{N}$. For any subset $V \subseteq H$, define $\langle V \rangle := \bigcap_{h \in V} \mathrm{supp}(h)$ and $F = \{V \subseteq H : |\langle V \rangle| < \infty\}$ to be the set of examples common to all hypotheses in $V$ and the family of subsets of $H$ whose intersection of supports has finite cardinality, respectively. Then, let $d := \max_{V \in F} |\langle V \rangle|$ be the maximum size of a finite set of examples common to a subset of $H$. Note that $d$ is finite because $H$ is finite.

Fix any noise level $n \in \mathbb{N}$ and, for the sake of contradiction, suppose $\mathrm{NC}_n(H) = \infty$. This means that for every $s \in \mathbb{N}$, there exist $t \ge s$ and a sequence of distinct examples $x_1, \ldots, x_t$ such that $|\langle x_1, \ldots, x_t \rangle_{H,n}| < \infty$. Pick $s = nq + d + 1$ and consider the stream $x_1, \ldots, x_t$ such that $|\langle x_1, \ldots, x_t \rangle_{H,n}| < \infty$ for $t \ge s$. Recall that $H(x_{1:t}; n) = \{h \in H : |\{x_1, \ldots, x_t\} \cap \mathrm{supp}(h)| \ge t - n\}$. That is, $H(x_{1:t}; n)$ contains all hypotheses in $H$ that are inconsistent with $x_{1:t}$ on at most $n$ examples. Since $|H(x_{1:t}; n)| \le |H| = q$, there are at least $t - nq \ge (nq + d + 1) - nq = d + 1$ distinct examples in $x_{1:t}$ contained in the support of every hypothesis in $H(x_{1:t}; n)$. In other words, $|\langle x_1, \ldots, x_t \rangle_{H,n}| = |\langle H(x_{1:t}; n) \rangle| \ge d + 1 > d$. This is a contradiction, as $d = \max_{V \in F} |\langle V \rangle|$ by definition and $H(x_{1:t}; n) \in F$. Thus, $\mathrm{NC}_n(H) < nq + d + 1 < \infty$ for all $n \in \mathbb{N}$, completing the proof.
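For intuition, one can brute-force Definition 3.2 on a finite class over a truncated domain. The sketch below only illustrates the definition: with truncated supports every closure is finite, so "finite" is approximated by an explicit cutoff, and `nc_dimension` together with its parameters are our own names, not the paper's.

```python
from itertools import combinations

def nc_dimension(supports, domain, n, inf_cutoff, max_d=6):
    """Largest d <= max_d for which some d distinct domain elements have
    a defined noisy closure of size < inf_cutoff. The cutoff stands in
    for |<...>_{H,n}| < infinity, which truncated supports cannot decide."""
    best = 0
    for d in range(1, max_d + 1):
        for xs in combinations(domain, d):
            closure = noisy_closure(supports, list(xs), n)  # earlier sketch
            if closure is not None and len(closure) < inf_cutoff:
                best = d
                break  # found a witness at this d; move on to d + 1
    return best
```

On the toy evens/odds class above, any single example already pins down one hypothesis with an infinite (large, under truncation) closure, matching the finite bound $\mathrm{NC}_n(H) < nq + d + 1$ that Corollary 3.4 guarantees for finite classes.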
We end this section by establishing that uniform noise-dependent generatability is strictly harder than noiseless uniform generatability. This is similar to online learnability for classification problems, where the noise-free (realizable) and noisy (agnostic) characterizations of learnability are typically not equivalent for deterministic learning algorithms (Littlestone, 1987; Ben-David et al., 2009).

Lemma 3.5 (Uniform Generatability $\not\Rightarrow$ Uniform Noise-dependent Generatability). Let $X$ be countable. There exists a countable class $H \subseteq \{0,1\}^X$ satisfying the UUS property such that $C(H) = 0$ but $\mathrm{NC}_1(H) = \infty$.

Proof. Let $P = \{p_1, p_2, \ldots\}$ be the set of primes, $S_q = \{q^n : n \in \mathbb{N}\}$, $S = \bigcup_{q \in P} S_q$, and $X = P \cup S$. Let the support of each hypothesis $h_p$ be $\mathrm{supp}(h_p) = (P \setminus \{p\}) \cup (S \setminus S_p)$ and $H := \{h_p : p \in P\}$. Observe that $H$ satisfies the UUS property and $X$ is countable.

We first prove that the Closure dimension (see Appendix B) satisfies $C(H) = 0$. Suppose the first element in the sequence is $x_1 = p^n$ for some $p \in P$ and $n \in \mathbb{N}$. Then $|\langle x_1 \rangle_H| \ge |\{p^m : m \in \mathbb{N}\}| = \infty$. If the first element in the sequence is $x_1 = p \in P$, then $|\langle x_1 \rangle_H| \ge |\{p^n : n \in \mathbb{N}\}| = \infty$. This is because observing any example immediately rules out only the one hypothesis that predicts 0 on that example. Note that these are the only two possible cases, as it must be the case that $x_1 \in \mathrm{supp}(h)$ for some $h \in H$.

Now, we show that $\mathrm{NC}_1(H) = \infty$. Fix some $d \in \mathbb{N}$. Consider the stream $p_1, p_2, \ldots, p_d$ of the first $d$ prime numbers. Then $|\langle p_1, \ldots, p_d \rangle_{H,1}| = |\emptyset| = 0$, due to the fact that $H(p_1, \ldots, p_d; 1) = H$ and the intersection of the supports of all $h \in H$ is empty. So, $\mathrm{NC}_1(H) \ge d$. Since $d \in \mathbb{N}$ was picked arbitrarily, this is true for all $d \in \mathbb{N}$, and therefore $H$ is not uniformly noise-dependent generatable.

Lemma 3.5 shows a strong separation between noise-free and uniform noise-dependent generation: there is a class that is uniformly generatable but not uniformly noise-dependent generatable, even when the adversary is allowed to perturb one example.

### 3.3. Non-uniform Noise-dependent Generatability

Similar to the characterization of non-uniform generatability in the noise-free setting, we can use uniform noise-dependent generatability to provide sufficient and necessary conditions for non-uniform noise-dependent generatability.

Lemma 3.6 (Sufficiency for Non-uniform Noise-dependent Generatability). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. If there exists a non-decreasing sequence of classes $H_1 \subseteq H_2 \subseteq \ldots$ such that $H = \bigcup_{i=1}^{\infty} H_i$ and $\mathrm{NC}_i(H_i) < \infty$ for all $i \in \mathbb{N}$, then $H$ is non-uniformly noise-dependent generatable.

To prove Lemma 3.6, we construct a non-uniform noise-dependent generator $G$ which, at each round $t \in \mathbb{N}$, computes an index $i_t \in \mathbb{N}$ based on the number of distinct examples seen in the stream so far. $G$ then computes the noisy closure of $H_{i_t}$ at noise level $i_t$ and plays from this set. We defer the details to Appendix F; a code sketch follows.
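Below is a minimal sketch of this generator, under the same finite-truncation caveats as before: `chain[i-1]` stands for (a truncation of) $H_i$ in the nested sequence, `nc_chain(i)` is assumed to return $\mathrm{NC}_i(H_i)$, and `noisy_closure`/`arbitrary_example` are the earlier illustrative helpers.

```python
def non_uniform_generator(chain, nc_chain, xs):
    """One round of the Lemma 3.6 generator: pick the largest index j_t
    whose class H_{j_t} is already 'covered' by the distinct examples
    seen, then play an unseen element of the closure of H_{j_t} at
    noise level j_t."""
    t = len(xs)
    d_t = len(set(xs))
    # j_t = max{i in [t] : NC_i(H_i) < d_t}, or 0 if that set is empty
    j_t = max((i for i in range(1, min(t, len(chain)) + 1)
               if nc_chain(i) < d_t), default=0)
    if j_t == 0:
        return arbitrary_example()
    closure = noisy_closure(chain[j_t - 1], xs, j_t)  # closure w.r.t. H_{j_t}
    candidates = (closure or set()) - set(xs)
    return min(candidates) if candidates else arbitrary_example()
```

The single index $j_t$ does double duty: it both selects the subclass $H_{j_t}$ (handling non-uniformity over hypotheses) and sets the tolerated noise level (handling the unknown $n^*$), which is why the proof takes $j^* = \max\{i^*, n^*\}$.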
As a corollary of Lemma 3.6 and Corollary 3.4, we can show that all countable classes are non-uniformly noise-dependent generatable, and hence noisily generatable in the limit. In some sense, this result shows that amongst countable classes, noisy generation comes for free!

Corollary 3.7 (All Countable Classes are Noisily Non-uniformly Generatable). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. If $H$ is countable, then $H$ is non-uniformly noise-dependent generatable, and therefore also noisily generatable in the limit.

Proof. Since non-uniform noise-dependent generatability implies noisy generatability in the limit, it suffices to show that every countable $H$ is non-uniformly noise-dependent generatable. Let $X$ be countable and $H \subseteq \{0,1\}^X$ be any countable class satisfying the UUS property. Let $h_1, h_2, \ldots$ be some fixed enumeration of $H$ and define $H_n = \{h_i : i \le n\}$ for all $n \in \mathbb{N}$. Then, observe that $H_1 \subseteq H_2 \subseteq \ldots$ and that $H = \bigcup_{n=1}^{\infty} H_n$. Next, observe that $|H_n| = n < \infty$, which, using Corollary 3.4, implies that $\mathrm{NC}_n(H_n) < \infty$. Finally, Lemma 3.6 gives that $H$ is non-uniformly noise-dependent generatable.

Corollary 3.7 together with Lemma 3.5 also establishes that uniform noise-dependent generatability is strictly harder than non-uniform noise-dependent generatability. We now move to our necessity condition for non-uniform noise-dependent generatability.

Lemma 3.8 (Necessity for Non-uniform Noise-dependent Generatability). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. If $H$ is non-uniformly noise-dependent generatable, then for every $n \in \mathbb{N}$, there exists a non-decreasing sequence of classes $H_1 \subseteq H_2 \subseteq \ldots$ such that $H = \bigcup_{i=1}^{\infty} H_i$ and $\mathrm{NC}_n(H_i) < \infty$ for all $i \in \mathbb{N}$.

Proof. Let $X$ be countable and $H \subseteq \{0,1\}^X$ be any non-uniformly noise-dependent generatable class satisfying the UUS property. Let $G$ be a non-uniform noise-dependent generator for $H$. Fix $n \in \mathbb{N}$. For every $h \in H$, let $d_{h,n} \in \mathbb{N}$ be the smallest natural number such that for any sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} \le n$, if there exists a $t \in \mathbb{N}$ such that $|\{x_1, \ldots, x_t\}| = d_{h,n}$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t$. Let $H_i = \{h \in H : d_{h,n} \le i\}$ for all $i \in \mathbb{N}$. Note that $H = \bigcup_{i \in \mathbb{N}} H_i$ because $d_{h,n} < \infty$ for all $h \in H$. Moreover, we have that $H_i \subseteq H_{i+1}$ for all $i \in \mathbb{N}$. Finally, for every $i \in \mathbb{N}$, observe that $G$ is a uniform generator for $H_i$ at noise level $n$, implying that $\mathrm{NC}_n(H_i) \le i < \infty$.

Note that the sufficient and necessary conditions in Lemmas 3.6 and 3.8, respectively, are not matching. Indeed, the condition in Lemma 3.6 is stronger than that in Lemma 3.8. We leave a complete characterization of non-uniform noise-dependent generatability as an open question.

### 3.4. Noisy Generatability in the Limit

Corollary 3.7 showed that all countable classes are noisily generatable in the limit. Here, we provide alternate sufficiency conditions for noisy generatability in the limit. Our first result shows that (noiseless) non-uniform generatability is sufficient for noisy generatability in the limit. Since all countable classes are (noiseless) non-uniformly generatable (Li et al., 2024), this result also shows that all countable classes are noisily generatable in the limit. The proof uses the generator described in Algorithm 1 below; a Python transcription follows it.

Algorithm 1: Generator $G$. Input: hypothesis class $H$ and non-uniform generator $Q$.

1. for $t = 1, 2, \ldots$ do
2.   Adversary reveals example $x_t$
3.   Let $d_t = |\{x_1, \ldots, x_t\}|$ and let $r_t \le t$ be the largest time point such that $|\{x_{r_t}, \ldots, x_t\}| = \lceil d_t / 2 \rceil$
4.   Initialize $\hat{z}^t_1 = Q(x_{r_t:t})$
5.   for $i = 1, \ldots, r_t - 1$ do
6.     if $\hat{z}^t_i \notin \{x_1, \ldots, x_t\}$ then
7.       $G$ outputs $\hat{z}^t_i$ and moves to the next round.
8.     else
9.       Update $\hat{z}^t_{i+1} = Q(x_{r_t}, \ldots, x_t, \hat{z}^t_1, \ldots, \hat{z}^t_i)$
10.    end if
11.  end for
12.  $G$ outputs $\hat{z}^t_{r_t}$ and moves to the next round.
13. end for
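Below is a direct Python transcription of one round of Algorithm 1. Here `Q` is any (noiseless) non-uniform generator mapping a list of examples to a new example, and the 1-indexed $r_t$ of the pseudocode is translated to Python's 0-indexing.

```python
import math

def algorithm1_round(Q, xs):
    """One round of Algorithm 1. Finds the largest r_t whose suffix
    x_{r_t}, ..., x_t holds ceil(d_t / 2) distinct examples, then
    repeatedly queries Q, feeding Q's own already-seen outputs back in,
    until an unseen example appears or r_t candidates are produced."""
    t = len(xs)
    d_t = len(set(xs))
    half = math.ceil(d_t / 2)
    # Largest r with |{x_r, ..., x_t}| = ceil(d_t / 2); such r always
    # exists since the suffix count steps up by at most 1 as r decreases.
    r_t = max(r for r in range(1, t + 1) if len(set(xs[r - 1:])) == half)
    window = list(xs[r_t - 1:])   # x_{r_t}, ..., x_t
    zs = []
    for _ in range(r_t - 1):      # candidates z_1, ..., z_{r_t - 1}
        z = Q(window + zs)
        if z not in xs:
            return z              # Line 7: unseen, so output it
        zs.append(z)              # seen: feed it back to Q and retry
    return Q(window + zs)         # z_{r_t} is output regardless
```

The design choice the proof leans on is the halving window: for large enough $t$, the suffix $x_{r_t}, \ldots, x_t$ is both entirely noise-free and large enough (at least $d_Q(h^*)$ distinct examples) for $Q$'s noiseless guarantee to kick in.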
Theorem 3.9 (Non-uniform Generatability $\Rightarrow$ Noisy Generatability in the Limit). Let $X$ be countable and $H \subseteq \{0,1\}^X$ be any class satisfying the UUS property. If $H$ is (noiseless) non-uniformly generatable, then $H$ is noisily generatable in the limit.

Proof. Let $X$ be countable and $H \subseteq \{0,1\}^X$ be any class satisfying the UUS property. Consider the generator $G$, which uses a non-uniform generator $Q$ as in Algorithm 1. We will show that $G$ noisily generates from $H$ in the limit. Let $h^* \in H$ and $x_1, x_2, \ldots$ be the hypothesis and noisy enumeration of $\mathrm{supp}(h^*)$ picked by the adversary, respectively. Let $d_Q(h^*) \in \mathbb{N}$ be the number of distinct noiseless examples that $Q$ needs for $h^*$ in order to perfectly generate from $\mathrm{supp}(h^*)$. Let $t^* \in \mathbb{N}$ be such that for all $t \ge t^*$, we have that $x_t \in \mathrm{supp}(h^*)$. Such a $t^*$ must exist because there are at most a finite number of noisy examples in the stream. Also, because $x_1, x_2, \ldots$ is a noisy enumeration, at some point $s^* \in \mathbb{N}$ we have $r_{s^*} \ge t^*$ and $\lceil d_{s^*}/2 \rceil \ge d_Q(h^*)$.

Fix some $s \ge s^*$. We will prove that $G$ generates perfectly on round $s$. First, we claim that for all $i \in [r_s]$, we have that $\hat{z}^s_i \in \mathrm{supp}(h^*) \setminus \{x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_{i-1}\}$. Our proof is by induction. For the base case, consider $i = 1$. We need to show that $\hat{z}^s_1 \in \mathrm{supp}(h^*) \setminus \{x_{r_s}, \ldots, x_s\}$. This follows from the fact that $\hat{z}^s_1 = Q(x_{r_s:s})$, $|\{x_{r_s}, \ldots, x_s\}| \ge d_Q(h^*)$, and $\{x_{r_s}, \ldots, x_s\} \subseteq \mathrm{supp}(h^*)$. Next, for the induction step, suppose that for all $i \le m$, we have that $\hat{z}^s_i \in \mathrm{supp}(h^*) \setminus \{x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_{i-1}\}$. Then, by definition, we have that $\hat{z}^s_{m+1} = Q(x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_m)$. The proof of the claim is complete after noting that $\{x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_m\} \subseteq \mathrm{supp}(h^*)$ and $|\{x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_m\}| \ge d_Q(h^*)$.

Now we complete the overall proof by showing that the output of $G$ on round $s$ lies in $\mathrm{supp}(h^*) \setminus \{x_1, \ldots, x_s\}$. Suppose that $G$ outputs $\hat{z}^s_j$ for some $j \in [r_s]$. If $j < r_s$, then it must be the case that Line 7 fired, and so by construction $\hat{z}^s_j \in \mathrm{supp}(h^*) \setminus \{x_1, \ldots, x_s\}$. Thus, suppose that $j = r_s$. If $j = r_s$, then it must be the case that $\{\hat{z}^s_1, \ldots, \hat{z}^s_{r_s - 1}\} = \{x_1, \ldots, x_{r_s - 1}\}$. Accordingly, we have that $\{x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_{r_s - 1}\} = \{x_1, \ldots, x_s\}$, which means that $\hat{z}^s_{r_s} \in \mathrm{supp}(h^*) \setminus \{x_{r_s}, \ldots, x_s, \hat{z}^s_1, \ldots, \hat{z}^s_{r_s - 1}\}$ and therefore $\hat{z}^s_{r_s} \in \mathrm{supp}(h^*) \setminus \{x_1, \ldots, x_s\}$. Thus, in either case, $G$ perfectly generates on round $s$. Since $s \ge s^*$ was arbitrary, our proof is complete.

In a similar spirit to Theorem 3.10 from Li et al. (2024), we provide another sufficiency condition for noisy generatability in the limit in terms of uniform noise-independent generatability.

Theorem 3.10. Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. If there exists a finite sequence of uniformly noise-independent generatable classes $H_1, \ldots, H_k$ such that $H = \bigcup_{i=1}^{k} H_i$, then $H$ is noisily generatable in the limit.

The proof of Theorem 3.10 is similar to the proof of Theorem 3.10 from Li et al. (2024), and so we defer it to Appendix G.

## 4. Discussion and Open Questions

In this paper, we introduced several notions of noisy generatability and made progress towards characterizing which classes are noisily generatable. We highlight some important directions for future work.

Characterizations of Non-uniform Noise-dependent Generatability and Noisy Generatability in the Limit. An important direction for future work is to provide a complete and concise characterization of which classes are non-uniformly noise-dependent generatable and noisily generatable in the limit. Note that Li et al. (2024) also pose the complete characterization of (noiseless) generatability in the limit as an open question, but do provide a complete characterization of (noiseless) non-uniform generatability. We conjecture that the correct characterization of non-uniform noise-dependent generatability will need to go beyond our sufficiency and necessity conditions in Section 3.3.

Noisy Generatability in the Limit via Membership Oracles. In addition to showing that all countable classes are generatable in the limit, Kleinberg & Mullainathan (2024) provide a computable algorithm for doing so when given access to a membership oracle. A membership oracle $O_H : H \times X \to \{0,1\}$ takes as input a hypothesis $h \in H$ and an example $x \in X$ and outputs $\mathbb{1}\{h(x) = 1\}$. Although we show that all countable classes are also noisily generatable in the limit, the focus of this paper is information-theoretic in nature. This motivates our next open question: given access to a membership oracle, does there exist a computable algorithm that can noisily generate in the limit from any countable class?
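In code, the oracle of this open question is just a black-box predicate over (hypothesis, example) pairs. The small sketch below only fixes an interface for it; the type alias and wrapper are our own illustrative names.

```python
from typing import Callable

# O_H : H x X -> {0, 1}, accessed only as a black box. Hypotheses are
# referred to by index, matching a fixed enumeration h_1, h_2, ... of a
# countable class.
MembershipOracle = Callable[[int, int], int]

def oracle_for(hypotheses) -> MembershipOracle:
    """Wrap concrete {0,1}-valued functions as a membership oracle."""
    return lambda i, x: int(hypotheses[i](x))

# The open question asks for a *computable* generator that noisily
# generates in the limit while touching H only through calls oracle(i, x).
```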
## Acknowledgements

The authors thank anonymous reviewer gxU2 for catching a bug in Theorem 3.1 and providing a fix. VR acknowledges support from the NSF Graduate Research Fellowship Program (GRFP).

## Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

## References

Angluin, D. Finding patterns common to a set of strings. In Proceedings of the Eleventh Annual ACM Symposium on Theory of Computing, pp. 130–141, 1979.

Angluin, D. Inductive inference of formal languages from positive data. Information and Control, 45(2):117–135, 1980.

Baliga, G., Jain, S., and Sharma, A. Learning from multiple sources of inaccurate data. In Analogical and Inductive Inference: International Workshop AII '92, Dagstuhl Castle, Germany, October 5–9, 1992, Proceedings 3, pp. 108–128. Springer, 1992.

Bartlett, P. L., Long, P. M., and Williamson, R. C. Fat-shattering and the learnability of real-valued functions. In Proceedings of the Seventh Annual Conference on Computational Learning Theory, pp. 299–310, 1994.

Ben-David, S., Pál, D., and Shalev-Shwartz, S. Agnostic online learning. In COLT, volume 3, pp. 1, 2009.

Briesch, M., Sobania, D., and Rothlauf, F. Large language models suffer from their own output: An analysis of the self-consuming training loop. arXiv preprint arXiv:2311.16822, 2023.

Burns, C., Izmailov, P., Kirchner, J. H., Baker, B., Gao, L., Aschenbrenner, L., Chen, Y., Ecoffet, A., Joglekar, M., Leike, J., et al. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390, 2023.

Case, J., Jain, S., and Sharma, A. Synthesizing noise-tolerant language learners. In Algorithmic Learning Theory: 8th International Workshop, ALT '97, Sendai, Japan, October 6–8, 1997, Proceedings 8, pp. 228–243. Springer, 1997.

Charikar, M. and Pabbaraju, C. Exploring facets of language generation in the limit. arXiv preprint arXiv:2411.15364, 2024.

Fulk, M. and Jain, S. Learning in the presence of inaccurate information. 1989.

Gold, E. M. Language identification in the limit. Information and Control, 10(5):447–474, 1967.

Jain, S. Program synthesis in the presence of infinite number of inaccuracies. In International Workshop on Analogical and Inductive Inference, pp. 333–348. Springer, 1994.

Jiang, S., Kadhe, S. R., Zhou, Y., Cai, L., and Baracaldo, N. Forcing generative models to degenerate ones: The power of data poisoning attacks. arXiv preprint arXiv:2312.04748, 2023.

Kalavasis, A., Mehrotra, A., and Velegkas, G. Characterizations of language generation with breadth. arXiv preprint arXiv:2412.18530, 2024a.

Kalavasis, A., Mehrotra, A., and Velegkas, G. On the limits of language generation: Trade-offs between hallucination and mode collapse. arXiv preprint arXiv:2411.09642, 2024b.

Khan, S., Naseer, M., Hayat, M., Zamir, S. W., Khan, F. S., and Shah, M. Transformers in vision: A survey. ACM Computing Surveys (CSUR), 54(10s):1–41, 2022.

Kleinberg, J. and Mullainathan, S. Language generation in the limit. arXiv preprint arXiv:2404.06757, 2024.

Lange, S. and Grieser, G. On the power of incremental learning. Theoretical Computer Science, 288(2):277–307, 2002.

Lange, S., Zeugmann, T., and Zilles, S. Learning indexed families of recursive languages from positive data: a survey. Theoretical Computer Science, 397(1-3):194–232, 2008.

Li, J., Raman, V., and Tewari, A. Generation through the lens of learning theory. arXiv preprint arXiv:2410.13714, 2024.

Littlestone, N. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1987.

Mukouchi, Y. and Sato, M. Refutable language learning with a neighbor system. Theoretical Computer Science, 298(1):89–110, 2003.

Rakhlin, A., Sridharan, K., and Tewari, A. Sequential complexities and uniform martingale laws of large numbers. Probability Theory and Related Fields, 161:111–153, 2015.

Schäfer, G. Some results in the theory of effective program synthesis: Learning by defective information. In Int. Spring School on Mathematical Methods of Specification and Synthesis of Software Systems, pp. 219–225. Springer, 1985.
Shang, M., Fu, Z., Peng, N., Feng, Y., Zhao, D., and Yan, R. Learning to converse with noisy data: Generation with calibration. In IJCAI, volume 7, 2018.

Stephan, F. Noisy inference and oracles. Theoretical Computer Science, 185(1):129–157, 1997.

Tantini, F., de la Higuera, C., and Janodet, J.-C. Identification in the limit of systematic-noisy languages. In Grammatical Inference: Algorithms and Applications: 8th International Colloquium, ICGI 2006, Tokyo, Japan, September 20–22, 2006, Proceedings 8, pp. 19–31. Springer, 2006.

Vanhaelen, Q., Lin, Y.-C., and Zhavoronkov, A. The advent of generative chemistry. ACS Medicinal Chemistry Letters, 11(8):1496–1505, 2020.

Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, 2020.

Zhang, S., Qian, Z., Huang, K., Zhang, R., Xiao, J., He, Y., and Lu, C. Robust generative adversarial network. Machine Learning, 112(12):5135–5161, 2023.

## A. Definitions of Noiseless Generation

Definition A.1 (Uniform Generatability (Li et al., 2024)). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is uniformly generatable if there exists a generator $G$ and $d^* \in \mathbb{N}$ such that for every $h \in H$ and any sequence $x_1, x_2, \ldots$ with $\{x_1, x_2, \ldots\} \subseteq \mathrm{supp}(h)$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\}| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Definition A.2 (Non-uniform Generatability (Li et al., 2024)). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is non-uniformly generatable if there exists a generator $G$ such that for every $h \in H$, there exists a $d^* \in \mathbb{N}$ such that for any sequence $x_1, x_2, \ldots$ with $\{x_1, x_2, \ldots\} \subseteq \mathrm{supp}(h)$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\}| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Definition A.3 (Generatability in the Limit (Li et al., 2024)). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is generatable in the limit if there exists a generator $G$ such that for every $h \in H$ and any enumeration $x_1, x_2, \ldots$ of $\mathrm{supp}(h)$, there exists a $t^* \in \mathbb{N}$ such that $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

## B. Summary of Results for Noiseless Generation

In the noiseless setting, Kleinberg & Mullainathan (2024) initiated the study of generation in the limit by showing that all countable classes are generatable in the limit and that all finite classes are uniformly generatable. Following this result, Li et al. (2024) provided a complete characterization of uniform generation in terms of a new combinatorial parameter termed the Closure dimension.

Definition B.1 (Closure dimension (Li et al., 2024)). The Closure dimension of $H$, denoted $C(H)$, is the largest natural number $d \in \mathbb{N}$ for which there exist distinct $x_1, \ldots, x_d \in X$ such that $\langle x_1, \ldots, x_d \rangle_{H,0} \ne \perp$ and $|\langle x_1, \ldots, x_d \rangle_{H,0}| < \infty$. If this is true for arbitrarily large $d \in \mathbb{N}$, then we say that $C(H) = \infty$. On the other hand, if this is not true for $d = 1$, we say that $C(H) = 0$.

In particular, a class is uniformly generatable if and only if its Closure dimension is finite.
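Since $\langle \cdot \rangle_{H,0}$ is the noiseless closure operator, Definition B.1 is exactly the $n = 0$ case of Definition 3.2, and the earlier brute-force illustration specializes directly (same finite-truncation caveats; the helper names are ours):

```python
def closure_dimension(supports, domain, inf_cutoff, max_d=6):
    """C(H) illustrated as NC_0(H), via the nc_dimension sketch above."""
    return nc_dimension(supports, domain, n=0, inf_cutoff=inf_cutoff, max_d=max_d)
```

On the prime class of Lemma 3.5, this quantity is 0 while $\mathrm{NC}_1$ is unbounded, which is the separation that section exploits.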
Theorem B.2 (Theorem 3.3 in Li et al. (2024)). A class $H \subseteq \{0,1\}^X$ is uniformly generatable if and only if $C(H) < \infty$.

In addition, Li et al. (2024) defined an intermediate setting termed non-uniform generatability and provided a complete characterization of which classes are non-uniformly generatable.

Theorem B.3 (Theorem 3.5 in Li et al. (2024)). A class $H \subseteq \{0,1\}^X$ is non-uniformly generatable if and only if there exists a non-decreasing sequence of classes $H_1 \subseteq H_2 \subseteq \ldots$ such that $H = \bigcup_{i=1}^{\infty} H_i$ and $C(H_i) < \infty$ for every $i \in \mathbb{N}$.

As a corollary, Li et al. (2024) establish that all countable classes are actually non-uniformly generatable. Charikar & Pabbaraju (2024) also established this result independently. With regards to generatability in the limit, Kleinberg & Mullainathan (2024) prove that all countable classes are generatable in the limit. Li et al. (2024) provide an alternate sufficiency condition in terms of the Closure dimension.

Theorem B.4 (Theorem 3.10 in Li et al. (2024)). A class $H \subseteq \{0,1\}^X$ is generatable in the limit if there exists a finite sequence of classes $H_1, H_2, \ldots, H_n$ such that $H = \bigcup_{i=1}^{n} H_i$ and $C(H_i) < \infty$ for all $i \in [n]$.

In this paper, we provide analogous results for noisy generation in terms of a different combinatorial parameter termed the Noisy Closure dimension.

## C. An Alternate Version of Uniform Noise-independent Generatability

In this section, we consider an alternate (weaker) version of uniform noise-independent generatability by measuring the sample complexity only in terms of the valid positive examples observed.

Definition C.1 (Alternate Uniform Noise-independent Generatability). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is uniformly noise-independent generatable if there exists a generator $G$ and $d^* \in \mathbb{N}$ such that for every $h \in H$ and any sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\} \cap \mathrm{supp}(h)| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Note that the only difference between Definitions C.1 and D.1 is how $d^*$ is measured. In Definition C.1, $d^*$ captures only the number of distinct positive examples in the stream, while $d^*$ in Definition D.1 measures the total number of distinct examples in the stream. We start by proving a simple necessary condition for our alternate notion of uniform noise-independent generatability.

Lemma C.2 (Necessary Condition for Alternate Uniform Noise-independent Generatability). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. If there exist a subclass $F \subseteq H$ and a hypothesis $f \in F$ such that $|\bigcap_{h \in F} \mathrm{supp}(h)| < \infty$ and $|\bigcap_{h \in F \setminus \{f\}} \mathrm{supp}(h)| = \infty$, then $H$ is not uniformly noise-independent generatable according to Definition C.1.

Like Theorem 3.1, Lemma C.2 is a hardness result: it shows that finite classes with just two hypotheses may not be uniformly noise-independent generatable, even when the sample complexity is measured in terms of positive examples.

Proof (of Lemma C.2). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. Let $F \subseteq H$ be any subset of $H$ such that $|\bigcap_{h \in F} \mathrm{supp}(h)| < \infty$ and for which there exists an $f \in F$ such that $|\bigcap_{h \in F \setminus \{f\}} \mathrm{supp}(h)| = \infty$. We need to show that $H$ is not uniformly noise-independent generatable. We will show that $F$ is not uniformly noise-independent generatable, which will imply that $H$ is not uniformly noise-independent generatable.
To that end, we need to show that for every $G$ and any $d \in \mathbb{N}$, there exists an $h \in F$ and a sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$, such that for every $t \in \mathbb{N}$ where $|\{x_1, \ldots, x_t\} \cap \mathrm{supp}(h)| = d$, there exists an $s \ge t$ such that $G(x_{1:s}) \notin \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$. To that end, fix a generator $G$ and a number $d \in \mathbb{N}$. Let $x_1, \ldots, x_d$ be a sequence of $d$ unique examples in $\bigcap_{h \in F \setminus \{f\}} \mathrm{supp}(h)$. Let $z_1, \ldots, z_d$ be any $d$ unique examples in $\mathrm{supp}(f) \setminus \bigcap_{h \in F} \mathrm{supp}(h)$ and consider the sequence of $2d$ unique examples obtained by concatenating $z_1, \ldots, z_d$ to the end of $x_1, \ldots, x_d$. Denote this new sequence by $x_{1:2d}$. Let $q = |\bigcap_{h \in F} \mathrm{supp}(h)|$ and let $v_1, \ldots, v_q$ be its elements sorted in their natural order. Concatenate $v_1, \ldots, v_q$ to the end of $x_{1:2d}$ and denote this new sequence of unique examples as $x_{1:2d+q}$. Observe that for every $h \in F$, we have that $|\mathrm{supp}(h) \cap \{x_1, \ldots, x_{2d+q}\}| \ge d$. Let $\hat{x}_{2d+q} = G(x_1, \ldots, x_{2d+q})$ and suppose without loss of generality that $\hat{x}_{2d+q} \notin \{x_1, \ldots, x_{2d+q}\}$. Let $h' \in F$ be a hypothesis such that $\hat{x}_{2d+q} \notin \mathrm{supp}(h')$. Such a hypothesis must exist because $\hat{x}_{2d+q} \notin \{x_1, \ldots, x_{2d+q}\}$ and $\bigcap_{h \in F} \mathrm{supp}(h) \subseteq \{x_1, \ldots, x_{2d+q}\}$. Let $x_{2d+q+1}, x_{2d+q+2}, \ldots$ be any completion of the stream such that $\{x_{2d+q+t}\}_{t=1}^{\infty} \subseteq \mathrm{supp}(h')$ and $\{x_{2d+q+t}\}_{t=1}^{\infty} \cap \{x_t\}_{t=1}^{2d+q} = \emptyset$.

We are now ready to complete the proof. Let $h'$ and $x_1, x_2, \ldots$ be the hypothesis and sequence chosen above. Then, by definition, observe that $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h')\} = \sum_{t=1}^{2d+q} \mathbb{1}\{x_t \notin \mathrm{supp}(h')\} \le d < \infty$. Moreover, we also know that $|\mathrm{supp}(h') \cap \{x_1, \ldots, x_{2d+q}\}| \ge d$ and the first $2d$ examples are distinct, implying that there exists a time point $t^* \le 2d+q$ such that $|\mathrm{supp}(h') \cap \{x_1, \ldots, x_{t^*}\}| = d$. Finally, noting that $2d+q \ge t^*$ and that $G(x_1, \ldots, x_{2d+q}) \notin \mathrm{supp}(h') \setminus \{x_1, \ldots, x_{2d+q}\}$ completes the proof that $F$ is not uniformly noise-independent generatable, since $G$ and $d$ were chosen arbitrarily.

Finally, Theorem C.3 provides a full characterization of uniform noise-independent generatability (Definition C.1) in terms of the Noisy Closure dimension.

Theorem C.3 (Characterization of Uniform Noise-independent Generatability). Let $X$ be countable and $H \subseteq \{0,1\}^X$ satisfy the UUS property. Then, $H$ is uniformly noise-independent generatable if and only if $\sup_{n \in \mathbb{N}} (\mathrm{NC}_n(H) - n) < \infty$.

As the proof follows similar techniques to those in the main text, we only provide a proof sketch here.

Proof (sketch of Theorem C.3). For the necessity direction, suppose that $\sup_{n \in \mathbb{N}} (\mathrm{NC}_n(H) - n) = \infty$. Then for every $d \in \mathbb{N}$, we can find a $t \ge d$ and a sequence $x_1, \ldots, x_t$ such that $|\langle x_1, \ldots, x_t \rangle_{H,t-d}| < \infty$. Hence, by padding $x_1, \ldots, x_t$ with any remaining elements in $\langle x_1, \ldots, x_t \rangle_{H,t-d}$, we can force the generator to make a mistake while ensuring that the hypothesis chosen is consistent with at least $d$ examples in the stream. For the sufficiency direction, if $\sup_{n \in \mathbb{N}} (\mathrm{NC}_n(H) - n) < \infty$, then there exists a $d \in \mathbb{N}$ such that for every $t \ge d$ and distinct $x_1, \ldots, x_t$, we have that either $\langle x_1, \ldots, x_t \rangle_{H,t-d} = \perp$ or $|\langle x_1, \ldots, x_t \rangle_{H,t-d}| = \infty$. Thus, the algorithm which for $t \ge d$ plays from $\langle x_1, \ldots, x_t \rangle_{H,t-d} \setminus \{x_1, \ldots, x_t\}$ whenever $\langle x_1, \ldots, x_t \rangle_{H,t-d} \ne \perp$ is guaranteed to succeed.
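The sufficiency algorithm in this sketch transcribes directly. As with the earlier sketches, this is an illustration over finite truncations, reusing the hypothetical `noisy_closure` and `arbitrary_example` helpers, with the threshold $d$ passed in as `d_star`.

```python
def noise_independent_round(supports, xs, d_star):
    """One round of the Theorem C.3 sufficiency algorithm: once
    t >= d_star, play from the closure at noise level t - d_star. For
    the abstract class that closure is either undefined or infinite, so
    whenever it is defined it contains an unseen example."""
    t = len(xs)
    if t < d_star:
        return arbitrary_example()
    closure = noisy_closure(supports, xs, t - d_star)
    if closure is None:
        return arbitrary_example()  # closure undefined: play anything
    candidates = closure - set(xs)
    return min(candidates) if candidates else arbitrary_example()
```

Note the contrast with the Theorem 3.3 generator: there the noise level $n_t$ is estimated from the data, while here the slack $t - d^*$ grows automatically with the stream length.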
## D. The Hardness of Non-uniform Noise-independent Generatability

As alluded to in Remark 2.8, there is a fifth setting of noisy generation which we did not define in the main text. In this setting, the sample complexity of the generator can be non-uniform over the class $H$ but must still be uniform over the noise level and stream picked by the adversary. We define this formally below.

Definition D.1 (Non-uniform Noise-independent Generatability). Let $H \subseteq \{0,1\}^X$ be any hypothesis class satisfying the UUS property. Then, $H$ is non-uniformly noise-independent generatable if there exists a generator $G$ such that for every $h \in H$, there exists $d^* \in \mathbb{N}$ such that for any sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$, if there exists $t^* \in \mathbb{N}$ such that $|\{x_1, \ldots, x_{t^*}\} \cap \mathrm{supp}(h)| = d^*$, then $G(x_{1:s}) \in \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$ for all $s \ge t^*$.

Unfortunately, non-uniform noise-independent generatability is still a restrictive property, as not all finite classes are non-uniformly noise-independent generatable. This is in contrast to uniform noise-dependent generatability, as shown by Corollary 3.4.

Lemma D.2 (Not All Finite Classes are Non-uniformly Noise-independent Generatable). Let $X$ be countable. There exists a finite class $H \subseteq \{0,1\}^X$ which satisfies the UUS property that is not non-uniformly noise-independent generatable.

Proof. Let $X = \mathbb{N}$ and consider the class $H = \{h_e, h_o\}$ such that $h_e(x) = \mathbb{1}\{x \text{ is even}\}$ and $h_o(x) = \mathbb{1}\{x \text{ is odd}\}$. We will show that $H$ is not non-uniformly noise-independent generatable. We need to show that for every generator $G$, there exists a hypothesis $h \in H$ such that for every $d \in \mathbb{N}$, there exists a sequence $x_1, x_2, \ldots$ with $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h)\} < \infty$, so that for every $t \in \mathbb{N}$ such that $|\{x_1, \ldots, x_t\} \cap \mathrm{supp}(h)| = d$, there exists an $s \ge t$ where $G(x_{1:s}) \notin \mathrm{supp}(h) \setminus \{x_1, \ldots, x_s\}$.

To that end, fix a generator $G$. Let $S = \{G(1, 2, \ldots, i)\}_{i \in \mathbb{N}}$. There are three cases to consider: (1) the number of even numbers in $S$ is finite, (2) the number of odd numbers in $S$ is finite, and (3) there are an infinite number of both even and odd numbers in $S$. Cases (1) and (2) are symmetric, so, without loss of generality, it suffices to consider only Cases (1) and (3).

Starting with Case (1), let $p \in \mathbb{N}$ be the smallest time point such that for all $t \ge p$, we have that $G(1, \ldots, t)$ is odd. We will pick $h_e$ to use for the lower bound. Fix some $d \in \mathbb{N}$. Consider the sequence $x_1, x_2, \ldots$ such that $x_i = i$ for all $i \le \max\{p, 2d\}$ and $x_i = 2i$ for all $i > \max\{p, 2d\}$. Note that $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h_e)\} \le \max\{p, 2d\}$. Suppose $2d < p$. Then, observe that on time point $t = 2d$ we have that $|\{x_1, \ldots, x_t\} \cap \mathrm{supp}(h_e)| = d$. However, by definition, on time point $s = p \ge 2d = t$, we have that $G(x_1, \ldots, x_s) = G(1, \ldots, p)$ is odd and therefore not in $\mathrm{supp}(h_e)$. Suppose instead that $2d \ge p$. Then, like before, on time point $t = 2d$ we have that $|\{x_1, \ldots, x_t\} \cap \mathrm{supp}(h_e)| = d$. However, by definition, also on time point $s = 2d \ge p$, we have that $G(x_1, \ldots, x_s)$ is odd and therefore not in $\mathrm{supp}(h_e)$. Since $d \in \mathbb{N}$ is arbitrary, this is true for all $d \in \mathbb{N}$, completing the proof for Case (1). As mentioned before, the proof for Case (2) follows by symmetry.

We now consider Case (3). For this case, we will again pick $h_e$ for the lower bound. Fix some $d \in \mathbb{N}$. Let $p \ge 2d$ be the smallest time point after and including time point $2d$ such that $G(1, 2, \ldots, p)$ is odd. Such a $p \in \mathbb{N}$ must exist since $S$ contains an infinite number of odd numbers. Now, consider the sequence $x_1, x_2, \ldots$ such that $x_i = i$ for $i \le p$ and $x_i = 2i$ for all $i > p$. Note that $\sum_{t=1}^{\infty} \mathbb{1}\{x_t \notin \mathrm{supp}(h_e)\} \le p$. Observe that at time point $t = 2d$ we have that $|\{x_1, \ldots, x_t\} \cap \mathrm{supp}(h_e)| = d$. However, by definition, also on time point $s = p \ge 2d = t$, we have that $G(x_1, \ldots, x_p)$ is odd and therefore does not lie in $\mathrm{supp}(h_e)$. Since $d \in \mathbb{N}$ is arbitrary, this is true for all $d \in \mathbb{N}$, completing the proof for Case (3) and the overall proof.
E. Proof of Theorem 3.3

Proof. (of necessity in Theorem 3.3) Let X be countable and let H ⊆ {0, 1}X satisfy the UUS property. Suppose there exists an n ∈ N such that NCn(H) = ∞. We need to show that this means that H is not uniformly noise-dependent generatable. In particular, we need to show that for every generator G, there exists a noise level n∗ ∈ N such that for every d ∈ N, there exists a hypothesis h ∈ H and a sequence x1, x2, . . . with ∑t≥1 1{xt ∉ supp(h)} ≤ n∗, so that for every t ∈ N with |{x1, . . . , xt}| = d, there exists an s ≥ t where G(x1:s) ∉ supp(h) \ {x1, . . . , xs}.

To that end, let G be any generator and consider the noise level n∗ = n, so that NCn∗(H) = ∞. Fix any d ∈ N. We now pick such a hypothesis h ∈ H and a valid stream x1, x2, . . . . Since NCn∗(H) = ∞, we know there exist a d′ ≥ d and a sequence of distinct examples x1, . . . , xd′ such that ⟨x1, . . . , xd′⟩H,n∗ ≠ ∅ and |⟨x1, . . . , xd′⟩H,n∗| < ∞. Let q = |⟨x1, . . . , xd′⟩H,n∗ \ {x1, . . . , xd′}| and let z1, . . . , zq be the elements of ⟨x1, . . . , xd′⟩H,n∗ \ {x1, . . . , xd′} sorted in their natural order. Let x1:d′+q denote the sequence obtained by concatenating z1, . . . , zq to the end of x1, . . . , xd′. Let x̂d′+q = G(x1:d′+q) and suppose, without loss of generality, that x̂d′+q ∉ {x1, . . . , xd′+q}. Let h′ ∈ H(x1:d′, n∗) be any hypothesis such that x̂d′+q ∉ supp(h′). Such a hypothesis must exist because x̂d′+q ∉ {x1, . . . , xd′+q} and ⟨x1, . . . , xd′⟩H,n∗ ⊆ {x1, . . . , xd′+q}. Finally, let xd′+q+1, xd′+q+2, . . . be any completion of the stream such that {xd′+q+t}t≥1 ⊆ supp(h′) and {xd′+q+t}t≥1 ∩ {x1, . . . , xd′+q} = ∅.

We now complete the proof. Let h′ and x1, x2, . . . be the hypothesis and stream chosen above. First, observe that ∑t≥1 1{xt ∉ supp(h′)} = ∑t≤d′+q 1{xt ∉ supp(h′)} ≤ n∗, since h′ ∈ H(x1:d′, n∗) and z1, . . . , zq ∈ ⟨x1, . . . , xd′⟩H,n∗ ⊆ supp(h′). Second, there exists only one t ∈ N, namely t = d, such that |{x1, . . . , xt}| = d, as the first d + 1 examples of the stream are distinct by construction. However, observe that by construction, at time point s = d′ + q ≥ d, we have that G(x1, . . . , xs) = x̂d′+q ∉ supp(h′) \ {x1, . . . , xs} by choice of h′. Since G and d ∈ N were picked arbitrarily, this is true for all G and d ∈ N, which completes the proof.

Proof. (of sufficiency in Theorem 3.3) The key intuition is that the generator, without knowing the adversary's noise level, continually computes the largest noise level for which it has observed enough unique examples to generate perfectly. Eventually, its computed noise level surpasses, and thus accounts for, the adversary's noise level, ensuring that the generator is perfect from then on.

Let X be countable and let H ⊆ {0, 1}X satisfy the UUS property. We need to show that if NCn(H) < ∞ for all n ∈ N, then H is uniformly noise-dependent generatable. For a sequence x1, x2, . . . , let dt := |{x1, . . . , xt}| denote the number of distinct examples amongst the first t examples. Consider the following generator G. At time t ∈ N, G computes nt := max({n ∈ [t] : NCn(H) < dt} ∪ {0}). If nt = 0, G plays any x ∈ X. Otherwise, it plays from ⟨x1, . . . , xt⟩H,nt \ {x1, . . . , xt}.

We now show that G is a uniform noise-dependent generator. To that end, let n∗ ∈ N be the noise level chosen by the adversary. We claim that G can perfectly generate after observing max{NCn∗(H) + 1, n∗} distinct examples. To see why, let h ∈ H be any hypothesis and x1, x2, . . . be any stream such that ∑t≥1 1{xt ∉ supp(h)} ≤ n∗. Without loss of generality, suppose there exists a t∗ ∈ N such that dt∗ = max{NCn∗(H) + 1, n∗}. Observe that t∗ ≥ n∗. Fix any s ≥ t∗. By definition of nt, it must be the case that ns ≥ n∗, since n∗ ∈ [s] and NCn∗(H) < ds. Thus, h ∈ H(x1, . . . , xs, ns) and ⟨x1, . . . , xs⟩H,ns ⊆ supp(h). Moreover, because ds > NCns(H), again by definition, it must be the case that |⟨x1, . . . , xs⟩H,ns| = ∞. Accordingly, ⟨x1, . . . , xs⟩H,ns \ {x1, . . . , xs} ≠ ∅ and G is guaranteed to output an example in supp(h) on round s. Since s ≥ t∗ was chosen arbitrarily, this is true for all such s, completing the proof.
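As an illustration of the play rule above, here is a minimal Python sketch (ours, not the paper's pseudocode). It again represents supports as explicit finite sets purely for illustration; nc is a hypothetical callable returning NCn(H), assumed finite for every n, and fallback is our stand-in for an arbitrary play.

```python
# A minimal sketch of the uniform noise-dependent generator from the
# sufficiency direction of Theorem 3.3; not the paper's code.

def noisy_closure(supports, xs, n):
    """<x1, ..., xt>_{H,n}: intersect supp(h) over all h in H that
    misclassify at most n of x1, ..., xt."""
    consistent = [S for S in supports if sum(x not in S for x in xs) <= n]
    return set.intersection(*consistent) if consistent else set()

def noise_dependent_play(supports, nc, xs, fallback):
    t, d_t = len(xs), len(set(xs))  # d_t = number of distinct examples so far
    # n_t = max({n in [t] : NC_n(H) < d_t} union {0})
    n_t = max((n for n in range(1, t + 1) if nc(n) < d_t), default=0)
    if n_t == 0:
        return fallback             # play an arbitrary example
    candidates = noisy_closure(supports, xs, n_t) - set(xs)
    return min(candidates) if candidates else fallback
```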
F. Proof of Lemma 3.6

Proof. Let X be countable and let H ⊆ {0, 1}X be any class satisfying the UUS property. Suppose there exists a non-decreasing sequence of classes H1 ⊆ H2 ⊆ · · · such that H = ⋃i≥1 Hi and NCi(Hi) < ∞ for all i ∈ N. For a sequence x1, x2, . . . , let dt := |{x1, . . . , xt}| denote the number of distinct examples amongst the first t examples. Consider the following generator G. At time t ∈ N, G computes jt := max({i ∈ [t] : NCi(Hi) < dt} ∪ {0}). If jt = 0, G plays any x ∈ X. Otherwise, it plays from ⟨x1, . . . , xt⟩Hjt,jt \ {x1, . . . , xt}.

We claim that G is a non-uniform noise-dependent generator for H. To see why, let n∗ ∈ N and h ∈ H be the noise level and hypothesis picked by the adversary. Let i∗ ∈ N be the smallest index such that h ∈ Hi∗ and define j∗ = max{i∗, n∗}. We claim that G perfectly generates after observing max{NCj∗(Hj∗) + 1, j∗} distinct examples. To that end, let x1, x2, . . . be any sequence such that ∑t≥1 1{xt ∉ supp(h)} ≤ n∗. Without loss of generality, suppose there exists a t∗ ∈ N such that dt∗ = max{NCj∗(Hj∗) + 1, j∗}. Observe that t∗ ≥ j∗. Fix any s ≥ t∗. By definition of jt, it must be the case that js ≥ j∗ ≥ n∗. Moreover, because js ≥ i∗, we also have that h ∈ Hjs. Together, this means that h ∈ Hjs(x1, . . . , xs, js) and ⟨x1, . . . , xs⟩Hjs,js ⊆ supp(h). Also, because ds > NCjs(Hjs), it must be the case that |⟨x1, . . . , xs⟩Hjs,js| = ∞. Accordingly, ⟨x1, . . . , xs⟩Hjs,js \ {x1, . . . , xs} ≠ ∅ and G is guaranteed to output an example in supp(h) on round s. Since s ≥ t∗ was chosen arbitrarily, this is true for all such s, completing the proof.

G. Proof of Theorem 3.10

Proof. Let X be countable and let H ⊆ {0, 1}X satisfy the UUS property. Suppose there exists a finite sequence of uniformly noise-independent generatable classes H1, H2, . . . , Hk such that H = ⋃i∈[k] Hi. By Theorem 3.1, we know that for all i ∈ [k], we have that |⋂h∈Hi supp(h)| = ∞. For every i ∈ [k], let z^i_1, z^i_2, . . . denote the elements of ⋂h∈Hi supp(h) sorted in their natural order. Consider the following generator G. Given any noisy enumeration x1, x2, . . . and any t ∈ N, G first computes p^i_t := max{p ∈ N : {z^i_1, . . . , z^i_p} ⊆ {x1, . . . , xt}} for every i ∈ [k], with p^i_t := 0 if no such p exists. Then, G computes it = arg maxi∈[k] p^i_t. Finally, G plays arbitrarily from {z^{it}_1, z^{it}_2, . . . } \ {x1, . . . , xt}.

We claim that G generates from H in the limit. To see why, let h ∈ H and x1, x2, . . . be the hypothesis and noisy enumeration selected by the adversary. Let i∗ ∈ [k] be such that h ∈ Hi∗ and let s∗ ∈ N be such that for all t ≥ s∗, we have that xt ∈ supp(h). Such an s∗ exists because ∑t≥1 1{xt ∉ supp(h)} < ∞. Let S∗ ⊆ [k] be such that i ∈ S∗ if and only if {z^i_1, z^i_2, . . . } ⊆ {x1, x2, . . . }. Note that i∗ ∈ S∗ by definition of z^{i∗}_1, z^{i∗}_2, . . . and the fact that x1, x2, . . . is a noisy enumeration. By definition of S∗ and p^i_t, it must be the case that for all j ∉ S∗, we have that supt∈N p^j_t < ∞. Since k < ∞, we then get that maxj∉S∗ supt∈N p^j_t < ∞. On the other hand, for all i ∈ S∗, we have that p^i_t → ∞. More precisely, for every i ∈ S∗, there exists a finite ti ∈ N such that p^i_{ti} > maxj∉S∗ supt∈N p^j_t. Since |S∗| < ∞, we also have that maxi∈S∗ ti < ∞. Moreover, for each i ∈ S∗, the set {z^i_1, z^i_2, . . . } \ supp(h) is finite, since every z^i_j appears in the stream and the stream contains only finitely many examples outside supp(h); hence there exists a finite time after which every such element has appeared in the stream. Let t∗ be the maximum of maxi∈S∗ ti and this time. Our proof is complete after noting that for all t ≥ t∗, we have that it ∈ S∗ and {z^{it}_1, z^{it}_2, . . . } \ {x1, . . . , xt} ⊆ supp(h).
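To make the limit generator concrete, the following minimal Python sketch (ours, not the paper's code) implements the prefix statistic p^i_t and the argmax play rule. The names prefix_contained and limit_play, and the even/odd enumerations and the sample stream xs, are our hypothetical stand-ins; enums[i] is assumed to be a callable returning a fresh iterator over z^i_1, z^i_2, . . . in their natural order.

```python
# A minimal sketch of the limit generator from the proof of Theorem 3.10;
# not the paper's code.

def prefix_contained(zs, seen):
    """p^i_t: the largest p such that {z_1, ..., z_p} is a subset of `seen`
    (0 if z_1 is unseen). Terminates because the z's are distinct and
    `seen` is finite."""
    p = 0
    for z in zs:
        if z not in seen:
            break
        p += 1
    return p

def limit_play(enums, xs):
    seen = set(xs)
    # i_t = argmax_i p^i_t, then play the first unseen element of z^{i_t}
    i_t = max(range(len(enums)), key=lambda i: prefix_contained(enums[i](), seen))
    return next(z for z in enums[i_t]() if z not in seen)

def evens():  # stand-in for the enumeration of one intersection of supports
    n = 2
    while True:
        yield n
        n += 2

def odds():   # stand-in for the other enumeration
    n = 1
    while True:
        yield n
        n += 2

# A noisy enumeration of the even numbers with one inserted odd example:
xs = [2, 4, 7, 6, 8, 10]
print(limit_play([evens, odds], xs))  # plays an unseen even number: 12
```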