# Credit Attribution and Stable Compression

Roi Livni    Shay Moran    Kobbi Nissim    Chirag Pabbaraju

Abstract

Credit attribution is crucial across various fields. In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators. We study credit attribution by machine learning algorithms. We propose new definitions, relaxations of Differential Privacy, that weaken the stability guarantees for a designated subset of k datapoints. These k datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, each of the remaining datapoints is guaranteed to have no significant influence on the algorithm's output. Our framework extends well-studied notions of stability, including Differential Privacy (k = 0), differentially private learning with public data (where the k public datapoints are fixed in advance), and stable sample compression (where the k datapoints are selected adaptively by the algorithm). We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.

1 Introduction

Many tasks that use machine learning algorithms require proper credit attribution. For example, consider a model trained on scientific papers that needs to reason about facts and figures based on existing literature. Most academic literature is protected under copyright licenses such as CC-BY 4.0, which allows adapting, remixing, transforming, copying, and redistributing the material in any medium or format, as long as attribution is given to the creator. In another setting, a learner generating content, such as images or music, may benefit from creating derivative works from copyrighted materials without violating the creator's rights (either through proper attribution or monetary compensation, depending on the context and licensing). The increasing use of ML algorithms and the need for greater transparency are reflected by the recently implemented EU AI Act, which mandates the disclosure of training data [14]. However, disclosure of training data and proper attribution are not necessarily equivalent. In particular, mere transparency of the dataset does not reveal whether elements of particular training examples have been derived, nor does it provide proper attribution when particular content is heavily built upon. Therefore, there is a need to develop more nuanced notions and definitions that enable learning under the constraint that works are properly attributed. This paper focuses on this challenge, exploring theoretical models of credit attribution to provide rigorous and meaningful definitions for the task.

Affiliations: Roi Livni, Tel Aviv University, rlivni@tauex.tau.ac.il. Shay Moran, Technion and Google Research, smoran@technion.ac.il. Kobbi Nissim, Georgetown University and Google Research, kobbi.nissim@georgetown.edu. Chirag Pabbaraju, Stanford University, cpabbara@cs.stanford.edu.

38th Conference on Neural Information Processing Systems (NeurIPS 2024).

Credit attribution is part of a much larger problem of learning under copyright constraints.
Copyright issues in machine learning models are becoming increasingly prominent as these models are trained on vast amounts of data, much of which is copyrighted. Consequently, the resulting models might contain content from copyrighted data in their training sets. Previous work suggests it may be mathematically challenging to capture algorithms that protect copyright. Specifically, attempts to regulate copyright often focus on protecting against substantial similarity between output content and training data by, for example, employing stable algorithms that are not sensitive to individual training points [9, 22, 24]. This is an important aspect of copyright; however, substantial similarity is only one piece of the puzzle. Another piece of the puzzle involves the use of original elements from copyrighted works in a legally permissible manner, such as through de minimis quotations, transformative use, and other types of fair use, such as learning and research [13]. To fully utilize ML in many practical scenarios, it is desirable for learning models to be allowed to use original elements in a similar manner. To address this second piece, we focus on designing algorithms that, while allowed to use and be influenced by copyrighted material, must provide proper attribution. Such models would enable users to inspect these influences and verify that they conform to legal standards, or take necessary measures (such as monetary compensation or requesting permission). Although credit attribution is narrower in scope than copyright protection in general, even this concept may be too nuanced to capture mathematically in full. Therefore, we focus on formalizing a specific (but arguably basic) aspect of it, counterfactual attribution: this principle asserts that any previous work that influenced the result should be credited. Counterfactually, if the creator of a work W does not acknowledge another work W', they should be able to produce W as if they had no knowledge of W'. For example, an argument based on this principle in the extreme case W = W' is found in a U.S. Supreme Court opinion:

". . . a work may be original even though it closely resembles other works, so long as the similarity is fortuitous, not the result of copying. To illustrate, assume that two poets, each ignorant of the other, compose identical poems. Neither work is novel, yet both are original. . ."
Feist Publications, Inc. v. Rural Telephone Service Company, Inc., 499 U.S. 340 (1991)

2 Definitions and Examples

In this section, we introduce the two main definitions we study. We first recall some standard notation from learning theory and differential privacy. Let Z be an input data domain and let C denote an output space. We denote by Z* the set of all finite sequences with elements from Z. Two sequences S, S' ∈ Z* are called neighbors if they have the same length |S| = |S'| and there is a unique index i such that S_i β‰  S'_i. Let Ξ΅, Ξ΄ β‰₯ 0 and let p, q be probability distributions defined over the same space. We let p β‰ˆ_{Ξ΅,Ξ΄} q denote the following relation: p(E) ≀ exp(Ξ΅)Β·q(E) + Ξ΄ and q(E) ≀ exp(Ξ΅)Β·p(E) + Ξ΄ for every event E.

Algorithms with Credit Attribution. Consider a mechanism M : Z* → C Γ— Z* that, for every possible input sequence S = (z_1, . . . , z_n), outputs a pair (c, R), where c ∈ C and R ∈ Z*. Intuitively, R is the list of inputs being credited by the mechanism, and c is the model/content produced by the mechanism. Thus, we require that each data point z ∈ R is also an input data point, i.e., z ∈ S.
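To fix ideas, the following minimal Python sketch (ours, not from the paper) spells out this interface: a crediting mechanism maps an input sequence S to a pair (c, R) with R a subsequence of S. The names CreditingMechanism and credit_first_k are illustrative; the trivial rule shown credits the first k points, as in the oblivious selection discussed later before Theorem 1.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple, TypeVar

Z = TypeVar("Z")  # type of a data point
C = TypeVar("C")  # type of the produced model/content

@dataclass
class CreditingMechanism:
    """A mechanism M : Z* -> C x Z*: given an input sequence S it returns a pair
    (c, R), where c is the produced model/content and R is the list of credited
    input points (a subsequence of S)."""
    run: Callable[[Sequence[Z]], Tuple[C, List[Z]]]

def credit_first_k(k: int, train: Callable[[Sequence[Z]], C]) -> CreditingMechanism:
    """Oblivious selection: credit the first k points and let the output depend
    only on them (cf. the oblivious baseline discussed before Theorem 1)."""
    def run(S: Sequence[Z]) -> Tuple[C, List[Z]]:
        R = list(S[:k])           # credited points
        return train(R), R        # the output depends only on the credited points
    return CreditingMechanism(run)

if __name__ == "__main__":
    # Toy usage: the "model" is just the mean of the credited points.
    mech = credit_first_k(3, train=lambda R: sum(R) / len(R))
    c, R = mech.run([5.0, 1.0, 3.0, 100.0, -7.0])
    print(c, R)                   # 3.0 [5.0, 1.0, 3.0]
```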
For such a mechanism M and an input sequence S, we let M(S) denote the probability distribution over outputs of M given S as input, where the probability is induced by the internal randomness of the mechanism. For example, if M is deterministic, then M(S) is a Dirac distribution (i.e., it assigns probability 1 to the deterministic output of M on S). The definition below uses the following notation: for a sequence S = (z_1, . . . , z_n) and an index i ∈ [n], we let S_{βˆ’i} denote the subsequence of S obtained by omitting z_i. Let z_i ∈ S be a data point such that Pr_{(c,R)∼M(S)}[z_i ∈ R] < 1; that is, there is a positive probability that z_i is not credited by M when executed on S. In this case, we let M(S | z_i βˆ‰ R) denote the distribution of M(S) conditioned on the event that z_i βˆ‰ R. We are now ready to present our first definition of counterfactual credit attribution (see Footnote 5).

Definition 1 (Counterfactual Credit Attribution). Let Ξ΅, Ξ΄ β‰₯ 0. A mechanism M : Z* → C Γ— Z* is called an (Ξ΅, Ξ΄)-counterfactual credit attributor (CCA) if for every input sequence S = (z_1, . . . , z_n) and every index i ∈ [n] the following holds: either Pr_{(c,R)∼M(S)}[z_i ∈ R] = 1, or M(S | z_i βˆ‰ R) β‰ˆ_{Ξ΅,Ξ΄} M(S_{βˆ’i}), where M(S_{βˆ’i}) is the output distribution on the dataset S_{βˆ’i} = S \ {z_i}, and M(S | z_i βˆ‰ R) is the output distribution on the dataset S, conditioned on z_i βˆ‰ R.

To emphasize, in Definition 1 the conditional output distribution M(S | z_i βˆ‰ R) models the case in which the data point z_i is not credited by M, whereas the output distribution M(S_{βˆ’i}) represents the counterfactual scenario had the data point z_i not been seen by M.

[Figure 1 (Example: Support Vector Machine): Support Vector Machine (SVM) as an (Ξ΅ = Ξ΄ = 0)-counterfactual credit attributor. The SVM algorithm identifies a maximum-margin separating hyperplane, which is defined by the subsample of support vectors. Any input point which is not a support vector does not influence the output: even if it is removed from the input sample, the output hyperplane does not change.]

Example 2.1 (Stable Sample Compression [8, 20]). A mechanism A : Z* → C is a stable sample compression scheme of size k if for every input sequence S = (z_1, . . . , z_n) there is a subsequence ΞΊ(S) βŠ† S of size |ΞΊ(S)| ≀ k such that A(S) = A(T) for every intermediate subsequence ΞΊ(S) βŠ† T βŠ† S. See Figure 1 for an example. Each stable compression scheme corresponds to an (Ξ΅ = 0, Ξ΄ = 0)-CCA which credits the datapoints in ΞΊ(S); that is, M(S) = (A(S), ΞΊ(S)). Stable sample compression in fact provides something stronger: group counterfactuality, meaning that any subset of datapoints that is not selected does not influence the output.
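The SVM example of Figure 1 and Example 2.1 can be checked empirically. The following sketch uses scikit-learn (not used in the paper) and verifies, up to numerical tolerance, that removing a point that is not a support vector leaves the maximum-margin hyperplane unchanged.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Linearly separable toy data: class 0 around (-2, 0), class 1 around (+2, 0).
X = np.vstack([rng.normal(-2.0, 0.3, size=(20, 2)),
               rng.normal(+2.0, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

# A large C approximates the hard-margin SVM on separable data.
svm = SVC(kernel="linear", C=1e6).fit(X, y)
support = set(svm.support_)  # indices of the credited points, i.e. kappa(S)
print("credited (support vector) indices:", sorted(support))

# Remove one point that is *not* a support vector and refit.
i = next(j for j in range(len(X)) if j not in support)
svm2 = SVC(kernel="linear", C=1e6).fit(np.delete(X, i, axis=0), np.delete(y, i))

# Up to numerical tolerance, the separating hyperplane (w, b) is unchanged.
print("same w:", np.allclose(svm.coef_, svm2.coef_, atol=1e-6),
      "same b:", np.allclose(svm.intercept_, svm2.intercept_, atol=1e-6))
```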
Definition 1 not only relaxes stable sample compression, but also extends the concept of differential privacy with public data, known as semi-private learning. In semi-private learning, the learner's input includes public examples (which can be processed non-stably) and private examples (for which the algorithm must satisfy differential privacy guarantees). Semi-private learning [1, 4] has been extensively studied in recent years [19], for example in the context of query release [3, 18], distribution learning [5, 6], and computational efficiency [7, 21], as well as in other contexts.

Definition 2 (Semi-Differentially Private Mechanism). Let Ξ΅, Ξ΄ β‰₯ 0. An (Ξ΅, Ξ΄)-semi differentially private (semi-DP) mechanism is a mapping M : Z* Γ— Z* → C such that for every S_pub ∈ Z* and every pair of neighboring sequences S_priv, S'_priv (see Footnote 6): M(S_pub, S_priv) β‰ˆ_{Ξ΅,Ξ΄} M(S_pub, S'_priv).

Footnote 5: In analogy to the variant of differential privacy where the unit of protection is the addition or removal of a data point, our definition uses omissions of data points. This aligns with our motivation for counterfactual credit attribution: if a non-credited data point is omitted (rather than replaced), the output does not change. Omission is crucial here, as replacing a non-credited data point with a credited one could drastically alter the output.

Footnote 6: Note that the special case S_pub = βˆ… gives a DP mechanism.

Remark 1. Any semi-DP mechanism M that uses k public points can be turned into a CCA mechanism as follows: on an input sequence S, the CCA mechanism outputs (c, R), where R = S_{≀k} and c = M(S_{≀k}, S_{>k}). That is, M uses the first k points in S as public data, and the rest as private data.

Private learning with public data is sometimes likened to semi-supervised learning, where private data corresponds to unlabeled data and public data to labeled data. In both scenarios, the learner accesses many less informative examples (unlabeled or private) and fewer more informative examples (labeled or public). Expanding on this analogy, Definition 1 is akin to active learning, where the learner adaptively chooses which data points to credit, similar to selecting which data points to label in active learning. Semi-differential privacy (Definition 2) provides stronger stability guarantees than counterfactual credit attribution (Definition 1), including for the selection process. In contrast, Definition 1 allows for a highly non-stable selection process (e.g., SVM). This leads us to consider a more direct hybrid of semi-DP and sample compression, suggesting the following definition:

Definition 3 (Sample DP-Compression Scheme). Let Ξ΅, Ξ΄ β‰₯ 0 and k ≀ n. An (Ξ΅, Ξ΄) sample differentially private (n → k)-compression scheme is a mechanism M : Z^n → C which consists of two functions:
1. Compression: an (Ξ΅, Ξ΄)-DP mechanism ΞΊ : Z^n → [n]^k, called the compression function, and
2. Reconstruction: an (Ξ΅, Ξ΄) semi-DP mechanism ρ : Z* Γ— Z* → C, called the reconstruction function.
Then, for every input sequence S: M(S) = ρ(S|_{ΞΊ(S)}, S|_{βˆ‰ΞΊ(S)}), where S|_{ΞΊ(S)} = (S_i)_{i ∈ ΞΊ(S)} and S|_{βˆ‰ΞΊ(S)} = (S_i)_{i βˆ‰ ΞΊ(S)}.

Note that the compression function ΞΊ selects the indices of the compressed subsample (rather than the subsample itself, as in classical sample compression). This technical difference allows us to pose the requirement of differential privacy on the compression function ΞΊ. Going back to the analogy with active learning, Definition 3 also imposes stability on the labeling function (i.e., the function that decides which labels to query).

Example 2.2 (Randomized Response). We next describe a simple task which can be performed by sample DP-compression schemes, but not by semi-DP mechanisms. Imagine that the data is drawn from a distribution where each datapoint is useful with probability 0.1 and is otherwise garbage with probability 0.9. The goal is to select k datapoints while maximizing the number of useful datapoints that are selected. If we select datapoints obliviously, for example by simply taking the first k examples, we would expect that only about 10% of them will be useful. However, by using a mechanism compliant with Definition 3, we can increase the proportion of useful examples.
This mechanism is based on randomized response and operates as follows: each example is independently assigned a random label in {0, 1}, where a useful example is assigned the label 1 with probability p > 1/2, and a garbage example is assigned the label 1 with probability 1 βˆ’ p < 1/2. The value of p is set as a function of the privacy parameter Ξ΅ (see Footnote 7). Then, the compression function ΞΊ selects the first k indices whose label is 1. This way, the fraction of useful points among the points labeled 1 is 0.1p / (0.1p + 0.9(1 βˆ’ p)) = 1 / (9/p βˆ’ 8) > 0.1 (the last inequality holds for p > 1/2). See Appendix B for a more detailed argument; a toy simulation of this selection rule is sketched below.

Footnote 7: We get Ξ΅ = ln(p / (1 βˆ’ p)) and Ξ΄ = 0.
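The following simulation (ours; function and parameter names are illustrative) implements the randomized-response selection of Example 2.2, spelled out as Definition 7 in Appendix B, and estimates the boosted fraction of useful points among the selected indices.

```python
import numpy as np

def rr_select(useful: np.ndarray, k: int, eps: float, rng: np.random.Generator):
    """Randomized-response selection (Example 2.2 / Appendix B, our sketch).

    `useful` is a 0/1 vector marking which of the n points are useful.
    Each point is independently labeled 1 with probability p = e^eps/(1+e^eps)
    if it is useful, and with probability 1 - p otherwise; we then take the
    first k indices labeled 1, padding arbitrarily if fewer than k are labeled.
    Changing one input point only changes that point's label distribution, and
    p/(1-p) = e^eps, which is why the selected index tuple is (eps, 0)-DP
    (cf. Footnote 7)."""
    n = len(useful)
    p = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(n)
    labels = np.where(useful == 1, flip < p, flip < 1.0 - p)
    chosen = np.flatnonzero(labels)[:k]
    if len(chosen) < k:  # pad arbitrarily with unlabeled indices
        pad = np.flatnonzero(~labels)[: k - len(chosen)]
        chosen = np.concatenate([chosen, pad])
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, eps = 100_000, 1_000, 1.0
    useful = (rng.random(n) < 0.1).astype(int)       # 10% of points are useful
    sel = rr_select(useful, k, eps, rng)
    print("fraction useful among selected:", useful[sel].mean())
    # Roughly 0.1*e^eps / (0.1*e^eps + 0.9) ~= 0.23, versus 0.1 for oblivious
    # selection, consistent with the exp(eps) boost allowed by Theorem 3.
```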
3 Main Theorems

In this section, we present our main theorems, which characterize the expressivity of learning rules satisfying our proposed definitions. We focus on the PAC (Probably Approximately Correct) learning model [23] and employ its standard definitions (explicitly provided in Section 4).

Question (Guiding Question). Is learnability subject to counterfactual credit attribution (Definition 1) more restricted than unconstrained learnability? Is learnability subject to sample DP-compression (Definition 3) more restricted than unconstrained learnability? How do these restrictions compare to differentially private learning?

Note that with respect to both Definition 1 and Definition 3, it is clear that if k, the number of credited points, is sufficiently large, then it is possible to learn any PAC-learnable class C. Indeed, if k equals the PAC sample complexity of C, then an oblivious selection, such as the first k points, suffices. Therefore, the above question is particularly interesting for values of k that are significantly smaller than the PAC sample complexity of C.

Our first theorem demonstrates that every PAC-learnable class can be learned using an (Ξ΅ = 0, Ξ΄ = 0)-counterfactual credit attribution learning rule which selects at most a logarithmic number of sample points for attribution. Remarkably, this can be achieved using the AdaBoost algorithm.

Theorem 1 (PAC Learning with Credit Attribution = PAC Learning). Let C be a concept class with VC dimension VC(C) = d < ∞, and let Ξ±, Ξ² denote the error and confidence parameters. Then, there exists an (Ξ΅ = 0, Ξ΄ = 0)-CCA learning rule M that learns C with sample complexity n = O((d log(d/Ξ±) + d log(1/Ξ²)) / Ξ±), while selecting only k = O(d log n) examples for attribution.

We leave as an open question whether k can be made independent of n, possibly by allowing Ξ΅ and Ξ΄ to be positive. Note that an affirmative answer to this question might also shed light on the sample compression conjecture [17, 25].

Our second theorem establishes a limitation of sample DP-compression schemes, showing that they do not offer more expressivity than differentially private PAC learning [15].

Theorem 2 (Sublinear Sample DP-Compression = DP Learning). Every concept class C satisfies exactly one of the following:
1. C is learnable by a DP-learner.
2. Any sample DP-compression scheme that learns C has size at least k = Ω(1/Ξ±).

Theorem 2 implies a stark dichotomy: either a class C can be learned by a DP algorithm (equivalently, by a sample DP-compression scheme of size k = 0), or it is impossible to learn it unless k = Ω(1/Ξ±). Notice that k = O(d/Ξ±) public examples are sufficient to learn without any private examples. Theorem 2 generalizes a result by [1, Theorem 4.2], who proved it in the special case of semi-DP learning. In our setting, though, we need to crucially handle scenarios where the credited (or rather, public) datapoints are chosen adaptively as a function of the full dataset. This is not the case in semi-DP learning, and it requires us to use novel technical tools (like Theorem 3 ahead). Thus, in the PAC setting, sample DP-compression schemes do not offer any advantage over semi-DP learners.

However, Example 2.2 demonstrates that using sample DP-compression, it is possible to select the k points in the compression set so that the frequency of useful examples among these k points is boosted. Our next theorem addresses the limits of handpicking k points by sample DP-compression. We formalize this task as follows: given a distribution D over Z and an event E of good points, the goal is to design a DP-compression function ΞΊ : Z^n → [n]^k that maximizes the number of selected data points that belong to E, that is, to maximize Ξ£_{i ∈ ΞΊ(S)} 1[z_i ∈ E]. Example 2.2 illustrates a method that selects roughly exp(Ξ΅) Β· k Β· D(E) points from E by an (Ξ΅, Ξ΄ = 0)-compression function. This is a factor of exp(Ξ΅) better than obliviously selecting the k points, which yields k Β· D(E) points from E. Is this factor of exp(Ξ΅) optimal? Can one do better, possibly by increasing Ξ΄? The following result shows that exp(Ξ΅) is asymptotically optimal.

Theorem 3. Let M be an (Ξ΅, Ξ΄) sample DP-compression scheme, let D be a distribution over Z, and let E βŠ† Z be any event, with p = D(E). For an input sample S = (z_1, . . . , z_n) ∼ D^n, define Z = Z(S) as the random variable denoting the fraction of selected indices in ΞΊ(S) whose corresponding data points belong to E, that is, Z = (1/|ΞΊ(S)|) Ξ£_{i ∈ ΞΊ(S)} 1[z_i ∈ E]. Then,

    p e^{βˆ’Ξ΅} βˆ’ Ξ΄n ≀ E[Z] ≀ p e^{Ξ΅} + Ξ΄n.    (1)

Indeed, since by convention Ξ΄ = Ξ΄(n) β‰ͺ 1/n, the above theorem implies that exp(Ξ΅) is asymptotically optimal. We note that Theorem 3 is also key in the proof of Theorem 2; we elaborate on this in Section 4.2.

Generalization. Definition 1 and Definition 3 can also be examined from a learning-theoretic perspective as notions of algorithmic stability. Algorithmic stability is particularly useful in the context of generalization because, roughly speaking, stable algorithms typically generalize well. We note in passing that this is indeed the case for Definition 1 and Definition 3: any learning rule adhering to either definition satisfies that its empirical error and population error are typically close. One natural way to prove this is by following the argument that shows that sample compression schemes generalize. In a nutshell, the argument proceeds as follows: first, if we fix the selected k-tuple, the obtained hypothesis generalizes well; then, we apply a union bound over all possible n^k choices of k-tuples from the input sample.
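For concreteness, here is a back-of-the-envelope rendering of this standard union-bound argument in the realizable case (our sketch; h_I denotes the hypothesis reconstructed from the k-tuple of indices I, and consistency is with respect to the points outside I):

```latex
\[
\Pr_{S \sim D^n}\Big[\exists\, I \in [n]^k :\ h_I \text{ consistent with } S \setminus S|_I
\ \text{ and }\ R_D(h_I) > \alpha \Big]
\;\le\; n^k (1-\alpha)^{\,n-k}
\;\le\; n^k e^{-\alpha (n-k)}
\;\le\; \beta
\quad \text{once} \quad
\alpha \;\ge\; \frac{k \ln n + \ln(1/\beta)}{n-k}.
\]
```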
4 Technical Background and Proofs

We study our main definitions in the context of PAC learning. Concretely, we assume that the input data domain Z in Section 2 is X Γ— Y, for an input space X and a label space Y. For our purposes, Y = {0, 1}. Learning rules are mechanisms A : Z* → C Γ— Z*, where C is the set of all functions mapping X to Y, denoted Y^X. We say that a distribution D over Z is realizable by a hypothesis class H βŠ† Y^X if for every finite sequence (x_1, y_1), . . . , (x_n, y_n) drawn i.i.d. from D, there exists some hypothesis h ∈ H that satisfies h(x_i) = y_i for all i ∈ [n]. For any hypothesis h ∈ Y^X, we denote its risk with respect to a distribution D by R_D(h) = Pr_{(x,y)∼D}[h(x) β‰  y].

Definition 4 (CCA PAC learning rule). A mechanism A is a CCA PAC learning rule for a hypothesis class H if A satisfies Definition 1, and for any distribution D realizable by H and any Ξ±, Ξ² > 0, there exists a finite n = n_A(Ξ±, Ξ²) such that with probability at least 1 βˆ’ Ξ² over a sample S ∼ D^n and the randomness of A, the hypothesis h in the output (h, S') of A on S satisfies R_D(h) ≀ Ξ±.

Definition 5 (Sample DP-Compression learning rule). An (Ξ΅, Ξ΄) sample differentially private (n → k) compression scheme M learns a hypothesis class H βŠ† Y^X if for any distribution D realizable by H and any Ξ±, Ξ² > 0, with probability at least 1 βˆ’ Ξ² over a sample S ∼ D^n and the randomness of M, the hypothesis M(S) output by the reconstruction function in M satisfies R_D(M(S)) ≀ Ξ±.

Remark 2. Note that if k = 0 above, we recover the standard definition of an (Ξ±, Ξ², Ξ΅, Ξ΄)-DP PAC learner (where Ξ± is the error, Ξ² is the failure probability, and Ξ΅, Ξ΄ are the privacy parameters) [15].

4.1 Upper Bound: PAC learnability implies (Ξ΅ = Ξ΄ = 0)-counterfactual credit attribution learning

Our CCA learning rule crucially uses the notion of a randomized stable sample compression scheme, which is a generalization of stable sample compression schemes (Example 2.1) and was developed in a recent work by [12]. We use the notation S' βŠ† S for sequences S, S' ∈ (X Γ— Y)* that satisfy: for every (x, y), (x, y) ∈ S' implies (x, y) ∈ S.

Definition 6 (Stable Randomized Sample Compression Scheme). A randomized sample compression scheme (D_ΞΊ, ρ) for a class H with failure probability ΞΎ comprises a distribution D_ΞΊ over (deterministic) compression functions ΞΊ : (X Γ— Y)* → (X Γ— Y)* and a deterministic reconstruction function ρ : (X Γ— Y)* → Y^X (see Footnote 8). The compression functions ΞΊ in the support of D_ΞΊ must satisfy: for any S ∈ (X Γ— Y)* realizable by H, if ΞΊ(S) = S', then S' βŠ† S. The reconstruction function ρ must satisfy: for any S ∈ (X Γ— Y)* realizable by H,

    Pr_{ΞΊ ∼ D_ΞΊ}[βˆƒ(x, y) ∈ S : ρ(ΞΊ(S))(x) β‰  y] ≀ ΞΎ.    (2)

A randomized sample compression scheme (D_ΞΊ, ρ) for H is stable if for any S ∈ (X Γ— Y)* realizable by H and any S' βŠ† S, the distribution of ΞΊ(S') is the same as the distribution of ΞΊ(S) conditioned on ΞΊ(S) βŠ† S'. The size s(n) of the compression scheme is the supremum, over S ∈ (X Γ— Y)^n (realizable by H) and ΞΊ in the support of D_ΞΊ, of the number of distinct elements in ΞΊ(S).

Footnote 8: It seems interesting to also consider randomized reconstruction functions; for our purposes, deterministic reconstruction functions suffice.

[12] show that stable randomized compression schemes imply generalization.

Lemma 4.1 (Theorem 1.2 in [12]). Let (D_ΞΊ, ρ) be a stable randomized compression scheme for H of size s(n) and failure probability ΞΎ. Let D be any distribution over X Γ— Y realizable by H. For any n and Ξ² > 2ΞΎ, with probability at least 1 βˆ’ Ξ² over S ∼ D^n and ΞΊ ∼ D_ΞΊ, it holds that

    R_D(ρ(ΞΊ(S))) ≀ O((s(n) + log(1/Ξ²)) / n).

Furthermore, they also show that there exists a stable randomized compression scheme for any hypothesis class H having finite VC dimension d. This compression scheme is based on a simple variant of AdaBoost (Algorithm 1 in [12]). The following is contained in their work (see Footnote 9):

Lemma 4.2 ([12]). For any hypothesis class H with VC dimension d, there exists a stable randomized sample compression scheme (based on AdaBoost) with failure probability ΞΎ and size

    s(n) = O(d log(n/ΞΎ)).    (3)

We are now equipped with the necessary tools required to prove Theorem 1.

Proof of Theorem 1. Let D be any distribution realizable by H, and let S be a sample of size n drawn from D^n. Given Ξ², fix ΞΎ = Ξ²/3.
From Lemma 4.2, we know that there exists a stable randomized compression scheme (DΞΊ, ρ) for H of size 𝑠(𝑛) = 𝑂(𝑑log(𝑛/Ξ²)), and failure probability ΞΎ. Then, since Ξ² > 2ΞΎ, from Lemma 4.1, we know that with probability at least 1 Ξ² over 𝑆and ΞΊ DΞΊ, 𝑅D(ρ(ΞΊ(𝑆))) 𝑂 𝑑log(𝑛/Ξ²) For the right-hand size above to be at most Ξ±, it suffices to have 𝑛= 𝑂 𝑑log(𝑑/Ξ±)+𝑑log(1/Ξ²) Let A be the learning rule, which when given a sample 𝑆 D𝑛as input, runs the stable randomized compression scheme from above on 𝑆to obtain 𝑆 of size π‘˜= 𝑂(𝑑log(𝑛/Ξ²)). The learner then outputs (ρ(𝑆 ), 𝑆 ). By the reasoning above, ρ(𝑆 ) has error at most Ξ± with probability at least 1 Ξ². It remains to argue that A is a valid CCA mechanism. This follows by virtue of (DΞΊ, ρ) being a stable randomized compression scheme. Namely, for any 𝑖, 𝑆 𝑖 𝑆, and hence by Definition 6, the distribution of ΞΊ(𝑆 𝑖) is identical to the distribution of ΞΊ(𝑆) conditioned on 𝑆𝑖 ΞΊ(𝑆). Finally, since ρ is a deterministic function of its argument, A satisfies Definition 1 with Ξ΅ = Ξ΄ = 0. 4.2 Lower Bound: A dichotomy for sample DP-compression Towards proving Theorem 2, we first show that a sample DP-compression scheme for the class of thresholds can be used to construct a DP learner for it. This lemma has a similar flavor to the public data reduction lemma (Lemma 4.4) in [1]. For a set 𝑆= {π‘₯1, . . . , π‘₯π‘š}, the class of thresholds over 𝑆 comprises of π‘šfunctions β„Ž1, . . . , β„Žπ‘šsuch that β„Žπ‘–(π‘₯𝑗) = 1[𝑖 𝑗], 𝑖, 𝑗 [π‘š]. 9In more detail, this follows by setting the weak learning parameter Ξ³ to a constant (e.g., 1/8) in Algorithm 1 in [12], and noting that such a weak learner can be found via empirical risk minimization. Lemma 4.3 (Reduction from DP learner to sample DP-compression scheme). Let Hπ‘šbe the class of thresholds over {π‘₯1, . . . , π‘₯π‘š}. Suppose there exists an (Ξ΅, Ξ΄) sample DP-compression scheme A that learns Hπ‘šwith error Ξ± and failure probability Ξ² = 1 32, and has sample complexity 𝑛and compression size π‘˜ 𝑛. Let Ξ΄ 1 64𝑛2 . Then, there exists a 64π‘˜π‘’Ξ΅Ξ±, 1 16, 2Ξ΅, 3Ξ΄ -DP learner A for Hπ‘š 1, where Hπ‘š 1 is the class of thresholds over {π‘₯1, . . . , π‘₯π‘š 1}, with sample complexity 𝑛. Proof. Let D be any distribution over {π‘₯1, . . . , π‘₯π‘š 1} {0, 1} realizable by Hπ‘š 1. Given a sample 𝑆 D𝑛, the private learner A does the following. First, it constructs a sample 𝑆, also of size 𝑛, as follows. Initialize 𝑗= 1. For each 𝑖= 1, 2, . . . , 𝑛, toss a coin (independently of the data, and other coins) that lands heads with probability 𝑝, for 𝑝to be appropriately chosen later. If the coin lands heads, 𝑆(𝑖) = 𝑆( 𝑗), and 𝑗is incremented by 1. If the coin lands tails, 𝑆(𝑖) is set to the designated dummy example (π‘₯π‘š, 1). In this way, 𝑆is a sample of size 𝑛drawn from the mixture distribution 𝐷= 𝑝 D + (1 𝑝) 1(π‘₯π‘š,1), where 1(π‘₯π‘š,1) is a point mass on (π‘₯π‘š, 1). Note that since all the thresholds in Hπ‘šlabel π‘₯π‘šas 1, D is realizable by Hπ‘š. The learner A now invokes the sample DP-compression scheme A on 𝑆. If any of the π‘˜examples in the compression set constructed by 𝐴is a non-dummy element, A outputs a constant hypothesis that labels all points in {π‘₯1, . . . , π‘₯π‘š 1} as 1. On the other hand, if all of the π‘˜examples in the compression set are dummies, then A outputs the hypothesis that 𝐴outputs (restricted to {π‘₯1, . . . , π‘₯π‘š 1}). We first claim that the output of A is (2Ξ΅, 3Ξ΄)-private with respect to its input 𝑆. 
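Before the formal claim, the padding step just described can be sketched as follows (an illustrative toy, not the paper's construction verbatim; the types and names are ours): real examples are interleaved with dummy examples using coins that are independent of the data.

```python
import random
from typing import List, Optional, Tuple

Example = Tuple[str, int]  # (point, label); purely illustrative types

def pad_with_dummies(S: List[Example], n: int, p: float,
                     dummy: Example = ("x_m", 1),
                     rng: Optional[random.Random] = None) -> List[Example]:
    """Construct the padded sample of length n from the proof of Lemma 4.3
    (our sketch): position by position, with probability p take the next real
    example from S, otherwise insert the designated dummy example (x_m, 1).
    The coins are independent of the data, which is what the privacy argument
    in Claim 4.4 relies on."""
    rng = rng or random.Random(0)
    padded, j = [], 0
    for _ in range(n):
        if rng.random() < p and j < len(S):
            padded.append(S[j])
            j += 1
        else:
            padded.append(dummy)
    return padded

if __name__ == "__main__":
    S = [("x_1", 0), ("x_2", 1), ("x_3", 0)]
    print(pad_with_dummies(S, n=10, p=0.3))
```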
Claim 4.4 (A is private). A is (2Ξ΅, 3Ξ΄)-DP. The proof of this claim is given in Appendix A. At a high level, the privacy parameter deteriorates to 2Ξ΅ because of the two-step process of compressing 𝑆to π‘˜points in an Ξ΅-DP way, and then obtaining an Ξ΅-DP learner thereafter. Next, we claim that on average, there will be a lot of dummies in the compression set selected by 𝐴. This step crucially hinges on Theorem 3, where we substitute the event 𝐸in the statement of the theorem to be the event that a non-dummy element is selected (i.e., 𝐸is the support of the distribution D). In particular, we get that the expected number of non-dummy elements is at most π‘˜π‘π‘’Ξ΅ + Ξ΄π‘˜π‘›, which is at most 1 32, if we set 𝑝= 1 64π‘˜Ξ΅2 , and use that π‘˜ 𝑛, Ξ΄ 1 64𝑛2 . We can now reason about the error and failure probability parameters of A. Because 𝐴is an (Ξ΅, Ξ΄) sample DP-compression scheme that successfully learns Hπ‘šwith error Ξ± and failure probability 1 32, with probability at least 1 1 32 over the draw of 𝑆and the randomness of A, the hypothesis it outputs has error at most Ξ±. Furthermore, since the expected number of non-dummy elements chosen in the compression set is at most 1 32, Markov s inequality gives that with probability at least 1 1 32 over the draw of 𝑆and the randomness of A, all the π‘˜examples chosen by 𝐴in the compression set are dummies. By a union bound, with probability at least 1 1 16 over the draw of 𝑆and the randomness of A, all the examples chosen to be in the compression set by 𝐴are dummies and the hypothesis it outputs has error (with respect to D) less than Ξ±. But recall that the distribution D on 𝑆is induced by the distribution D on 𝑆, and that whenever all the examples chosen by 𝐴in the compression set are dummies, A returns 𝐴 s output. This implies that with probability at least 1 1 16 over the draw of 𝑆from D𝑛and the randomness of A, the hypothesis output by A has error at most Ξ± with respect to D. But since D is a mixture distribution, 𝑅 D(A(𝑆)) 𝑝 𝑅D(A(𝑆)), and hence we have that with probability at least 1 1 16, the error of A(𝑆) with respect to D is at most 𝑝 64π‘˜π‘’Ξ΅Ξ±. Thus, A is a 64π‘˜π‘’Ξ΅Ξ±, 1 16, 2Ξ΅, 3Ξ΄ -DP learner for Hπ‘š 1 as required. We next state a lower bound on the sample complexity of DP learners for thresholds [2, 10]. Theorem 4 (Theorem 1 in [2]). Let Hπ‘šbe the class of thresholds on {π‘₯1, . . . , π‘₯π‘š}. Let A be a 1 16, 1 16, 0.1, 1 1000𝑛2 log 𝑛 -DP learner for Hπ‘šwith sample complexity 𝑛. Then 𝑛 Ω(log π‘š). We are now ready prove Theorem 5, which shows that non-Littlestone [16] classes cannot be learnt by sublinear sample DP-compression schemes. Theorem 2 follows from Theorem 5, since classes that are DP-learnable are exactly the classes with finite Littlestone dimension [11]. Theorem 5. Let H be a hypothesis class over X that has infinite Littlestone dimension. For Ξ΅ = 0.05, Ξ΄ = 1 3000𝑛2 log 𝑛, let A be an (Ξ΅, Ξ΄) sample differentially private (𝑛 π‘˜) compression scheme that learns H with error Ξ± and failure probability 1 32. Then π‘˜ 1 68Ξ±. Proof. Because H has infinite Littlestone dimension, for any π‘š 1, there exist {π‘₯1, . . . , π‘₯π‘š} and Hπ‘š H such that Hπ‘šis the class of thresholds over {π‘₯1, . . . , π‘₯π‘š} [2, Theorem 3]. Now, A is an (𝑛 π‘˜) sample DP-compression scheme that learns H; in particular, this means that A has sample complexity 𝑛< , and also that A learns Hπ‘šwith the same parameters and sample complexity. 
By Lemma 4.3, we know that there then exists a 68π‘˜Ξ±, 1 16, 0.1, 1 1000𝑛2 log 𝑛 private learner for Hπ‘š with sample complexity 𝑛. Assume for the sake of contradiction that π‘˜< 1 68Ξ±. This means that there exists a Ξ±, 1 16, 0.1, 1 1000𝑛2 log 𝑛 private learner for Hπ‘šwith sample complexity 𝑛. By Theorem 4, it must be that 𝑛 Ω(log π‘š). Since we can find Hπ‘š H for any π‘š 1, this would mean that 𝑛= , which is a contradiction. Thus, it must be the case that π‘˜ 1 68Ξ±. 4.3 Bounded boosting of empirical measure We prove a simplified form of Theorem 3 (with slightly tighter bounds), where we consider the input to be a bit string. Theorem 3 as stated in terms of a general event can be immediately obtained as a corollory by interpreting the bits in the string as indicators for the event (details in Appendix A). Lemma 4.5 (Bounded Boosting of Empirical Measure). Let A : {0, 1}𝑛 [𝑛]π‘˜be an (Ξ΅, Ξ΄)-DP selection mechanism. Let D be the product distribution on {0, 1}𝑛where each bit is set to 1 with probability 𝑝. For 𝑋 D, let 𝑍denote the fraction of indices in A(𝑋) at which 𝑋is 1, i.e., 𝑍= 1 π‘˜ P 𝑗 A(𝑋) 1[𝑋𝑗= 1]. Then, we have that 𝑝+ (1 𝑝)𝑒Ρ E[𝑍] 𝑝𝑒Ρ + 𝑛𝑝(1 𝑝)Ξ΄ 1 𝑝+ 𝑝𝑒Ρ . (4) Proof Sketch. Let A(𝑋) = 𝐼= (𝐼1, 𝐼2, . . . , πΌπ‘˜) be the tuple of indices selected by the DP mechanism on input 𝑋. We first write 𝑍= 1 π‘˜ Pπ‘˜ 𝑗=1 P𝑛 𝑖=1 1[𝐼𝑗= 𝑖 𝑋𝑖= 1]. Thereafter, the main step of the proof uses that the mechanism is private in order to relate the conditional probability Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] to Pr[𝐼𝑗= 𝑖|𝑋𝑖= 0] for any 𝑗 [π‘˜]. Concretely, observe that Pr[𝐼𝑗= 𝑖|𝑋𝑖= 0] = Pr[𝑋𝑖= 0 𝐼𝑗= 𝑖] Pr[𝑋𝑖= 0] = P π‘₯ {0,1}𝑛,π‘₯𝑖=0 Pr[π‘₯] Pr[𝐼𝑗= 𝑖|π‘₯] P π‘₯ {0,1}𝑛,π‘₯𝑖=1 Pr[π‘₯ 𝑖] Pr[𝐼𝑗= 𝑖|π‘₯ 𝑖] P π‘₯ {0,1}𝑛,π‘₯𝑖=1 Pr[π‘₯] (𝑒Ρ Pr[𝐼𝑗= 𝑖|π‘₯] + Ξ΄) = 𝑒Ρ P π‘₯ {0,1}𝑛,π‘₯𝑖=1 Pr[π‘₯] Pr[𝐼𝑗= 𝑖|π‘₯] 𝑝 + Ξ΄ = 𝑒Ρ Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] + Ξ΄, where in the fourth inequality, we used that for π‘₯having π‘₯𝑖= 1, Pr D[π‘₯ 𝑖] = 1 𝑝 𝑝 Pr[π‘₯], and that A is an (Ξ΅, Ξ΄)-DP mechanism. This relation lets us express the joint probability term Pr[𝐼𝑗= 𝑖 𝑋𝑖= 1] in the expression E[𝑍] = 1 π‘˜ Pπ‘˜ 𝑗=1 P𝑛 𝑖=1 Pr[𝐼𝑗= 𝑖 𝑋𝑖= 1] simply in terms of Pr[𝐼𝑗= 𝑖]. Thereafter, noticing that P𝑛 𝑖=1 Pr[𝐼𝑗= 𝑖] = 1 yields the result. The complete details are provided in Appendix A. 5 Conclusion We study two natural definitions for algorithms satisfying credit attribution. In the context of PAC learning, we provide a characterization of learnability for algorithms that respect these definitions. Our work motivates the further study of these and other related definitions for credit attribution, and opens up interesting technical directions to pursue. However, as mentioned earlier, credit attribution is only part of the much more nuanced problem of copyright protection, and hence, our definitions only capture subtleties involved in the problem in part. With further exploration, and other suitable definitions, we will hopefully be able to ensure that algorithms (especially generative models) appropriately credit the work that they draw upon. Acknowledgements Shay Moran is a Robert J. Shillman Fellow; he acknowledges support by ISF grant 1225/20, by BSF grant 2018385, by an Azrieli Faculty Fellowship, by Israel PBC-VATAT, by the Technion Center for Machine Learning and Intelligent Systems (MLIS), and by the the European Union (ERC, GENERALIZATION, 101039692). Roi Livni is supported by an ERC grant (FOG, 101116258), as well as an ISF Grant (2188 \ 20). 
Chirag Pabbaraju is supported by Moses Charikar and Gregory Valiant s Simons Investigator Awards. Work of Kobbi Nissim was supported by NSF Grant No. CCF2217678 DASS: Co-design of law and computer science for privacy in sociotechnical software systems and a gift to Georgetown University. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. [1] N. Alon, R. Bassily, and S. Moran. Limits of private learning with access to public data. Advances in neural information processing systems, 32, 2019. 2, 3, 4.2 [2] N. Alon, R. Livni, M. Malliaris, and S. Moran. Private pac learning implies finite littlestone dimension. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 852 860, 2019. 4.2, 4, 4.2 [3] R. Bassily, A. Cheu, S. Moran, A. Nikolov, J. Ullman, and S. Wu. Private query release assisted by public data. In International Conference on Machine Learning, pages 695 703. PMLR, 2020. 2 [4] A. Beimel, K. Nissim, and U. Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. In International Workshop on Approximation Algorithms for Combinatorial Optimization, pages 363 378. Springer, 2013. 2 [5] S. Ben-David, A. Bie, C. L. Canonne, G. Kamath, and V. Singhal. Private distribution learning with public data: The view from sample compression. Advances in Neural Information Processing Systems, 36, 2024. 2 [6] A. Bie, G. Kamath, and V. Singhal. Private estimation with public data. Advances in neural information processing systems, 35:18653 18666, 2022. 2 [7] A. Block, M. Bun, R. Desai, A. Shetty, and S. Wu. Oracle-efficient differentially private learning with public data. ar Xiv preprint ar Xiv:2402.09483, 2024. 2 [8] O. Bousquet, S. Hanneke, S. Moran, and N. Zhivotovskiy. Proper learning, helly number, and an optimal svm bound. In Conference on Learning Theory, pages 582 609. PMLR, 2020. 2.1 [9] O. Bousquet, R. Livni, and S. Moran. Synthetic data generators sequential and private. Advances in Neural Information Processing Systems, 33:7114 7124, 2020. 1 [10] M. Bun, K. Nissim, U. Stemmer, and S. P. Vadhan. Differentially private release and learning of threshold functions. In V. Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 634 649. IEEE Computer Society, 2015. doi: 10.1109/FOCS.2015.45. URL https://doi.org/10. 1109/FOCS.2015.45. 4.2 [11] M. Bun, R. Livni, and S. Moran. An equivalence between private classification and online prediction. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 389 402. IEEE, 2020. 4.2 [12] A. da Cunha, K. G. Larsen, and M. Ritzert. Boosting, voting classifiers and randomized sample compression schemes. ar Xiv preprint ar Xiv:2402.02976, 2024. 4.1, 4.1, 4.1, 4.2, 9 [13] N. Elkin-Koren, U. Hacohen, R. Livni, and S. Moran. Can copyright be reduced to privacy? ar Xiv preprint ar Xiv:2305.14822, 2023. 1 [14] Institute for Information Law (IVi R). Generative ai, copyright and the ai act. Kluwer Copyright Blog, May 2023. URL https://copyrightblog.kluweriplaw.com/2023/05/09/ generative-ai-copyright-and-the-ai-act/. Retrieved March 6, 2024. 1 [15] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? 
SIAM Journal on Computing, 40(3):793 826, 2011. 3, 2 [16] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine learning, 2:285 318, 1988. 4.2 [17] N. Littlestone and M. Warmuth. Relating data compression and learnability. 1986. 3 [18] T. Liu, G. Vietri, T. Steinke, J. Ullman, and S. Wu. Leveraging public data for practical private query release. In International Conference on Machine Learning, pages 6968 6977. PMLR, 2021. 2 [19] A. Lowy, Z. Li, T. Huang, and M. Razaviyayn. Optimal differentially private learning with public data. ar Xiv preprint ar Xiv:2306.15056, 2023. 2 [20] Z. Nikita. Optimal learning via local entropies and sample compression. In Conference on Learning Theory, pages 2023 2065. PMLR, 2017. 2.1 [21] F. Pinto, Y. Hu, F. Yang, and A. Sanyal. Pillar: How to make semi-private learning more effective. In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (Sa TML), pages 110 139. IEEE, 2024. 2 [22] S. Scheffler, E. Tromer, and M. Varia. Formalizing human ingenuity: A quantitative framework for copyright law s substantial similarity. In Proceedings of the 2022 Symposium on Computer Science and Law, pages 37 49, 2022. 1 [23] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134 1142, 1984. 3 [24] N. Vyas, S. M. Kakade, and B. Barak. On provable copyright protection for generative models. In International Conference on Machine Learning, pages 35277 35299. PMLR, 2023. 1 [25] M. K. Warmuth. Compressing to vc dimension many points. In COLT, volume 3, pages 743 744. Springer, 2003. 3 A Supplementary Proofs Proof of Claim 4.4. Consider any 2 neighboring datasets 𝑆= (𝑧1, . . . , 𝑧𝑖, . . . , 𝑧𝑛) and 𝑆 = (𝑧1, . . . , 𝑧 𝑖, . . . , 𝑧𝑛) that differ at index 𝑖. Here, we are using the shorthand 𝑧𝑖= (π‘₯𝑖, 𝑦𝑖). We want to argue that the distribution of A(𝑆) =2Ξ΅,3Ξ΄ A(𝑆 ). Let 𝑂be any subset of the output space of A. Recall that A first constructs the sample 𝑆from 𝑆and then passes it to the semi-private learner A. Then, Pr[A(𝑆) 𝑂] = Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆] Pr[𝑧𝑖 𝑆] + Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆] Pr[𝑧𝑖 𝑆] = Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆] Pr[𝑧 𝑖 𝑆 ] + Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 ] Pr[𝑧 𝑖 𝑆 ] (5) where we used that the coins that deterine whether 𝑧𝑖 𝑆(or 𝑧 𝑖 𝑆 ) are tossed independently of the data, and that the distribution of 𝑆 conditioned on 𝑧 𝑖 𝑆 , is identical to the distribution of 𝑆 conditioned on 𝑧𝑖 𝑆. Hence, we focus on the term Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆] in (5). We can decompose this as Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆] = 𝑠:𝑧𝑖 𝑠 Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠] Pr[ 𝑆= 𝑠|𝑧𝑖 𝑆] 𝑠 :𝑧 𝑖 𝑠 Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠] Pr[ 𝑆 = 𝑠 |𝑧 𝑖 𝑆 ]. (6) Here, for every term in the summation, 𝑠 differs from 𝑠at exactly one index 𝑖, and we again used that the coins used to construct 𝑆and 𝑆 are independent of the data. Let 𝐸( 𝑠) be the event that all the π‘˜samples chosen by the semi-private learner A when it is given 𝑠as input are dummies. Since 𝑠and 𝑠 differ in exactly one element, because of the special property of the selection mechanism of A, we have that Pr[𝐸( 𝑠)|𝑧𝑖 𝑆, 𝑆= 𝑠] 𝑒Ρ Pr[𝐸( 𝑠 )|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + Ξ΄ (7) Pr[ 𝐸( 𝑠)|𝑧𝑖 𝑆, 𝑆= 𝑠] 𝑒Ρ Pr[ 𝐸( 𝑠 )|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + Ξ΄. (8) But note that the set of public examples is exactly the same, if 𝐸( 𝑠) and 𝐸( 𝑠 ) respectively occur hence, the learner in A (which is a function of the set of public examples) that operates on the private examples in either case is identical. 
Furthermore, the sets of private examples themselves differ in exactly one element; we can thus use the privacy guarantees of the learner in A to claim that Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠, 𝐸( 𝑠)] min 1, 𝑒Ρ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )] + Ξ΄. (9) Combining (7) and (9), we get Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠, 𝐸( 𝑠)] Pr[𝐸( 𝑠)|𝑧𝑖 𝑆, 𝑆= 𝑠] min 1, 𝑒Ρ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )] + Ξ΄ Pr[𝐸( 𝑠)|𝑧𝑖 𝑆, 𝑆= 𝑠] min 1, 𝑒Ρ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )] Pr[𝐸( 𝑠)|𝑧𝑖 𝑆, 𝑆= 𝑠] + Ξ΄ min 1, 𝑒Ρ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )] 𝑒Ρ Pr[𝐸( 𝑠 )|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + Ξ΄ + Ξ΄ 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )] Pr[𝐸( 𝑠 )|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + 2Ξ΄. (10) Now, observe that if 𝐸( 𝑠) does not occur (and correspondingly if 𝐸( 𝑠 ) does not occur), then we deterministically out the constant hypothesis in either case, and hence Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠, 𝐸( 𝑠)] = Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )]. (11) Combining (8) and (11), we get Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠, 𝐸( 𝑠)] Pr[ 𝐸( 𝑠)|𝑧𝑖 𝑆, 𝑆= 𝑠] 𝑒Ρ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 , 𝐸( 𝑠 )] Pr[ 𝐸( 𝑠 )|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + Ξ΄ (12) Altogether, (10) and (12) give that Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠] 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + 3Ξ΄. Substituting in (6), we get Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆] 𝑠 :𝑧 𝑖 𝑠 Pr[A(𝑆) 𝑂|𝑧𝑖 𝑆, 𝑆= 𝑠] Pr[ 𝑆 = 𝑠 |𝑧 𝑖 𝑆 ] 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] + 3Ξ΄ Pr[ 𝑆 = 𝑠 |𝑧 𝑖 𝑆 ] 𝑠 :𝑧 𝑖 𝑠 Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 , 𝑆 = 𝑠 ] Pr[ 𝑆 = 𝑠 |𝑧 𝑖 𝑆 ] = 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 ] + 3Ξ΄. Finally, substituting back in (5), we get 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 ] + 3Ξ΄ Pr[𝑧 𝑖 𝑆 ] + Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 ] Pr[𝑧 𝑖 𝑆 ] 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 ] Pr[𝑧 𝑖 𝑆 ] + Pr[A(𝑆 ) 𝑂|𝑧 𝑖 𝑆 ] Pr[𝑧 𝑖 𝑆 ] + 3Ξ΄ = 𝑒2Ξ΅ Pr[A(𝑆 ) 𝑂] + 3Ξ΄. By the same calculations, we also get the bound Pr[A(𝑆 ) 𝑂] 𝑒2Ξ΅ Pr[A(𝑆) 𝑂] + 3Ξ΄, completing the proof. Proof of Lemma 4.5. Let A(𝑋) = 𝐼= (𝐼1, 𝐼2, . . . , πΌπ‘˜). Note that 𝑖=1 1[𝐼𝑗= 𝑖 𝑋𝑖= 1], 𝑖=1 Pr𝑋,A[𝐼𝑗= 𝑖 𝑋𝑖= 1] = 1 𝑖=1 Pr[𝑋𝑖= 1] | {z } =𝑝 Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] 𝑖=1 Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1]. (13) Now, for any π‘₯ {0, 1}𝑛, let π‘₯ 𝑖denote π‘₯with its 𝑖th bit flipped. Then, observe that Pr[𝐼𝑗= 𝑖|𝑋𝑖= 0] = Pr[𝑋𝑖= 0 𝐼𝑗= 𝑖] Pr[𝑋𝑖= 0] = P π‘₯ {0,1}𝑛,π‘₯𝑖=0 Pr[π‘₯] Pr[𝐼𝑗= 𝑖|π‘₯] P π‘₯ {0,1}𝑛,π‘₯𝑖=1 Pr[π‘₯ 𝑖] Pr[𝐼𝑗= 𝑖|π‘₯ 𝑖] P π‘₯ {0,1}𝑛,π‘₯𝑖=1 Pr[π‘₯] (𝑒Ρ Pr[𝐼𝑗= 𝑖|π‘₯] + Ξ΄) = 𝑒Ρ P π‘₯ {0,1}𝑛,π‘₯𝑖=1 Pr[π‘₯] Pr[𝐼𝑗= 𝑖|π‘₯] 𝑝 + Ξ΄ = 𝑒Ρ Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] + Ξ΄, where in the fourth inequality, we used that for π‘₯having π‘₯𝑖= 1, Pr D[π‘₯ 𝑖] = 1 𝑝 𝑝 Pr[π‘₯], and that A is an (Ξ΅, Ξ΄)-DP mechanism. Hence, we have that Pr𝑋,A[𝐼𝑗= 𝑖] = Pr[𝑋𝑖= 0] Pr[𝐼𝑗= 𝑖|𝑋𝑖= 0] + Pr[𝑋𝑖= 1] Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] Pr[𝑋𝑖= 0] (𝑒Ρ Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] + Ξ΄) + Pr[𝑋𝑖= 1] Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] = (𝑝+ 𝑒Ρ (1 𝑝)) Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] + (1 𝑝)Ξ΄ (14) = Pr[𝐼𝑗= 𝑖|𝑋𝑖= 1] Pr[𝐼𝑗= 𝑖] (1 𝑝)Ξ΄ 𝑝+ 𝑒Ρ(1 𝑝) . (15) Substituting (15) in (13), we get E[𝑍] 𝑝 π‘˜(𝑝+ 𝑒Ρ(1 𝑝)) 𝑖=1 Pr[𝐼𝑗= 𝑖] π‘›π‘˜(1 𝑝)Ξ΄Βͺ . (16) Finally, note that P𝑛 𝑖=1 Pr[𝐼𝑗= 𝑖] = 1 for any 𝑗. Substituting in (16), we have shown the desired lower bound E[𝑍] 𝑝 𝑛𝑝(1 𝑝)Ξ΄ 𝑝+ 𝑒Ρ(1 𝑝) . For the upper bound, we repeat the above analysis with 𝑍 = 1 π‘˜ P 𝑗 A(𝑋) 1[𝑋𝑗= 0], to obtain E[𝑍 ] (1 𝑝) 𝑛𝑝(1 𝑝)Ξ΄ But note that 𝑍 = 1 𝑍, and hence E[𝑍] = 1 E[𝑍 ] 𝑝𝑒Ρ + 𝑛𝑝(1 𝑝)Ξ΄ Proof of Theorem 3. Recall that 𝐸 Z is an event satisfying D(𝐸) = 𝑝for the given distribution D over Z. Let D|𝐸denote the distribution D conditioned on the event 𝐸, and let D| 𝐸denote the distribution D conditioned on the complement of event 𝐸. Assume for the sake of contradiction that either E[𝑍] > 𝑝𝑒Ρ+𝑛𝑝(1 𝑝)Ξ΄ 1 𝑝+𝑝𝑒Ρ or E[𝑍] < 𝑝+𝑛𝑝(1 𝑝)Ξ΄ 𝑝+(1 𝑝)𝑒Ρ . 
Then, consider an algorithm B, that takes as input a bit string π‘Œfrom a product distribution on {0, 1}𝑛, where each bit is independently set to 1 with probability 𝑝. Given such an input string π‘Œ, the algorithm constructs a sequence 𝑆= {𝑧1, . . . , 𝑧𝑛}, where 𝑧𝑖 D|𝐸if π‘Œπ‘–= 1, and 𝑧𝑖 D| 𝐸otherwise. Thus, 𝑆is exactly distributed as 𝐷𝑛. B then passes 𝑆to the DP sample compression scheme 𝑀, which selects a compression set ΞΊ(𝑆) = (𝑖1, . . . , π‘–π‘˜) this is the tuple of indices that B outputs too. Note that because the compression function ΞΊ is an (Ξ΅, Ξ΄)-DP mechanism, B is also an (Ξ΅, Ξ΄)-DP mechanism with respect to its input. To see this, consider two neighboring bit strings 𝑦, 𝑦 , such that 𝑦𝑖= 1 and 𝑦 𝑖= 0. We will show that Pr[B(𝑦) 𝑂] 𝑒Ρ Pr[B(𝑦 ) 𝑂] + Ξ΄, and the same calculations will give the bound with 𝑦, 𝑦 swapped. Pr[B(𝑦) 𝑂] = 𝑧 𝑖 Pr[𝑧 𝑖] 𝑧𝑖 𝐸 Pr[𝑧𝑖|𝐸] Pr[A(𝑧 𝑖 𝑧𝑖) 𝑂] (17) Now, for any 𝑧 𝑖 𝐸, we know (since ΞΊ is an (Ξ΅, Ξ΄)-DP mechanism) that Pr[A(𝑧 𝑖 𝑧𝑖) 𝑂] 𝑒Ρ Pr[A(𝑧 𝑖 𝑧 𝑖) 𝑂] + Ξ΄, Pr[A(𝑧 𝑖 𝑧𝑖) 𝑂] 𝑒Ρ 𝑧 𝑖 𝐸 Pr[𝑧 𝑖| 𝐸] Pr[A(𝑧 𝑖 𝑧 𝑖) 𝑂] + Ξ΄. (18) Substituting (18) in (17) gives that Pr[B(𝑦) 𝑂] 𝑒Ρ 𝑧 𝑖 Pr[𝑧 𝑖] 𝑧 𝑖 𝐸 Pr[𝑧 𝑖| 𝐸] Pr[A(𝑧 𝑖 𝑧 𝑖) 𝑂] + Ξ΄ = 𝑒Ρ Pr[B(𝑦 ) 𝑂] + Ξ΄. Now, by our assumption, either E[𝑍] > 𝑝𝑒Ρ+𝑛𝑝(1 𝑝)Ξ΄ 1 𝑝+𝑝𝑒Ρ or E[𝑍] < 𝑝+𝑛𝑝(1 𝑝)Ξ΄ 𝑝+(1 𝑝)𝑒Ρ . But this means that either E h Pπ‘˜ 𝑗=1 1[π‘Œπ‘–π‘—= 1] i > 𝑝𝑒Ρ+𝑛𝑝(1 𝑝)Ξ΄ 1 𝑝+𝑝𝑒Ρ or E h Pπ‘˜ 𝑗=1 1[π‘Œπ‘–π‘—= 1] i < 𝑝+𝑛𝑝(1 𝑝)Ξ΄ 𝑝+(1 𝑝)𝑒Ρ . Thus, B is an (Ξ΅, Ξ΄)-DP selection mechanism that violates the bounds in Lemma 4.5, and hence our assumption is false. B A DP sample compression scheme based on Randomized Response Definition 7 (Randomized response). Let RR : {0, 1}𝑛 [𝑛]π‘˜be the randomized response selection mechanism defined as follows. Given π‘₯ {0, 1}𝑛, RR flips each bit of π‘₯independently with probability 1 1+𝑒Ρ to obtain π‘₯. Let 𝑆= {𝑖 [𝑛] : π‘₯𝑖= 1} and 𝑆 = [𝑛] \ 𝑆. Further, let |𝑆| = 𝑑. If 𝑑 π‘˜, then RR outputs a uniformly random subset of π‘˜indices from 𝑆, ordered arbitrarily. Otherwise, it arbitrarily orders 𝑆, and outputs 𝑆 𝑇, where 𝑇is a uniformly random subset of π‘˜ 𝑑indices chosen from 𝑆 (and ordered arbitrarily), and denotes concatenation. Claim B.1 (RR boosts empirical measure optimally). In the setting of Lemma 4.5, let Ξ΄ = 0 and let A be the randomized response mechanism RR (Definition 7) . Then, E[𝑍] 1 π‘˜π‘›π‘˜ exp (π‘˜ 𝑛)(1 𝑝+ 𝑝𝑒Ρ) 1 𝑝+ 𝑝𝑒Ρ . (19) Remark 3. Observe that when π‘˜= π‘œ 𝑛 log 𝑛 and 𝑛gets large, the expression in the parentheses approaches 1. Thus, we can conclude that randomized response attains the upper bound from Lemma 4.5 when Ξ΄ = 0. Proof. Recall that for 𝑋 D, randomized response first constructs π‘Œby flipping each bit of 𝑋with probability 1 1+𝑒Ρ . That is, the distribution of π‘Œis the product distribution where each π‘Œπ‘–is 1 with probability 𝑝 𝑒Ρ 1+𝑒Ρ + (1 𝑝) 1 1+𝑒Ρ = 1 𝑝+𝑝𝑒Ρ 1+𝑒Ρ := Ξ±. Let 𝑆= {𝑖 [𝑛] : π‘Œπ‘–= 1}. We first claim that with high probability, |𝑆| π‘˜. To see this, note that Pr[|𝑆| π‘˜] = 1 Pr[|𝑆| < π‘˜] = 1 Pr[ 𝑆 [𝑛] : |𝑆 | > 𝑛 π‘˜,π‘Œπ‘–= 0 𝑖 𝑆 ] 1 Pr[ 𝑆 [𝑛] : |𝑆 | = 𝑛 π‘˜+ 1,π‘Œπ‘–= 0 𝑖 𝑆 ] 1 π‘›π‘˜ 𝑒 Ξ±(𝑛 π‘˜) | {z } :=Ξ² where we denote the tail probability by Ξ². Note that since we assume π‘˜= π‘œ 𝑛 log 𝑛 , Ξ² = π‘œ(1). Let 𝐼be the tuple of π‘˜indices that RR outputs (note that all these indices are always distinct). Recall that, if |𝑆| π‘˜, then π‘Œπ‘–= 1 for all 𝑖 𝐼. Let 𝑍1 = P 𝑖 𝐼1[π‘Œπ‘–= 1] and 𝑍0 = P 𝑖 𝐼1[π‘Œπ‘–= 0]. 
Then we have that E[𝑍1] π‘˜ Pr[|𝑆| π‘˜] π‘˜(1 Ξ²) = E[𝑍0] π‘˜Ξ². Markov s inequality then gives us that Pr[𝑍0 1] π‘˜Ξ². Thus, with probability at least 1 π‘˜Ξ², we have that π‘Œπ‘–= 1 for all (distinct) indices output by RR: let 𝐺denote this event. Finally, let 𝑍= 1 π‘˜ P 𝑖 𝐼1[𝑋𝑖= 1]. Then, we have that E[𝑍] Pr[𝐺] E[𝑍|𝐺] (1 π‘˜Ξ²) E[𝑍|𝐺] 𝑖1,...,π‘–π‘˜ Pr[𝐼= {𝑖1, . . . , π‘–π‘˜}|𝐺] E[𝑍|𝐺, 𝐼= {𝑖1, . . . , π‘–π‘˜}]. (20) But observe that E[𝑍|𝐺, 𝐼= {𝑖1, . . . , π‘–π‘˜}] = 1 𝑗 {𝑖1,...,π‘–π‘˜} Pr[𝑋𝑗= 1|π‘Œπ‘—= 1] 𝑗 {𝑖1,...,π‘–π‘˜} Pr[𝑋𝑗= 1 π‘Œπ‘—= 1] 𝑗 {𝑖1,...,π‘–π‘˜} 𝑝 𝑒Ρ 1+𝑒Ρ 𝑝 𝑒Ρ 1+𝑒Ρ + (1 𝑝) 1 1+𝑒Ρ Substituting in (20) and plugging in the expression for Ξ², we get the desired bound E[𝑍] 1 π‘˜π‘›π‘˜ exp (π‘˜ 𝑛)(1 𝑝+ 𝑝𝑒Ρ) Neur IPS Paper Checklist Question: Do the main claims made in the abstract and introduction accurately reflect the paper s contributions and scope? Answer: [Yes] Justification: We claim to study credit attribution in machine learning algorithms in the abstract, and our main results (Definitions 1, 3 and Theorems 1, 2) are precisely about this. Guidelines: The answer NA means that the abstract and introduction do not include the claims made in the paper. The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We are upfront about the nuances involved in credit attribution and copyright protection (e.g., in the introduction as well as conclusion), in the sense that a single definition is likely not going to be all-purpose. We are also candid about our assumptions throughout the paper, and identify various open directions and loose ends. Guidelines: The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. The authors are encouraged to create a separate Limitations section in their paper. The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. 
If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [Yes] Justification: We state all the assumptions in our theorems, and also provide complete proofs/references. Guidelines: The answer NA means that the paper does not include theoretical results. All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. All assumptions should be clearly stated or referenced in the statement of any theorems. The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. Theorems and Lemmas that the proof relies upon should be properly referenced. 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [NA] Justification: The paper does not include experiments. Guidelines: The answer NA means that the paper does not include experiments. If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. While Neur IPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. 
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [NA] Justification: The paper does not include experiments requiring code. Guidelines: The answer NA means that paper does not include experiments requiring code. Please see the Neur IPS code and data submission guidelines (https://nips.cc/ public/guides/Code Submission Policy) for more details. While we encourage the release of code and data, we understand that this might not be possible, so No is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). The instructions should contain the exact command and environment needed to run to reproduce the results. See the Neur IPS code and data submission guidelines (https: //nips.cc/public/guides/Code Submission Policy) for more details. The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [NA] Justification: The paper does not include experiments. Guidelines: The answer NA means that the paper does not include experiments. The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. The full details can be provided either with the code, in appendix, or as supplemental material. 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [NA] Justification: The paper does not include experiments. Guidelines: The answer NA means that the paper does not include experiments. 
- The authors should answer Yes if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments Compute Resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [NA]

Justification: The paper does not include experiments.

Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code Of Ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?

Answer: [Yes]

Justification: Our work conforms to the NeurIPS Code of Ethics.

Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [Yes]

Justification: Our work suggests definitions for credit attribution in machine learning algorithms. The hope is that models adhering to these definitions (particularly generative AI models) will not be able to get away with generating content that is heavily influenced by existing (possibly copyrighted) artwork without properly citing it. We do not anticipate any negative societal impact due to our work.

Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: The paper poses no such risks.

Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [NA]

Justification: The paper is theoretical and does not use existing assets.

Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided.
- For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New Assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [NA]

Justification: The paper does not release any new assets.

Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and Research with Human Subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Justification: Our work does not entail crowdsourcing experiments or research with human subjects.

Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: Not applicable for the same reason as above.

Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.