Published as a conference paper at ICLR 2023

LONG-TAILED LEARNING REQUIRES FEATURE LEARNING

Thomas Laurent1, James H. von Brecht, Xavier Bresson2
1 Loyola Marymount University, tlaurent@lmu.edu
2 National University of Singapore, xaviercs@nus.edu.sg

ABSTRACT

We propose a simple data model inspired by natural data such as text or images, and use it to study the importance of learning features in order to achieve good generalization. Our data model follows a long-tailed distribution in the sense that some rare subcategories have few representatives in the training set. In this context we provide evidence that a learner succeeds if and only if it identifies the correct features, and moreover derive non-asymptotic generalization error bounds that precisely quantify the penalty that one must pay for not learning features.

1 INTRODUCTION

Part of the motivation for deploying a neural network arises from the belief that algorithms that learn features/representations generalize better than algorithms that do not. We try to give some mathematical ballast to this notion by studying a data model where, at an intuitive level, a learner succeeds if and only if it manages to learn the correct features. The data model itself attempts to capture two key structures observed in natural data such as text or images. First, it is endowed with a latent structure at the patch or word level that is directly tied to a classification task. Second, the data distribution has a long tail, in the sense that rare and uncommon instances collectively form a significant fraction of the data. We derive non-asymptotic generalization error bounds that quantify, within our framework, the penalty that one must pay for not learning features.

We first prove a two-part result that precisely quantifies the necessity of learning features within the context of our data model. The first part shows that a trivial nearest neighbor classifier performs perfectly when given knowledge of the correct features. The second part shows it is impossible to a priori craft a feature map that generalizes well when using a nearest neighbor classification rule. In other words, success or failure depends only on the ability to identify the correct features and not on the underlying classification rule. Since this cannot be done a priori, the features must be learned. Our theoretical results therefore support the idea that algorithms cannot generalize on long-tailed data if they do not learn features.

Nevertheless, an algorithm that does learn features can generalize well. Specifically, the most direct neural network architecture for our data model generalizes almost perfectly when using either a linear classifier or a nearest neighbor classifier on top of the learned features. Crucially, designing the architecture requires knowing only the meta structure of the problem, but no a priori knowledge of the correct features. This illustrates the built-in advantage of neural networks: their ability to learn features significantly eases the design burden placed on the practitioner.

Subcategories in commonly used visual recognition datasets tend to follow a long-tailed distribution (Salakhutdinov et al., 2011; Zhu et al., 2014; Feldman & Zhang, 2020). Some common subcategories have a wealth of representatives in the training set, whereas many rare subcategories only have a few representatives.
At an intuitive level, learning features seems especially important on a long-tailed dataset since features learned from the common subcategories help to properly classify test points from a rare subcategory. Our theoretical results help support this intuition.

We note that when considering complex visual recognition tasks, datasets are almost unavoidably long-tailed (Liu et al., 2019): even if the dataset contains millions of images, it is to be expected that many subcategories will have few samples. In this setting, the classical approach of deriving asymptotic performance guarantees based on a large-sample limit is not a fruitful avenue. Generalization must be approached from a different point of view (cf. Feldman (2020) for very interesting work in this direction). In particular, the analysis must be non-asymptotic. One of our main contributions is to derive, within the context of our data model, generalization error bounds that are non-asymptotic and relatively tight; by this we mean that our results hold for small numbers of data samples and track reasonably well with empirically evaluated generalization error.

In Section 2 we introduce our data model and in Section 3 we discuss our theoretical results. For simplicity of exposition, both sections focus on the case where each rare subcategory has a single representative in the training set. Section 4 is concerned with the general case in which each rare subcategory has a few representatives. Section 5 provides an overview of our proof techniques. Finally, in Section 6, we investigate empirically a few questions that we couldn't resolve analytically. In particular, our error bounds are restricted to the case in which a nearest neighbor classification rule is applied on top of the features; we provide empirical evidence in this last section that replacing the nearest neighbor classifier by a linear classifier leads to very minimal improvement. This further supports the notion that, on our data model, it is the ability to learn features that drives success, not the specific classification rule used on top of the features.

Related work. By now, a rich literature has developed that studies the generalization abilities of neural networks. A major theme in this line of work is the use of the PAC learning framework to derive generalization bounds for neural networks (e.g. Bartlett et al. (2017); Neyshabur et al. (2017); Golowich et al. (2018); Arora et al. (2018); Neyshabur et al. (2018)), usually by proving a bound on the difference between the finite-sample empirical loss and the true loss. While powerful in their generality, such approaches are usually task-independent and asymptotic; that is, they are mostly agnostic to any idiosyncrasies in the data generating process and need a statistically meaningful number of samples in the training set. As such, the PAC learning framework is not well tailored to our specific aim of studying generalization on long-tailed data distributions; indeed, in such a setting, a rare subcategory might have only a handful of representatives in the training set. After breakthrough results (e.g. Jacot et al. (2018); Du et al. (2018); Allen-Zhu et al.
(2019); Ji & Telgarsky (2019)) showed that vastly over-parametrized neural networks become kernel methods (the so-called Neural Tangent Kernel or NTK) in an appropriate limit, much effort has gone toward analyzing the extent to which neural networks outperform kernel methods (Yehudai & Shamir, 2019; Wei et al., 2019; Refinetti et al., 2021; Ghorbani et al., 2019; 2020; Karp et al., 2021; Allen-Zhu & Li, 2019; 2020; Li et al., 2020; Malach et al., 2021). Our interest lies not in proving such a gap for its own sake, but rather in using the comparison to gain some understanding on the importance of learning features in computer vision and NLP contexts. Analyses that shed theoretical light onto learning with long-tailed distributions (Feldman, 2020; Brown et al., 2021) or onto specific learning mechanisms (Karp et al., 2021) are perhaps closest to our own. The former analyses (Feldman, 2020; Brown et al., 2021) investigate the necessity of memorizing rare training examples in order to obtain near-optimal generalization error when the data distribution is long-tailed. Our analysis differs to the extent that we focus on the necessity of learning features and sharing representations in order to properly classify rare instances. Like us, the latter analysis (Karp et al., 2021) also considers a computer vision inspired task and uses it to compare a neural network to a kernel method, with the ultimate aim of studying the learning mechanism involved. Their object of study (finding a sparse signal in the presence of noise), however, markedly differs from our own (learning with long-tailed distributions). 2 THE DATA MODEL We begin with a simple example to explain our data model and to illustrate, at an intuitive level, the importance of learning features when faced with a long-tailed data distribution. For the sake of exposition we adopt NLP terminology such as words and sentences, but the image-based terminology of patches and images would do as well. The starting point is a very standard mechanism for generating observed data from some underlying collection of latent variables. Consider the data model depicted in Figure 1. We have a vocabulary of nw = 12 words and a set of nc = 3 concepts: V = {potato, cheese, carrots, chicken, . . .} and C = {vegetable, dairy, meat}. 
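To make the generative mechanism described next concrete (each word of a sentence is drawn uniformly at random from its concept), here is a minimal illustrative sketch in Python. The function names are ours and the toy vocabulary is the one of Figure 1; this is not code from the paper.

```python
import random

# Illustrative vocabulary of nw = 12 words partitioned into nc = 3 concepts.
CONCEPTS = {
    "veggie": ["potato", "carrot", "leek", "lettuce"],
    "dairy":  ["butter", "cheese", "cream", "yogurt"],
    "meat":   ["chicken", "beef", "pork", "lamb"],
}

def sample_sentence(concept_sequence, rng=random):
    """Generate a sentence by drawing each word uniformly from its concept."""
    return [rng.choice(CONCEPTS[c]) for c in concept_sequence]

# A familiar and an unfamiliar sequence of concepts (latent variables), L = 5.
c1       = ["veggie", "meat", "dairy", "veggie", "dairy"]
c1_prime = ["dairy", "dairy", "veggie", "meat", "veggie"]

# One unfamiliar and four familiar sentences for category 1, mimicking the
# nspl = 5 samples per category of Figure 1.
category_1 = [sample_sentence(c1_prime)] + [sample_sentence(c1) for _ in range(4)]
for s in category_1:
    print(s)
```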
Vocabulary of 12 words partitioned into 3 concepts:
veggie = {potato, carrot, leek, lettuce};  dairy = {butter, cheese, cream, yogurt};  meat = {chicken, beef, pork, lamb}

6 sequences of concepts (latent variables):
c′1 = [dairy, dairy, veggie, meat, veggie]    c1 = [veggie, meat, dairy, veggie, dairy]
c′2 = [dairy, meat, meat, meat, dairy]        c2 = [meat, dairy, dairy, veggie, meat]
c′3 = [meat, dairy, dairy, dairy, veggie]     c3 = [veggie, meat, dairy, meat, dairy]

15 sentences (the training set):
Category 1: x1,1 = [cheese, butter, lettuce, chicken, leek], x1,2 = [carrot, pork, cream, carrot, cheese], x1,3 = [lettuce, chicken, butter, potato, butter], x1,4 = [lettuce, beef, yogurt, leek, cream], x1,5 = [potato, lamb, butter, potato, yogurt]
Category 2: x2,1 = [butter, pork, lamb, lamb, yogurt], x2,2 = [chicken, cheese, cream, lettuce, beef], x2,3 = [beef, cheese, cheese, carrot, pork], x2,4 = [lamb, butter, cream, potato, lamb], x2,5 = [chicken, cream, butter, leek, pork]
Category 3: x3,1 = [chicken, cheese, cheese, yogurt, carrot], x3,2 = [leek, pork, cream, lamb, butter], x3,3 = [carrot, lamb, cheese, pork, yogurt], x3,4 = [lettuce, chicken, cheese, chicken, yogurt], x3,5 = [leek, chicken, butter, lamb, cheese]

Figure 1: Data model with parameters set to L = 5, nw = 12, nc = 3, R = 3, and nspl = 5.

The 12 words are partitioned into the 3 concepts as shown on the left of Figure 1. We also have 6 sequences of concepts of length L = 5, denoted by c1, c2, c3 and c′1, c′2, c′3. Sequences of concepts are latent variables that generate sequences of words. For example,

[dairy, dairy, veggie, meat, veggie]  generates  [cheese, butter, lettuce, chicken, leek].

The sequence of words on the right was obtained by sampling each word uniformly at random from the corresponding concept. For example, the first word was randomly chosen out of the dairy concept (butter, cheese, cream, yogurt), and the last word was randomly chosen out of the vegetable concept (potato, carrot, leek, lettuce). Sequences of words will be referred to as sentences.

The non-standard aspect of our model comes from how we use this latent-variable/observed-datum process to form a training distribution. The training set in Figure 1 is made of 15 sentences split into R = 3 categories. The latent variables c′1, c′2, c′3 each generate a single sentence, whereas the latent variables c1, c2, c3 each generate 4 sentences. We will refer to c1, c2, c3 as the familiar sequences of concepts since they generate most of the sentences encountered in the training set. On the other hand c′1, c′2, c′3 will be called unfamiliar. Similarly, a sentence generated by a familiar (resp. unfamiliar) sequence of concepts will be called a familiar (resp. unfamiliar) sentence. The former represents a datum sampled from the head of the distribution while the latter represents a datum sampled from its tail. We denote by xr,s the sth sentence of the rth category, indexed so that the first sentence of each category is an unfamiliar sentence and the remaining ones are familiar.

Suppose now that we have trained a learning algorithm on the training set described above and that at inference time we are presented with a previously unseen sentence generated by the unfamiliar sequence of concepts c′1 = [dairy, dairy, veggie, meat, veggie].
To fix ideas, let's say that sentence is:

xtest = [butter, yogurt, carrot, beef, lettuce]   (1)

This sentence is hard to classify since there is a single sentence in the training set that has been generated by the same sequence of concepts, namely

x1,1 = [cheese, butter, lettuce, chicken, leek].   (2)

Moreover these two sentences do not overlap at all (i.e. the ith word of xtest is different from the ith word of x1,1 for all i). To properly classify xtest, the algorithm must have learned the equivalences butter ∼ cheese, yogurt ∼ butter, carrot ∼ lettuce, and so forth. In other words, the algorithm must have learned the underlying concepts.

Nevertheless, a neural network with a well-chosen architecture can easily succeed at such a classification task. Consider, for example, the network depicted in Figure 2. Each word of the input sentence, after being encoded into a one-hot vector, goes through a multi-layer perceptron (MLP 1 on the figure) shared across words. The output is then normalized using Layer Norm (Ba et al., 2016) to produce a representation of the word. The word representations are then concatenated into a single vector that goes through a second multi-layer perceptron (MLP 2 on the figure).

Figure 2: A simple neural net.

This network, if properly trained, will learn to give similar representations to words that belong to the same concept. Therefore, if it correctly classifies the train point x1,1 given by (2), it will necessarily correctly classify the test point xtest given by (1). So the neural network is able to classify the previously unseen sentence xtest despite the fact that the training set contains a single example with the same underlying sequence of concepts. This comes from the fact that the neural network learns features and representations from the familiar part of the training set (generated by the head of the distribution), and uses these, at test time, to correctly classify the unfamiliar sentences (generated by the tail of the distribution). In other words, because it learns features, the neural network has no difficulty handling the long-tailed nature of the distribution.

To summarize, the variables L, nw, nc, R and nspl parametrize instances of our data model. They denote, respectively, the length of the sentences, the number of words in the vocabulary, the number of concepts, the number of categories, and the number of samples per category. So in the example presented in Figure 1 we have L = 5, nw = 12, nc = 3, R = 3 and nspl = 5 (four familiar sentences and one unfamiliar sentence per category). The vocabulary V and set of concepts C are discrete sets with |V| = nw and |C| = nc, rendered as V = {1, . . . , nw} and C = {1, . . . , nc} for concreteness. A partition of the vocabulary into concepts, like the one depicted at the top of Figure 1, is encoded by a function ϕ : V → C that assigns words to concepts. We require that each concept contains the same number of words, so that ϕ satisfies

|ϕ−1({c})| = |{w ∈ V : ϕ(w) = c}| = nw/nc   for all c ∈ C,   (3)

and we refer to such a function ϕ : V → C satisfying (3) as an equipartition of the vocabulary. The set

Φ = {All functions ϕ from V to C that satisfy (3)}

denotes the collection of all such equipartitions, while the data space and latent space are denoted X = V^L and Z = C^L, respectively. Elements of X are sentences of L words and they take the form x = [x1, x2, . . . , xL], while elements of Z take the form c = [c1, c2, . . . , cL]
and correspond to sequences of concepts.

In the context of this work, a feature map refers to any function ψ : X → F from data space to feature space. The feature space F can be any Hilbert space (possibly infinite dimensional) and we denote by ⟨·, ·⟩F the associated inner product. Our analysis applies to the case in which a nearest neighbor classification rule is applied on top of the extracted features. Such a rule works as follows: given a test point x, the inner products ⟨ψ(x), ψ(y)⟩F are evaluated for all y in the training set; the test point x is then given the label of the training point y that led to the highest inner product.

3 STATEMENT AND DISCUSSION OF MAIN RESULTS

Our main result states that, in the context of our data model, features must be tailored (i.e. learned) to each specific task. Specifically, it is not possible to find a universal feature map ψ : X → F that performs well on a collection of tasks like the one depicted in Figure 1. In the context of this work, a task refers to a tuple

T = (ϕ; c1, . . . , cR; c′1, . . . , c′R) ∈ Φ × Z^2R   (4)

that prescribes a partition of the vocabulary into concepts, R familiar sequences of concepts, and R unfamiliar sequences of concepts. Given such a task T we generate a training set S as described in the previous section. This training set contains R·nspl sentences split over R categories, and each category contains a single unfamiliar sentence. Randomly generating the training set S from the task T corresponds to sampling S ∼ D_T^train from a distribution D_T^train defined on the space X^(R·nspl) and parametrized by the variables in (4) (the appendix provides an explicit formula for this distribution). We measure performance of an algorithm by its ability to generalize on previously unseen unfamiliar sentences. Generating an unfamiliar sentence amounts to drawing a sample x ∼ D_T^test from a distribution D_T^test on the space X parametrized by the variables ϕ, c′1, . . . , c′R in (4) that determine unfamiliar sequences of concepts. Finally, associated with every task T we have a labelling function f_T : X → {1, . . . , R} that assigns the label r to sentences generated by either cr or c′r (this function is ill-defined if two sequences of concepts from different categories are identical, but this issue is easily resolved by the formal statements in the appendix). Summarizing our notations, for every task T ∈ Φ × Z^2R we have a distribution D_T^train on the space X^(R·nspl), a distribution D_T^test on the space X, and a labelling function f_T.

Given a feature space F, a feature map ψ : X → F, and a task T ∈ Φ × Z^2R, the expected generalization error of the nearest neighbor classification rule on unfamiliar sentences is given by:

$\mathrm{err}(F, \psi, T) \;=\; \mathbb{E}_{S \sim \mathcal{D}_T^{\mathrm{train}}} \, \mathbb{P}_{x \sim \mathcal{D}_T^{\mathrm{test}}} \Big[ \, f_T\big( \arg\max_{y \in S} \langle \psi(x), \psi(y) \rangle_F \big) \neq f_T(x) \, \Big]$   (5)

For simplicity, if the test point has multiple nearest neighbors with inconsistent labels in the training set (and so the arg max returns multiple training points y), we will count the classification as a failure for the nearest neighbor classification rule. We therefore replace (5) by the more formal (but more cumbersome) formula

$\mathrm{err}(F, \psi, T) \;=\; \mathbb{E}_{S \sim \mathcal{D}_T^{\mathrm{train}}} \, \mathbb{P}_{x \sim \mathcal{D}_T^{\mathrm{test}}} \Big[ \, \exists\, y \in \arg\max_{y' \in S} \langle \psi(x), \psi(y') \rangle_F \ \text{such that}\ f_T(y) \neq f_T(x) \, \Big]$   (6)

to make this explicit. Our main theoretical results concern performance of a learner not on a single task T but on a collection of tasks T = {T1, T2, . . . , TNtasks}, and so we define

$\mathrm{err}(F, \psi, \mathbf{T}) \;=\; \frac{1}{|\mathbf{T}|} \sum_{T \in \mathbf{T}} \mathrm{err}(F, \psi, T)$   (7)

as the expected generalization error on such a collection T of tasks.
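The nearest neighbor rule entering (5) and (6) is easy to state in code. The sketch below is illustrative (the helper names nn_classify and one_hot_features are ours, not the paper's code): it labels a test sentence by the training sentence with the largest inner product in feature space, and counts ties with inconsistent labels as failures, as in (6).

```python
import numpy as np

def nn_classify(x, train_set, train_labels, feature_map):
    """Nearest neighbor rule on top of a feature map, following (5)-(6).

    `feature_map` sends a sentence (tuple of word indices) to a numpy vector.
    Returns the predicted label, or None when several maximizers of the inner
    product carry inconsistent labels (counted as a failure in (6)).
    """
    fx = feature_map(x)
    scores = np.array([fx @ feature_map(y) for y in train_set])
    best = np.flatnonzero(np.isclose(scores, scores.max()))
    labels = {train_labels[i] for i in best}
    return labels.pop() if len(labels) == 1 else None

# Example of a fixed (not learned) feature map: one-hot encoding of each word,
# i.e. the map referred to as psi_one-hot in Section 6.
def one_hot_features(nw):
    def psi(x):
        v = np.zeros(len(x) * nw)
        for l, w in enumerate(x):
            v[l * nw + w] = 1.0
        return v
    return psi
```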
As a task refers to an element of the discrete set Φ × Z^2R, any subset T ⊂ Φ × Z^2R defines a collection of tasks. Our main result concerns the case where the collection of tasks T = Φ × Z^2R consists of all possible tasks that one might encounter. For concreteness, we choose specific values for the model parameters and state the following special case of our main theorem (Theorem 3 at the end of this section).

Theorem 1. Let L = 9, nw = 150, nc = 5, R = 1000 and nspl ≥ 2. Let T = Φ × Z^2R. Then err(F, ψ, T) > 98.4% for all feature spaces F and all feature maps ψ : X → F.

In other words, for the model parameters specified above, it is not possible to design a task-agnostic feature map ψ that works well if we are uniformly uncertain about which specific task we will face. Indeed, the best possible feature map will fail at least 98.4% of the time at classifying unfamiliar sentences (with a nearest neighbor classification rule), where the probability is with respect to the random choices of the task, of the training set, and of the unfamiliar test sentence.

Interpretation: Our desire to understand learning demands that we consider a collection of tasks rather than a single one, for if we consider only a single task then the problem, in our setting, becomes trivial. Indeed, assume T = {T1} with T1 = (ϕ; c1, . . . , cR; c′1, . . . , c′R) consists only of a single task. With knowledge of this task we can easily construct a feature map ψ : X → R^(L·nc) that performs perfectly. Indeed, the map

$\psi([x_1, \ldots, x_L]) = [e_{\phi(x_1)}, \ldots, e_{\phi(x_L)}]$   (8)

that simply replaces each word xℓ of the input sentence by the one-hot encoding eϕ(xℓ) of its corresponding concept will do (we use ei to denote the ith basis vector of R^nc, so eϕ(xℓ) is a one-hot vector coding for the concept ϕ(xℓ)). A bit of thinking reveals that the nearest neighbor classification rule associated with the feature map (8) perfectly solves the task T1. This is due to the fact that sentences generated by the same sequence of concepts are mapped by ψ to the exact same location in feature space. As a consequence, the nearest neighbor classification rule will match the unfamiliar test sentence x to the unique training sentence y that occupies the same location in feature space, and this training sentence has the correct label by construction (assuming that sequences of concepts from different categories are distinct). To put it formally:

Theorem 2. Given a task T ∈ Φ × Z^2R satisfying c′r ≠ c′s and c′r ≠ cs for all r ≠ s, there exists a feature space F and a feature map ψ : X → F such that err(F, ψ, T) = 0.

Consider now the case where T = {T1, T2} consists of two tasks. According to Theorem 2 there exists a ψ that perfectly solves T1, but this ψ might perform poorly on T2, and vice versa. So, it might not be possible to design good features if we do not know a priori which of these tasks we will face. Theorem 1 states that, in the extreme case where T contains all possible tasks, this is indeed the case: the best possible task-agnostic features ψ will perform catastrophically on average. In other words, features must be task-dependent in order to succeed.

To draw a very approximate analogy, imagine once again that T = {T1} and that T1 represents, say, a hand-written digit classification task. A practitioner, after years of experience, could hand-craft a very good feature map ψ that performs almost perfectly for this task.
If we then imagine the case T = {T1, T2} where T1 represents a hand-written digit classification task and T2 represents, say, an animal classification task, then it becomes more difficult for a practitioner to hand-craft a feature map ψ that works well for both tasks. In this analogy, the size of the set T encodes the amount of knowledge the practitioner has about the specific tasks she will face. The extreme choice T = Φ × Z^2R corresponds to the practitioner knowing nothing beyond the fact that natural images are made of patches. Theorem 1 quantifies, in this extreme case, the impossibility of hand-crafting a feature map ψ knowing only the range of possible tasks and not the specific task itself. In a realistic setting the collection of tasks T is smaller, of course, and the data generative process itself is more coherent than in our simplified setup. Nonetheless, we hope our analysis sheds some light on some of the essential limitations of algorithms that do not learn features.

Finally, our empirical results (see Section 6) show that a simple algorithm that learns features does not face this obstacle. We do not need knowledge of the specific task T in order to design a good neural network architecture, but only of the family of tasks T = Φ × Z^2R that we will face. Indeed, the architecture in Figure 2 succeeds at classifying unfamiliar test sentences more than 99% of the time. This probability, which we empirically evaluate, is with respect to the choice of the task, the choice of the training set, and the choice of the unfamiliar test sentence (we use the values of L, nw, nc and R from Theorem 1, and nspl = 6, for this experiment). Continuing with our approximate analogy, this means our hypothetical practitioner needs no domain-specific knowledge beyond the patch structure of natural images when designing a successful architecture. In sum, successful feature design requires task-specific knowledge while successful architecture design requires only knowledge of the task family.

Main Theorem: Our main theoretical result extends Theorem 1 to arbitrary values of L, nw, nc, nspl and R. The resulting formula involves various combinatorial quantities. We denote by $\binom{n}{k}$ the binomial coefficients and by $\left\{ {n \atop k} \right\}$ the Stirling numbers of the second kind. Let N = {0, 1, 2, . . .} and let γ, γ̂ : N^(L+1) → N be the functions defined by $\gamma(k) := \sum_{i=1}^{L+1} (i-1)\,k_i$ and $\hat\gamma(k) := \sum_{i=1}^{L+1} i\,k_i$, respectively. We then define, for 0 ≤ ℓ ≤ L, the sets

$S_\ell := \big\{ k \in \mathbb{N}^{L+1} : \hat\gamma(k) = n_w \ \text{and} \ \ell \le \gamma(k) \le L \big\}.$

We let S = S0, and we note that the inclusion Sℓ ⊂ S always holds. Given k ∈ N^(L+1) we denote by

$A_k := \Big\{ A \in \mathbb{N}^{(L+1) \times n_c} : \sum_{i=1}^{L+1} i\,A_{ij} = n_w/n_c \ \text{for all } j \ \ \text{and} \ \ \sum_{j=1}^{n_c} A_{ij} = k_i \ \text{for all } i \Big\}$

the set of k-admissible matrices. Finally, we let f, g : S → R be the functions defined by

f(k) := (nw/nc)! nc n L c nw! ki! Ai,1! Ai,2! Ai,nc!    and    g(k) := γ(k)! nw! k1! k2! · · · kL+1!,

respectively. With these definitions at hand, we may now state our main theorem.

Theorem 3 (Main Theorem). Let T = Φ × Z^2R. Then

$\mathrm{err}(F, \psi, \mathbf{T}) \;\ge\; 1 \;-\; \sum_{k \in S_\ell} f(k)\,g(k) \;-\; 2 \max_{k \in S_\ell} f(k)$   (9)

for all feature spaces F, all feature maps ψ : X → F, and all 0 ≤ ℓ ≤ L.

The combinatorial quantities involved appear a bit daunting at first glance, but, within the context of the proof, they all take on a quite intuitive meaning. The heart of the proof involves the analysis of a measure of concentration that we call the permuted moment, and of an associated graph-cut problem. The combinatorial quantities arise quite naturally in the course of analyzing the graph-cut problem.
We provide a quick overview of the proof in Section 5, and refer to the appendix for full details. For now, it suffices to note that we have a formula (i.e. the right hand side of (9)) that can be exactly evaluated with a few lines of code. This formula provides a relatively tight lower bound for the generalization error. Theorem 1 is then a direct consequence: plugging L = 9, nw = 150, nc = 5, R = 1000 and ℓ = 7 into the right hand side of (9) gives the claimed 98.4% lower bound.

4 MULTIPLE UNFAMILIAR SENTENCES PER CATEGORY

[Figure 3 lists the 6 sequences of concepts c1, c2, c3 and c′1, c′2, c′3 together with a training set of 18 sentences, 6 per category, the first 2 sentences of each category being generated by the unfamiliar sequence of concepts.]

Figure 3: More general version of our data model.

The two previous sections were concerned with the case in which each unfamiliar sequence of concepts has a single representative in the training set. In this section we consider the more general case in which each unfamiliar sequence of concepts has nunf representatives in the training set. Figure 3 depicts an example with nspl = 6 and nunf = 2. This means that each category contains a total of nspl = 6 sentences, and that nunf = 2 of these sentences are generated by the unfamiliar sequence of concepts (the remaining four are generated by the familiar sequence of concepts). The other parameters in this example are L = 5, nw = 12, nc = 3 and R = 3. Using a simple union bound, inequality (9) easily extends to this situation; the resulting formula is a bit cumbersome, so we present it in the appendix (see Theorem 7). In the concrete case where L = 9, nw = 150, nc = 5, R = 1000 this formula simplifies to

err(F, ψ, T) ≥ 1 − 0.015 nunf − 1/R   for all F and all ψ,   (10)

therefore exhibiting an affine relationship between the error rate and the number nunf of unfamiliar sentences per category. Note that choosing nunf = 1 in (10) leads to a 98.4% lower bound on the error rate, therefore recovering the result from Theorem 1. This lower bound then decreases by 1.5% with each additional unfamiliar sentence per category in the training set.

We would like to emphasize one more time the importance of non-asymptotic analysis in the long-tailed learning setting. For example, in inequality (10), the difficulty lies in obtaining a value as small as possible for the coefficient in front of nunf. We accomplish this via a careful analysis of the graph-cut problem associated with our data model.

5 PROOF OUTLINE: PERMUTED MOMENT AND OPTIMAL FEATURE MAP

The proof involves two main ingredients.
First, the key insight of our analysis is the realization that generalization in our data model is closely tied to the permuted moment of a probability distribution. To state this central concept, it will prove convenient to think of probability distributions on X as vectors p ∈ R^N with N = |X|, together with indices 0 ≤ i ≤ N − 1 given by some arbitrary (but fixed) indexing of the elements of data space. Then pi denotes the probability of the ith element of X in this indexing. We use SN to denote the set of permutations of {0, 1, . . . , N − 1} and σ ∈ SN to refer to a particular permutation. The tth permuted moment of the probability vector p ∈ R^N is

$H_t(p) := \max_{\sigma \in S_N} \ \sum_{i=0}^{N-1} (i/N)^t \, p_{\sigma(i)}$   (11)

Since (11) involves a maximum over all possible permutations, the definition clearly does not depend on the way the set X was indexed. In order to maximize the sum, the permutation σ must match the largest values of pi with the largest values of (i/N)^t, so the maximizing permutation simply orders the entries pi from smallest to largest. A very peaked distribution that gives large probability to only a handful of elements of X will have a large permuted moment. Because of this, the permuted moment is akin to the negative entropy; it has large values for delta-like distributions and small values for uniform ones. From definition (11) it is clear that 0 ≤ Ht(p) ≤ 1 for all probability vectors p, and it is easily verified that the permuted moment is convex. These properties, as well as various useful bounds for the permuted moment, are presented and proven in the appendix.

Second, we identify a specific feature map, ψ* : X → F*, which is optimal for a collection of tasks closely related to the ones considered in our data model. Leveraging the optimality of ψ* on these related tasks allows us to derive an error bound that holds for the tasks of interest. The feature map ψ* is better understood through its associated kernel, which is given by the formula

$K^*(x, y) = \langle \psi^*(x), \psi^*(y) \rangle_{F^*} = \frac{n_c^L}{n_w^L \, |\Phi|} \, \Big| \big\{ \phi \in \Phi : \phi(x_\ell) = \phi(y_\ell) \ \text{for all } 1 \le \ell \le L \big\} \Big|$   (12)

Up to normalization, K*(x, y) simply counts the number of equipartitions of the vocabulary for which sentences x and y have the same underlying sequence of concepts. Intuitively this makes sense, for the best possible kernel must leverage the only information we have at hand. We know the general structure of the problem (words are partitioned into concepts) but not the partition itself. So to try and determine if sentences (x, y) were generated by the same sequence of concepts, the best we can do is to simply try all possible equipartitions of the vocabulary and count how many of them wind up generating (x, y) from the same underlying sequence of concepts. A high count makes it more likely that (x, y) were generated by the same sequence of concepts. The optimal kernel K* does exactly this, and provides a good (actually optimal, see the appendix) measure of similarity between pairs of sentences.

For fixed x ∈ X, the function y ↦ K*(x, y) defines a probability distribution on data space. The connection between generalization error, permuted moment, and optimal feature map comes from the fact that

$\sup_{F, \psi} \, \big[ 1 - \mathrm{err}(F, \psi, \mathbf{T}) \big] \;\le\; \frac{1}{|X|} \sum_{x \in X} H_{2R-1}\big( K^*(x, \cdot) \big) + \frac{1}{R}$   (13)

and so, up to a small error 1/R, it is the permuted moments of K* that determine the success rate. We then obtain the lower bound (9) by studying these moments in great detail. A simple union bound is then used to obtain inequalities such as (10).
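Definition (11), together with the sorting characterization just discussed, translates directly into a few lines of code. The helper below is an illustrative sketch (not the paper's code): it computes H_t(p) by sorting p in increasing order and taking a dot product with the weights (i/N)^t.

```python
import numpy as np

def permuted_moment(p, t):
    """t-th permuted moment H_t(p) of a probability vector p, as in (11).

    The maximizing permutation pairs the largest entries of p with the largest
    weights (i/N)^t, i.e. it sorts the entries of p in increasing order.
    """
    p = np.asarray(p, dtype=float)
    N = p.size
    weights = (np.arange(N) / N) ** t
    return float(np.sort(p) @ weights)

# A peaked distribution has a large permuted moment, a uniform one a small one.
N = 1000
uniform = np.full(N, 1.0 / N)
peaked = np.zeros(N); peaked[:5] = 0.2
print(permuted_moment(uniform, t=10))   # close to 1/(t+1)
print(permuted_moment(peaked, t=10))    # close to 1
```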
6 EMPIRICAL RESULTS

We conclude by presenting empirical results that complement our theoretical findings. The full details of these experiments (training procedure, hyperparameter choices, number of experiments run to estimate the success rates, and standard deviations of these success rates), as well as additional experiments, can be found in Appendix E. Code is available at https://github.com/xbresson/Long_Tailed_Learning_Requires_Feature_Learning.

Parameter Settings. We consider five parameter settings for the data model depicted in Figure 3. Each setting corresponds to a column in Table 1. In all five settings, we set the parameters L = 9, nw = 150, nc = 5 and R = 1000 to the values for which the error bound (10) holds. We choose values for the parameters nspl and nunf so that the ith column of the table corresponds to a setting in which the training set contains 5 familiar and i unfamiliar sentences per category. Recall that nspl is the total number of samples per category in the training set. So the first column of the table corresponds to a setting in which each category contains 5 familiar sentences and 1 unfamiliar sentence, whereas the last column corresponds to a setting in which each category contains 5 familiar sentences and 5 unfamiliar sentences.

Table 1: Success rate on unfamiliar test sentences.

                                                       nunf = 1   nunf = 2   nunf = 3   nunf = 4   nunf = 5
                                                       nspl = 6   nspl = 7   nspl = 8   nspl = 9   nspl = 10
Neural network in Figure 2                               99.8%      99.9%      99.9%      99.9%      100%
Nearest neighb. on features learned by neural net        99.9%      99.9%      99.9%      99.9%      99.9%
Nearest neighb. on features extracted by ψ*               0.7%       1.1%       1.5%       1.8%       2.2%
Nearest neighb. on features extracted by ψone-hot         0.6%       1.1%       1.4%       1.7%       2.1%
Theoretical upper bound (0.015 nunf + 1/1000)             1.6%       3.1%       4.6%       6.1%       7.6%
SVM on features extracted by ψ*                           0.6%       1.5%       2.2%       3.2%       4.2%
SVM on features extracted by ψone-hot                     0.5%       1.1%       1.9%       2.8%       3.8%
SVM with Gaussian kernel                                  0.6%       1.1%       2.0%       2.8%       3.6%

Algorithms. We evaluate empirically seven different algorithms. The first two rows of the table correspond to experiments in which the neural network in Figure 2 is trained with SGD and a constant learning rate. At test time, we consider two different strategies to classify test sentences. The first row of the table considers the usual situation in which the trained neural network is used to classify test points. The second row considers the situation in which the trained neural network is only used to extract features (i.e. the concatenated word representations right before MLP 2). The classification of test points is then accomplished by running a nearest neighbor classifier on these learned features. The third (resp. sixth) row of the table shows the results obtained when running a nearest neighbor algorithm (resp. SVM) on the features ψ* of the optimal feature map. By the kernel trick, these algorithms only require the values of the optimal kernel ⟨ψ*(x), ψ*(y)⟩F*, computed via (12), and not the features ψ* themselves. The fourth (resp. seventh) row shows results obtained when running a nearest neighbor algorithm (resp. SVM) on features extracted by the simplest possible feature map, that is ψone-hot([x1, . . . , xL]) = [ex1, . . . , exL], where exℓ denotes the one-hot encoding of the ℓth word of the input sentence. Finally, the last row considers an SVM with Gaussian kernel (also called RBF kernel).

Results.
The first two rows of the table correspond to algorithms that learn features from the data; the remaining rows correspond to algorithms that use a pre-determined (not learned) feature map. Table 1 reports the success rate of each algorithm on unfamiliar test sentences. A crystal-clear pattern emerges. Algorithms that learn features generalize almost perfectly, while algorithms that do not learn features catastrophically fail. Moreover, the specific classification rule matters little. For example, replacing MLP2 by a nearest neighbor classifier on the top of features learned by the neural network leads to equally accurate results. Similarly, replacing the nearest neighbor classifier by a SVM on the top of features extracted by ψ or ψone hot leads to almost equally poor results. The only thing that matters is whether or not the features are learned. Finally, inequality (10) gives an upper bound of 0.015n + 1/1000 on the success rate of the nearest neighbor classification rule applied on the top of any possible feature map (including ψ and ψone hot). The fifth row of Table 1 compares this bound against the empirical accuracy obtained with ψ and ψone hot, and the comparison shows that our theoretical upper bound is relatively tight. When n = 1 our main theorem states that no feature map can succeed more than 1.6% of the time on unfamiliar test sentences (fifth row of the table). At first glance this appears to contradict the empirical performance of the feature map extracted by the neural network, which succeeds 99% of the time (second row of the table). The resolution of this apparent contradiction lies in the order of operations. The point here is to separate hand crafted or fixed features from learned features via the order of operations. If we choose the feature map before the random selection of the task then the algorithm performs poorly since it uses unlearned, task-independent features. By contrast, the neural network learns a feature map from the training set, and since the training set is generated by the task, this process takes place after the random selection of the task. It therefore uses taskdependent features, and the network performs almost perfectly for the specific task that generated its training set. But by our main theorem, it too must fail if the task changes but the features do not. Published as a conference paper at ICLR 2023 Acknowledgements. Xavier Bresson is supported by NUS-R-252-000-B97-133 and A*STAR Grant ID A20H4g2141. Zeyuan Allen-Zhu and Yuanzhi Li. What can resnet learn efficiently, going beyond kernels? Advances in Neural Information Processing Systems, 32, 2019. Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep learning. ar Xiv preprint ar Xiv:2001.04413, 2020. Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, pp. 242 252. PMLR, 2019. Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In International Conference on Machine Learning, pp. 254 263. PMLR, 2018. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. ar Xiv preprint ar Xiv:1607.06450, 2016. Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30, 2017. Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, and Kunal Talwar. 
When is memorization of irrelevant training data necessary for high-accuracy learning? In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 123 132, 2021. Chih-Chung Chang and Chih-Jen Lin. Libsvm: a library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3):1 27, 2011. Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. ar Xiv preprint ar Xiv:1810.02054, 2018. Vitaly Feldman. Does learning require memorization? a short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pp. 954 959, 2020. Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881 2891, 2020. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Limitations of lazy training of two-layers neural network. Advances in Neural Information Processing Systems, 32, 2019. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33: 14820 14830, 2020. Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. In Conference On Learning Theory, pp. 297 299. PMLR, 2018. Arthur Jacot, Franck Gabriel, and Cl ement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31, 2018. Ziwei Ji and Matus Telgarsky. Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks. ar Xiv preprint ar Xiv:1909.12292, 2019. Stefani Karp, Ezra Winston, Yuanzhi Li, and Aarti Singh. Local signal adaptivity: Provable feature learning in neural networks beyond kernels. Advances in Neural Information Processing Systems, 34, 2021. Yuanzhi Li, Tengyu Ma, and Hongyang R Zhang. Learning over-parametrized two-layer neural networks beyond ntk. In Conference on learning theory, pp. 2613 2682. PMLR, 2020. Published as a conference paper at ICLR 2023 Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Largescale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2537 2546, 2019. Eran Malach, Pritish Kamath, Emmanuel Abbe, and Nathan Srebro. Quantifying the benefit of using differentiable learning over tangent kernels. In International Conference on Machine Learning, pp. 7379 7389. PMLR, 2021. Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A Pac-Bayesian approach to spectrally-normalized margin bounds for neural networks. ar Xiv preprint ar Xiv:1707.09564, 2017. Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann Le Cun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. ar Xiv preprint ar Xiv:1805.12076, 2018. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825 2830, 2011. Maria Refinetti, Sebastian Goldt, Florent Krzakala, and Lenka Zdeborov a. 
Classifying highdimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed. In International Conference on Machine Learning, pp. 8936 8947. PMLR, 2021. Ruslan Salakhutdinov, Antonio Torralba, and Josh Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR 2011, pp. 1481 1488. IEEE, 2011. Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. Advances in Neural Information Processing Systems, 32, 2019. Gilad Yehudai and Ohad Shamir. On the power and limitations of random features for understanding neural networks. Advances in Neural Information Processing Systems, 32, 2019. Xiangxin Zhu, Dragomir Anguelov, and Deva Ramanan. Capturing long-tail distributions of object subcategories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 915 922, 2014. Published as a conference paper at ICLR 2023 In Section A we prove a few elementary properties of the permuted moment (11). Section B is devoted to the proof of inequality (13), which we restate here for convenience: sup F,ψ [1 err(F, ψ, T)] 1 |X| x X H2R 1 (K (x, )) + 1 where the collection of tasks T = Φ Z2R consists in all possible tasks that one might encounter. Inequality (14) plays a central role in our work as it establishes the connection between the generalization error, the permuted moment, and the optimal kernel K defined by (12). The proof is non-technical and easily accessible. In Section C we provide the following upper bound on the permuted moment of the optimal kernel: x X H2R 1 (K (x, )) k Sℓ f(k)g(k) max k Sℓf(k) (15) for all 0 ℓ L. The proof is combinatorial in nature, and involves the analysis of a graph-cut problem. Combining (14) and (15) establishes Theorem 3. In Section D we consider the case in which each unfamiliar sequence of concepts has n representatives in the training set. A simple union bound shows that, in this situation, inequality (14) becomes sup F,ψ [1 err(F, ψ, T)] n x X H2R 1 (K (x, )) + 1 Combining (16) and (15) then provides our most general error bound, see Theorem 7. Inequality (10) in the main body of the paper is just a special case of Theorem 7. Finally, in Section E, we provide the full details of the experiments. A PROPERTIES OF THE PERMUTED MOMENT The permuted moment, in Section 5, was defined for probability vectors only. It will prove convenient to consider the permuted moment of nonnegative vectors as well. We denote by R+ = [0, + ) the nonnegative real numbers, and by RN + the vectors with N nonnegative real entries indexed from i = 0 to i = N 1. The permuted moment of u RN + is then given by Ht(u) := max σ SN i=0 (i/N)t uσ(i). (17) where SN denote the set of permutations of {0, 1, . . . , N 1}. The concept of an ordering permutation will prove useful in the next lemma. Definition 1. σ SN is said to be an ordering permutation of u RN if uσ(0) uσ(1) . . . uσ(N 1). (18) The lemma below shows that the permutation maximizing (17) is the one that sorts the entries ui from smallest to largest. Lemma 1. Let u RN + and let σ be an ordering permutation of u. Then σ arg max σ SN i=0 (i/N)t uσ(i). (19) Proof. The optimization problem (19) can be formulated as finding a pairing between the ui s and the (i/N)t s that maximizes the sum of the product of the pairs. 
An ordering permutation of u corresponds to pairing the smallest entry of u to (0/N)t, the second smallest entry to (1/N)t, the third smallest entry to (2/N)t, and so forth. This pairing is clearly optimal. Published as a conference paper at ICLR 2023 In light of the previous lemma, we see that computing the permuted moment of a vector u can be accomplished as follow: 1) sort the entries of u from smallest to largest; 2) compute the dot product between this sorted vector and the vector h 0 N t . . . N 1 N ti . (20) Let us now focus on the case where u is a probability distribution. If u is very peaked, it must have a large permuted moment since, after sorting, most of the mass concentrates on the high values of (20) located on the right. On the contrary, if u is very spread, it must have small permuted moment since it wastes its mass on small values of (20). Because of this, the permuted moment is akin to the negative entropy; it has large values for delta-like distributions and small values for uniform ones. We now show that the permuted moment is subaddiditive and one-homogeneous on RN + (as a consequence it is convex on the set of probability vectors) and we derive some elementary ℓ1 and ℓ bounds. We denote by u p the ℓp-norm of a vector u. In particular, if u RN +, we have i=0 ui and u := max 0 i N 1 ui. With this notation in hand, we can now state our lemma: Lemma 2. (i) Ht(u + v) Ht(u) + Ht(v) for all u, v RN +. (ii) Ht(c u) = c Ht(u) for all u RN + and all c 0. (iii) Ht(u) u 1 for all u RN +. (iv) Ht(u) N t+1 u for all u RN +. Proof. Properties (i) and (ii) are obvious. To prove (iii) and (iv), define wi = (i/N)t and note that w 1 and w 1 = N i=0 (i/N)t ! 0 xtdt = N t + 1 Then (iii) comes from Ht(u) w u 1 whereas (iv) comes from Ht(u) w 1 u . We conclude this section with a slightly more sophisticated bound that holds for probability vectors this bound will play a central role in Section C. Lemma 3. Suppose p RN +, and suppose PN i=1 pi = 1. Then i=0 min{pi, λ} t + 1 for all λ 0. Proof. Fix a λ 0 and define the vectors u and v as follow: ui = min{pi, λ} and vi = pi min{pi, λ} for all 0 i N 1 Note that this two vectors are non-negative and sum to p. We can therefore use Lemma 2 to obtain Ht(p) = Ht(u + v) Ht(u) + Ht(v) N t + 1 u + v 1 To conclude, we note that u λ, and v 1 = 1 PN 1 i=0 min{pi, λ}. Published as a conference paper at ICLR 2023 B PERMUTED MOMENT OF K AND GENERALIZATION ERROR This section is devoted to the proof of inequality (14). We start by recalling a few definitions. The vocabulary, set of concepts, data space, and latent space are V = {1, . . . , nw}, C = {1, . . . , nc}, X = VL and Z = CL respectively. Elements of X are sentences of L words and they take the form x = [x1, x2, . . . , x L], while elements of Z take the form c = [c1, c2, . . . , c L] and correspond to sequences of concepts. We also recall that the collection of all equipartitions of the vocabulary is denoted by Φ = All functions ϕ from V to C that satisfy |ϕ 1({c})| = sc for all c where sc := nw/nc denote the size of the concepts. Given ϕ Φ, we denote by ϕ : X C the function ϕ [x1, x2, . . . , x L] := ϕ(x1), ϕ(x2), . . . , ϕ(x L) that operates on sentences element-wise. The informal statement the sentence x is randomly generated by the sequence of concepts c means that x is sampled uniformly at random from the set ϕ 1({c}) = {x X : ϕ(x) = c}. (21) We will often do the abuse of notation of writing ϕ 1(c) instead of ϕ 1({c}). 
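The objects just recalled are small enough to be built explicitly for toy parameters. The following sketch is illustrative (the helper names are ours, and nothing beyond the definitions above is assumed): it enumerates the equipartitions Φ for nw = 4 and nc = 2, applies an equipartition elementwise to a sentence, and samples a sentence uniformly at random from the preimage of a sequence of concepts.

```python
import itertools
import random

nw, nc, L = 4, 2, 3            # toy parameters; sc = nw/nc words per concept
sc = nw // nc
V, C = range(1, nw + 1), range(1, nc + 1)

# All equipartitions phi : V -> C with exactly sc words per concept.
Phi = [phi for phi in itertools.product(C, repeat=nw)
       if all(phi.count(c) == sc for c in C)]

def phi_bar(phi, x):
    """Apply an equipartition elementwise to a sentence x in X = V^L."""
    return tuple(phi[w - 1] for w in x)

def sample_from_fiber(phi, c, rng=random):
    """Sample a sentence uniformly at random from the preimage of c under phi_bar."""
    words_of = {j: [w for w in V if phi[w - 1] == j] for j in C}
    return tuple(rng.choice(words_of[cl]) for cl in c)

print(len(Phi))                                  # 6 equipartitions for nw=4, nc=2
c = tuple(random.choice(list(C)) for _ in range(L))   # a sequence of concepts in Z
print(sample_from_fiber(Phi[0], c))
```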
We now formally define the sampling process associated with our main data model. Sampling Process DM: (i) Sample T = ( ϕ ; c1, . . . , c R ; c 1, . . . , c R ) uniformly at random in T = Φ Z2R. (ii) For r = 1, . . . , R: Sample (xr,1, . . . , xr,nunf) uniformly at random in ϕ 1(c r) . . . ϕ 1(c r). Sample (xr,nunf+1, . . . , xr,nspl) uniformly at random in ϕ 1(cr) . . . ϕ 1(cr). (iii) Sample xtest uniformly at random in ϕ 1(c 1). Step (i) of the above sampling process consists in selecting at random a task T among all possible tasks. Step (ii) consists in generating a training set2 S X R nspl exactly as depicted on Figure 1: each unfamiliar sequence of concept c r generates nunf sentences, whereas each familiar sequence of concept cr generates nfam sentences (recall that the number of samples per category is nspl = nunf +nfam). Finally step (iii) consists in randomly generating an unfamiliar test sentence xtest X. Without loss of generality we assume that this test sentence is generated by the unfamiliar sequence of concept c 1. We denote by ϱDM the p.d.f. of the sampling process DM. This function is defined on the sample space ΩDM := Φ C2R X R nspl X . A sample from ΩDM takes the form ω = ϕ ; c1, . . . , c R ; c 1, . . . , c R | {z } The task ; x1,1, . . . , x1,nspl ; . . . ; x R,1, . . . , x R,nspl | {z } The training sentences ; xtest |{z} The test sentence and we have the following formula for ϱDM ϱDM(ω) := 1 |Φ||C|2R 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| 1 ϕ 1(cr)(xr, s) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| (22) where 1 ϕ 1(cr) and 1 ϕ 1(c r) denote the indicator functions of the set ϕ 1(cr) and ϕ 1(cr) respectively. Let us compute a few marginals of ϱDM in order to verify that it is indeed the p.d.f. of the 2We refer to S as the training set . In our formalism however it is not a set, but an element of X R nspl. Published as a conference paper at ICLR 2023 sampling process DM. Writing ω = (T , S, xtest), summing over the variables S and xtest, and using the fact that P x X 1 ϕ 1(c)(x) = | ϕ 1(c)|, we obtain X xtest X ϱDM(T , S, xtest) = 1 |Φ||C|2R . This shows that each task T is equiprobable. Summing over the variable S gives X S C2R ϱDM(T , S, xtest) = 1 |Φ||C|2R 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| This shows that, given a task T , the test sentence xtest is obtained by sampling uniformly at random from ϕ 1(c 1). A similar calculation shows that, given a task T , the train sentence xr,s is obtained by sampling uniformly at random from ϕ 1(cr) if s 1, and from ϕ 1(c r) if s = 1. Given a feature space F and a feature map ψ : X F, we define the events EF,ψ ΩDM as follow: EF,ψ = n ω ΩDM : There exists 1 s nspl such that ψ(xtest), ψ(x1,s ) F > ψ(xtest), ψ(xr,s) F for all 2 r R and all 1 s nspl o . (23) Note that this event consists in all the outcomes ω = (T , S, xtest) for which the feature map ψ associates the test sentence xtest to a train point xr,s from the first category. Since by construction, xtest belongs to the first category, EF,ψ consists in all the outcomes for which the nearest neighbor classification rule is successful . As a consequence, when T = Φ Z2R, the generalization error can be expressed as err(F, ψ, T) = 1 PDM h EF,ψ i (24) where PDM denote the probability measure on ΩDM induced by ϱDM. Equation (24) should be viewed as our fully formal definition of the quantity err(F, ψ, T), as opposed to the more informal definition given earlier by equations (6) and (7). 
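The sampling process DM and the event E_{F,ψ} can also be simulated directly, which gives a Monte Carlo estimate of err(F, ψ, T) as defined in (24) for any fixed kernel. The sketch below is illustrative (the helper names and the naive word-overlap kernel in the example are ours, not the paper's): it draws a task, a training set and a test sentence as in steps (i)-(iii), and checks whether the best first-category score strictly beats every other category, as required by the event E_K.

```python
import random

def sample_task(nw, nc, L, R, rng=random):
    """Step (i): a random equipartition phi and 2R random sequences of concepts."""
    words = list(range(nw)); rng.shuffle(words)
    phi = {w: i * nc // nw for i, w in enumerate(words)}   # nw/nc words per concept
    seqs = [tuple(rng.randrange(nc) for _ in range(L)) for _ in range(2 * R)]
    return phi, seqs[:R], seqs[R:]                         # familiar, unfamiliar

def sample_sentence(phi, c, rng=random):
    """Sample a sentence uniformly at random from the preimage of c."""
    words_of = {}
    for w, cl in phi.items():
        words_of.setdefault(cl, []).append(w)
    return tuple(rng.choice(words_of[cl]) for cl in c)

def estimate_err(K, nw, nc, L, R, n_unf, n_fam, trials=100, rng=random):
    """Monte Carlo estimate of err = 1 - P[E_K] by simulating steps (i)-(iii)."""
    fails = 0
    for _ in range(trials):
        phi, fam, unf = sample_task(nw, nc, L, R, rng)
        train = [(r, sample_sentence(phi, unf[r] if s < n_unf else fam[r], rng))
                 for r in range(R) for s in range(n_unf + n_fam)]
        xtest = sample_sentence(phi, unf[0], rng)          # step (iii)
        scores = [(r, K(xtest, y)) for r, y in train]
        best_cat1 = max(s for r, s in scores if r == 0)
        best_rest = max(s for r, s in scores if r != 0)
        fails += 0 if best_cat1 > best_rest else 1         # E_K requires a strict win
    return fails / trials

# Example with the tiny parameters of Figure 1 and a naive word-overlap kernel:
K = lambda x, y: sum(a == b for a, b in zip(x, y))
print(estimate_err(K, nw=12, nc=3, L=5, R=3, n_unf=1, n_fam=4))
```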
The goal of this section is to prove inequality (14), which, in light of (24) is equivalent to sup F,ψ PDM h EF,ψ i 1 |X| x X H2R 1 (K (x, )) + 1 which in turn equivalent to sup K:X X R K pos. semi-def. PDM h EK i 1 |X| x X H2R 1 (K (x, )) + 1 where the event EK is defined by EK = n ω ΩDM : There exists 1 s nspl such that K(xtest, x1,s ) > K(xtest, xr,s) for all 2 r R and all 1 s nspl o (27) and where the supremum is taken over all kernels K : X X R which are symmetric positive semidefinite. We will actually prove a slightly stronger result, namely sup K:X X R Kis symmetric PDM h EK i 1 |X| x X H2R 1 (K (x, )) + 1 where the supremum is taken over all functions K : X X R that satisfy K(x, y) = K(y, x) for all (x, y) X X. The rest of the section is devoted to proving (28). In Subsection B.1 we start by considering a simpler data model for this simpler data model we are able to show that the function ψ implicitly defined by (12) is the best possible feature map (we actually only work with the associated kernel K , and never need ψ itself). We also show that the success rate is exactly equal to the permuted moment of K see Theorem 4, which is is the central result of this section. In the remaining subsections, namely Subsection B.2 and Subsection B.3, we leverage the bound obtained for the simpler data model in order to obtain bound (28) for the main data model. These two subsections are mostly notational. The core of the analysis takes place in Subsection B.1. Published as a conference paper at ICLR 2023 B.1 A SIMPLER DATA MODEL We start by presenting the sampling process associated with our simpler datamodel. Sampling Process SDM: (i) Sample ϕ uniformly at random in Φ. Sample c1, c2, . . . , ct+1 uniformly at random in Z. (ii) For 1 r t + 1: Sample xr uniformly at random in ϕ 1(cr). (iii) Sample xtest uniformly at random in ϕ 1(c1). The function ϱSDM(ϕ; c1, . . . , ct+1; x1, . . . , xt+1; xtest) := 1 |Φ||C|t+1 1 ϕ 1(cr)(xr) ! 1 ϕ 1(c1)(xtest) | ϕ 1(c1)| (29) on ΩSDM := Φ Ct+1 X t+2 is the p.d.f. of the above sampling process. We use PSDM to denote the probability measure on ΩSDM induced by this function. The identity in the next theorem is the central result of this section. Theorem 4. Let K denote the set of all symmetric functions from X X to R. Then sup K K PSDM h K(xtest, x1) > K(xtest, xr) for all 2 r t + 1 i = 1 |X| x X Ht(K x) (30) In (30), K x stands for the function K(x, ). Theorem 4 establishes an intimate connection between the permuted moment and the ability of any fixed feature map (or equivalently, any fixed kernel) to generalize well in our framework. The sampling process considered in this theorem involves two points, xtest and x1, generated by the same sequence of concepts c1, and t distractor points x2, . . . , xt+1 generated by different sequences of concepts. Success for the kernel K means correctly recognizing that xtest is more similar to x1 than any of the distractors, and the success rate in (30) precisely quantifies its ability to do so as a function of the number t of distractors. The theorem shows that the probability of success for the best possible kernel at this task is exactly equal to the averaged tth-permuted moment of K x, so it elegantly quantifies the generalization ability of the best possible fixed feature map in term of the permuted moment. We also provide an explicit construction for a kernel K(x, y) that achieves the supremum in (30) First, choose a kernel ε(x, y) that satisfies (i) ε(x, y) = ε(x, z) for all x, y, z X with y = z. 
(ii) 0 ε(x, y) 1 for all x, y X. and then define the following perturbation K(x, y) = K (x, y) + ε(x, y)/(2s L c |Φ|) (31) of K . Any such kernel is a maximizer of the optimization problem in (30), so we may think of perturbations of K as bona-fide optimal. The rest of this subsection is devoted to the proof of Theorem 4, and we also show, in the course of the proof, that (31) is a maximizer of the optimization problem in (30). We use K to denote the set of all symmetric functions from X X to R. We will refers to such functions as kernel despite the fact that these functions are not necessarily positive semi-definite. Proving Theorem 4 requires that we study the following optimization problem: Maximize E(K) := PSDM h K(xtest, x1) > K(xtest, xr) for all 2 r t + 1 i (32) over all kernels K K. (33) We recall the definition of the optimal kernel, K (x, y) = 1 s Lc {ϕ Φ : ϕ(xℓ) = ϕ(yℓ) for all 1 ℓ L} where sc = nw/sc denotes the size of a concept. We start with the following simple lemma: Published as a conference paper at ICLR 2023 Lemma 4. The function K x( ) = K (x, ) is a probability distribution on X. Proof. First note that K can be written as K (x, y) = 1 |{ϕ Φ : ϕ(x) = ϕ(y)}| |Φ| = 1 s Lc |Φ| ϕ Φ 1{ ϕ(x)= ϕ(y)} (35) Since ϕ maps exactly sc words to each concept c {1, . . . , nc}, we have that |{x X : ϕ(x) = c}| = s L c for all c C. (36) Therefore X y X K (x, y) = 1 s Lc |Φ| y X 1{ ϕ(x)= ϕ(y)} = 1 s Lc |Φ| ϕ Φ |{y X : ϕ(y) = ϕ(x)}| = 1 We now show that the marginal of the p.d.f. ϱSDM is related to K . Lemma 5. For all x1, . . . , xt+1 and xtest in X we have X ct+1 C ϱSDM(ϕ; c1, . . . , ct+1; x1, . . . , xt+1; xtest) = 1 |X|t+1 K (x1, xtest). Proof. Identity (36) can be expressed as ϕ 1(c) = s L c for all c C. As a consequence, definition (29) of ϱSDM(ω) simplifies to ϱSDM(ω) = α r=1 1 ϕ 1(cr)(xr) 1 ϕ 1(c1)(xtest) (37) where the constant α is given by |Φ||C|t+1s L(t+2) c = 1 |Φ|n L(t+1) c s L(t+2) c = 1 |Φ||X|t+1s Lc In the above we have used the fact that |C| = n L c and |X| = n L w. We then note that the identity 1 ϕ 1(c)(x) = 1{ ϕ(x)=c} implies X c C 1 ϕ 1(c)(x) = X c C 1{ ϕ(x)=c} = 1 (38) 1 ϕ 1(c)(x) 1 ϕ 1(c)(y) = X 1{ ϕ(x)=c} 1{ ϕ(y)=c} = 1{ ϕ(x)= ϕ(y)} (39) for all x, y X. Summing (37) over the variables c1, . . . , ct+1 we obtain ct+1 C ϱSDM(ω) = α X 1 ϕ 1(c1)(x1) 1 ϕ 1(c1)(xtest) r=2 1 ϕ 1(cr)(xr) 1 ϕ 1(c1)(x1) 1 ϕ 1(c1)(xtest) r=2 1 ϕ 1(cr)(xr) = α 1{ ϕ(x1)= ϕ(xtest)} where we have used (38) and (39) to obtain the last equality. Summing the above over the variable ϕ gives K (x1, xtest)/|X|t+1. The next lemma provides a purely algebraic (as opposed to probabilistic) formulation for the functional E(K) defined in (32). Published as a conference paper at ICLR 2023 Lemma 6. The functional E : K R can be expressed as E(K) = 1 |X| y X K (x, y) |{z X : K(x, z) < K(x, y)}| Proof. Let g : X t+2 K {0, 1} be the indicator function defined by g(x1, . . . , xt+1, xtest, K) = 1 if K(xtest, x1) > K(xtest, xr) for all 2 r t + 1 0 otherwise Let ω denote the sample (ϕ; c1, . . . , ct+1; x1, . . . , xt+1; xtest). Since g only depends on the last t+2 variables of ω, we have E(K) = PSDM h K(xtest, x1) > K(xtest, xr) for all 2 r t + 1 i (41) xtest X g(x1, . . . , xt+1, xtest, K) ϱSDM(ω) (42) xtest X g(x1, . . . , xt+1, xtest, K) ct+1 C ϱSDM(ω) xtest X g(x1, . . . , xt+1, xtest, K) 1 |X|t+1 K (x1, xtest) (44) xtest X K (x1, xtest) xt+1 X g(x1, . . . , xt+1, xtest, K) where we have used Lemma 5 to go from (43) to (44). Writing the indicator function g as a product of indicator functions, g(x1, . . . 
, xt+1, xtest, K) = r=2 1{K(xtest,x1)>K(xtest,xr)} we obtain the following expression for the term appearing between parentheses in (45): xt+1 X g(x1, . . . , xt+1, xtest, K) = 1 |X|t xr X 1{K(xtest,x1)>K(xtest,xr)} z X 1{K(xtest,x1)>K(xtest,z)} = |{z X : K(xtest, x1) > K(xtest, z)}| Changing the name of variables xtest, x1 to x, y gives (40). We now use expression (40) for E(K) and reformulate optimization problem (32)-(33) into an equivalent optimization problem over symmetric matrices. Putting an arbitrary ordering on the set X (starting with i = 0) and denoting by K ij the value of the kernel K on the pair that consists of the ith and jth element of X, we see that optimization problem (32)-(33) can be written as Maximize E(K) := 1 |{j [N] : Kij < Kij}| over all symmetric matrices K RN N (47) Published as a conference paper at ICLR 2023 In the above we have used the letter N to denote the cardinality of X, that is N = n L w, and we have used the notation [N] = {0, 1, . . . , N 1}. Before solving the matrix optimization problem (46)-(47), we start with a simpler vector optimization problem. Let p be a probability vector, that is p RN + with PN i=1 pi = 1, and consider the optimization problem: Maximize e(v) := |{j [N] : vj < vj}| over all vector v RN. (49) Recall from Definition 1 that an ordering permutation of a vector v is a permutation that sorts its entries from smallest to largest. We will say that two vectors v, w RN have the same ordering if there exist σ SN which is ordering for both v and w. The following lemma is key it shows that the optimization problem (48) (49) has a simple solution. Lemma 7. The following identity sup v RN e(v) = Ht(p ) holds. Moreover, the supremum is achieved by any vector v RN that has mutually distinct entries3 and that has the same ordering than p . Proof. Let Distinct(RN) denote the vectors of RN that have mutually distinct entries. We will first show that sup v RN e(v) = sup v Distinct(RN) e(v). (50) To do this we show that for any v RN, there exists w Distinct(RN) such that |{j [N] : vj < vj}| |{j [N] : wj < wj}| for all 0 j N 1. (51) There are many ways to construct such a w. One way is to simply set wj = σ 1(j) for some permutation σ that orders v. Indeed, note that σ 1(j) provides the position of vj in the sequence of inequality (18). Therefore if vj < vj we must have that σ 1(j ) < σ 1(j). This implies {j [N] : vj < vj} {j [N] : σ 1(j ) < σ 1(j)} for all j [N] which in turn implies (51). Because of (50) we can now restrict our attention to v Distinct(RN). Note that if v Distinct(RN), then it has a unique ordering permutation σ, vσ(0) < vσ(1) < vσ(2) < vσ(3) < . . . < vσ(N 1) and, recalling that σ 1(j) provide the position of vj in the above ordering, we clearly have that |{j [N] : v j < vj}| = σ 1(j). Therefore, if v Distinct(RN) and if σ denotes its unique ordering permutation, e(v) can be expressed as |{j [N] : vj < vj}| j=0 p σ(j) (j/N)t (52) Looking at definition (11) of the permuted moment, it is then clear that e(v) Ht(p ) for all v Distinct(RN). We then note that if v Distinct(RN) has the same ordering than p , then its unique ordering permutation σ must also be an ordering permutation of p . Then (52) combined with Lemma 1 implies that e(v) = Ht(p ). This concludes the proof. 3That is, vi = vj for all i = j. 
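Lemma 7 is easy to check numerically. The sketch below is our own sanity check, not part of the paper; it assumes the form H_t(p) = Σ_j p_{σ(j)} (j/N)^t of definition (11), with σ an ordering permutation of p, and the form of the objective e(v) read off from (52).

```python
import numpy as np

def permuted_moment(p, t):
    """H_t(p) = sum_j p_{sigma(j)} (j/N)^t with sigma sorting p in increasing order
    (assumed form of definition (11), consistent with equation (52))."""
    p = np.sort(np.asarray(p, dtype=float))
    N = len(p)
    return float(np.dot(p, (np.arange(N) / N) ** t))

def e_of_v(p, v, t):
    """Objective of problem (48)-(49): e(v) = sum_j p_j (|{j' : v_{j'} < v_j}| / N)^t."""
    p, v = np.asarray(p, float), np.asarray(v, float)
    N = len(v)
    ranks = np.array([(v < vj).sum() for vj in v])
    return float(np.dot(p, (ranks / N) ** t))

rng = np.random.default_rng(0)
p, t = rng.dirichlet(np.ones(20)), 5
v_opt = np.argsort(np.argsort(p)).astype(float)      # distinct entries, same ordering as p
print(np.isclose(e_of_v(p, v_opt, t), permuted_moment(p, t)))          # True, per Lemma 7
print(all(e_of_v(p, rng.normal(size=20), t) <= permuted_moment(p, t) + 1e-12
          for _ in range(100)))                                        # True: H_t(p) is the sup
```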
Published as a conference paper at ICLR 2023 Relaxing the symmetric constraint in the optimization problem (46)-(47) gives the following unconstrained problem over all N-by-N matrices: Maximize E(K) := 1 |{j [N] : Kij < Kij}| over all matrices K RN N (54) Let us denote by K i,: the ith row of the matrix K and remark that K i,: is a probability vector (because K (x, ) is a probability distribution on X, see Lemma 4). We then note that the above unconstrained problem decouples into N separate optimization problems of the type (48)-(49) in which the probability vector p must be replaced by the probability vector K i,:. Using Lemma 7 we therefore have that any K RN N that satisfies, for each 0 i N 1, (a) The entries of Ki,: are mutually distinct, (b) Ki,: and K i,: have the same ordering, must be a solution of (53)-(54). Lemma 7 also gives: sup K RN N E(K) = 1 i=0 Ht(K i,:). Now let ε RN N be a symmetric matrix that satisfies: (i) εij = εij for all i, j, j [N] with j = j , (ii) 0 εij 1 for all i, j [N], and define the following perturbation of the matrix K : K = K + 0.5 s Lc |Φ| ε (55) Recalling definition (35) of the kernel K , it is clear that for each i, j [N], we have K ij = ℓ s Lc |Φ| for some integer ℓ. (56) As a consequence perturbing K by adding to its entries quantities smaller than 1/(s L c |Φ|) can not change the ordering of its rows. Therefore the kernel K defined by (55) satisfies (b). It also satisfies (a). Indeed, if K ij = K ij and j = j , then we clearly have that Kij = Kij due to (i). On the other hand if K ij = K ij , then Kij = Kij due to (ii) and (56). We have therefore constructed a symmetric matrix that is a solution of the optimization problem (53)-(54). As a consequence we have sup K K E(K) = sup K RN N E(K) = 1 i=0 Ht(K i,:) where K should now be interpreted as the set of N-by-N symmetric matrices. The above equality proves Theorem 4, and we have also shown that the perturbed kernel (55) achieves the supremum. B.2 CONNECTION BETWEEN THE TWO SAMPLING PROCESSES In this subsection we show that the p.d.f. of Sampling Process SDM can be obtained by marginalizing the p.d.f. of Sampling Process DM over a subset of the variables. We also compute another marginal of ϱDM that will prove useful in the next subsection. Recall that ϱDM(ω) = 1 |Φ||C|2R 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| 1 ϕ 1(cr)(xr, s) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| (57) Published as a conference paper at ICLR 2023 on ΩDM := Φ C2R X R nspl+1 is the p.d.f. of the sampling process for our main data model. Samples from ΩDM take the form ω = (ϕ ; c1, c2, c3, . . . , c R ; c 1, c 2, c 3, . . . , c R ; x1,1, x1,2, x1,3, . . . , x1,nspl ; . . . ; x R,1, x R,2, x R,3, . . . , x R,nspl ; xtest) We separate these variables into two groups, ω = (ωa, ωb), where ωa = (ϕ ; c1, c2, c3, . . . , c R ; c 1, c 2, c 3, . . . , c R ; x1,1, x1,2 ; . . . ; x R,1, x R,2 ; xtest) (58) ωb = (x1,3, x1,4, . . . , x1,nspl ; . . . ; x R,3, x R,4, . . . , x R,nspl) (59) The variable ωa belongs to Ωa = Φ C2R X 2R+1, and the variable ωb belongs to Ωb = X R(nspl 2). Note that the variables in ωa contains, among other, 2R sequences of concepts and 2R training points (the first and second training points of each category). Each of these 2R training points is generated by one of the 2R sequences of concepts. So the variables involved in ωa are generated by a process similar to the one involved in the simpler data model. The following lemma shows that p.d.f. of ωa, after marginalizing ωb, is indeed ϱSDM. Lemma 8. 
For all ωa Ωa we have X ωb Ωb ϱDM(ωa, ωb) = 1 |Φ||C|2R 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| 1 ϕ 1(cr)(xr, 2) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| Recalling the definition (29) of ϱSDM, and letting t + 1 = 2R, we see that the above lemma states that X ωb Ωb ϱDM(ωa, ωb) = ϱSDM(ωa) (60) and Ωa = ΩSDM. Proof of Lemma 8. We start by reorganizing the terms involved in the product defining ϱDM so that the variables in ωa and ωb are clearly separated: 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| 1 ϕ 1(cr)(xr, 2) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| 1 ϕ 1(cr)(xr, s) To demonstrate the process, let us start by summing the above formula over the first variable of ωb, namely x1,3. Since this variable only occurs in the last term of the above product, we have: x1,3 X ϱDM(ω) = 1 |Φ||C|2R 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| 1 ϕ 1(cr)(xr, 2) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| 1 r R 3 s nspl (r,s) =(1,3) 1 ϕ 1(cr)(xr, s) 1 ϕ 1(c1)(x1,3) x X 1 ϕ 1(c1)(x) = | ϕ 1(c1)|, the last term of the above product is equal to 1 and can therefore be omitted. Repeating this process for all the xr,s that constitute ωb leads to the desired result. In the next subsection we will need the marginal of ϱDM with respect to another set of variables. To this aim we write ω = (ωc, ωd) where ωc = (ϕ ; x1,2, x1,3, . . . , x1,nspl ; . . . ; x R,2, x R,3, , . . . , x R,nspl ; xtest) (61) ωd = (c1, . . . , c R ; c 1, . . . , c R ; x1,1 ; . . . ; x R,1) (62) Note that all the unfamiliar training points are contained in ωd. The test point and the familiar training points are in ωc. We also let Ωc = Φ X R(nspl 1)+1 and Ωd = C2R X R. Published as a conference paper at ICLR 2023 Lemma 9. For all ωc Ωc we have ωd Ωd ϱDM(ωc, ωd) = 1 |Φ||X|R+1s LR(nspl 2) c r=1 1{ ϕ(xr,2)= ϕ(xr,3)=...= ϕ(xr,nspl)} Proof. We reorganizing the terms involved in the product defining ϱDM so that the variables in ωc and ωd are separated: ϱDM(ω) = 1 |Φ||C|2R 1 ϕ 1(cr)(xr, s) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| Summing the above formula over the last variable involved in ωd, namely x R,1, gives x R,1 X ϱDM(ω) = 1 |Φ||C|2R 1 ϕ 1(cr)(xr, s) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| 1 ϕ 1(c r)(xr, 1) | ϕ 1(c r)| 1 ϕ 1(c R)(x R,1) | ϕ 1(c R)| The last term in the above product is equal to 1 and can therefore be omitted. Iterating this process gives x R,1 X ϱDM(ω) = 1 |Φ||C|2R 1 ϕ 1(cr)(xr, s) ! 1 ϕ 1(c 1)(xtest) | ϕ 1(c 1)| We then use the fact that P c 1 C 1 ϕ 1(c 1)(xtest) = 1, see (38), together with ϕ 1(c 1) = s L c , see (36), to obtain x R,1 X ϱDM(ω) = 1 |Φ||C|2Rs Lc 1 ϕ 1(cr)(xr, s) We then sum over c 2, . . . , c R. Since these variables are not involved in the above formula we get x R,1 X ϱDM(ω) = 1 |Φ||C|R+1s Lc 1 ϕ 1(cr)(xr, s) = 1 |Φ||C|R|X| 1 ϕ 1(cr)(xr, s) where we have used |C|s L c = n L c s L c = |X| to obtain the last equality. Summing over c1 gives x R,1 X ϱDM(ω) = 1 |Φ||C|R|X| 1 ϕ 1(cr)(xr, s) = 1 |Φ||C|R|X| 1 ϕ 1(cr)(xr, s) 1 ϕ 1(c1)(x1, s) = 1 |Φ||C|R|X| 1 ϕ 1(cr)(xr, s) ! 1{ ϕ(x1,2)= ϕ(x1,3)=...= ϕ(x1,nspl)} | ϕ 1(c1)|nspl 1 To obtain the last equality we have used (39) but for a product of nspl 1 indicator functions instead of just two. Iterating this process we obtain x R,1 X ϱDM(ω) = 1 |Φ||C|R|X| 1{ ϕ(xr,2)= ϕ(x1,3)=...= ϕ(xr,nspl)} | ϕ 1(cr)|nspl 1 Using one more time that ϕ 1(cr) = s L c and |C|s L c = |X| gives to the desired result. 
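For very small parameters the whole of Subsection B.1 can be verified by brute force. The sketch below is our own check (the helper names are ours, and it reuses permuted_moment from the previous sketch): it enumerates Φ, builds K⋆, confirms that each row K⋆_x is a probability vector (Lemma 4), and compares the success rate (40) of a tie-broken kernel of the form (55) with the averaged permuted moment, as Theorem 4 predicts.

```python
import numpy as np
from itertools import permutations, product

def all_phis(n_w, n_c):
    """Enumerate the whole set Phi of equipartitions (feasible only for tiny n_w)."""
    s_c = n_w // n_c
    phis = set()
    for perm in permutations(range(n_w)):
        phi = [0] * n_w
        for c in range(n_c):
            for w in perm[c * s_c:(c + 1) * s_c]:
                phi[w] = c
        phis.add(tuple(phi))
    return sorted(phis)

def K_star_matrix(n_w, n_c, L):
    """K*(x, y) = |{phi : phi(x_l) = phi(y_l) for all l}| / (s_c^L |Phi|) over X = {0..n_w-1}^L."""
    s_c = n_w // n_c
    phis = all_phis(n_w, n_c)
    X = list(product(range(n_w), repeat=L))
    K = np.zeros((len(X), len(X)))
    for i, x in enumerate(X):
        for j, y in enumerate(X):
            agree = sum(all(phi[a] == phi[b] for a, b in zip(x, y)) for phi in phis)
            K[i, j] = agree / (s_c ** L * len(phis))
    return K, X, phis

n_w, n_c, L, t = 4, 2, 2, 3
s_c = n_w // n_c
Kstar, X, phis = K_star_matrix(n_w, n_c, L)
N = len(X)
print(np.allclose(Kstar.sum(axis=1), 1.0))     # Lemma 4: each row of K* is a probability vector

# Tie-breaking perturbation in the spirit of (55): symmetric, entries in [0, 1),
# distinct within each row, scaled below the spacing 1/(s_c^L |Phi|) of the values of K*.
eps = np.add.outer(np.arange(N), np.arange(N)) / (2.0 * N)
K = Kstar + eps * 0.5 / (s_c ** L * len(phis))

def success_rate(K, Kstar, t):
    """E(K) from (40): average over x of sum_y K*(x, y) (|{z : K(x, z) < K(x, y)}| / N)^t."""
    N = len(K)
    total = 0.0
    for i in range(N):
        ranks = np.array([(K[i] < K[i, j]).sum() for j in range(N)])
        total += np.dot(Kstar[i], (ranks / N) ** t)
    return total / N

print(np.isclose(success_rate(K, Kstar, t),
                 np.mean([permuted_moment(Kstar[i], t) for i in range(N)])))   # True, per Theorem 4
```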
Published as a conference paper at ICLR 2023 x21 = [ butter, pork, lamb, lamb, yogurt ] x22 = [ chicken, cheese, cream, lettuce, beef ] x23 = [ beef, cheese, cheese, carrot, pork ] x24 = [ lamb, butter, cream, potato, lamb ] x25 = [ chicken, cream, butter, leek, pork ] x31 = [ chicken, cheese, cheese, yorgurt, carrot ] x32 = [ leek, pork, cream, lamb, butter ] x33 = [ carrot, lamb, cheese, pork, yogurt ] x34 = [ lettuce, chicken, cheese, chicken, yogurt ] x35 = [ leek, chicken, butter, lamb, cheese ] 3 = [ meat, dairy, dairy, dairy, veggie ] c3 = [ veggie, meat, dairy, meat, dairy ] Category 1 Category 2 Category 3 15 sentences (the training set) 6 sequences of concepts (latent variables) 1 = [ dairy, dairy, veggie, meat, veggie ] c1 = [ veggie, meat, dairy, veggie, dairy ] x11 = [ cheese, butter, lettuce, chicken, leek ] x12 = [ carrot, pork, cream, carrot, cheese ] x13 = [ lettuce, chicken, butter, potato, butter ] x14 = [ lettuce, beef, yogurt, leek, cream ] x15 = [ potato, lamb, butter, potato, yogurt ] 2 = [ dairy, meat, meat, meat, dairy ] c2 = [ meat, dairy, dairy, veggie, meat ] xtest = [ yogurt, butter, carrot, chicken, carrot ] Figure 4: The test point xtest and the train point x1,1 are generated by the same sequence of concepts. B.3 CONCLUSION OF PROOF We now establish the desired upper bound (28), which we restate below for convenience: sup K K PDM h EK i 1 |X| x X H2R 1 (K (x, )) + 1 EK = n ω ΩDM : There exists 1 s nspl such that K xtest, x1,s > K xtest, xr,s for all 2 r R and all 1 s nspl o (64) We recall that the test point xtest is generated by the unfamiliar sequence of concepts c 1 and that it belongs to category 1, see Figure 4. The event EK describes all the outcomes in which the training point most similar to xtest (where similarity is measured with respect to the kernel K) belongs to the first category. There are two very distinct cases within the event EK: the training point most similar to xtest can be x1,1 this corresponds to a meaningful success in which the learner recognizes that x1,1 is generated by the same sequence of concepts than xtest, see Figure 4. Or the training point most similar to xtest can be one of the points x1,2, . . . , x1,nspl this corresponds to a lucky success because x1,2, . . . , x1,nspl are not related to xtest (they are generated by a different sequence of concept, see Figure 4). To make this discussion formal, we fix a kernel K K, and we partition the event EK as follow EK = Emeaningful Eluck (65) Emeaningful = EK n ω ΩDM : K xtest, x1,1 > K xtest, x1,s for all 2 s nspl o Eluck = EK n ω ΩDM : K xtest, x1,1 K xtest, x1,s for some 2 s nspl o The next two lemmas provide upper bounds for the probability of the events Emeaningful and Eluck. Lemma 10. PDM[Emeaningful] 1 |X| x X H2R 1(K x). Published as a conference paper at ICLR 2023 Proof. Define the event A := n ω ΩDM : K xtest, x1,1 > K xtest, xr,1 for all 2 r R o n ω ΩDM : K xtest, x1,1 > K xtest, xr,2 for all 1 r R o . This events involves only the first two training points of each category. On the example depicted on Figure 4, that would be the points x1,1 and x1,2, the points x2,1 and x2,2, and finally the points x3,1 and x3,2. The event A consists in all the outcomes in which, among these 2R training points, x1,1 is most similar to xtest. We then make two key remarks. First, these 2R points are generated by 2R distinct sequences on concepts so if we restrict our attention to these 2R points, we are in a situation very similar to the simpler data model ϱSDM (i.e. 
we first generate 2R sequences of concepts, then from each sequence of concepts we generate a single training point, and finally we generate a test point from the first sequence of concepts.) We will make this intuition precise by appealing to the fact that ϱSDM is the marginal of ϱDM, and this will allow us to obtain a bound for PDM[A] in term of the permuted moment of K . The second remark is that Emeaningful is clearly contained in A, and therefore we have PDM[Emeaningful] PDM[A] (66) so an upper bound for PDM[A] is also an upper bound for PDM[Emeaningful]. Let us rename some of the variables. We define d1, . . . , d2R, and y1, . . . , y2R as follow: d2r 1 = c r and d2r = cr for r = 1, . . . , R y2r 1 = xr,1 and y2r = xr,2 for r = 1, . . . , R On the example depicted on Figure 4, that would be: y1 = x1,1, y2 = x1,2, y3 = x2,1, y4 = x2,2, y5 = x3,1, y6 = x3,2 d1 = c1, d2 = c 1, d3 = c2, d4 = c 2, d5 = c3, d6 = c 3 In other words, the yr s are the first two training points of each category and the dr s are their corresponding sequence of concepts. With these notations it is clear that the training points yr are generated by distinct sequences of concepts, and that the test point xtest is generated by the same sequence of concepts than y1. Moreover the event A can now be conveniently written as A = {ω ΩDM : K xtest, y1 > K xtest, yr for all 2 r 2R}. Let h : X 2R+1 R be the indicator function defined by h(y1, . . . , y2R, xtest) = 1 if K (xtest, y1) > K (xtest, xr) for all 2 r 2R 0 otherwise We now recall the splitting ω = (ωa, ωb) described in (58)-(59) and note that ωa can be written as ωa = (ϕ ; d1, . . . , d2R ; y1, . . . , y2R ; xtest) Since h only depends on the variables involved in ωa, and since, according to Lemma 8, P ωb ϱDM(ωa, ωb) = ϱSDM(ωa), we obtain ωb Ωb h(y1, . . . , y2R, xtest) ϱDM(ωa, ωb) ωa Ωa h(y1, . . . , y2R, xtest) ϱSDM(ωa) = PSDM[ωa Ωa : K xtest, y1 > K xtest, yr for all 2 r 2R] x X H2R 1(K x) where we have used Theorem 4 in order to get the last inequality (with the understanding that t + 1 = 2R.) Combining the above bound with (66) concludes the proof. Published as a conference paper at ICLR 2023 We now estimate the probability of the event Eluck. Lemma 11. PDM[Eluck] 1 Proof. For 1 r R, we define the events ω ΩDM : max 2 s nspl K(xtest, xr,s) > max 2 s nspl K(xtest, xr ,s ) Note that the events Br involve only the training points with an index s 2: these are the familiar training points. On the example depicted on Figure 4, these are the training points generated by c1, c2 and c3. Let us pursue with this example. The event B1 consists in all the outcomes in which one of the points generated by c1 is more similar to xtest than any of the points generated by c2 and c3. The event B2 consists in all the outcomes in which one of the points generated by c2 is more similar to xtest than any of the points generated by c1 and c3. And finally the event B3 consists in all the outcomes in which one of the points generated by c3 is more similar to xtest than any of the points generated by c1 and c2. Importantly, the test point xtest is generated by the unfamiliar sequence of concepts c 1, and this sequence of concept is unrelated to the sequence c1, c2 and c3. So one would expect that, from simple symmetry, that PDM[B1] = PDM[B2] = PDM[B3]. (67) We will prove (67) rigorously, for general R, using Lemma 9 from the previous subsection. But before to do so, let us show that (67) implies the desired upperbound on the probability of Eluck. 
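The symmetry claim (67) can also be checked by simulation before it is proved. The sketch below is our own illustration (it reuses sample_DM and all_phis from earlier sketches, with tiny parameters so that Φ can be enumerated); it estimates P_DM[B_r] for each r by Monte Carlo.

```python
import random

def K_star(x, y, phis, s_c):
    """K*(x, y) for two sentences, by enumeration of Phi."""
    agree = sum(all(phi[a] == phi[b] for a, b in zip(x, y)) for phi in phis)
    return agree / (s_c ** len(x) * len(phis))

def estimate_B_probs(L, n_w, n_c, R, n_spl, trials=5000, rng=random):
    """Monte-Carlo estimate of P_DM[B_r]: among the familiar training points (s >= 2),
    those of category r contain the point strictly most similar to x_test."""
    s_c = n_w // n_c
    phis = all_phis(n_w, n_c)                    # from the earlier brute-force sketch
    counts = [0] * R
    for _ in range(trials):
        _, _, _, train, x_test = sample_DM(L, n_w, n_c, R, n_spl, n_unf=1, rng=rng)
        best = [max(K_star(x_test, x, phis, s_c) for x in train[r][1:]) for r in range(R)]
        for r in range(R):
            if all(best[r] > best[q] for q in range(R) if q != r):
                counts[r] += 1
    return [c / trials for c in counts]

print(estimate_B_probs(L=2, n_w=4, n_c=2, R=3, n_spl=3))
# roughly equal values, each at most 1/R = 1/3 (ties between categories make the B_r non-exhaustive)
```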
First, note that Eluck B1 and therefore PDM[Eluck] PDM[B1]. (68) Then, note that B1, B2 and B3 are mutually disjoints, and therefore, continuing with the same example, PDM[B1] + PDM[B2] + PDM[B3] = PDM[B1 B2 B3] 1 which, combined with (67) and (68), gives PDM[Eluck] 1/3 as desired. We now provide a formal proof of (67). As in the proof of the previous lemma, it is convenient to rename some of the variables. Let denote by famr the variable that consists in the familiar training points from category r: famr = (xr,2, . . . , xr,nspl) X nspl 1 With this notation we have famr,s = xr,s+1. We now recall the splitting ω = (ωc, ωd) described in (61)-(62), and note that ωc can be written as ωc = (ϕ; fam1 ; . . . ; fam R ; xtest). (69) Using Lemma 9 we have ωd Ωd ϱDM(ωc, ωd) = α r=1 1{ ϕ(xr,2)= ϕ(xr,3)=...= ϕ(xr,nspl)} (70) r=1 1{ ϕ(famr,1)= ϕ(famr,2)=...= ϕ(famr,nspl 1)} (71) r=1 h(ϕ, famr) (72) where α is the constant appearing in front of the product in Lemma 9 an h : Φ X nspl 1 {0, 1} is the indicator function implicitly defined in equality (71)-(72). With the slight abuse of notation of viewing famr as a set instead of a tuple, let us rewrite the event Br as ω ΩDM : max x famr K(xtest, x) > max x famr K(xtest, x) Published as a conference paper at ICLR 2023 We also define the corresponding indicator function g(famr, famr , xtest) = 1 if max x famr K(xtest, x) > max x famr K(xtest, x) 0 otherwise We now compute PDM(B1) (the formula for the other PDM(Br) are obtained in a similar manner.) Recall from (69) that the variables involved in famr only appear in ωc. Therefore PDM[B1] = X r=2 g(fam1, famr, xtest) ϱDM(ωc, ωd) (73) r=2 g(fam1, famr, xtest) ωd Ωd ϱDM(ωc, ωd) (74) r=2 g(fam1, famr, xtest) r =1 h(ϕ, famr ) (75) where we have used (72) to obtain the last equality. Let us now compare PDM(B1) and PDM(B2). Letting Z := X nspl 1, and recalling that ωc = (ϕ; fam1 , . . . , fam R ; xtest), we obtain: PDM[B1] = X g(fam1, famr, xtest) r =1 h(φ, fam r) PDM[B2] = X g(fam2, famr, xtest) r =1 h(φ, fam r) From the above it is clear that PDM[B1] = PDM[B2] (as can be seen by exchanging the name of the variables fam1 and fam2). Similar reasoning shows that the events Br are all equiprobable, which concludes the proof. Combining Lemma 10 and 11, together with the fact that EK = Ecorrect Eluck, concludes the proof of (63). Inequality (63) implies inequality (26), which itself is equivalent to inequality (14). C UPPER BOUND FOR THE PERMUTED MOMENT OF K This section is devoted to the proof of inequality (15), which we state below as a theorem for convenience. Theorem 5. For all 0 ℓ L, we have the upper bound x X Ht(K x) k Sℓ f(k)g(k) max k Sℓf(k) . (76) The rather intricate formula for the function f and g can be found in the main body of the paper, but we will recall them as we go through the proof. We also recall that the optimal kernel is given by the formula: K (x, y) = 1 s Lc {ϕ Φ : ϕ(xℓ) = ϕ(yℓ) for all 1 ℓ L} The key insight to derive the upper bound (76) is to note that each pair of sentences (x, y) induces a graph on the vocabulary {1, 2, . . . , nw}, and that the quantity {ϕ Φ : ϕ(xℓ) = ϕ(yℓ) for all 1 ℓ L} Published as a conference paper at ICLR 2023 Table 2: First five rows of the Strirling triangle for the Stirling numbers n k . n\k 1 2 3 4 5 1 1 2 1 1 3 1 3 1 4 1 7 6 1 5 1 15 25 10 1 can be interpreted as the number of equipartitions of the vocabulary that do not sever any of the edges of the graph. This graph-cut interpretation of the optimal kernel is presented in detail in Subsection C.1. 
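The recurrence behind Table 2 is straightforward to implement; the short sketch below (our own, with a hypothetical function name) reproduces the first rows of the Stirling triangle.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of n objects into k nonempty subsets,
    via the Pascal-like recurrence S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

for n in range(1, 6):
    print([stirling2(n, k) for k in range(1, n + 1)])
# [1], [1, 1], [1, 3, 1], [1, 7, 6, 1], [1, 15, 25, 10, 1]  -- the rows of Table 2
```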
In Subsection C.2 we derive a formula for K which is more tractable than (77). To do this we partition X X into subsets on which K is constant, then provide a formula for the value of K on each of these subsets (c.f. Lemma 16). With this formula at hand, we then appeal to Lemma 3 to derive a first bound for the permuted moment of K (c.f. Lemma 17). This first bound is not fully explicit because it involves the size of the subsets on which K is constant. In section C.3 we appeal to Cayley s formula, a classical result from graph theory, to estimate the size of these subsets (c.f. Lemma 18) and therefore conclude the proof of Theorem 5. We now introduce the combinatorial notations that will be used in this section, and we recall a few basics combinatorial facts. We denote by N = {0, 1, 2, . . .} the nonnegative integers. We use the standard notations n k := n! k!(n k)! and n k1, k2, . . . , km := n! k1!k2! kn! for the binomial and multinomial coefficients, with the understanding that 0! = 1. We recall that multinomial coefficients can be interpreted as the number of ways of placing n distinct objects into m distinct bins, with the constraint that k1 objects must go in the first bin, k2 objects must go in the second bin, and so forth. The Stirling numbers of the second kind n k are close relatives of the binomial coefficients. n k stands for the number of ways to partition a set of n objects into k nonempty subsets. To give a simple example, 4 2 = 7 because there are 7 ways to partition the set {1, 2, 3, 4} into two nonempty subsets, namely: {1} {2, 3, 4}, {2} {1, 3, 4}, {3} {1, 2, 4}, {4} {1, 2, 3}, {1, 2} {3, 4}, {1, 3} {2, 4}, {1, 4} {3, 4}. Stirling numbers are easily computed via the following variant of Pascal s recurrence formula 4: n 1 = 1 for n 1, n k for 2 k n 1. The above formula is easily derived from the definition of the Stirling numbers as providing the number of ways to partition a set of n objects into k nonempty subsets (see for example chapter 6 of ?). Table 2 shows the first few Stirling numbers. We recall that an undirected graph is an ordered pair G = (V, E), where V is the vertex set and E {{v, v } : v, v V and v = v } is the edge set. Edges are unordered pairs of distinct vertices (so loops are not allowed.) A tree is a connected graph with no cycles. A tree on n vertices has exactly n 1 edges. Cayley s formula states that there are nn 2 ways to put n 1 edges on n labeled vertices in order to make a tree. We formally state this classical result below: Lemma 12 (Cayley s formula). There are nn 2 trees on n labeled vertices. 4Alternatively, Stirling numbers can be defined through the formula n k = 1 k! Pk i=0( 1)i k k i (k i)n Published as a conference paper at ICLR 2023 C.1 GRAPH-CUT FORMULATION OF THE OPTIMAL KERNEL In this section we consider undirected graphs on the vertex set V := {1, 2, . . . , nw}. Since the data space X consists of sentences of length L, graphs that have at most L edges will be of particular importance. We therefore define: G := {All graphs on V that have at most L edges}. In other words, G consists in all the graphs G = (V, E) whose edge set E has cardinality less or equal to L. Since these graphs all have the same vertex set, we will often identify them with their edge set. We now introduce a mapping between pairs of sentences containing L words, and graphs containing at most L edges. Definition 2. The function ζ : X X G is defined by ζ(x, y) := [ 1 ℓ L xℓ =yℓ {{xℓ, yℓ} . (78) The right hand side of (78) is a set of at most L edges. 
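Cayley's formula can be verified by exhaustive enumeration for small n. The sketch below is our own illustration: it counts, among all graphs with n − 1 edges on n labeled vertices, those that are trees.

```python
from itertools import combinations

def is_tree(n, edges):
    """A graph on n labeled vertices with n-1 edges is a tree iff it has no cycle."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False           # adding this edge would close a cycle
        parent[ru] = rv
    return True                    # n-1 edges and no cycle => connected => tree

def count_labeled_trees(n):
    all_edges = list(combinations(range(n), 2))
    return sum(is_tree(n, e) for e in combinations(all_edges, n - 1))

print([count_labeled_trees(n) for n in range(2, 6)])   # [1, 3, 16, 125] = n^(n-2), Cayley's formula
```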
Since graphs in G are identified with their edge set, ζ indeed define a mapping from X X to G. Let us give a few examples illustrating how the map ζ works. Suppose we have a vocabulary of nw = 10 words and sentences of length L = 6. Consider the pair of sentences (x, y) X X where x = [ 2, 2, 8, 5, 9, 7 ] y = [ 2, 5, 8, 2, 2, 1 ] (79) Then ζ(x, y) is the set of 3 edges ζ(x, y) = n {2, 5}, {9, 2}, {7, 1} o . which indeed define a graph on V. Note that position ℓ= 2 and ℓ= 4 of (x, y) code for the same edge {2, 5}, position 5 codes for the edge {9, 2}, and position 6 codes for the edge {7, 1}. On the other hand, position 1 and 3 do not code for any edge: indeed, since x1 = y1 and x3 = y3, these two positions do not contribute any edges to the edge set defined by (78). We will say that positions 1 and 3 are silent. We make this terminology formal in the definition below: Definition 3. Let (x, y) X X. If xℓ= yℓfor some 1 ℓ L, we say that position ℓof the pair (x, y) is silent. If xℓ = yℓfor some 1 ℓ L, we say that position ℓof the pair (x, y) codes for the edge {xℓ, yℓ}. Note that if (x, y) has some silent positions, or if multiple positions codes for the same edge, then the graph ζ(x, y) will have strictly less than L edges. On the other hand, if none of these take place, then ζ(x, y) will have exactly L edges. For example the pair of sentences x = [ 1, 1, 1, 5, 6, 7] y = [ 2, 3, 4, 6, 7, 1] (80) does not have silent positions, and all positions code for different edges. The corresponding graph ζ(x, y) = n {1, 2}, {1, 3}, {1, 4}, {5, 6}, {6, 7}, {7, 1} o has the maximal possible number of edges, namely L = 6 edges. From the above discussion, it is clear that any graph with L or less edges can be expressed as ζ(x, y) for some pair of sentences (x, y) X X. Therefore ζ : X X G is surjective. On the other hand, different pair of sentences can be mapped to the same graph. Therefore ζ is not injective. We now introduce the following function. Definition 4 (Number of cut-free equipartitions of a graph). The function I : G N is defined by : I(G) = |{ϕ Φ : ϕ(v) = ϕ(v ) for all edge {v, v } of the graph G}| (81) Published as a conference paper at ICLR 2023 (a) Cut-free (b) Not cut-free Figure 5: Two equipartitions of the same graph (each subsets of the equipartitions contain 7 vertices). The equipartition on the left is cut-free (no edges are severed). The equipartition on the right is not cut-free (4 edges are severed). The optimal kernel K (x, y) can be interpreted as the number of distinct cut-free equipartitions of the graph ζ(x, y) (modulo some scaling factor.) Recall that Φ is the set of maps ϕ : V {1, . . . , nc} that satisfy |ϕ 1(c)| = sc for all 1 c nc. Given a graph G, the quantity I(G) can therefore be interpreted as the number of ways to partition the vertices into nc labelled subsets of equal size so that no edges are severed (i.e. two connected vertices must be in the same subset.) In other words, I(G) is the number of cut-free equipartition of the graph G, see Figure 5 for an illustration. Note that if the graph G is connected, then I(G) = 0 since any equipartition of the graph will severe some edges. On the other hand, if the graph G has no edges, then I(G) = |Φ| since there are no edges to be cut (and therefore any equipartition is acceptable.) The optimal kernel K can be expressed as a composition of the function ζ and I. 
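The map ζ and the count I(G) are simple to code, and composing them recovers K⋆, as announced above and made precise in (84) below. The following sketch is our own illustration; it reuses all_phis from an earlier sketch and uses tiny parameters so that Φ can be enumerated.

```python
from itertools import product

def zeta(x, y):
    """Edge set zeta(x, y) of Definition 2: one edge {x_l, y_l} per non-silent position."""
    return {tuple(sorted((a, b))) for a, b in zip(x, y) if a != b}

def I(G, phis):
    """I(G), Definition 4: number of equipartitions phi that do not sever any edge of G."""
    return sum(all(phi[u] == phi[v] for (u, v) in G) for phi in phis)

# Example (79): positions 2 and 4 code for {2, 5}, position 5 for {2, 9}, position 6 for {1, 7};
# positions 1 and 3 are silent.
print(zeta((2, 2, 8, 5, 9, 7), (2, 5, 8, 2, 2, 1)))    # {(2, 5), (2, 9), (1, 7)}

# Graph-cut formulation: |{phi : phi(x_l) = phi(y_l) for all l}| = I(zeta(x, y)) for every pair,
# hence K*(x, y) = I(zeta(x, y)) / (|Phi| s_c^L).
n_w, n_c, L = 4, 2, 2
s_c, phis = n_w // n_c, all_phis(n_w, n_c)             # all_phis from the earlier sketch
ok = True
for x in product(range(n_w), repeat=L):
    for y in product(range(n_w), repeat=L):
        agree = sum(all(phi[a] == phi[b] for a, b in zip(x, y)) for phi in phis)
        ok &= (agree == I(zeta(x, y), phis))
print(ok)                                              # True
```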
Indeed: K (x, y) = 1 |Φ|s Lc |{ϕ Φ : ϕ(xℓ) = ϕ(yℓ) for all 1 ℓ L}| (82) = 1 |Φ|s Lc |{ ϕ Φ : ϕ(v) = ϕ(v ) for all {v, v } ζ(x, y) }| (83) = 1 |Φ|s Lc I(ζ(x, y)) (84) where we have simply used that ζ(x, y) := S 1 ℓ L xℓ =yℓ {{xℓ, yℓ} to go from (82) to (83). We will refer to (84) as the graph-cut formulation of the optimal kernel. We have discussed earlier that the function ζ : X X G is surjective but not injective. We conclude this subsection with a lemma that provides an exact count of how many distinct (x, y) are mapped by ζ to a same graph G. Lemma 13. Suppose G G has m edges. Then |ζ 1(G)| = |{(x, y) X X : ζ(x, y) = G}| = m! 2αn L α w . Proof. Fix once and for all a graph G = (V, E) with edge set E = {e1, . . . , em} where m L. Given 0 α L, define the set Oα = {(x, y) X X : ζ(x, y) = G and (x, y) has exactly α non-silent positions}. We start by noting that the set Oα is empty for all α < m: indeed, since G has m edges, at least m positions of a pair (x, y) must be coding for edges (i.e., must be non-silent) in order to have ζ(x, y) = G. We therefore have the following partition: α=m Oα and Oα Oα = if α = α . To conclude the proof, we need to show that m! 2α n L α w for all m α L. (85) Published as a conference paper at ICLR 2023 Consider the following process to generate an ordered pair (x, y) that belong to Oα: we start by deciding which positions of (x,y) are going to be silent, and which positions are going to code for which edge of the graph G. This is equivalent to choosing a map ρ : {1, . . . , L} {e1, . . . , em, s} where {1, . . . , L} denotes the L positions, e1, . . . , em denote the m edges of the graph G, and s is the silent symbol. Choosing a map ρ correspond to assigning a role to each position: ρ(ℓ) = ei means that position ℓis given the role to code for edge ei, and ρ(ℓ) = s means that position ℓis given the role of being silent. The map ρ must satisfy: |ρ 1(s)| = L α and ρ 1(ei) = for 1 i m (86) because L α position must be silent and each edge must be represented by at least one position. The number of maps ρ : {1, . . . , L} {e1, . . . , em, s} that satisfies (86) is equal to L L α Indeed, we need to choose the L α positions that will be mapped to the silent symbol s: there are L L α ways of accomplishing this. We then partition the α remaining positions into m non-empty subsets: there are α m ways of accomplishing this. We finally map each of these non-empty subset to a different edge: there are m! ways of accomplishing this. We have shown that there are L α α m m! ways to assign roles to the positions. Let say that position ℓis assigned the role of coding for edge {v, v }. Then we have two choices to generate entries xℓ and yℓ: either xℓ= v and yℓ= v , or xℓ= v and yℓ= v. Since α positions are coding for edges, this lead to the factor 2α in equation (85). Finally, if the position ℓis silent, then we have nw choices to generate entries xℓand yℓ(because we need to choose v V such that xℓ= yℓ= v) , hence the factor n L α w appearing in (85). C.2 LEVEL SETS OF THE OPTIMAL KERNEL We recall that a connected component (or simply a component) of a graph is a connected subgraph that is not part of any larger connected subgraph. Since graphs in G have at most L edges, their components contain at most L + 1 vertices. This comes from the fact that the largest component that can be made with L edges contains L + 1 vertices. It is therefore natural, given a vector k = [k1, . . . 
, k L+1] NL+1, to define Gk := {G G : G has exactly k1 components of size 1, exactly k2 components of size 2, ..., exactly k L+1 components of size L + 1 } (87) where the size of a component refers to the number of vertices it contains. We recall that N = {0, 1, 2, . . .} therefore some of the entries of the vector k can be equal to zero. Note that components of size 1 are simply isolated vertices. The following lemma identify which k NL+1 lead to nonempty Gk. Lemma 14. The set Gk is not empty if and only if k satisfies i=1 iki = nw and i=1 (i 1)ki L. (88) Proof. Suppose Gk is not empty. Then there exists a graph G G that has exactly ki components of size i, for 1 i L + 1. A component of size i contains i vertices (by definition) and at least i 1 edges (otherwise it would not be connected.) Since G G it must have nw vertices and at most L edges. Therefore (88) must hold. Suppose that k NL+1 satisfies (88). Then we can easily construct a graph G on V that has a number of edges less or equal to L, and that has exactly ki components of size i, for 1 i L+1. To do this we first partition the vertices into subsets so that there are ki subsets of size i, for 1 i L+1. We then put i 1 edges on each subset of size i so that they form connected components. The resulting graph has ki components of size i, for 1 i L + 1, and PL+1 i=1 (i 1)ki edges. Published as a conference paper at ICLR 2023 The previous lemma allows us to partition G into non-empty subsets as follow: k S Gk, Gk = for all k S, and Gk Gk = if k = k (89) i=1 iki = nw and i=1 (i 1)ki L Recall that I(G) count the number of equipartitions that do not severe edges of G. The next lemma shows that two graphs that belongs to the same subset Gk have the same number of cut-free equipartitions, and it provides a formula for this number in term of the index k of the subset. Lemma 15. Suppose k S and define the set of admissible assignment matrices A N(L+1) nc : j=1 Aij = ki for all i and i=1 i Aij = sc for all j Then for all G Gk, we have that ki Ai,1, Ai,2, . . . , Ai,nc Let us remark that, since 0! = 1, the multinomial coefficient ki Ai,1,Ai,2,...,Ai,nc appearing in (92) is equal to 1 when ki is equal to 0. Poof of Lemma 15. Let k S and fix once and for all a graph G Gk. Define the set Ψ = {ϕ Φ : ϕ(v) = ϕ(v ) for all edge {v, v } of the graph G} so that I(G) = |Ψ|. Note that a map ϕ that belongs to Ψ must map all vertices that are in a connected component to the same concept (otherwise some edges of G would be severed.) So a map ϕ Ψ can be viewed as assigning connected components to concepts. Given a matrix A N(L+1) nc we define the set: ΨA = {ϕ Ψ : ϕ assigns Aij components of size i to concept j, for all 1 i L+1 and 1 j nc}. We then note that the set ΨA is empty unless the matrix A satisfies: Ai,1 + Ai,2 + Ai,3 + . . . + Ai,nc = ki for all 1 i L + 1 A1,j + 2A2,j + 3A3,j + . . . + (L + 1)AL+1,j = sc for all 1 j nc. The first constraint states that the total number of connected components of size i is equal to ki (because G Gk). The second constraint states that concept j must receive a total of sc vertices (because ϕ Φ.) The matrices that satisfy these two constraints constitute the set Ak defined in (91). We therefore have the following partition of the set Ψ: A Ak ΨA, ΨA = if A Ak , ΨA ΨB = if A = B. To conclude the proof, we need to show that ki Ai,1, Ai,2, . . . , Ai,nc for all A Ak. (93) To see this, consider the ki components of size i. 
The number of ways to assign them to the nc concepts so that concept j receives Aij of them is equal to the multinomial coefficient ki Ai,1,Ai,2,...,Ai,nc . Repeating this reasonning for the components of each size gives (93). We now leverage the previous lemma to obtain a formula for K . For k S we define Ωk := ζ 1(Gk). Published as a conference paper at ICLR 2023 Since ζ : X X G is surjective, partition (89) of G induces the following partition of X X: k S Ωk, Ωk = if k S and Ωk Ωk = if k = k . (94) Using the graph-cut formulation of the optimal kernel together with Lemma 15 we therefore have K (x, y) = 1 |Φ|s Lc I(ζ(x, y)) = 1 |Φ|s Lc ki Ai,1, Ai,2, . . . , Ai,nc for all (x, y) Ωk. (95) The above formula is key to our analysis. We restate it in the lemma below, but in a slightly different format that will better suit the rest of the analysis. Let f : S R be the function defined by f(k) := n L c (sc!)nc ki Ai,1, Ai,2, . . . , Ai,nc We then have: Lemma 16 (Level set decomposition of K ). The kernel K is constant on each subsets Ωk of the partition (94). Moreover we have K (x, y) = f(k)/|X| for all (x, y) Ωk and for all k S. Proof. The quantity |Φ| appearing in (95) can be interpreted as the number of ways to assign the nw words to the nc concepts so that each concept receives sc words. Therefore |Φ| = nw sc, sc, . . . , sc = nw! (sc!)nc . Combined with the fact that |X| = n L w, this leads to the desired formula for K . The above lemma provides us with the level sets of the optimal kernel. Together with Lemma 3, this allows us to derive the following upper bound for the permuted moment of K . Lemma 17. Let Ω= X X. The inequality x X Ht(K x) max k S f(k) holds for all S S. Proof. Let S S and define: λ = max k S max (x,y) Ωk K (x, y) = 1 |X| max k S f(k) where we have used the fact that K is equal to f(k)/|X| on Ωk. We then appeal to Lemma 3 to obtain: x X Ht (K (x, )) 1 |X| t + 1 + 1 X y X min{K (x, y), λ} t + 1 + 1 1 y X min{K (x, y), λ} (98) t + 1 + 1 1 (x,y) Ωk min{K (x, y), λ} (99) t + 1 + 1 1 (x,y) Ωk min{K (x, y), λ} (100) t + 1 + 1 1 (x,y) Ωk K (x, y) (101) t + 1 + 1 1 k S |Ωk|f(k) Published as a conference paper at ICLR 2023 where we have use the fact that X X = S k S Ωk to go from (98) to (99), and the fact that K (x, y) λ on S k S Ωk to go from (100) to (101). To conclude the proof, we simply note that λ|X| = maxk S f(k) according to our definition of λ. The bound provided by the above lemma is not fully explicit because it involves the size of level sets Ωk. In the next section, we appeal to Cayley s formula to obtain a lower bound for |Ωk|. C.3 FOREST LOWER BOUND FOR THE SIZE OF THE LEVEL SETS We recall that a forest is a graph whose connected components are trees (equivalently, a forest is a graph with no cycles.) Let us define: F := {G G : G is a forest}. In other words, F is the set of forests on V = {1, 2, . . . , nw} that have at most L edges. We obviously have the following lower bound on the size of the level sets: |Ωk| = ζ 1(Gk) ζ 1(Gk F) . (103) In this subsection, we use Cayley s formula to derive an explicit formula for ζ 1(Gk F) . We start with the following lemma: Lemma 18. Let k S, then |Gk F| = nw! k1!k2! k L+1! Proof. First we note that (104) can be written as |Gk F| = nw k1, 2k2, . . . , (L + 1)k L+1 iki i, i, . . . , i We now explain the above formula. The set Gk F consists in all the forests that have exactly k1 trees of size 1, k2 trees of size 2, ... , k L+1 trees of size L + 1. 
In order to construct a forest with this specific structure, we start by assigning the nw vertices to L + 1 bins, with bin 1 receiving k1 vertices, bin 2 receiving 2k2 vertices, ..., bin L + 1 receiving (L + 1)k L+1 vertices. The number of ways of accomplishing this is nw k1, 2k2, . . . , (L + 1)k L+1 Let us now consider the vertices in bin i for some i 2. We claim that there are 1 ki! iki i, i, . . . , i ways of putting edges on these iki vertices in order to make ki trees of size i. Indeed, there are 1 ki! iki i,i,...,i ways of partitioning the vertices into ki disjoint subsets of size i, and then, according to Cayley s formula, there are ii 2 ways of putting edges on each of these subsets so that they form a tree. To conclude, we remark that there is obviously only one way to to make k1 trees of size 1 out of the k1 vertices in the first bin. Recall that a tree on n vertices always has n 1 edges. So a graph that belongs to Gk F has i=1 (i 1)ki edges since it is made of k1 trees of size 1, k2 trees of size 2, ..., k L+1 trees of size (L+1). The fact that all graphs in Gk F have the same number of edges allows us to to obtain an explicit formula for |ζ 1(Gk F)| by combining combine Lemma 13 and 18, namely |ζ 1(Gk F)| = nw! k1!k2! k L+1! Published as a conference paper at ICLR 2023 This lead us to define the function g : S R by g(k) = 1 n2L w nw! k1!k2! k L+1! where γ(k) = i=1 (i 1)ki. Recalling (103) we therefore have that |Ω| g(k) for all k S. Combining the above inequality with Lemma 17 we obtain: Theorem 6. The inequality x X Ht(K x) k S g(k) f(k) max k S f(k) holds for all S S. The above theorem is more general than Theorem 5 indeed, in Theorem 5, the choice of the subset S is restricted to the L + 1 candidates: i=1 iki = nw and ℓ i=1 (i 1)ki L where ℓ= 0, 1, . . . , L. When L = 9, nw = 150, nc = 5 and t = 1999 (these are the parameters used in Theorem 1), choosing S = S7 leads to a relatively tight upper bound. When L = 15, nw = 30, nc = 5 and t = 5999 (these are the parameters corresponding the the second experiment of the experimental section), choosing S = S11 gives a good upper bound. D MULTIPLE UNFAMILIAR SENTENCES PER CATEGORY In the data model depicted in Figure 1, each unfamiliar sequence of concept has a single representative in the training set. In this section we consider the more general case in which each unfamiliar sequence of concepts has n representatives in the training set, where An example with n = 2 is depicted in Figure 6. The variables L, nw, nc, R, nspl and n parametrize instances of this more general data model, and the associated sampling process is: Sampling Process DM2: (i) Sample T = ( ϕ ; c1, . . . , c R ; c 1, . . . , c R ) uniformly at random in T = Φ Z2R. (ii) For r = 1, . . . , R: Sample (xr,1, . . . , xr,n ) uniformly at random in ϕ 1(c r) . . . ϕ 1(c r). Sample (xr,n +1, . . . , xr,nspl) uniformly at random in ϕ 1(cr) . . . ϕ 1(cr). (iii) Sample xtest uniformly at random in ϕ 1(c 1). Our analysis easily adapts to this more general case and gives: Theorem 7. Let T = Φ Z2R. Then 1 err(F, ψ, T) n " k Sℓ f(k)g(k) max k Sℓf(k) # for all feature space F, all feature map ψ : X 7 F, and all 0 ℓ L. 
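The sets S_ℓ appearing in Theorem 6 and Theorem 7 can be enumerated directly. The sketch below is our own; it assumes, as Lemma 14 and the discussion above suggest, that S_ℓ collects the vectors k ∈ N^{L+1} with Σ_i i·k_i = n_w and Σ_i (i − 1)·k_i ≤ ℓ.

```python
def enumerate_S(n_w, L, ell):
    """All k = (k_1, ..., k_{L+1}) in N^{L+1} with sum_i i*k_i = n_w and sum_i (i-1)*k_i <= ell."""
    out = []
    def rec(i, k, vertices_left, edges_left):
        if i > L + 1:
            if vertices_left == 0:
                out.append(tuple(k))
            return
        max_ki = vertices_left // i
        if i >= 2:
            max_ki = min(max_ki, edges_left // (i - 1))
        for ki in range(max_ki + 1):
            rec(i + 1, k + [ki], vertices_left - i * ki, edges_left - (i - 1) * ki)
    rec(1, [], n_w, ell)
    return out

for k in enumerate_S(n_w=6, L=3, ell=2):
    print(k)                                   # the admissible component-count vectors for a toy setting
print(len(enumerate_S(n_w=10, L=4, ell=3)))    # size of S_3 for slightly larger parameters
```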
Published as a conference paper at ICLR 2023 3 = [ meat, dairy, dairy, dairy, veggie ] c3 = [ veggie, meat, dairy, meat, dairy ] 1 = [ dairy, dairy, veggie, meat, veggie ] c1 = [ veggie, meat, dairy, veggie, dairy ] 2 = [ dairy, meat, meat, meat, dairy ] c2 = [ meat, dairy, dairy, veggie, meat ] x11 = [ cheese, butter, lettuce, chicken, leek ] x12 = [ yogurt, cheese, carrot, pork, carrot ] x13 = [ carrot, pork, cream, carrot, cheese ] x14 = [ lettuce, chicken, butter, potato, butter ] x15 = [ lettuce, beef, yogurt, leek, cream ] x16 = [ potato, lamb, butter, potato, yogurt ] x21 = [ butter, pork, lamb, lamb, yogurt ] x22 = [ cream, beef, chicken, pork, butter ] x23 = [ chicken, cheese, cream, lettuce, beef ] x24 = [ beef, cheese, cheese, carrot, pork ] x25 = [ lamb, butter, cream, potato, lamb ] x26 = [ chicken, cream, butter, leek, pork ] x31 = [ chicken, cheese, cheese, yorgurt, carrot ] x32 = [ lamb, butter, cheese, cream, potato ] x33 = [ leek, pork, cream, lamb, butter ] x34 = [ carrot, lamb, cheese, pork, yogurt ] x35 = [ lettuce, chicken, cheese, chicken, yogurt ] x36 = [ leek, chicken, butter, lamb, cheese ] Category 1 Category 2 Category 3 xtest = [ yogurt, butter, carrot, chicken, carrot ] Figure 6: Data Model with n = 2 unfamiliar sentences per category. The other parameters in this example are set to L = 5, nw = 12, nc = 3, R = 3 and nspl = 6. The points highlighted in yellow are the ones involved in the definition of the event A(1), see equation (109). Theorem 3 in the main body of the paper can be viewed as a special case of the above theorem indeed, setting n = 1 in inequality (105) we exactly recover (9). In order to prove Theorem 7, we simply need to revisit subsection B.3. We denote by ΩDM2 and PDM2 the sample space and probability measure associated with the sampling process DM2. As in subsection B.3, given a kernel K K, we define the event EK = n ω ΩDM2 : There exists 1 s nspl such that K xtest, x1,s > K xtest, xr,s for all 2 r R and all 1 s nspl o . Such event describes all outcomes corresponding to successful classification of the test point xtest. For simplicity let us assume that n = 2 (therefore matching the scenario depicted in Figure 6). We further partition the event EK according to which training point from the first category is most similar to the test point: EK = E(1) meaningful E(2) meaningful Eluck (106) The event E(1) meaningful consists in all the outcomes where, among the points from first category, x1,1 is the most similar to xtest, E(2) meaningful consists in all the outcomes where, among the points from first category, x1,2 is the most similar to xtest, and Eluck consists in all the remaining cases. Formally we have: E(1) meaningful = EK n ω ΩDM2 : K xtest, x1,1 > K xtest, x1,2 o n ω ΩDM2 : K xtest, x1,1 > K xtest, x1,s for all 3 s nspl o E(2) meaningful = EK n ω ΩDM2 : K xtest, x1,2 K xtest, x1,1 o n ω ΩDM2 : K xtest, x1,2 > K xtest, x1,s for all 3 s nspl o Eluck = EK n ω ΩDM2 : there exists 3 s nspl such that K xtest, x1,s K xtest, x1,s for all 1 s nspl o Published as a conference paper at ICLR 2023 Exactly as in subsection B.3, we then prove that PDM2[E(i) meaningful] 1 |X| x X H2R 1(K x) for i = 1, 2 (107) PDM2[Eluck] 1 The proof of inequality (107) is essentially identical to the proof of Lemma 10. 
We define the event

A^{(1)} := { ω ∈ Ω_DM2 : K(x_test, x_{1,1}) > K(x_test, x_{r,1}) for all 2 ≤ r ≤ R }
        ∩ { ω ∈ Ω_DM2 : K(x_test, x_{1,1}) > K(x_test, x_{r,3}) for all 1 ≤ r ≤ R }   (109)

and the event

A^{(2)} := { ω ∈ Ω_DM2 : K(x_test, x_{1,2}) > K(x_test, x_{r,2}) for all 2 ≤ r ≤ R }
        ∩ { ω ∈ Ω_DM2 : K(x_test, x_{1,2}) > K(x_test, x_{r,3}) for all 1 ≤ r ≤ R }.

The x's involved in the definition of the event A^{(1)} are highlighted in yellow in Figure 6. Crucially, they are all generated by different sequences of concepts, except for x_{1,1} and x_test. We can therefore appeal to Theorem 4 to obtain

P_DM2[ A^{(1)} ] ≤ (1/|X|) Σ_{x∈X} H_{2R−1}(K⋆_x)

since there is a total of t = 2R − 1 distractors (the distractors in Figure 6 are x_{1,3}, x_{2,1}, x_{2,3}, x_{3,1} and x_{3,3}). We then use the fact that E^{(1)}_meaningful ⊂ A^{(1)} to obtain (107) with i = 1. The case i = 2 is exactly similar.

We now prove (108). The proof is similar to the proof of Lemma 11. For 1 ≤ r ≤ R, we define the events

B_r := { ω ∈ Ω_DM2 : max_{3≤s≤nspl} K(x_test, x_{r,s}) > max_{3≤s≤nspl} K(x_test, x_{r′,s}) for all r′ ≠ r }.

By symmetry, these events are equiprobable. They are also mutually disjoint, and therefore P_DM2[B_r] ≤ 1/R. Inequality (108) then comes from the fact that E_luck ⊂ B_1. Combining (106), (107) and (108) then gives

sup_{K∈K} P_DM2[ E_K ] ≤ (2/|X|) Σ_{x∈X} H_{2R−1}(K⋆_x) + 1/R

and, going back to the general case where n′ denotes the number of representatives that each unfamiliar sequence of concepts has in the training set,

sup_{K∈K} P_DM2[ E_K ] ≤ (n′/|X|) Σ_{x∈X} H_{2R−1}(K⋆_x) + 1/R,

which in turn implies (16). Combining inequalities (15) and (16) then concludes the proof of Theorem 7.

E DETAILS OF THE EXPERIMENTS

In this section we provide the details of the experiments described in Section 6, as well as additional experiments. Table 3 provides the results of experiments in which the parameters L, nw, nc and R are set to L = 9, nw = 150, nc = 5 and R = 1000.

Table 3: Accuracy in % on unfamiliar test points (L = 9, nw = 150, nc = 5, R = 1000).

                                        n′ = 1       n′ = 2       n′ = 3       n′ = 4       n′ = 5
                                        nspl = 6     nspl = 7     nspl = 8     nspl = 9     nspl = 10
Neural network                          99.8 ± 0.3   99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   100 ± 0.1
NN on feat. extracted by neural net     99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1
NN on feat. extracted by ψ⋆             0.7 ± 0.2    1.1 ± 0.3    1.5 ± 0.3    1.8 ± 0.3    2.2 ± 0.3
NN on feat. extracted by ψone-hot       0.6 ± 0.2    1.1 ± 0.3    1.4 ± 0.2    1.7 ± 0.3    2.1 ± 0.3
Upper bound (0.015 n′ + 1/1000)         1.6          3.1          4.6          6.1          7.6
SVM on feat. extracted by ψ⋆            0.6 ± 0.3    1.5 ± 0.4    2.2 ± 0.4    3.2 ± 0.6    4.2 ± 1.0
SVM on feat. extracted by ψone-hot      0.5 ± 0.1    1.1 ± 0.1    1.9 ± 0.1    2.8 ± 0.2    3.8 ± 0.2
SVM with Gaussian kernel                0.6 ± 0.1    1.1 ± 0.1    2.0 ± 0.1    2.8 ± 0.2    3.6 ± 0.2

Table 4: Accuracy in % on unfamiliar test points (L = 9, nw = 50, nc = 5, R = 1000).

                                        n′ = 1       n′ = 2       n′ = 3       n′ = 4       n′ = 5
                                        nspl = 6     nspl = 7     nspl = 8     nspl = 9     nspl = 10
Neural network                          99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   100 ± 0.1
NN on feat. extracted by neural net     99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1   99.9 ± 0.1
NN on feat. extracted by ψ⋆             2.4 ± 0.3    4.1 ± 0.6    5.5 ± 0.6    6.9 ± 0.8    8.0 ± 0.8
NN on feat. extracted by ψone-hot       2.0 ± 0.3    3.4 ± 0.5    4.8 ± 0.6    5.7 ± 0.5    6.7 ± 0.7
Upper bound (0.073 n′ + 1/1000)         7.4          14.7         22.0         29.3         36.6
SVM on feat. extracted by ψ⋆            2.2 ± 0.5    5.2 ± 0.9    8.6 ± 0.9    11.7 ± 0.6   15.1 ± 1.2
SVM on feat. extracted by ψone-hot      1.2 ± 0.1    3.5 ± 0.2    6.4 ± 0.2    9.9 ± 0.3    13.6 ± 0.4
SVM with Gaussian kernel                2.0 ± 0.1    3.7 ± 0.2    5.4 ± 0.2    8.6 ± 0.3    12.1 ± 0.3

The parameters nspl and n′ are chosen so that the training set contains 5 familiar sentences per category, and between 1 and 5 unfamiliar sentences per category. Table 3 is identical to Table 1 in Section 6, with the exception that it contains additional information (namely the standard deviations of the obtained accuracies).
The abbreviation NN appearing in Table 3 stands for "Nearest Neighbor". Table 4 provides the results of additional experiments in which the parameters L, nw, nc and R are set to L = 9, nw = 50, nc = 5 and R = 1000. The parameters nspl and n′ are chosen, as in the previous set of experiments, so that the training set contains 5 familiar sentences per category, and between 1 and 5 unfamiliar sentences per category. The tasks considered in this set of experiments are easier because the vocabulary is smaller (nw = 50 instead of nw = 150).

E.1 NEURAL NETWORK EXPERIMENTS

We consider the neural network in Figure 2. The numbers of neurons in the input, hidden and output layers of the MLPs constituting the neural network are set to:

MLP 1: din = 150, dhidden = 500, dout = 10,
MLP 2: din = 90, dhidden = 2000, dout = 1000.

For each of the 10 possible parameter settings in Table 3 and Table 4, we run 10^4 experiments. For each experiment we generate:

- A training set containing R × nspl sentences.
- A test set containing 10,000 unfamiliar sentences (10 sentences per category).

We then train the neural network with stochastic gradient descent until the training loss reaches 10^{-4} (we use a cross-entropy loss). The learning rate is set to 0.01 (constant learning rate) and the batch size to 100. At test time, we either use the neural network to classify the test points (first row of the tables) or we use a nearest neighbor classification rule on top of the features extracted by the neural network (second row of the tables). The mean and standard deviation of the 10^4 test accuracies, for each of the 10 settings and for each of the two evaluation strategies, are reported in the first two rows of Table 3 and Table 4.

E.2 NEAREST-NEIGHBOR EXPERIMENTS

In these experiments we use a nearest neighbor classification rule on top of features extracted by ψ⋆ (third row of Tables 3 and 4) or ψone-hot (fourth row). For each of the 10 possible parameter settings in Table 3 and Table 4, we run 50 experiments. For each experiment we generate:

- A training set containing R × nspl sentences.
- A test set containing 1,000 unfamiliar sentences (one sentence per category).

In order to apply the nearest neighbor classification rule to the features extracted by ψ⋆, one needs to evaluate the kernel K⋆(x, y) = ⟨ψ⋆(x), ψ⋆(y)⟩_F for each pair of sentences. Computing K⋆(x, y) requires an expensive combinatorial calculation, which is the reason why we perform fewer experiments and use a smaller test set than in E.1. In order to break ties, the values of K⋆(x, y) are perturbed according to (31). With the parameter setting L = 9, nw = 50, nc = 5 and R = 1000, our theoretical lower bound for the generalization error is

err(F, ψ, T) ≥ 1 − 0.073 n′ − 1/R   for all F and all ψ,   (112)

which is obtained by choosing ℓ = 6 in inequality (105). This leads to an upper bound of 0.073 n′ + 1/R on the success rate. This upper bound is evaluated for n′ ranging from 1 to 5 in the fifth row of Table 4.

E.3 SVM ON FEATURES EXTRACTED BY ψone-hot AND SVM WITH GAUSSIAN KERNEL

For each of the 10 possible parameter settings in Table 3 and Table 4, we run 100 experiments. For each experiment we generate:

- A training set containing R × nspl sentences.
- A test set containing 10,000 unfamiliar sentences (10 sentences per category).

We use the feature map ψone-hot (which simply concatenates the one-hot encodings of the words composing a sentence) to extract features from each sentence.
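A minimal version of ψone-hot and of the nearest-neighbor evaluation used in these experiments could look as follows; this is our own sketch (the function names are ours), not the experimental code.

```python
import numpy as np

def psi_one_hot(x, n_w):
    """psi_one_hot: concatenation of the one-hot encodings of the L words of a sentence."""
    f = np.zeros(len(x) * n_w)
    for pos, w in enumerate(x):
        f[pos * n_w + w] = 1.0
    return f

def nn_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Nearest-neighbor rule of (23): predict the label of the training point whose feature
    vector has the largest inner product with the feature vector of the test point."""
    correct = 0
    for f, lab in zip(test_feats, test_labels):
        scores = train_feats @ f
        correct += int(train_labels[int(np.argmax(scores))] == lab)
    return correct / len(test_labels)

# Usage sketch with the sample_DM helper from an earlier sketch: every sentence of category r gets label r.
phi, fam, unf, train, x_test = sample_DM(L=5, n_w=12, n_c=3, R=3, n_spl=5)
train_feats = np.stack([psi_one_hot(x, 12) for r in range(3) for x in train[r]])
train_labels = np.array([r for r in range(3) for _ in train[r]])
```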
These features are further normalized according to the formula

x̃ = (ψone-hot(x) − p) / √(p(1 − p)),   where p = 1/nw,   (113)

so that they are centered around 0 and are O(1). We then use the SVC function of Scikit-learn (Pedregosa et al., 2011), which itself relies on the LIBSVM library (Chang & Lin, 2011), in order to run a soft multiclass SVM algorithm on these features. We tried various values for the parameter controlling the ℓ2 regularization in the soft-SVM formulation, and found that the algorithm, on this task, is not sensitive to this choice, so we chose C = 1. The results are reported in the seventh row of both tables.

We also tried a soft SVM with Gaussian kernel K(x, y) = e^{−γ ||x − y||²} applied on top of features extracted by ψone-hot and normalized according to (113). We use the SVC function of Scikit-learn with the ℓ2 regularization parameter set to C = 1. For the experiments in Table 3 (nw = 150), the parameter γ involved in the definition of the kernel was set to γ = 0.25 when n′ ∈ {1, 2} and to γ = 0.1 when n′ ∈ {3, 4, 5}. For the experiments in Table 4 (nw = 50), it was set to γ = 0.75 when n′ = 1, to γ = 0.1 when n′ = 2, and finally to γ = 0.005 when n′ ∈ {3, 4, 5}.

Table 5: Search for the hyperparameter α.

α        λmin(Gtrain)   λmax(Gtrain)   Test accuracy
0.001    90.9           50,583.3       6.1%
0.01     81.5           32,334.9       6.1%
0.1      56.8           15,358.7       6.6%
1.0      22.9           4,191.5        7.6%
10       2.5            673.4          7.2%
15       0.138          471.7          7.0%
16       0.2            445.5          7.0%
100      4.7            86.9           5.4%
1000     4.983          15.573         5.0%

E.4 SVM ON FEATURES EXTRACTED BY ψ⋆

Applying an SVM to the features extracted by ψ⋆ is equivalent to running a kernelized SVM with kernel K⋆. A naive implementation of such an algorithm leads to very poor results on our data model. For the algorithm not to completely fail, it is important to carefully rescale K⋆ so that the eigenvalues of the corresponding Gram matrix are well behaved. Recall that

K⋆(x, y) = (nc^L / (nw^L |Φ|)) |{ϕ ∈ Φ : ϕ(xℓ) = ϕ(yℓ) for all 1 ≤ ℓ ≤ L}|

and let ξ : R → R be a strictly increasing function. Since the nearest neighbor classification rule works by comparing the values of K⋆ on various pairs of points, it is clear that using the kernel K(x, y) = ξ(K⋆(x, y)) is equivalent to using the kernel K⋆(x, y). In particular, choosing ξ(x) := log(1 + (nw^L/α) x) gives the following family of optimal kernels:

K⋆_α(x, y) = log( 1 + (nw^L/α) K⋆(x, y) ).   (115)

To be clear, all these kernels are exactly equivalent to the kernel K⋆ when using a nearest neighbor classification rule. However, they lead to different algorithms when used for kernelized SVM. We have experimented with various choices of the function ξ and found that this logarithmic scaling works well for kernelized SVM.

For each of the 10 possible parameter settings in Table 3 and Table 4, we run 10 experiments. For each experiment we generate:

- A training set containing R × nspl sentences.
- A test set containing 1,000 unfamiliar sentences (one sentence per category).

Let us denote by xtrain_i, 1 ≤ i ≤ R × nspl, the data points in one of these training sets, and by xtest_i, 1 ≤ i ≤ 1000, the data points in the corresponding test set. In order to run the kernelized SVM algorithm we need to form the Gram matrices

Gtrain_ij = K⋆_α(xtrain_i, xtrain_j)   and   Gtest_ij = K⋆_α(xtest_i, xtrain_j).   (116)

Constructing each of these Gram matrices takes a few days on CPU. We then use the SVC function of Scikit-learn to run a soft multiclass kernelized-SVM algorithm on top of these precomputed Gram matrices.
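Scikit-learn's SVC accepts precomputed Gram matrices directly, which is how the kernelized SVM of this subsection can be run. The snippet below is a minimal usage sketch with placeholder arrays standing in for the matrices of (116); it is not the authors' training script.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder Gram matrices and labels standing in for (116); in the actual experiments
# G_train is (R*nspl) x (R*nspl) and G_test is n_test x (R*nspl), computed from K*_alpha.
rng = np.random.default_rng(0)
Z = rng.normal(size=(60, 5))
G_train = Z @ Z.T                        # symmetric positive semidefinite placeholder
y_train = np.repeat(np.arange(6), 10)    # 6 categories, 10 training points each
G_test = rng.normal(size=(20, 5)) @ Z.T  # one row per test point, one column per training point
y_test = rng.integers(0, 6, size=20)

clf = SVC(kernel="precomputed", C=1.0)   # soft multiclass SVM on a precomputed kernel
clf.fit(G_train, y_train)
accuracy = (clf.predict(G_test) == y_test).mean()
```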
We tried various values for the parameter controlling the ℓ2 regularization and found that the algorithm is not sensitive to this choice, so we chose C = 1. The algorithm, on the other hand, is quite sensitive to the choice of the hyperparameter α defining the kernel K⋆_α. We experimented with various choices of α and found that choosing the smallest α that makes the Gram matrix Gtrain positive definite works well (note that the Gram matrix should be positive semidefinite for the kernelized SVM method to make sense). In Table 5 we show an example, on a specific pair of training and test sets⁵, of how the eigenvalues of Gtrain and the test accuracy depend on α.

⁵The training set and test set used in this experiment were generated by our data model with parameters L = 9, nw = 50, nc = 5, R = 1000, nspl = 8 and n′ = 3.
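The α selection described above (the smallest α for which Gtrain is positive definite) can be sketched as follows. This is our own illustration: k_alpha_gram is a hypothetical helper standing in for the entrywise application of (115) to a matrix of K⋆ values, and the eigenvalue check uses NumPy.

```python
import numpy as np

def k_alpha_gram(K_star_values, alpha, n_w, L):
    """Gram matrix of K*_alpha from (115): log(1 + (n_w^L / alpha) * K*(x, y)), entrywise."""
    return np.log1p((n_w ** L / alpha) * K_star_values)

def smallest_valid_alpha(K_star_values, n_w, L,
                         candidates=(0.001, 0.01, 0.1, 1.0, 10, 15, 16, 100, 1000)):
    """Return the smallest candidate alpha making the training Gram matrix positive definite."""
    for alpha in sorted(candidates):
        G = k_alpha_gram(K_star_values, alpha, n_w, L)
        if np.linalg.eigvalsh(G).min() > 0:   # positive definite
            return alpha, G
    return None, None
```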