# On the Hardness of Learning Under Symmetries

Published as a conference paper at ICLR 2024

Bobak T. Kiani^{1,2,†}, Thien Le^{1,†}, Hannah Lawrence^{1}, Stefanie Jegelka^{1,3}, Melanie Weber^{2}
^{1} MIT EECS, ^{2} Harvard SEAS, ^{3} TU Munich; ^{†} co-first author

ABSTRACT

We study the problem of learning equivariant neural networks via gradient descent. The incorporation of known symmetries (equivariance) into neural networks has empirically improved the performance of learning pipelines, in domains ranging from biology to computer vision. However, a rich yet separate line of learning-theoretic research has demonstrated that actually learning shallow, fully-connected (i.e., non-symmetric) networks has exponential complexity in the correlational statistical query (CSQ) model, a framework encompassing gradient descent. In this work, we ask: are known problem symmetries sufficient to alleviate the fundamental hardness of learning neural nets with gradient descent? We answer this question in the negative. In particular, we give lower bounds for shallow graph neural networks, convolutional networks, invariant polynomials, and frame-averaged networks for permutation subgroups, which all scale either superpolynomially or exponentially in the relevant input dimension. Therefore, in spite of the significant inductive bias imparted via symmetry, actually learning the complete classes of functions represented by equivariant neural networks via gradient descent remains hard.

1 INTRODUCTION

In recent years, the purview of machine learning has expanded to non-traditional domains with geometric inputs, from graphs to sets to point clouds. Correspondingly, it is now common practice to tailor neural architectures to the symmetries of the input: for example, graph neural networks are invariant to permutations of the input nodes, while networks operating on molecules and point clouds are invariant to permutation, translation, and rotation. Empirically, encoding such structure has led to computational benefits in applications (Wang et al., 2021; Batzner et al., 2022; Bronstein et al., 2021).

From the perspective of generalization, previous research has quantified the benefits of imposing symmetry during learning (Long & Sedghi, 2019; Bietti et al., 2021; Sannai et al., 2021; Weber et al., 2020; Mei et al., 2021), often achieving rather tight bounds for simple models (Elesedy, 2021; Tahmasebi & Jegelka, 2023). At their core, these results are bounds on sample complexity, i.e., how much data is needed to learn a given task. In contrast, the effect of symmetries on the computational complexity of learning algorithms has not been previously studied. Broadly speaking, generalization bounds are necessary but not sufficient to show efficient learnability, as there can be exponentially large gaps between sample complexity and runtime lower bounds. Indeed, an active line of research in learning theory studies the hardness of learning fully-connected neural networks via correlational statistical query (CSQ) algorithms (defined in Sec. 2.1), which notably encompass gradient descent. In particular, for a Gaussian data distribution, Diakonikolas et al. (2020) proved exponential lower bounds for learning shallow neural networks. Such works provide valuable impossibility results, demonstrating that one cannot hope to efficiently learn every function represented by a small neural network under simple data distributions.
In this work, we ask: does invariance provide a restrictive enough subclass of neural network functions to circumvent these impossibility results? Our main finding, shown in various settings, is that an inductive bias towards symmetry is not strong enough to alleviate the fundamental hardness of learning neural nets via gradient descent. We view this result as a guide for future theoretical work: additional function structure is generally needed, even in the invariant setting, to achieve guarantees of efficient learnability. Our proof techniques largely rely on extending techniques from the general setting to symmetric function classes. In many of our lower bounds, imposing symmetry on a function class reduces the computational complexity of learning by a factor proportional to the size of the group. Some of our lower bounds use simple symmetrizations of existing hard families, while others require bona fide new classes of hard invariant networks. For example, intuition from message-passing GNN architectures seems to indicate that complexity should grow exponentially with the number of message-passing steps; however, our results show that even a single message-passing layer gathers sufficient features to form exponentially hard functions (Liao et al., 2020).

Our Contributions. We consider various interrelated questions on the computational hardness of learning, focusing primarily on how hard it is to learn data generated by invariant architectures.

Question (primary). Given Gaussian i.i.d. inputs in $n$ variables, how hard is it to learn the class of shallow (single-hidden-layer) invariant architectures in the correlational statistical query model?

We answer this question in various settings. As a warm-up, we give general SQ lower bounds for learning invariant Boolean functions in Sec. 3. Next, we show that graph neural networks with a single message-passing layer are exponentially hard to learn, either in the number of nodes or in the feature dimension (Theorem 3 and Theorem 4, respectively). Hardness in the feature dimension results from a careful symmetrization of the hard functions in Diakonikolas et al. (2020). This technique does not extend to the node-dimension hardness result in Theorem 3, for which we instead construct a customized set of hard functions whose outputs are parities of the degree counts of the graph. Since adjacency matrices of graphs are often Boolean-valued, this setting also offers general SQ lower bounds scaling exponentially with the number of nodes $n$.

For subgroups of permutations, we prove CSQ hardness results for symmetrized (frame-averaged) neural network classes encapsulating CNNs (Theorem 7 of Sec. 5). The proofs here are based on constructions in Goel et al. (2020) and Diakonikolas et al. (2020), ensuring that properly chosen frames retain the underlying structure (e.g., sign invariance) needed to guarantee orthogonality of the hard functions.

These lower bounds constitute our main results, but we also provide two complementary results for an even more complete picture. First, we show that classes of sparse invariant polynomials can be learned efficiently, but not by gradient descent, via a straightforward extension of the algorithm in Andoni et al. (2014) for arbitrary polynomials (Sec. 6). This learning algorithm is SQ, and we give $n^{\Omega(\log n)}$ CSQ lower bounds for learning over classes containing $n^{\Omega(\log n)}$ many orthogonal invariant polynomials of degree at most $O(\log n)$.
Second, we take a classical computational complexity perspective to establish the NP hardness of learning GNNs. The work of Blum & Rivest (1988) established that even 3-node feedforward networks are NP hard to train, a result later expanded to average-case complexity for improper learning by various works (Daniely et al., 2014; Daniely, 2016; Klivans & Kothari, 2014). Although it is perhaps not surprising that these hardness results extend to invariant architectures, for the sake of completeness (and since we were unaware of such an extension in the literature), in Sec. 4.3 we give an NP hardness result for proper learning of the weights of a GNN architecture. Finally, Sec. 7 provides a few experiments verifying that the hard classes of invariant functions we propose are indeed difficult to learn.

1.1 RELATED WORK

A long line of research studies the hardness of learning neural networks. We briefly review the most relevant works here, and refer the reader to App. A for a more holistic review. Our work is largely motivated by extending hardness results for feedforward networks to equivariant and symmetric neural networks. Early results showed that there exist rather artificial distributions of data for which feedforward networks are hard to learn in worst-case settings (Judd, 1987; Blum & Rivest, 1988; Livni et al., 2014). Interest later grew in studying hardness for more natural settings via the statistical query framework (Kearns, 1998; Shamir, 2018). Goel et al. (2020) showed superpolynomial CSQ lower bounds for learning single-hidden-layer ReLU networks, subsequently strengthened to exponential lower bounds in Diakonikolas et al. (2020). We use the hard class of networks in Diakonikolas et al. (2020) as a basis for the frame-averaged networks in our work. Hardness results for learning feedforward neural networks have also been shown by proving reductions to average-case or cryptographically hard problems (Chen et al., 2022a; Daniely & Vardi, 2020).

Though our work focuses on runtime lower bounds, some algorithms exist for efficiently learning restricted classes of geometric networks such as CNNs (Brutzkus & Globerson, 2017; Du et al., 2017; 2018a). Zhang et al. (2020); Li et al. (2021) give gradient-based algorithms for learning one-hidden-layer GNNs, which are efficient when certain factors such as the condition number of the weights are bounded. We should also note that, as a practical separation between SQ and CSQ algorithms, Andoni et al. (2014) show that learning sparse polynomials of degree $d$ in $n$ variables requires $\Omega(n^d)$ CSQ queries, but only a polynomial (in $n$) number of SQ queries. We extend this to polynomials in the invariant polynomial ring in Sec. 6.

2 BACKGROUND AND NOTATION

We use $a$, $\mathbf{a}$, and $\mathbf{A}$ to denote scalars, vectors, and matrices. $I_n$ denotes the identity matrix of dimension $n$. We denote groups by $G$ and graphs by $G$, distinguished by context. We denote the normal distribution supported over $\mathbb{R}^n$ with mean $v \in \mathbb{R}^n$ and covariance matrix $M \in \mathbb{R}^{n \times n}$ by $\mathcal{N}(v, M)$, often abbreviated as $\mathcal{N}$ for $\mathcal{N}(0, I_n)$. We also denote by $\mathcal{G}_n$ the set of graphs on $n$ vertices and, when clear from context, by $E$ an arbitrary distribution over $\mathcal{G}_n$. To ease notation, we often use $[n]$ to denote the set $\{1, \dots, n\}$.

Let $\mathcal{X}$ be an input space and $D$ a distribution on $\mathcal{X}$. Given functions $f, g : \mathcal{X} \to \mathbb{R}$, their inner product is $\langle f, g\rangle_D = \mathbb{E}_D[fg]$ with corresponding norm $\|f\|_D = \sqrt{\langle f, f\rangle_D}$. For functions $f, g : \mathcal{X} \to \{-1, +1\}$ whose outputs are Boolean, the classification error is $\mathbb{P}_{x \sim D}[f(x) \neq g(x)]$.
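To make this notation concrete, the following minimal sketch (not from the paper; function names are illustrative) estimates $\langle f, g\rangle_D$, $\|f\|_D$, and the classification error by Monte Carlo sampling under a Gaussian input distribution.

```python
# Minimal sketch (illustrative, not from the paper): Monte Carlo estimates of
# <f, g>_D = E_D[f g], ||f||_D, and P_{x~D}[f(x) != g(x)] for D = N(0, I_n).
import numpy as np

def inner_product(f, g, n, num_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((num_samples, n))      # x ~ N(0, I_n)
    return float(np.mean(f(x) * g(x)))             # estimate of E_D[f(x) g(x)]

def norm(f, n, **kwargs):
    return np.sqrt(inner_product(f, f, n, **kwargs))   # ||f||_D = sqrt(<f, f>_D)

def classification_error(f, g, n, num_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((num_samples, n))
    return float(np.mean(f(x) != g(x)))            # for Boolean (+/-1 valued) f, g

# Example: two correlated linear functionals; <f, g>_D is approximately 0.5.
f = lambda x: x[:, 0]
g = lambda x: 0.5 * x[:, 0] + np.sqrt(0.75) * x[:, 1]
print(inner_product(f, g, n=5), norm(f, n=5))
```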
2.1 SQ LEARNING FRAMEWORK

The well-studied statistical query (SQ) model offers a restricted, query-complexity-based model for proving hardness, which encapsulates most algorithms used in practice (Kearns, 1998; Reyzin, 2020). Given a joint distribution $D$ on input/output space $\mathcal{X} \times \mathcal{Y}$, any SQ algorithm is composed of a set of queries. Each query takes as input a function $g : \mathcal{X} \times \mathcal{Y} \to [-1, 1]$ and a tolerance parameter $\tau > 0$, and returns a value $\mathrm{SQ}(g, \tau)$ in the range
$$\mathbb{E}_{(x,y)\sim D}[g(x, y)] - \tau \;\le\; \mathrm{SQ}(g, \tau) \;\le\; \mathbb{E}_{(x,y)\sim D}[g(x, y)] + \tau. \tag{1}$$
A special class of queries are correlational statistical queries (CSQ), where the query function $g : \mathcal{X} \to [-1, 1]$ is a function of $x$ only, and the oracle returns $\mathbb{E}_{(x,y)\sim D}[g(x)\, y]$, the correlation of $g$ with $y$, up to error $\tau$. Hardness is quantified as the number of queries required to learn a function $y = c^*(x)$ drawn from a function class $\mathcal{C}$, up to a desired maximal error. Notably, SQ hardness results imply hardness for any gradient-based algorithm, as gradients with respect to a loss can be captured by statistical queries. Furthermore, only CSQ oracles are required for the mean-squared error (MSE) loss.

Example 1 (GD from CSQ). Given a function $N_\theta : \mathcal{X} \to \mathbb{R}$ with parameters $\theta$, gradients of the MSE loss with respect to $\theta$ can be estimated with a correlational statistical query for each parameter:
$$\mathbb{E}_{(x,y)\sim D}\Big[\nabla_\theta\, \tfrac{1}{2}\big(y - N_\theta(x)\big)^2\Big] \;=\; \underbrace{\mathbb{E}\big[N_\theta(x)\, \nabla_\theta N_\theta(x)\big]}_{y\text{-independent}} \;-\; \underbrace{\mathbb{E}\big[y\, \nabla_\theta N_\theta(x)\big]}_{\text{CSQ with } \nabla_\theta N_\theta}. \tag{2}$$

Access to the distribution $D$ is typically provided through samples, and the tolerance $\tau$ in part captures the error in statistical sampling of quantities such as the gradient. In fact, by drawing $O(\log(1/\delta)/\tau^2)$ samples, standard Hoeffding bounds guarantee that estimates of a given query are within tolerance $\tau$ with probability $1 - \delta$. In the SQ (CSQ) framework, the complexity of learning is determined by the tolerance bound and the number of queries (loosely corresponding to steps of an algorithm) needed to learn a function class.

Definition 1 (SQ (CSQ) Learning). Given a function class $\mathcal{C}$ and distribution $D$ over input space $\mathcal{X}$, an algorithm SQ (CSQ) learns $\mathcal{C}$ up to classification error ($\ell_2$ squared loss) $\epsilon$ if, given only SQ (CSQ) oracle access to $(x, c^*(x))$, $x \sim D$, for some unknown $c^* \in \mathcal{C}$, the algorithm outputs a function $f$ such that $\mathbb{P}_{x\sim D}[f(x) \neq c^*(x)] \le \epsilon$ (respectively, $\|f - c^*\|_D^2 \le \epsilon$).

SQ lower bounds for Boolean functions are typically shown by finding a set of roughly orthogonal functions that lower bounds the so-called statistical dimension. For real-valued function classes, lower bounds on the statistical dimension imply CSQ lower bounds. Applying more general SQ lower bounds is challenging, unless one reduces the problem of learning a real-valued function class to that of learning (say) a Boolean subclass within it (Chen et al., 2022a). We cover this in more detail, and also discuss other learning-theoretic frameworks, in App. B.

3 WARM UP: INVARIANT BOOLEAN FUNCTIONS

Before detailing our main results concerning real-valued functions, we explore the foundational setting of Boolean functions, where the SQ formalism originated (Kearns, 1998; Blum et al., 1994). This illustrative analysis serves as a warm-up for the more complex scenarios later. It is known that many classes of Boolean functions (e.g., the class of parity functions) have exponential query complexity, arising from the orthogonality properties of Boolean functions.
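As a concrete illustration of this orthogonality (a minimal sketch, not taken from the paper), the parity functions $\chi_S(x) = \prod_{i\in S} x_i$ are pairwise orthogonal under the uniform distribution on $\{-1,+1\}^n$, so a single correlational query returns a value within tolerance of zero for all but a handful of candidate parities:

```python
# Minimal sketch: parity functions chi_S(x) = prod_{i in S} x_i over uniform
# {-1,+1}^n inputs are pairwise orthogonal, so a correlational query against a
# fixed query function carries essentially no information about which parity
# labels the data.
import itertools
import numpy as np

n = 10
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=(200_000, n))            # x ~ Unif({-1,+1}^n)

def parity(x, S):
    return np.prod(x[:, list(S)], axis=1)             # chi_S(x)

subsets = list(itertools.combinations(range(n), 3))[:20]
corr = np.array([[np.mean(parity(x, S) * parity(x, T)) for T in subsets]
                 for S in subsets])
# Diagonal entries equal 1 exactly; empirical off-diagonal correlations are
# O(1/sqrt(num_samples)), i.e. below any constant query tolerance tau.
print(np.max(np.abs(corr - np.eye(len(subsets)))))
```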
When invariance under a given group $G$ is enforced, the number of orbits of the $2^n$ bitstrings gives an analogous hardness metric, though care must be taken in dealing with the distribution over these orbits. More formally, in the Boolean setting, let the input distribution $D$ be uniform over $\mathcal{X} = \{-1, +1\}^n$. For a group $G$ with representation $\rho$, let $O_\rho = \{\{\rho(g)\, x : g \in G\} : x \in \{-1,+1\}^n\}$ denote the orbits of the inputs under the representation. For any given orbit $O \in O_\rho$, let $x_O$ denote an orbit representative (i.e., some fixed element of the set $O$). We can represent any invariant function as a function $f : O_\rho \to \{-1, +1\}$ of these orbits, and the correlation of two such functions $f, g$ is
$$\langle f, g\rangle_D = \mathbb{E}[fg] = 2^{-n} \sum_{O_k \in O_\rho} |O_k|\, f(x_{O_k})\, g(x_{O_k}). \tag{3}$$
The probability $p_{O_\rho} : O_\rho \to [0, 1]$ of a random bitstring falling in orbit $O_k \in O_\rho$ is $p_{O_\rho}(O_k) = |O_k| / 2^n$. With this notation, we have a general lower bound on the SQ hardness of learning invariant Boolean functions.

Theorem 2 (Boolean SQ hardness). For a given symmetry group $G$ with representation $\rho : G \to \mathrm{GL}(\{-1,+1\}^n)$, let $\|p_{O_\rho}\|_2 := \big(\sum_{O_k \in O_\rho} p_{O_\rho}(O_k)^2\big)^{1/2}$ and let $H_\rho$ be the class of invariant Boolean functions, defined as
$$H_\rho = \big\{ f : \{-1,+1\}^n \to \{-1,+1\} \;:\; \forall g \in G,\ \forall x \in \{-1,+1\}^n,\ f(\rho(g)\, x) = f(x) \big\}. \tag{4}$$
Any SQ learner capable of learning $H_\rho$ up to sufficiently small classification error probability $\epsilon$ ($\epsilon < 1/4$ suffices) with queries of tolerance $\tau$ requires at least $\tau^2\, \|p_{O_\rho}\|_2^{-2} / 2$ queries.

Proof sketch. We show that $1 - \|p_{O_\rho}\|_2^2$ is the probability that, over independently drawn inputs $x, x'$, the pair $(f(x), f(x'))$ is uniformly random when $f$ is drawn uniformly from $H_\rho$. Via a proof technique from Chen et al. (2022a), this connects the learning task to one of distinguishing distributions, for which the SQ complexity is at least the stated amount. See App. C for the complete proof.

For a more explicit lower bound, we can use Hölder's inequality: $\|p_{O_\rho}\|_2^2 \le \frac{1}{2^n}\max_{O_k \in O_\rho}|O_k| \le |G|\, 2^{-n}$. This gives a query complexity of at least $2^{n-1}\, |G|^{-1}\, \tau^2$. For commonly studied groups, assuming $\tau = O(1)$, the SQ complexity generally aligns with the number of orbits. Table 1 summarizes these hardness lower bounds for commonly studied groups.¹

| Group | $\max_{O_k \in O_\rho} p_{O_\rho}(O_k)$ | Query complexity |
| --- | --- | --- |
| Symmetric group on $n$ bits | $\binom{n}{n/2}\, 2^{-n} = 1/\Theta(\sqrt{n})$ | $\Theta(\sqrt{n})$ |
| Symmetric group on $n \times n$ graphs | $n!\, 2^{-n^2} = 2^{-n^2 + n\log n + O(n)}$ | $2^{\Omega(n^2)}$ |
| Cyclic group on $n$ bits | $n\, 2^{-n}$ | $\Omega(2^n / n)$ |

Table 1: Query complexity of learning common invariant Boolean function classes.

¹ Certain sub-classes of $H_\rho$ may suffice to attain hardness, much like how parity functions suffice to prove exponential lower bounds in the traditional Boolean setting. We do not consider this strengthening here.

4 LOWER BOUNDS FOR GNNS

We now show statistical query lower bounds for graph neural networks, which are invariant to node permutations, in three settings: (1) SQ lower bounds scaling with the number of nodes $n$ for Erdős–Rényi distributed graphs with trivial node features, (2) CSQ lower bounds scaling in the node feature dimension for a fixed graph, and (3) a simple extension of NP hardness to a proper learning task (deciding whether fitting a training set with a given GNN is possible).

4.1 HARDNESS IN NUMBER OF NODES

We consider a class of two-hidden-layer GNNs that are a function of the adjacency matrix only, and show hardness in the number of nodes $n$. This covers a commonly used procedure in which one learns node-level equivariant features in the first layer, aggregates these features, and passes them through an MLP (Dwivedi et al., 2020).
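Since the adjacency-matrix inputs considered below are Boolean, the orbit-based bound of Theorem 2 applies directly in this setting. As a small numerical illustration (a sketch under the stated assumptions, not code from the paper), the following computes the orbits of $\{-1,+1\}^n$ under the cyclic group and evaluates the query lower bound $\tau^2\,\|p_{O_\rho}\|_2^{-2}/2$ of Theorem 2.

```python
# Minimal sketch: orbits of {-1,+1}^n under cyclic shifts, the orbit
# probabilities p_O(O_k) = |O_k| / 2^n, and the Theorem 2 query lower bound
# tau^2 * ||p_O||_2^{-2} / 2  (which scales as tau^2 * 2^n / (2n), cf. Table 1).
import itertools
import numpy as np

def cyclic_orbits(n):
    seen, orbits = set(), []
    for x in itertools.product([-1, 1], repeat=n):
        if x in seen:
            continue
        orbit = {tuple(np.roll(x, s)) for s in range(n)}   # all cyclic shifts of x
        seen |= orbit
        orbits.append(orbit)
    return orbits

n, tau = 12, 0.5
orbits = cyclic_orbits(n)
p = np.array([len(O) / 2**n for O in orbits])      # orbit probabilities
lower_bound = tau**2 / (2 * np.sum(p**2))          # Theorem 2 lower bound
print(len(orbits), lower_bound)                    # ~2^n/n orbits; bound ~ tau^2 2^n/(2n)
```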
We consider a Boolean function class of GNNs over the uniform distribution $\mathrm{Unif}(\{0,1\}^{n\times n})$, which can be viewed as the Erdős–Rényi random graph model with $p = 0.5$.² Node features $x \in \mathbb{R}^n$ are trivial and always set to $x = \mathbf{1}$.

² Our exponential lower bounds arise from the wide range of possible node degrees in the graph. Though we do not generalize the result here, this fact suggests it is likely extendable to restricted sparse Erdős–Rényi models $G(n, p_n)$ with $p_n = \omega(1/n)$ (i.e., average degree growing arbitrarily with $n$).

2-hidden-layer GNN family. We consider two-layer GNNs $f = f^{(2)} \circ f^{(1)}$. These take the commonly used message-passing form: $f^{(1)} : \{0,1\}^{n\times n} \to \mathbb{R}^{k_1}$ aggregates $k_1$ permutation-invariant features of the graph, followed by a single-hidden-layer ReLU MLP $f^{(2)} : \mathbb{R}^{k_1} \to \{0,1\}$:
$$[f^{(1)}_{a,b}(A)]_i = \frac{1}{n}\, \mathbf{1}^\top \sigma(a_i \mathbf{1} + b_i\, Ax) \quad \text{(output of channel } i \in [k_1]\text{)}, \qquad f^{(2)}_{u,v,W}(h) = \sum_{i=1}^{k_2} u_i\, \sigma(\langle W_{:,i}, h\rangle + v_i), \tag{5}$$
with subscripts denoting trainable weights. For our hard class of functions, $k_1 = O(n)$ and $k_2 = O(n)$, so there are at most $O(n^2)$ parameters (proportional to the number of edges).

Family of hard functions. We define the hard functions in terms of degree counts $c_A \in [n]^{n+1}$, where entry $[c_A]_i$ counts the number of nodes that have $i - 1$ outgoing edges in $A$:
$$[c_A]_i = \sum_{k=1}^{n} \mathbf{1}\big[[A\mathbf{1}]_k = i - 1\big]. \tag{6}$$
The hard functions in $\mathcal{H}_{\mathrm{ER},n}$ are enumerated over subsets $S \subseteq [n+1]$ and a bit $b \in \{0,1\}$:
$$\mathcal{H}_{\mathrm{ER},n} = \{g_{S,b} : S \subseteq [n+1],\ b \in \{0,1\}\}, \qquad g_{S,b}(A) = b + \sum_{i \in S} [c_A]_i \mod 2. \tag{7}$$
That is, $g_{S,b}(A)$ is a kind of parity function supported on the degree counts indexed by $S$. We show in Lemma 15 that the GNNs in Eq. (5) can construct these functions. Additionally, the class $\mathcal{H}_{\mathrm{ER},n}$ is Boolean, so hardness lower bounds apply in the general SQ model.

Theorem 3 (SQ hardness of $\mathcal{H}_{\mathrm{ER},n}$). Any SQ learner capable of learning $\mathcal{H}_{\mathrm{ER},n}$ up to sufficiently small classification error probability $\epsilon$ ($\epsilon < 1/4$ suffices) with queries of tolerance $\tau$ requires at least $\Omega\big(\tau^2 \exp(n^{\Omega(1)})\big)$ queries.

Proof sketch. We form networks whose message-passing layer computes $c_A$, which is passed to an MLP computing Eq. (7) as a sum of ReLUs via hat-like functions that mimic parities. Based on concentration properties of the Erdős–Rényi model, we show that for some $p < 0.5$, at least $\Theta(n^p)$ entries of $c_A$ have values scaling roughly as $O(\sqrt{n})$, and the probability that any such entry is odd converges to $1/2$. Also using concentration properties, we condition on a high-probability region (where the events that these entries are odd are roughly independent of each other), and use the hardness of learning parity functions to obtain SQ lower bounds.

4.2 HARDNESS IN FEATURE DIMENSION

In this section, we derive correlational statistical query (CSQ) lower bounds for commonly used graph neural network (GNN) architectures, and state the corresponding learning-theoretic hardness result. The techniques in this section extend those of Diakonikolas et al. (2020), resulting in a lower bound for learning one-hidden-layer GNNs that is exponential in the feature dimension.

1-hidden-layer GNN. For a fixed directed, unweighted graph $G$ with $n \in \mathbb{N}$ vertices, let $A(G) \in \mathbb{R}^{n\times n}$ be the adjacency matrix or the Laplacian of $G$. For input feature dimension $d \in \mathbb{N}$ and width parameter $k \in \mathbb{N}$, we consider the following set of functions:
$$\mathcal{F}^{d,k}_G := \Big\{ f : \mathbb{R}^{n\times d} \to \mathbb{R},\ f(X) = \tfrac{1}{n}\mathbf{1}^\top \sigma\big(A(G) X W\big)\, a \ \Big|\ W \in \mathbb{R}^{d\times 2k},\ a \in \mathbb{R}^{2k} \Big\}, \tag{8}$$
for some nonlinearity $\sigma$. When the graph itself is part of the input, we have the function class
$$\mathcal{F}^{d,k}_n := \Big\{ f : \mathbb{R}^{n\times d} \times \mathcal{G}_n \to \mathbb{R},\ f(X, G) = \tfrac{1}{n}\mathbf{1}^\top \sigma\big(A(G) X W\big)\, a \ \Big|\ W \in \mathbb{R}^{d\times 2k},\ a \in \mathbb{R}^{2k} \Big\}. \tag{9}$$
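Before turning to the hard family for Eq. (8), we pause for a minimal sketch (illustrative, not the authors' code) of the degree-count parities of Eqs. (6)-(7) that underlie Theorem 3; the code indexes degree counts from zero, so $S$ is represented as a subset of $\{0, \dots, n\}$.

```python
# Minimal sketch of the hard class H_{ER,n} from Eqs. (6)-(7): c_A histograms
# the node out-degrees, and g_{S,b} is a parity over the selected counts.
import numpy as np

def degree_counts(A):
    """Return c_A, where c_A[j] = #nodes of A with out-degree j (j = 0, ..., n)."""
    n = A.shape[0]
    degrees = A @ np.ones(n, dtype=int)                 # A 1 = vector of out-degrees
    return np.bincount(degrees, minlength=n + 1)

def g_Sb(A, S, b):
    """g_{S,b}(A) = b + sum_{j in S} c_A[j]  (mod 2)."""
    c = degree_counts(A)
    return int(b + np.sum(c[list(S)])) % 2

rng = np.random.default_rng(0)
n = 15
A = rng.integers(0, 2, size=(n, n))                     # Erdos-Renyi G(n, 1/2) sample
S = rng.choice(n + 1, size=7, replace=False).tolist()   # random size-7 subset
print(g_Sb(A, S, b=1))
```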
Family of hard functions. First, we define functions $f_G : \mathbb{R}^{n\times 2} \to \mathbb{R}$ and $f_n : \mathbb{R}^{n\times 2} \times \mathcal{G}_n \to \mathbb{R}$ as
$$f_G(X) = \tfrac{1}{n}\mathbf{1}^\top \sigma\big(A(G) X W^*\big)\, a^*, \qquad f_n(X, G) = \tfrac{1}{n}\mathbf{1}^\top \sigma\big(A(G) X W^*\big)\, a^*, \tag{10}$$
$$W^* = \begin{pmatrix} \big(\cos(\pi j / k)\big)_{j\in[2k]} \\ \big(\sin(\pi j / k)\big)_{j\in[2k]} \end{pmatrix} \in \mathbb{R}^{2\times 2k}, \qquad a^* = \big((-1)^j\big)_{j\in[2k]} \in \mathbb{R}^{2k}. \tag{11}$$
Given a set of matrices $\mathcal{B} \subset \mathbb{R}^{d\times 2}$, our family of hard functions can now be written as
$$\mathcal{C}^{\mathcal{B}}_G := \Big\{ g^B_G : X \mapsto \frac{f_G(XB)}{\|f_G\|_{\mathcal{N}}} \ \Big|\ B \in \mathcal{B} \Big\}, \qquad \mathcal{C}^{\mathcal{B}}_n := \Big\{ g^B_n : (X, G) \mapsto \frac{f_n(XB, G)}{\|f_n\|_{\mathcal{N}\otimes E}} \ \Big|\ B \in \mathcal{B} \Big\}. \tag{12}$$
Below, we show exponential lower bounds³ for learning $\mathcal{F}^{d,k}_G$ and $\mathcal{F}^{d,k}_n$.

³ To have a meaningful set of functions, $g^B_G$ and $g^B_n$ must not vanish, which holds when $\sigma$ is not a low-degree polynomial (ReLU suffices; see Remark 12 of Diakonikolas et al. (2020)) and $G$ is not the empty graph (the graph with no edges).

Theorem 4 (Exponential CSQ lower bound for GNNs). For any number of vertices $n$ independent of the input dimension $d$ and width parameter $k$, let $\epsilon > 0$ be a sufficiently small error constant and $E$ a distribution over $\mathcal{G}_n$, independent of $\mathcal{N}$, under which the empty (edgeless) graph has probability bounded away from one. Then there exists a set $\mathcal{B}$ of size at least $2^{\Omega(d^{\Omega(1)})}$ such that any CSQ algorithm that queries oracles for a concept $f \in \mathcal{C}^{\mathcal{B}}_G$ (resp. $\mathcal{C}^{\mathcal{B}}_n$) and outputs a hypothesis $h$ with $\|f - h\|_{\mathcal{N}} \le \epsilon$ (resp. $\|f - h\|_{\mathcal{N}\otimes E} \le \epsilon$) requires either $2^{d^{\Omega(1)}}$ queries or at least one query with precision $d^{-\Omega(k)} + 2^{-d^{\Omega(1)}}$.

Proof sketch. Full details of the proof can be found in App. E. From Lemma 17 of Diakonikolas et al. (2020), there exists a set $\mathcal{B} \subset \mathbb{R}^{d\times 2}$ of size at least $2^{\Omega(d^{\Omega(1)})}$ such that
$$\|B^\top B'\|_2 = O\big(d^{-\Omega(1)}\big) < 1 \quad \text{if } B \neq B', \tag{13}$$
and $B^\top B = I_2$, which we use to index $\mathcal{C}^{\mathcal{B}}_G$. We design a specialized class of invariant orthogonal Hermite polynomials, such that the correlation in low-degree moments vanishes and is bounded as
$$\big|\langle f_G(\cdot\, B), f_G(\cdot\, B')\rangle_{\mathcal{N}}\big| \;\le\; \|B^\top B'\|_2^{k} \sum_{m > k} \big\|f^{[m]}_G\big\|_{\mathcal{N}}^2 \;\le\; \|B^\top B'\|_2^{k}\, \|f_G\|_{\mathcal{N}}^2. \tag{14}$$
Finally, we use Eq. (13) to obtain the desired near-uncorrelatedness of the elements of $\mathcal{C}^{\mathcal{B}}_G$, and a simple total-probability argument extends this to $\mathcal{C}^{\mathcal{B}}_n$.

4.3 NP HARDNESS OF PROPER LEARNING OF GNNS

The classic work of Blum & Rivest (1988) showed that there exist datasets for which determining whether or not a 3-node neural network can fit the dataset is NP hard. It is straightforward to prove a similar result for invariant networks. We map the task of training a single-hidden-layer GNN to the NP-hard problem of learning halfspaces with noise (Guruswami & Raghavendra, 2009; Feldman et al., 2012). Here, one is given a set of $N$ labeled examples $\{x_i, y_i\}_{i=1}^N$ and must determine whether there exists a halfspace, parameterized by a vector $v$ and constant $\theta$, that correctly classifies a specified fraction of the labels, i.e. $\mathrm{sign}(\langle v, x_i\rangle - \theta) = y_i$. We map a GNN learning problem to this task, as informally described below and formally detailed in App. D.

Proposition 5 (NP hardness of GNN training; informal). There exists a sequence of datasets, indexed by the number of nodes $n$, each of the form $\{A_i, x_i, y_i\}_{i=1}^N$ with $y_i \in \{-1, +1\}$, such that determining whether a GNN with parameters $a, b \in \mathbb{R}^k$, $c \in \mathbb{R}$, and $k = O(n)$, of the form
$$f_{a,b,c}(x, A) = c + \sum_{i=1}^{k} \mathbf{1}^\top \mathrm{relu}\big(x + a_i\, Ax\big)\, b_i, \tag{15}$$
can correctly classify a specified fraction of the data is NP hard.

5 CSQ LOWER BOUND FOR CNNS AND FRAME AVERAGING

Several commonly used invariant architectures take the form of symmetrized neural networks over permutation subgroups $G \le S_n$, which can be generalized as frame-averaging architectures (Puny et al., 2023). This includes translation-invariant CNNs, group convolutional networks, and networks with preprocessing steps such as sorting. We study the hardness of learning these invariant functions.
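Since the frame-averaged constructions of this section reuse the same fixed weights $(W^*, a^*)$, the following minimal sketch (illustrative; the normalization by $\|f_G\|_{\mathcal{N}}$ in Eq. (12) is omitted) shows how a hard concept $g^B_G(X) \propto f_G(XB)$ from Eqs. (10)-(11) is evaluated for a hidden matrix $B$ with $B^\top B = I_2$.

```python
# Minimal sketch of the hard GNN concepts of Eqs. (10)-(12): fixed cosine/sine
# weights W*, alternating signs a*, and a hidden near-orthonormal B in R^{d x 2}.
import numpy as np

def W_a_star(k):
    j = np.arange(1, 2 * k + 1)
    W = np.stack([np.cos(np.pi * j / k), np.sin(np.pi * j / k)])  # W* in R^{2 x 2k}
    a = (-1.0) ** j                                               # a* in R^{2k}
    return W, a

def f_G(X2, A, k, sigma=lambda z: np.maximum(z, 0.0)):
    """f_G on reduced inputs X2 in R^{n x 2}: (1/n) 1^T sigma(A X2 W*) a*."""
    W, a = W_a_star(k)
    n = X2.shape[0]
    return float(np.ones(n) @ sigma(A @ X2 @ W) @ a) / n

def g_B(X, A, B, k):
    """Unnormalized hard concept: X in R^{n x d} is first projected by B in R^{d x 2}."""
    return f_G(X @ B, A, k)

rng = np.random.default_rng(0)
n, d, k = 8, 50, 6
A = rng.integers(0, 2, size=(n, n))                 # fixed directed graph G
B, _ = np.linalg.qr(rng.standard_normal((d, 2)))    # random B with B^T B = I_2
X = rng.standard_normal((n, d))                     # Gaussian node features
print(g_B(X, A, B, k))
```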
Frame averaging and its connection with CNNs. Let $G$ be a subgroup of the permutation group $S_n$ that acts on $\mathbb{R}^{n\times d}$ by permuting the rows. A frame is a function $F : \mathbb{R}^{n\times d} \to 2^{G}\setminus\{\emptyset\}$ satisfying $F(g \cdot X) = g\, F(X)$ set-wise. As noted in Puny et al. (2023), given some frame $F$, one can transform an arbitrary function (or neural network) $h(X)$ into an invariant function by averaging its values as $\frac{1}{|F(X)|}\sum_{g \in F(X)} h(g^{-1} X)$. For instance, setting $F(X) = G$ for all $X$ recovers the Reynolds (or group-averaging) operator. Given a fixed frame $F$, we will first show CSQ lower bounds for the following class of frame-averaged one-hidden-layer fully connected nets:
$$\mathcal{F}^d_n = \Big\{ f : \mathbb{R}^{n\times d} \to \mathbb{R},\ f(X) = \frac{1}{\sqrt{|F(X)|}} \sum_{g \in F(X)} a^\top \sigma\big(W^\top (g^{-1} X)\big)\mathbf{1}_d \ \Big|\ W \in \mathbb{R}^{n\times k},\ a \in \mathbb{R}^{k} \Big\}$$
for some nonlinearity $\sigma$.

Example 2 (Frame for CNN). Set $d = 1$ and let $G$ be the cyclic group acting on $\mathbb{R}^n$ via cyclic shifts of its elements, with frame $F(X) = G$ for all $X \in \mathbb{R}^n$. Then $\mathcal{F}^d_n$ consists of CNNs with one convolutional layer and $k$ hidden channels.

Remark 6 (Difficulty of frames). Since the frame $F(X)$ may vary by datapoint $X$, the distribution of $g^{-1}X$ for $g \sim \mathrm{Unif}(F(X))$ can be significantly different from the original distribution over $X$, even if $X$ and $g \cdot X$ are equal in distribution for all $g$. For example, consider the frame $F(X)$ containing all permutations that sort $X$ lexicographically (where $|F(X)| > 1$ if $X$ contains a repeated row). Even if $X \sim \mathcal{N}$ with $\mathcal{N}$ invariant to row permutations, the resultant distribution is not row-permutation invariant. We focus on certain cases where such effects are not too pronounced. For instance, it is simple to check that if $F(X)$ is constant over $X$, then $F(X) = G$ for all $X$ (Lemma 28). In the following, we assume that $G$ is a polynomial-sized (in $n$) subgroup of $S_n$.

Family of hard functions. Recall the low-dimensional function from the proof of Theorem 4, $f_{\exp} : \mathbb{R}^{2\times d} \to \mathbb{R}$ with
$$f_{\exp}(X) = (a^*)^\top \sigma\big((W^*)^\top X\big)\mathbf{1}_d \tag{16}$$
for the special parameters $a^* \in \mathbb{R}^{2k}$ and $W^* \in \mathbb{R}^{2\times 2k}$ of Eq. (11). We now define the family of hard functions, indexed by a set of matrices $\mathcal{B}^* \subset \mathbb{R}^{n\times 2}$ obtained from Lemma 29:
$$\mathcal{C}^{\mathcal{B}}_F = \Big\{ g^B : \mathbb{R}^{n\times d} \to \mathbb{R} \ \text{with}\ g^B(X) = \frac{\sum_{g\in G} f_{\exp}\big(B^\top g^{-1} X\big)}{\sqrt{|G|}\;\|f_{\exp}\|_{\mathcal{N}}} \ \Big|\ B \in \mathcal{B}^* \subset \mathbb{R}^{n\times 2} \Big\}. \tag{17}$$

Theorem 7 (Exponential CSQ lower bound for polynomial-sized group averaging). For feature dimension $d$ independent of the input dimension $n$ and width parameter $k = \Theta(n)$, let $\epsilon > 0$ be a sufficiently small error constant. Then there exists a set $\mathcal{B}^*$ of size at least $2^{\Omega(n^{\Omega(1)})}/|G|^2$ such that: for any target $g \in \mathcal{C}^{\mathcal{B}}_F$ with $\|g\|_{\mathcal{N}} = 1$, any CSQ algorithm outputting a hypothesis $h$ with $\|g - h\|_{\mathcal{N}} \le \epsilon$ requires either $2^{n^{\Omega(1)}}/|G|^2$ queries or at least one query with precision $|G|^2\, n^{-\Omega(1)} + 2^{-n^{\Omega(1)}}$.

Proof sketch. Using a union-bound argument over the original set $\mathcal{B}$ from Diakonikolas et al. (2020), and concentration of the inner product between permuted unit vectors, we obtain from Lemma 29 a set of orthogonal matrices $\mathcal{B}^*$ such that both $B(B')^\top$ and $gB(g'B')^\top$ are small for all $B \neq B' \in \mathcal{B}^*$ and all $g, g' \in G$. This construction costs a factor of $|G|^2$ in $|\mathcal{B}^*|$. The rest of the proof proceeds similarly to that of Theorem 4; see App. F.1.

We also obtain superpolynomial lower bounds for more general frames using the technique of Goel et al. (2020). Here, we drop the assumption that $G$ is polynomially sized. The hard functions are based on parity functions, similar to Goel et al. (2020) (but with an extra input dimension for $X$):
$$f_S : \mathbb{R}^{n\times d} \to \mathbb{R} \ \text{with}\ f_S(X) = \frac{1}{d} \sum_{w \in \{-1,1\}^m} \chi(w)\, \sigma\big(\langle w, X_S\rangle / m\big)\, \mathbf{1}_d, \qquad S \in \tbinom{[n]}{m}, \tag{18}$$
where $m$ indicates a size-$m$ subset, $\chi$ is the parity function $\chi(w) = \prod_i w_i$, and $X_S \in \mathbb{R}^{m\times d}$ denotes the rows of $X$ indexed by $S$. Here, $\langle w, X_S\rangle := \sum_{i=1}^m w_i (X_S)_{i,:}$ and $\sigma : \mathbb{R}^d \to \mathbb{R}^d$ is an activation function.
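The following minimal sketch (illustrative; it follows the normalization conventions as written in Eq. (18), which are an assumption of this sketch) evaluates one of these parity-based functions $f_S$; the hard invariant class defined next is obtained by frame-averaging exactly these functions.

```python
# Minimal sketch of f_S from Eq. (18): a parity-weighted sum over all sign
# patterns w in {-1,1}^m of an activation applied to <w, X_S>, averaged over
# the d output coordinates.
import itertools
import numpy as np

def f_S(X, S, sigma=lambda z: np.maximum(z, 0.0)):
    """X: (n, d) input; S: list of m row indices. Returns a scalar."""
    m, d = len(S), X.shape[1]
    XS = X[list(S), :]                                   # X_S in R^{m x d}
    total = np.zeros(d)
    for w in itertools.product([-1, 1], repeat=m):
        chi = np.prod(w)                                 # parity chi(w)
        total += chi * sigma(np.array(w) @ XS / m)       # sigma(<w, X_S> / m) in R^d
    return float(total.sum()) / d                        # contract with 1_d / d

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
print(f_S(X, S=[2, 5, 7, 11]))                           # m = 4, so 2^4 sign patterns
```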
The hard invariant function class consists of frame-averages of the functions above:
$$\mathcal{H}_{S,F} = \Big\{ f : \mathbb{R}^{n\times d} \to \mathbb{R} \ \text{with}\ f(X) = \frac{1}{\sqrt{|F(X)|}} \sum_{g \in F(X)} f_S(g^{-1} X) \ \Big|\ S \in \tbinom{[n]}{m} \Big\}. \tag{19}$$
Below, we show a hardness result for this class, inspired by that of Goel et al. (2020), whenever the frame $F$ is sign-invariant: $F(X) = F(X \odot z)$ for almost all $X$ and for all $z \in \{-1,1\}^n$, where $z$ flips the signs of the corresponding rows of $X$.

Theorem 8 (Superpolynomial CSQ lower bound for sign-invariant frame averaging). If $F$ is a sign-invariant frame, then for any $c > 0$, any CSQ algorithm that queries an oracle for a concept $f \in \mathcal{H}_{S,F}$ and outputs a hypothesis $h$ with $\|f - h\|_{\mathcal{N}} \le \Theta(1)$ needs at least $\Omega\big(n^{\Omega(\log n)} / M(n)\big)$ queries or at least one query with precision $O(n^{-c/2})$, where $M(n)$ is the size of the largest orbit, under $G$, of the size-$m$ ($m \approx \log n$) subsets of $[n]$. If moreover $|F(X)| = 1$ for almost all $X$ (a singleton frame), then the same result holds with $M(n)$ replaced by $1$.

Proof sketch. The proof hinges on the structure of $f_S$, and in particular on the fact that applying $g$ to the input of $f_S$ yields $f_{g\cdot S}$. The sign invariance of $F$ is necessary to ensure that the proof of Goel et al. (2020), which involves introducing random sign flips, goes through. See Lemma 33 for the full proof.

Example 3 (Singleton frames). An example of such a frame is the lexicographic sorting of $X \in \mathbb{R}^{n\times d}$ according to the absolute values of its entries (to make the frame sign-invariant). With probability 1, $X$ does not have any repeated values, and thus there is a unique lexicographic sort of $X$ almost surely. In fact, for any group $G$, a singleton frame exists if $X$ has no self-symmetry with probability 1, i.e. $X \neq g \cdot X$ for all non-identity $g \in G$.

For the hard function class to be non-vanishing, we show in Corollary 39 that at least a superpolynomial-sized subset of $\mathcal{H}_{S,F}$ has norm lower bounded by $\Omega(\mathrm{poly}(n)^{-1})$, assuming that $|F(X)|$ is polynomial in $n$ almost surely ($|G|$ itself does not have to be polynomial in $n$).

6 SEPARATION BETWEEN SQ AND CSQ FOR INVARIANT FUNCTION CLASSES

A natural question is whether there exist real-valued function classes which are hard to learn in the CSQ setting (and hence by gradient descent with mean squared error loss), but efficient to learn via a more carefully constructed non-CSQ algorithm. In the general (non-invariant) setting, one such separation is that of learning $k$-sparse polynomials, i.e., degree-$d$ polynomials in $n$ variables expressible as sums of only $k$ distinct monomials, over inputs drawn from a product distribution $D = \otimes_{i=1}^n \mu_i$. Here, an efficient SQ (but not CSQ) algorithm, called the GROWING-BASIS algorithm, is known from Andoni et al. (2014). CSQ algorithms require $\Omega(n^d)$ queries, corresponding to the number of degree-$d$ orthogonal polynomials, but the GROWING-BASIS algorithm learns with $O(n\, k\, f(d))$ SQ queries, where $f(d)$ may be exponential in $d$ but is independent of $n$. We informally illustrate the invariant extension of this here, leaving complete details to App. G.

To perform this extension, we apply GROWING-BASIS to more general spaces (so-called rings) of invariant polynomials whose basis consists of independent invariant functions, or generators, $\{g_i \in \mathbb{R}[x]^G\}_{i\in[r]}$ for some $r \in \mathbb{N}$. Independence here asserts that no $g_i$ can be written as a polynomial in the others. The space of all polynomials in the $g_i(x)$ with degree $d$ and sparsity $k$ (both measured in the expansion over the $g_i$) is denoted by $\mathbb{R}[g_1(x), \dots, g_r(x)]_{d,k}$. We have the following separation result:

Lemma 9 (informal). Let $x \sim D$ be such that the induced distribution of $(g_i(x))_{i\in[r]}$ is a product distribution. Then any CSQ algorithm that learns $f \in \mathbb{R}[g_1(x), \dots, g_r(x)]_{d,k}$ of degree at most $d = O(\log n)$ to a small constant error requires $\Omega(r^d)$ CSQ queries of bounded tolerance. However, the GROWING-BASIS algorithm learns $f$ with $O(k r^2 \log r)$ SQ queries of precision $\Omega(1/(k\, \mathrm{poly}(r)))$.
A more rigorous exposition and proof are given in App. G. Finding independent generators can itself be a challenging task; however, we provide an example below, and more in App. G, for separations over groups where $r = \Omega(n)$ generators are known (Derksen & Kemper, 2015).

Example 4. The sign group $G$ flips the signs of input elements and is commonly used in the study of eigenvectors (Lim et al., 2022). $G$ is indexed by vectors $v \in \{-1,+1\}^n$ with representation $\rho(v)\, x = x \odot v$, where $\odot$ denotes pointwise multiplication. Consider generators $g_1, \dots, g_n$ where $g_i(x) = |x_i|$. For inputs $x \sim \mathrm{Unif}([-1,+1]^n)$, the distribution of $[g_1(x), \dots, g_n(x)]$ is the product distribution $\mathrm{Unif}([0,1]^n)$, whose orthogonal polynomials are the shifted Legendre polynomials. For $d$ at most $O(\log n)$, any CSQ algorithm requires $\Omega(n^d)$ queries of bounded tolerance to learn these polynomials, whereas the GROWING-BASIS algorithm learns in $O(k n^2 \log n)$ SQ queries.

[Figure 1: Overparameterized GNN (a, learning $\mathcal{H}_{\mathrm{ER},n}$) and CNN (b, learning $\mathcal{C}^{\mathcal{B}}_F$) fail to learn functions from these classes by either failing to fit the training set or overfitting the data; the panels plot train and test error (MSE and 0/1 error) over the course of training. Plots are aggregated and averaged over five random realizations.]

7 EXPERIMENTS

We train overparameterized GNNs and CNNs on the hard functions from Sec. 4 and Sec. 5, respectively. Specifically, we attempt to learn relatively small instances drawn from the function classes $\mathcal{H}_{\mathrm{ER},n}$ for graphs (Theorem 3) and $\mathcal{C}^{\mathcal{B}}_F$ for translationally invariant data (Theorem 7). For the function class $\mathcal{H}_{\mathrm{ER},n}$, we consider inputs with $n = 15$ nodes and the target function $g_{S,b}$, where $S \subset [n]$ is a random subset of size 7 and $b$ is either 0 or 1 at random. For the function class $\mathcal{C}^{\mathcal{B}}_F$, we choose a random orthogonal matrix $B$ and then symmetrize the function class as per Eq. (17).

Fig. 1a and 1b plot the performance of the GNN and CNN, respectively. The GNN is unable to fit even the training data, which consists of 225 graphs on $n = 15$ nodes drawn uniformly from the Erdős–Rényi model (i.e., $p = 0.5$). The CNN, trained on inputs of dimension $n = 50$, fits the training set of size 500 appropriately, but fails to generalize in the mean squared error loss. The GNN/CNN were overparameterized, with 3 layers of graph/cyclic convolution followed by a two-layer ReLU MLP on the aggregated invariant features. We refer the reader to App. H for further details.

8 DISCUSSION

Our study asked how hard it is to learn relatively simple function classes constrained by symmetry. Setting aside mathematical formality, one may conjecture that such function classes are good representations or approximations of the data observed in nature. Our results indicate that such an assumption is unlikely to be a realistic one, at least in worst-case settings, where we show such functions would be exponentially hard to learn. Provable guarantees of learning will have to incorporate further assumptions on the model class or biases in training to better account for the practical success of learning algorithms (Lawrence et al., 2021; Gunasekar et al., 2018; Le & Jegelka, 2022). We now state potential directions for future work and conjectures.
It would be interesting to extend our hardness results beyond the SQ setting, or to continuous groups. For example, a natural question is whether symmetric functions are cryptographically hard to learn, as shown in Chen et al. (2022a) for two-hidden-layer ReLU networks. A line of work in probability and statistics has identified barriers that prevent many algorithms from solving average-case instances of classic optimization problems, such as max-cut or largest independent set; these include the overlap gap property (Gamarnik, 2021) and bounds on low-degree algorithms (Hopkins & Steurer, 2017; Brennan et al., 2020). Whether these barriers extend to the types of problems studied in geometric deep learning is an open question. We should note that there are restricted settings where learning neural networks is possible (see App. A). Recent work (Chen et al., 2022b; Chen & Narayanan, 2023) has shown that ReLU networks with $O(1)$ hidden nodes are learnable in polynomial time. Extending this result to invariant network classes with $O(1)$ channels would similarly expand the set of learnable invariant networks.

ACKNOWLEDGEMENTS

The authors thank Sitan Chen for insightful discussions and feedback. BTK and MW were supported by the Harvard Data Science Initiative Competitive Research Fund and NSF award 2112085. TL and SJ were supported by NSF awards 2134108 and CCF-2112665 (TILOS AI Institute), and Office of Naval Research grant N00014-20-1-2023 (MURI ML-SCOPE). HL is supported by the Fannie and John Hertz Foundation and the NSF Graduate Fellowship under Grant No. 1745302.

REFERENCES

Emmanuel Abbe and Colin Sandon. Polynomial-time universality and limitations of deep learning. Communications on Pure and Applied Mathematics, 76(11):3493-3549, 2023.

Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning sparse polynomial functions. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms, pp. 500-510. SIAM, 2014.

Eric R Anschuetz and Bobak T Kiani. Quantum variational algorithms are swamped with traps. Nature Communications, 13(1):7760, 2022.

Eric R Anschuetz, Andreas Bauer, Bobak T Kiani, and Seth Lloyd. Efficient classical algorithms for simulating symmetric quantum systems. arXiv preprint arXiv:2211.16998, 2022.

Ainesh Bakshi, Rajesh Jayaram, and David P Woodruff. Learning two layer rectified neural networks in polynomial time. In Conference on Learning Theory, pp. 195-268. PMLR, 2019.

A Barvinok. Measure concentration lecture notes. See http://www.math.lsa.umich.edu/barvinok/total710.pdf, 2005.

Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications, 13(1):2453, 2022.

Arash Behboodi, Gabriele Cesa, and Taco S Cohen. A PAC-Bayesian generalization bound for equivariant networks. Advances in Neural Information Processing Systems, 35:5654-5668, 2022.

Alberto Bietti, Luca Venturi, and Joan Bruna. On the sample complexity of learning under geometric stability. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=vlf0zTKa5Lh.

Avrim Blum and Ronald Rivest. Training a 3-node neural network is NP-complete. Advances in Neural Information Processing Systems, 1, 1988.
Avrim Blum, Merrick Furst, Jeffrey Jackson, Michael Kearns, Yishay Mansour, and Steven Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing, pp. 253-262, 1994.

Matthew Brennan, Guy Bresler, Samuel B Hopkins, Jerry Li, and Tselil Schramm. Statistical query algorithms and low-degree tests are almost equivalent. arXiv preprint arXiv:2009.06107, 2020.

Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.

Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. In International Conference on Machine Learning, pp. 605-614. PMLR, 2017.

Yuan Cao and Quanquan Gu. Tight sample complexity of learning one-hidden-layer convolutional neural networks. Advances in Neural Information Processing Systems, 32, 2019.

Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805-849, 2012. doi: 10.1007/s10208-012-9135-7. URL https://doi.org/10.1007/s10208-012-9135-7.

Sourav Chatterjee. Superconcentration and Related Topics. Springer Cham, 2014.

Sitan Chen and Shyam Narayanan. A faster and simpler algorithm for learning shallow networks. arXiv preprint arXiv:2307.12496, 2023.

Sitan Chen, Aravind Gollakota, Adam Klivans, and Raghu Meka. Hardness of noise-free learning for two-hidden-layer neural networks. Advances in Neural Information Processing Systems, 35:10709-10724, 2022a.

Sitan Chen, Adam R Klivans, and Raghu Meka. Learning deep ReLU networks is fixed-parameter tractable. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pp. 696-707. IEEE, 2022b.

Sitan Chen, Zehao Dou, Surbhi Goel, Adam Klivans, and Raghu Meka. Learning narrow one-hidden-layer ReLU networks. In The Thirty Sixth Annual Conference on Learning Theory, pp. 5580-5614. PMLR, 2023.

Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? Advances in Neural Information Processing Systems, 33:10383-10395, 2020.

Ziyu Chen and Wei Zhu. On the implicit bias of linear equivariant steerable networks: Margin, generalization, and their equivalence to data augmentation. arXiv preprint arXiv:2303.04198, 2023.

Lenaic Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. In Conference on Learning Theory, pp. 1305-1338. PMLR, 2020.

Iris Cong, Soonwon Choi, and Mikhail D Lukin. Quantum convolutional neural networks. Nature Physics, 15(12):1273-1278, 2019.

Amit Daniely. Complexity theoretic limitations on learning halfspaces. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pp. 105-117, 2016.

Amit Daniely and Gal Vardi. Hardness of learning neural networks with natural weights. Advances in Neural Information Processing Systems, 33:930-940, 2020.

Amit Daniely, Nati Linial, and Shai Shalev-Shwartz. From average case complexity to improper learning complexity. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pp. 441-448, 2014.

Amit Daniely, Nathan Srebro, and Gal Vardi. Most neural networks are almost learnable.
ar Xiv preprint ar Xiv:2305.16508, 2023. Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, and Rina Panigrahy. On the learnability of deep random networks. ar Xiv preprint ar Xiv:1904.03866, 2019. Giacomo De Palma, Bobak Kiani, and Seth Lloyd. Random deep neural networks are biased towards simple functions. Advances in Neural Information Processing Systems, 32, 2019. Harm Derksen and Gregor Kemper. Computational invariant theory. Springer, 2015. Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart. Statistical query lower bounds for robust estimation of high-dimensional gaussians and gaussian mixtures. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 73 84, 2017. doi: 10.1109/FOCS. 2017.16. Ilias Diakonikolas, Daniel M Kane, Vasilis Kontonis, and Nikos Zarifis. Algorithms and sq lower bounds for pac learning one-hidden-layer relu networks. In Conference on Learning Theory, pp. 1514 1539. PMLR, 2020. Published as a conference paper at ICLR 2024 Simon Du, Jason Lee, Yuandong Tian, Aarti Singh, and Barnabas Poczos. Gradient descent learns one-hidden-layer cnn: Don t be afraid of spurious local minima. In International Conference on Machine Learning, pp. 1339 1348. PMLR, 2018a. Simon S Du and Surbhi Goel. Improved learning of one-hidden-layer convolutional neural networks with overlaps. ar Xiv preprint ar Xiv:1805.07798, 2018. Simon S Du, Jason D Lee, and Yuandong Tian. When is a convolutional filter easy to learn? ar Xiv preprint ar Xiv:1709.06129, 2017. Simon S Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Russ R Salakhutdinov, and Aarti Singh. How many samples are needed to estimate a convolutional neural network? Advances in Neural Information Processing Systems, 31, 2018b. Vijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. ar Xiv preprint ar Xiv:2003.00982, 2020. Bryn Elesedy. Provably strict generalisation benefit for invariance in kernel methods. Advances in Neural Information Processing Systems, 34:17273 17283, 2021. Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, and Yi Wu. Agnostic learning of monomials by halfspaces is hard. SIAM Journal on Computing, 41(6):1558 1590, 2012. Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. ar Xiv preprint ar Xiv:1903.02428, 2019. David Gamarnik. The overlap gap property: A topological barrier to optimizing over random structures. Proceedings of the National Academy of Sciences, 118(41):e2108492118, 2021. Floris Geerts and Juan L Reutter. Expressiveness and approximation properties of graph neural networks. ar Xiv preprint ar Xiv:2204.04661, 2022. Surbhi Goel, Adam Klivans, and Raghu Meka. Learning one convolutional layer with overlapping patches. In International Conference on Machine Learning, pp. 1783 1791. PMLR, 2018. Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, and Adam Klivans. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. In International Conference on Machine Learning, pp. 3587 3596. PMLR, 2020. Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. Advances in neural information processing systems, 31, 2018. Venkatesan Guruswami and Prasad Raghavendra. Hardness of learning halfspaces with noise. SIAM Journal on Computing, 39(2):742 765, 2009. Samuel B Hopkins and David Steurer. 
Bayesian estimation from few samples: community detection and related problems. ar Xiv preprint ar Xiv:1710.00264, 2017. Stefanie Jegelka. Theory of graph neural networks: Representation and learning. ar Xiv preprint ar Xiv:2204.07697, 2022. Ziwei Ji and Matus Telgarsky. Directional convergence and alignment in deep learning. Advances in Neural Information Processing Systems, 33:17176 17186, 2020. J Stephen Judd. Learning in networks is hard. In Proc. of 1st International Conference on Neural Networks, San Diego, California, June 1987. IEEE, 1987. Dimitris Kalimeris, Gal Kaplun, Preetum Nakkiran, Benjamin Edelman, Tristan Yang, Boaz Barak, and Haofeng Zhang. Sgd on neural networks learns functions of increasing complexity. Advances in neural information processing systems, 32, 2019. Michael Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983 1006, 1998. Published as a conference paper at ICLR 2024 WF Kibble. An extension of a theorem of mehler s on hermite polynomials. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 41, pp. 12 15. Cambridge University Press, 1945. Adam Klivans and Pravesh Kothari. Embedding hard learning problems into gaussian space. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2014. Mihail N Kolountzakis, Evangelos Markakis, and Aranyak Mehta. Learning symmetric k-juntas in time nˆ o (k). ar Xiv preprint math/0504246, 2005. Hannah Lawrence, Kristian Georgiev, Andrew Dienes, and Bobak T Kiani. Implicit bias of linear equivariant networks. ar Xiv preprint ar Xiv:2110.06084, 2021. Thien Le and Stefanie Jegelka. Training invariances and the low-rank phenomenon: beyond linear networks. In International Conference on Learning Representations, 2022. URL https:// openreview.net/forum?id=XEW8CQg Arno. Qunwei Li, Shaofeng Zou, and Wenliang Zhong. Learning graph neural networks with approximate gradient descent. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8438 8446, 2021. Renjie Liao, Raquel Urtasun, and Richard Zemel. A pac-bayesian approach to generalization bounds for graph neural networks. ar Xiv preprint ar Xiv:2012.07690, 2020. Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, and Stefanie Jegelka. Sign and basis invariant networks for spectral graph representation learning. ar Xiv preprint ar Xiv:2202.13013, 2022. Richard J Lipton, Evangelos Markakis, Aranyak Mehta, and Nisheeth K Vishnoi. On the fourier spectrum of symmetric boolean functions with applications to learning symmetric juntas. In 20th Annual IEEE Conference on Computational Complexity (CCC 05), pp. 112 119. IEEE, 2005. Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. Advances in neural information processing systems, 27, 2014. Philip M Long and Hanie Sedghi. Generalization bounds for deep convolutional neural networks. ar Xiv preprint ar Xiv:1905.12600, 2019. Andreas Loukas. What graph neural networks cannot learn: depth vs width. ar Xiv preprint ar Xiv:1907.03199, 2019. Shaogao Lv. Generalization bounds for graph convolutional neural networks via rademacher complexity. ar Xiv preprint ar Xiv:2102.10234, 2021. I G Macdonald. Symmetric functions and Hall polynomials / by I. Oxford University Press, Oxford : New York, 1979. Song Mei, Theodor Misiakiewicz, and Andrea Montanari. 
Learning with invariances in random features and kernel models. In Conference on Learning Theory, pp. 3351 3418. PMLR, 2021. Johannes Jakob Meyer, Marian Mularski, Elies Gil-Fuster, Antonio Anna Mele, Francesco Arzani, Alissa Wilms, and Jens Eisert. Exploiting symmetry in variational quantum machine learning. PRX Quantum, 4(1):010328, 2023. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT press, 2018. Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 4602 4609, 2019. Elchanan Mossel, Ryan O Donnell, and Rocco P Servedio. Learning juntas. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pp. 206 212, 2003. Published as a conference paper at ICLR 2024 Quynh T Nguyen, Louis Schatzki, Paolo Braccia, Michael Ragone, Patrick J Coles, Frederic Sauvage, Martin Larocca, and M Cerezo. Theory for equivariant quantum neural networks. ar Xiv preprint ar Xiv:2210.08566, 2022. Samet Oymak and Mahdi Soltanolkotabi. End-to-end learning of a convolutional neural network via deep tensor decomposition. ar Xiv preprint ar Xiv:1805.06523, 2018. Ian Parberry. Circuit complexity and neural networks. MIT press, 1994. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary De Vito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024 8035. Curran Associates, Inc., 2019. Arthur Pesah, Marco Cerezo, Samson Wang, Tyler Volkoff, Andrew T Sornborger, and Patrick J Coles. Absence of barren plateaus in quantum convolutional neural networks. Physical Review X, 11(4):041011, 2021. Mircea Petrache and Shubhendu Trivedi. Approximation-generalization trade-offs under (approximate) group equivariance. ar Xiv preprint ar Xiv:2305.17592, 2023. Gilles Pisier. Probabilistic methods in the geometry of banach spaces. In Giorgio Letta and Maurizio Pratelli (eds.), Probability and Analysis, pp. 167 241, Berlin, Heidelberg, 1986. Springer Berlin Heidelberg. ISBN 978-3-540-40955-7. Omri Puny, Derek Lim, Bobak T Kiani, Haggai Maron, and Yaron Lipman. Equivariant polynomials for graph neural networks. ar Xiv preprint ar Xiv:2302.11556, 2023. Michael Ragone, Paolo Braccia, Quynh T Nguyen, Louis Schatzki, Patrick J Coles, Frederic Sauvage, Martin Larocca, and M Cerezo. Representation theory for geometric quantum machine learning. ar Xiv preprint ar Xiv:2210.07980, 2022. Lev Reyzin. Statistical queries and statistical algorithms: Foundations and applications. ar Xiv preprint ar Xiv:2004.00557, 2020. Akiyoshi Sannai, Masaaki Imaizumi, and Makoto Kawano. Improved generalization bounds of group invariant/equivariant deep networks via quotient feature spaces. In Uncertainty in Artificial Intelligence, pp. 771 780. PMLR, 2021. Louis Schatzki, Martin Larocca, Frederic Sauvage, and Marco Cerezo. Theoretical guarantees for permutation-equivariant quantum neural networks. ar Xiv preprint ar Xiv:2210.09974, 2022. Hanie Sedghi, Majid Janzamin, and Anima Anandkumar. 
Provable tensor methods for learning mixtures of generalized linear models. In Artificial Intelligence and Statistics, pp. 1223 1231. PMLR, 2016. Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. The pitfalls of simplicity bias in neural networks. Advances in Neural Information Processing Systems, 33:9573 9585, 2020. Ohad Shamir. Distribution-specific hardness of learning neural networks. The Journal of Machine Learning Research, 19(1):1135 1163, 2018. Andrea Skolik, Michele Cattelan, Sheir Yarkoni, Thomas B ack, and Vedran Dunjko. Equivariant quantum circuits for learning on weighted graphs. npj Quantum Information, 9(1):47, 2023. Jure Sokolic, Raja Giryes, Guillermo Sapiro, and Miguel Rodrigues. Generalization error of invariant classifiers. In Artificial Intelligence and Statistics, pp. 1094 1103. PMLR, 2017. Le Song, Santosh Vempala, John Wilmes, and Bo Xie. On the complexity of learning neural networks. Advances in neural information processing systems, 30, 2017. Published as a conference paper at ICLR 2024 Min Jae Song, Ilias Zadik, and Joan Bruna. On the cryptographic hardness of learning single periodic neurons. Advances in neural information processing systems, 34:29602 29615, 2021. Joel Spencer. Asymptopia, volume 71. American Mathematical Soc., 2014. Bal azs Sz or enyi. Characterizing statistical query learning: simplified notions and proofs. In International Conference on Algorithmic Learning Theory, pp. 186 200. Springer, 2009. Behrooz Tahmasebi and Stefanie Jegelka. The exact sample complexity gain from invariances for kernel regression on manifolds. ar Xiv preprint ar Xiv:2303.14269, 2023. Michel Talagrand. On Russo s Approximate Zero-One Law. The Annals of Probability, 22(3):1576 1587, 1994. doi: 10.1214/aop/1176988612. URL https://doi.org/10.1214/aop/ 1176988612. Leslie G Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134 1142, 1984. Guillermo Valle-Perez, Chico Q Camargo, and Ard A Louis. Deep learning generalizes because the parameter-function map is biased towards simple functions. ar Xiv preprint ar Xiv:1805.08522, 2018. Gal Vardi. On the implicit bias in deep-learning algorithms. Communications of the ACM, 66(6): 86 93, 2023. Santosh Vempala and John Wilmes. Gradient descent for one-hidden-layer neural networks: Polynomial convergence and sq lower bounds. In Conference on Learning Theory, pp. 3115 3117. PMLR, 2019. Saurabh Verma and Zhi-Li Zhang. Stability and generalization of graph convolutional neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1539 1548, 2019. Dian Wang, Robin Walters, and Robert Platt. So(2)-equivariant reinforcement learning. In International Conference on Learning Representations, 2021. Melanie Weber, Manzil Zaheer, Ankit Singh Rawat, Aditya K Menon, and Sanjiv Kumar. Robust large-margin learning in hyperbolic space. Advances in Neural Information Processing Systems, 33:17863 17873, 2020. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4 24, 2020. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? ar Xiv preprint ar Xiv:1810.00826, 2018. Chulhee Yun, Shankar Krishnan, and Hossein Mobahi. A unifying view on implicit bias in training linear neural networks. ar Xiv preprint ar Xiv:2010.02501, 2020. 
Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. Rethinking the expressive power of GNNs via graph biconnectivity. arXiv preprint arXiv:2301.09505, 2023.

Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. Fast learning of graph neural networks with guaranteed generalizability: one-hidden-layer case. In International Conference on Machine Learning, pp. 11268-11277. PMLR, 2020.

Yuchen Zhang, Jason Lee, Martin Wainwright, and Michael I Jordan. On the learnability of fully-connected neural networks. In Artificial Intelligence and Statistics, pp. 83-91. PMLR, 2017.

Kai Zhong, Zhao Song, and Inderjit S Dhillon. Learning non-overlapping convolutional neural networks with multiple kernels. arXiv preprint arXiv:1711.03440, 2017.

Pan Zhou and Jiashi Feng. Understanding generalization and optimization performance of deep CNNs. In International Conference on Machine Learning, pp. 5960-5969. PMLR, 2018.

TABLE OF CONTENTS

A Extended related works
B Extended background into learning frameworks
C Boolean statistical query setting
D Complexity theoretic hardness extension
E GNN hardness proofs
  E.1 Lower bound for Erdős–Rényi distributed graphs
  E.2 Invariant Hermite polynomial
  E.3 Proof of Theorem 4
F Frame-averaging hardness proofs
  F.1 Exponential lower bound for constant frame and Reynolds operator
  F.2 Superpolynomial lower bounds
  F.3 Proof of extra tools
G Learning invariant polynomials
H Experimental Details

A EXTENDED RELATED WORKS

Restricted learning algorithms for CNNs and GNNs. Various works show that convolutional and graph neural networks can be learned efficiently under various assumptions on the form of the network. For given input distributions, Brutzkus & Globerson (2017); Du et al. (2017) give an efficient algorithm for learning a single convolution filter, and Zhong et al. (2017) give a polynomial-time algorithm for learning a sum of polynomially many convolution kernels applied to an input. These can be viewed as a single-hidden-layer network where the last-layer weights are all equal to one. Du & Goel (2018) (improving on Goel et al. (2018)) give a polynomial-time algorithm for learning single-layer convolutional networks (not necessarily invariant) whose filters have stride at least half the filter width. These CNNs have a single channel but are not invariant to the cyclic group, due to the presence of weights in the second layer that vary with each patch of the filter. Similarly, Du et al. (2018a) show that gradient descent can also learn networks of a similar form with non-overlapping patches in the random setting (random weights) with high probability. Oymak & Soltanolkotabi (2018) learn convolutional networks of depth $D$ with a single kernel per layer, under the similar assumption that the stride is at least as large as the filter width. Zhang et al. (2020); Li et al. (2021) provide learning algorithms for a class of one-hidden-layer GNNs via gradient-descent-based algorithms whose sample complexity and runtime depend on factors such as the condition number or norm of the weights of the target function.
These algorithms are only guaranteed to run in polynomial time for restricted instances in which such factors are appropriately bounded.
Learning feedforward neural networks The study of the hardness of learning feedforward neural networks has a rich history dating back many decades. Judd (1987); Blum & Rivest (1988) show that in the proper learning setting, it is NP-complete to find a set of weights of a given network that fits a given training set. These results were later expanded in (Zhang et al., 2017) to more realistic settings. Many works study the hardness of improperly learning the class of ReLU feedforward networks. In the SQ setting, Goel et al. (2018); Diakonikolas et al. (2020); Song et al. (2017) show that at least a superpolynomial number of queries is needed to learn even single-hidden-layer networks in the correlational statistical query model. Chen et al. (2022a) show that networks with two hidden layers are hard even in the general SQ setting, by using the networks to round inputs to Boolean bitstrings and applying hardness results from Boolean learning theory and cryptography. We remark that other papers reduce the task of learning feedforward neural networks to average-case or cryptographically hard problems (Song et al., 2021; Daniely & Vardi, 2020). To sidestep these hardness results and provide proofs of learnability in the PAC setting, a number of works make assumptions on the networks or inputs/outputs to give efficient PAC learning algorithms for single-hidden-layer neural networks (Bakshi et al., 2019; Goel et al., 2018; Shamir, 2018; Sedghi et al., 2016; Vempala & Wilmes, 2019). Such assumptions include bounds on the condition number (Bakshi et al., 2019), approximation guarantees of the network by polynomials (Vempala & Wilmes, 2019), positivity of the second-layer weights (Diakonikolas et al., 2020), and others. In perhaps the most general setting, Chen & Narayanan (2023); Chen et al. (2023) give a polynomial-time algorithm for learning single-hidden-layer ReLU networks with O(1) hidden nodes (i.e., the runtime is exponential in the number of hidden nodes but polynomial in other parameters such as the input size and error).
Random neural network learnability Though the class of neural networks may be challenging to learn, various works study the average-case setting where one is interested in learning random neural networks or random features from neural networks. In fact, prior work has shown that the complexity of such random neural networks, measured via statistical measures, continuity, or robustness, is rather low, though this is by no means a guarantee of learnability (Kalimeris et al., 2019; De Palma et al., 2019; Shah et al., 2020; Valle-Perez et al., 2018). Various hardness results exist for this setting of learning random neural networks. Das et al. (2019) give SQ hardness results for learning neural networks with sign activations, showing that the query complexity increases exponentially with the depth of the network. Daniely & Vardi (2020) show that neural networks with random weights are hard to learn if the input distribution is allowed to depend on the weights, via reductions to random K-SAT. Nevertheless, recent work by Daniely et al. (2023) shows that random constant-depth neural networks with ReLU activation can be learned to error ϵ in time scaling as d^{polylog(1/ϵ)}, where d is the size of the network.
Sample complexity and generalization bounds Separate from the question of the computational complexity of learning is the question of how many samples are needed to learn a function class. Specific to (exactly or approximately) invariant neural network architectures, there exist papers studying the sample complexity of learning convolutional neural networks (Du et al., 2018b; Zhou & Feng, 2018; Cao & Gu, 2019) and graph neural networks (Zhang et al., 2020). Various papers consider how generalization bounds improve when enforcing equivariance. These include generalization bounds based on covering numbers or the complexity of the invariant function space (Sokolic et al., 2017; Petrache & Trivedi, 2023), based on invariant kernel method algorithms (Bietti et al., 2021; Tahmasebi & Jegelka, 2023; Elesedy, 2021), and based on equivariant versions of norm-based PAC-Bayesian generalization bounds (Behboodi et al., 2022). For the group of translations, there also exist papers studying generalization bounds for architectures with convolutional layers, though these architectures are not strictly invariant (Long & Sedghi, 2019; Zhou & Feng, 2018). Similarly, there are various generalization bounds for the class of graph neural networks (Lv, 2021; Verma & Zhang, 2019).
Other related topics Though not directly statements of computational hardness, various papers give no-go theorems for learning function classes by studying expressivity limitations of graph neural networks (Jegelka, 2022; Wu et al., 2020; Loukas, 2019). These expressivity results exhibit sets of functions that a graph neural network cannot express and thus, by definition, also cannot learn to arbitrary accuracy. The most well-known results concern the limitations of graph neural networks in distinguishing non-isomorphic graphs, as captured by the Weisfeiler-Lehman hierarchy (Xu et al., 2018; Morris et al., 2019; Geerts & Reutter, 2022). Other works show expressivity limitations of GNNs by studying their power in expressing invariant polynomials (Puny et al., 2023), identifying graph biconnectivity (Zhang et al., 2023), counting substructures (Chen et al., 2020), and through various other means.
Various works study the implicit bias of neural networks, primarily to help explain why neural networks can learn in overparameterized settings (Vardi, 2023; Chizat & Bach, 2020; Ji & Telgarsky, 2020). The aim of these works is to show that gradient descent converges to specific regularized functions that fit a given training set. Implicit bias results have been proven for linear (but multi-layer) classes of convolutional neural networks (Gunasekar et al., 2018; Yun et al., 2020; Le & Jegelka, 2022) and for equivariant neural networks (Lawrence et al., 2021; Chen & Zhu, 2023). Generally, these results show that gradient descent on the parameter space of such neural networks is implicitly regularized under some norm or semi-norm depending on properties of the group. Extending these results to more realistic settings is likely challenging.
In Boolean learning theory, there are various results on the problem of learning a symmetric k-junta (Mossel et al., 2003; Lipton et al., 2005; Kolountzakis et al., 2005). Here, we are promised that the Boolean function only depends on k variables, and within that k-variable subset, the function is symmetric and thus only depends on the number of 1s in that subset. For this problem, Kolountzakis et al.
(2005) achieve a runtime of n^{O(k log k)}, which is polynomial in n for fixed k. Note that the functions we study in the Boolean setting are assumed to be symmetric over the whole input space, not just on a size-k subset of the input bits, which drastically simplifies the problem. In fact, the hardness of the symmetric k-junta learning problem largely arises from the challenge of finding which k bits are in the support of the Boolean function.
Finally, in quantum machine learning, there has been recent interest in studying quantum variational algorithms or quantum neural networks that are constrained under symmetries (Meyer et al., 2023; Ragone et al., 2022; Skolik et al., 2023; Cong et al., 2019). Some recent theoretical work has quantified the hardness and sample complexity associated with these models, which are generally linear though acting on high-dimensional Hilbert spaces (Pesah et al., 2021; Schatzki et al., 2022; Anschuetz & Kiani, 2022; Nguyen et al., 2022; Anschuetz et al., 2022). Since these quantum models typically operate in a different regime than classical models and are focused on quantum tasks, they are out of scope for our current work.
B EXTENDED BACKGROUND INTO LEARNING FRAMEWORKS
There are various models used to rigorously quantify the hardness of learning. We recommend the review in Reyzin (2020) for an overview of the SQ formalism and its connections to other learning models. Here, we briefly describe the PAC model and expand on the background for the SQ framework from the main text. Perhaps the most widely used framework for quantifying hardness of learning is the probably approximately correct (PAC) model (Valiant, 1984).
Definition 10 (PAC Learning (Mohri et al., 2018)). A concept class C is PAC-learnable if there exists an algorithm such that for any ϵ > 0 and δ > 0, for all distributions D on X, and for any target concept c ∈ C, the algorithm takes at most m = poly(1/ϵ, 1/δ, n) samples drawn i.i.d. as (x, c(x)) with x ∼ D, and returns a function f satisfying ∥f − c∥_D ≤ ϵ with probability at least 1 − δ. If the algorithm runs in time at most poly(1/ϵ, 1/δ, n), then C is efficiently PAC learnable. The algorithm is a proper learner if it returns f ∈ C, and is otherwise called an improper learner.
Proving computational hardness in the PAC model typically requires reducing learning tasks to cryptographically or complexity-theoretically hard problems, and unconditional hardness results for learning with neural networks would imply a separation of complexity classes such as P ≠ NP. Relatedly, Abbe & Sandon (2023) show that the task of training neural networks via gradient descent is P-complete, i.e., any poly-time algorithm in P can be reduced to the task of training a given poly-sized neural network with stochastic gradient descent.
Given the above, the statistical query formalism sidesteps these challenges by constraining algorithms to be composed of a set of noisy queries. Perhaps the most important restriction in the SQ framework is that algorithms do not have access to individual data points, but only to noisy queries averaged across the input/output distribution. The classic example of an efficient non-SQ algorithm is Gaussian elimination for learning parities (Blum et al., 1994). Nevertheless, the SQ formalism naturally captures most algorithms used in practice, including gradient descent, as shown for example in Example 1 of the main text for gradients of the mean squared error loss.
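To make this last point concrete, the following is a minimal NumPy sketch of ours (not from the paper) showing how each coordinate of the mean-squared-error gradient splits into a label-free term plus a correlational statistical query of the form E[y · h(x)]; the toy linear predictor and the tolerance value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_theta(theta, X):
    # Toy predictor: a single linear neuron (illustrative choice only).
    return X @ theta

def grad_f_theta(theta, X):
    # Per-sample gradient of the predictor w.r.t. theta (one row per sample).
    return X

def csq_estimate(h_vals, y, tau):
    # Simulate a correlational statistical query: return E[y * h(x)] up to
    # an adversarial perturbation of magnitude at most tau (random here).
    return np.mean(y * h_vals) + rng.uniform(-tau, tau)

# Synthetic Gaussian data labeled by a hidden teacher.
n, d, tau = 5000, 10, 1e-3
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star

theta = np.zeros(d)
preds = f_theta(theta, X)
G = grad_f_theta(theta, X)

# Gradient coordinate j of (1/2) E[(f_theta(x) - y)^2]:
#   E[f_theta(x) * df/dtheta_j]   (label-free, computable by the learner)
# - E[y * df/dtheta_j]            (a correlational statistical query).
label_free = np.mean(preds[:, None] * G, axis=0)
correlational = np.array([csq_estimate(G[:, j], y, tau) for j in range(d)])
grad_mse = label_free - correlational

# Sanity check: close to the exact sample gradient up to the query noise.
print(np.allclose(grad_mse, (preds - y) @ X / n, atol=1e-2))
```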
Published as a conference paper at ICLR 2024 For boolean functions, correlational statistical queries can capture any general statistical query since any query can be split into its +1 or 1 output cases (Reyzin, 2020). For example, assume one performs an SQ query with query function q : X { 1, +1} [ 1, +1], then E(x,y) D [g(x, y)] = E g(x, +1)1 + y 2 + g(x, 1)1 y which decomposes into two correlational statistical queries and two functions independent of the target. We remarked in the main text that generalizing real-valued hardness results proven in the CSQ framework to the SQ framework is in general challenging. In fact, Vempala & Wilmes (2019) more rigorously underscored this challenge (see their Proposition 4.1) by providing a rather unnatural, yet efficient, SQ algorithm that learns any finite real-valued function class satisfying a non-degeneracy assumption that the set of points f(x) = g(x) is measure zero for any pair of functions f, g in the class. Namely, they show the following no-go theorem. Theorem 11 (Limitations on SQ hardness for real-valued functions (Vempala & Wilmes, 2019)). Given a concept class C of functions f : X R of size |C| where for all f, g C the set of inputs where f(x) = g(x) is measure zero, the family C can be learned with O(log |C|) queries to SQ with a constant error tolerance τ. Proof. We aim to choose a query g : C [ 1, 1] which maps the target function f C to a unique value in the range [ 1, 1]. For our target function f C, let us call this value v. To specify the form of this query, it suffices to send each pair (x, f(x)) to its corresponding unique value in [ 1, +1], i.e. output v for every input (x, f(x)). Since outputs of disjoint functions in C are equal only on a support of measure zero, this mapping can be performed with probability one. Thus, we can query the value v and receive an output, say v up to error τ. Repeating this procedure iteratively to only include functions in the tolerance range [v τ, v + τ], we can select the target function after O(log |C|) queries. Loosely, the above theorem shows that real-valued classes of functions are only hard in SQ settings when there exist sufficiently many hard functions with finitely many outputs. Chen et al. (2022a) achieved such a result for two-hidden layer networks by constructing networks that round inputs to the nearest boolean bitstring. C BOOLEAN STATISTICAL QUERY SETTING As discussed in Sec. 3, the Boolean function setting provides a nice illustration of the types of results obtainable via SQ lower bounds. For further background into the boolean SQ setting, we recommend the review in Reyzin (2020). Here, we will prove Theorem 2 restated below for convenience. Theorem 2 (Boolean SQ hardness). For a given symmetry group G with representation ρ : G GL({ 1, +1}n), let p Oρ := P 2n 2 1/2 and let Hρ be the class of symmetric Boolean functions, defined as Hρ = {f : { 1, +1}n { 1, +1} : g G, x { 1, +1}n : f(ρ(g) x) = f(x)} . (4) Any SQ learner capable of learning Hρ up to sufficiently small classification error probability ϵ (ϵ < 1/4 suffices) with queries of tolerance τ requires at least τ 2 p Oρ 2/2 queries. For a simple proof of the above statement, we will follow the proof technique in Appendix B of Chen et al. (2022a). The above statement will follow as a consequence of the pairwise independence of functions in Hρ defined below. Definition 12 (Boolean Case of Definition C.1 of Chen et al. (2022a)). Let C be a hypothesis class mapping { 1, +1}n to { 1, +1}, and let D be a distribution on X. 
C is a (1 η)-pairwise independent function family if with probability 1 η over the choice of x, x drawn independently from D, the distribution of (f(x), f(x )) for f drawn uniformly at random from C is the product distribution Unif({ 1, +1}2). Published as a conference paper at ICLR 2024 Pairwise independent function families offer a convenient means to bound the number of queries needed to distinguish the outputs of a given unknown function f versus uniformly random outputs from the function class. Theorem 13 (Theorem C.4 of Chen et al. (2022a)). Let function class C mapping { 1, +1}n to { 1, +1} be a (1 η)-pairwise independent function family w.r.t. a distribution D on { 1, +1}n. For any f C, let Df denote the distribution of (x, f(x)) where x D. Let DUnif(C) denote the distribution of (x, y) where x D and y = f(x) for f Unif(C). Any SQ learner able to distinguish the labeled distribution Df for an unknown f C from the randomly labeled distribution DUnif(C) using bounded queries of tolerance τ requires at least τ 2 2η such queries. Finally, we are ready to prove Theorem 2. Proof of Theorem 2. Since the symmetric function is constant on orbits, we can represent any symmetric function as f : On { 1, +1} and the correlation between two functions f, g can be written as f, g SQ = E [fg] = 2 n X Ok Oρ |Ok|f(x Ok)g(x Ok). (21) The probability p Oρ : Oρ [0, 1] of a uniformly random bitstring falling in orbit OK Oρ is equal to p Oρ(Ok) = |Ok|/2n. Note that the distribution of (f(x O), f(x O )) is equal to the uniform distribution whenever x O = x O since we have a uniform distribution over all possible symmetric functions. This occurs with probability 2 = p Oρ 2. (22) Therefore, Hρ is a (1 η)-pairwise independent function family with parameter η as above. To apply Theorem 13, note that for a given function f Hρ, to distinguish Df from DUnif(Hρ), it suffices to return a function f that has classification error at most 1/4 with respect to f . Since distinguishing the two distributions requires τ 2 2η queries, we arrive at the final result. D COMPLEXITY THEORETIC HARDNESS EXTENSION In this section, we would like to show a reduction from the task of training an invariant neural network on a given dataset to a previously NP hard problem. This extends the classic result of Blum & Rivest (1988) showing that there exists a sequence of datasets such that determining whether or not a 3-node neural network can fit that dataset is hard. Though we consider extending this to a simple graph neural network here for simplicity, we remark that we chose this only for convenience and similar extensions can be found for other classes of networks. The particular NP hard problem we consider is the decision version of the learning halfspaces with noise problem (Guruswami & Raghavendra, 2009; Feldman et al., 2012). Problem 1 (Learning halfspaces with noise (Guruswami & Raghavendra, 2009)). Given a set of N labeled examples {xi, yi}N i=1 where xi {0, 1}n and yi { 1, +1} and constants δ, ϵ where 0 < δ ϵ < 1, we distinguish between the following two cases: Case 1: There exists a vector v Rn, constant θ R, and set S [N] of size at least |S| (1 δ)N such that yi ( v, xi θ) 0 for all i S. Case 2: For all v Rn, θ R, and sets S [N] such that |S| (1 ϵ)N, there exists some i S such that yi ( v, xi θ) < 0. Consider the following class of message passing GNNs with a single hidden layer and k channels which acts on an adjacency matrix A {0, 1}n n and node features x Rn: fa,b,c(x, A) = c + i=1 1 relu (x + ai Ax) bi. 
(23) Published as a conference paper at ICLR 2024 We will map the below problem of fitting the weights a, b, c of the GNNs above to a given dataset consisting of a set of graphs and their corresponding outputs. Problem 2. Given a labeled set of graphs, node features, and outputs {Ai, xi, yi}N i=1 where Ai {0, 1}n n, xi Rn and yi { 1, +1} and constants δ, ϵ where 0 < δ ϵ < 1, distinguish between the following two cases: Case 1: There exists weights a1, . . . , ak, b1, . . . , bk, c R and set S [N] of size at least |S| (1 δ)N such that yifa,b,c(xi, Ai) 0 for all i S. Case 2: For all weights a1, . . . , ak, b1, . . . , bk, c R, and sets S [N] such that |S| (1 ϵ)N, there exists some i S such that yifa,b,c(xi, Ai) < 0. Proposition 14 (NP hardness of GNN training). Training the GNN in Eq. (23) to solve Problem 2 is NP hard. Proof. We show that solving Problem 2 is equivalent to solving Problem 1 which is previously shown to be NP hard (Guruswami & Raghavendra, 2009; Feldman et al., 2012). Given a dataset {xi, yi}N i=1 for Problem 1 over n-bit Boolean strings, we map this to a dataset over graphs {Ai, x i, y i}N i=1 of Pn k=1 k = n(n + 1)/2 nodes where x i = 1 for all inputs and yi = y i are the same for both datasets. Adjacency matrix Ai is constructed by including a disconnected kclique (including self loops) in the graph whenever [xi]k = 1. Since there are n(n + 1)/2 nodes, there are sufficient nodes to construct any such graph. E.g., for x = (1, 0, 1) , a corresponding adjacency matrix is 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 We also assume that the GNNs have k = n channels. In this setting, we have that for any i [N] fa,b,c(x i, Ai) = c + j=1 1 relu (x i + aj Aix i) bj ℓ=1 ℓ1[ ℓ clique Ai] relu( 1 ℓaj), where 1[ ℓ clique Ai] is the indicator function for whether or not a ℓ-clique is in the graph Ai. The above is clearly a linear function in the vector ri {0, 1}n where [ri]ℓ= 1[ ℓ clique Ai]. Furthermore, the graph is chosen so that ri = xi. Thus, any choice of weights a, b, c corresponds to a halfspace for xi. It only remains to be shown that any such halfspace can be written in the form of Eq. (25). Note that fa,b,c(xi, Ai) = c + [b1 b2 bn] Mri, (26) where c is an updated constant and relu( 1 a1) 2 relu( 1 2a1) n relu( 1 na1) relu( 1 a2) 2 relu( 1 2a2) n relu( 1 na2) ... ... ... ... relu( 1 an) 2 relu( 1 2an) n relu( 1 nan) Setting ak = (1/2 k) 1 makes the above matrix have non-zero entries on the diagonal and every entry above the diagonal. Such a matrix is invertible by Gaussian elimination so every possible halfspace can be constructed by proper choice of the weights b1, . . . , bn, c . Published as a conference paper at ICLR 2024 E GNN HARDNESS PROOFS E.1 LOWER BOUND FOR ERD OS R ENYI DISTRIBUTED GRAPHS Here, we will consider a model over graphs of n nodes where adjacency matrices A Unif({0, 1}n n) are drawn uniformly from all possible directed graphs. This distribution can be viewed as an Erd os R enyi G(n, p) over directed graphs including self edges with p = 0.5. Node features are fixed in this model to x Rn where we set x = 1 in all instances (i.e., node features are trivial). Note, that we consider Boolean functions here w.l.o.g. as {0, 1} valued as opposed to { 1, +1} valued as in previous sections since using {0, 1} valued functions in this section makes the story easier to follow. Our aim is to prove Theorem 3 copied below. Theorem 3 (SQ hardness of HER,n). 
Any SQ learner capable of learning HER,n up to classification error probability ϵ sufficiently small (ϵ < 1/4 suffices) with queries of tolerance τ requires at least Ω τ 2 exp(nΩ(1)) queries. The function class HER,n that achieves the desired lower bound are Boolean valued functions that depend on the counts of the degrees or number of neighbors of the nodes. To describe this class, we first introduce some notation. For a given graph A {0, 1}n n, let d A [n]n be a vector with entry [d A]i equal to the number of outgoing edges (or degree) for node i. From here, let c A [n]n+1 denote the count of the degrees where entry [c A]i counts the number of nodes that have i 1 outgoing edges in A or more formally k=1 1 [[d A]k = i 1] . (28) Note that c A is a length n + 1 vector since the number of outgoing edges can be anything from {0, 1, . . . , n}. Also note that this vector is an invariant vector as permuting the nodes leaves the counts invariant. Given a subset S [n + 1] and a bit b {0, 1}, the functions we aim to learn take the form g S,b(A) = b + X i S [c A]i mod 2, (29) which are a sort of parity function supported over the elements of the degree counts c A in S. Therefore, our hypothesis class H consists of 2n+2 functions: HER,n = {g S,b : S [n + 1], b {0, 1}}. (30) First, we will show below that the class HER,n is exponentially hard to learn in the SQ setting. Later in Lemma 15, we show that the functions in H can be constructed by the GNN architecture thus completing our proof. Proof of Theorem 3. To simplify the function g S,b even further, let bc A {0, 1}n+1 be a boolean valued version of c A where entry [bc A]i = [c A]i mod 2. Then the function g S,b is equivalently a parity function in bc A: g S,b(A) = b + X i S [c A]i mod 2 = ( 1)b Y i S ( 1)[bc A]i. (31) We will prove an SQ lower bound using Theorem 13 based on the (1 η)-pairwise independence property of the function class HER,n in Definition 12. Namely, note that for two graphs A, A {0, 1}n n, if bc A = bc A , then the distribution of (g S,b(A), g S,b(A )) for g S,b drawn uniformly from HER,n is equal to the product distribution Unif({0, 1}2). This follows from the fact that g S,b(A) = ( 1)b Q i S( 1)[bc A]i is a parity function of the entries in bc A. Since HER,n spans uniformly over all possible parity functions in {0, 1}n+1, the pairwise independence properties follows from this whenever bc A = bc A . Thus, the parameter 1 η in the pairwise independence is equal to the probability that bc A = bc A . We will establish that this probability is at least 1 O(exp( nΩ(1))) at the end of this proof thus Published as a conference paper at ICLR 2024 guaranteeing η 1 = Ω exp(nΩ(1)) . Once this is established, we can apply Theorem 13 and note that for a given function f HER,n, to distinguish Df from DUnif(Hρ) as in Theorem 13, it suffices to return a function f that has classification error at most 1/4 with respect to f . Since distinguishing the two distributions requires τ 2 2η queries, we arrive at the final result by noting that η 1 = Ω exp(nΩ(1)) . To finish this proof, we must establish that PA Unif({0,1}n n)[bc A = bc A ] 1 O(exp( nΩ(1))). (32) We will show this result by proving that for some constant C > 0 and p < 1/2, for large enough n, there are at least Cnp entries of bc A such that the probability of those entry being even or odd is at most 1/2 + o(1). To continue, let us denote this region p = {0, 1, . . . , n} [n/2 Cnp, n/2 + Cnp], p < 1/2, (33) as the set of integers within the range [n/2 Cnp, n/2 + Cnp]. 
This range is chosen to coincide with the largest entries of c A. The degree of any node is independently distributed as binomial with distribution bin(n, 0.5). To asymptotically expand i around the peak of the binomial coefficient, let us index i = n/2 + r for given r such that i p. Therefore, for any such n/2 + r = i p, we have that any given node j [n] has degree dk = i with probability P[dk = i] = 2 n n n/2 + r = 2 πn(1 + O(1/n)). (34) In the above, we use the asymptotic approximation (Spencer, 2014) for the binomial coefficient around its peak n n/2 of = (1 + O(1/n)) 2 πn2n exp r2 2n + O(r3/n2) . (35) Thus, the marginal distribution of any entry i p of [c A]i bin n, 2 πn(1 + O(1/n)) is also binomial distributed with probability converging to 2 πn. For entry i p, this implies that E[c A]i = O( n). From the Chernoff concentration bound for the binomial distribution, we also have that for any i p and for any δ > 0: P h |[c A]i E[c A]i| n1/4+δi 2 exp n1/2+2δ(E[c A]i) 1 = 2 exp Ω(n2δ) . (36) By a union bound over all i p, we therefore have that with exponentially high probability, |[c A]i E[c A]i| n1/4+δ for all i p. Summarizing the previous results, we have that there are 2Cnp entries of [c A] where for any entry i p, it holds that for some arbitrarily small δ > 0 2 π n(1 + O(1/n)) n1/4+δ [c A]i 2 π n(1 + O(1/n)) + n1/4+δ. (37) Since the probability that [c A]i is equal to any number m is at most O(n 1/4), the probability that [c A]i is odd is at most 1/2+O(n 1/4). We now aim to show that conditioned on other entries of c A, the probability that a given entry of c A is odd or even is roughly independent of this conditioning. Given a set of known values Sknown p equal to [c A]k = mk for k Sknown, the marginal distribution of any entry of [c A]j for j / Sknown conditioned on [c A]k for k Sknown is also binomial. We now estimate the parameters of this binomial distribution. Given the convergence guarantee in Eq. (37), P k Sknown[c A]k = O(|Sknown|n1/2) = O(n1/2+p) which is o(n) since p < 1/2. Therefore for any set Sknown p, there are at least n o(n) remaining nodes whose degree or number of outgoing edges are left to distribute. These remaining nodes cannot have degrees whose values fall in Sknown, but note that since |Sknown| O(np) and p < 1/2, there are Published as a conference paper at ICLR 2024 still Ω( n) buckets (values of the possible degrees) remaining each with probability proportional to Θ(1/ n) by Eq. (35). Therefore, from the convergence guarantee of Eq. (37), we have ([c A]j | [c A]k = mk, k Sknown) bin n o(n), Θ(1/ n) . (38) Thus, the conditional probability that [c A]j is equal to any number m is at most O(n 1/4), and this implies that the probability that [c A]j is odd is at most 1/2 + O(n 1/4) even when conditioned on entries in Sknown. Finally, let us calculate the desired probability that bc A = bc A . For ease of notation, let us index the elements of p as r1, . . . , r N. We have for large enough n that P[bc A = bc A ] P [[bc A]r1 = [bc A ]r1, . . . , [bc A]r N = [bc A ]r N ] = P [[bc A]r1 = [bc A ]r1] P [bc A]r2 = [bc A ]r2 | [bc A]r1 = [bc A ]r1 P [bc A]r N = [bc A ]r N | [bc A]r1 = [bc A ]r1, . . . , [bc A]r N 1 = [bc A ]r N 1 2 + O(n 1/4) | p| = O(exp( Ω(| p|))) = O(exp( Ω(np))). Since PA Unif({0,1}n n)[bc A = bc A ] = 1 PA Unif({0,1}n n)[bc A = bc A ], this completes the proof. Finally, we need to show that the functions g S,b can be constructed as Re LU networks. 
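Before turning to that construction, here is a short NumPy sketch of ours (not part of the proof) that spells out the class HER,n concretely: it computes the degree counts c_A, evaluates one parity g_{S,b}, and empirically checks that two independent uniform directed graphs rarely agree on the parity vector bc_A, which is the collision event driving the pairwise-independence argument above. The support S and bit b are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def degree_counts(A):
    # c_A[i] = number of nodes with exactly i outgoing edges, i = 0, ..., n.
    n = A.shape[0]
    return np.bincount(A.sum(axis=1).astype(int), minlength=n + 1)

def g(A, S, b):
    # Parity of the degree counts indexed by S, offset by the bit b (Eq. (29)).
    return (b + degree_counts(A)[list(S)].sum()) % 2

n = 64
S, b = [3, 10, 31], 1          # arbitrary illustrative choices
A = rng.integers(0, 2, size=(n, n))
print("g_{S,b}(A) =", g(A, S, b))

# Pairwise independence hinges on two independent uniform graphs rarely
# sharing the parity vector bc_A = c_A mod 2; estimate that collision rate.
trials, collisions = 2000, 0
for _ in range(trials):
    A1 = rng.integers(0, 2, size=(n, n))
    A2 = rng.integers(0, 2, size=(n, n))
    collisions += np.array_equal(degree_counts(A1) % 2, degree_counts(A2) % 2)
print("empirical collision rate:", collisions / trials)
```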
Before proceeding to give this construction, we should note that it is known that any Boolean function that can be computed in time O(T(n)) can also be expressed by a neural network of size O(T(n2)) (Parberry, 1994; Shamir, 2018). Our construction will be specific to the GNN class we consider and show that the boolean functions in g S,b are similarly efficiently constructible. Lemma 15 (g S,b as GNN). For a GNN f( ) of the form of Eq. (5) with hidden layer widths equal to k1 = O(n) and k2 = O(n), there exist weights a, b Rk1, u, v Rk2, W Rk2 k1 such that for any S [n + 1], b {0, 1}, g S,b(A) = f(A; a, b, u, v, W ) for all inputs A {0, 1}n n. Proof. Let us fix a given S [n + 1], b {0, 1}. The function we want to represent is g S,b(A) = b + X i S [c A]i mod 2, (40) where as a reminder, [c A]i counts the number of nodes that have i 1 outgoing edges (see Eq. (28)). For the construction, we will set k1 = k2 = n + 1. As a reminder, the two layers of the GNN take the form: [f (1) a,b(A)]i = 1 n σ (ai + bi Ax) (output of channel i [k1]) f (2) u,v,W (h) = i=1 uiσ( W:,i, h + vi). (41) Throughout this construction, the node features x = 1 are constant. For the first layer, we will choose ai = 2 i and bi = 1 for all i. This first layer can only be a function of d A = A1 where entry i is a count of the number of outgoing edges. Since entries in d A take values in {0, 1, . . . , n}, we can equivalently write this first layer output as a function of the counts c A. In fact, this first layer is linear as a function of c A and equal to 1 2 n n + 1 0 1 n 1 n ... ... ... ... ... 0 0 1 2 0 0 0 1 Published as a conference paper at ICLR 2024 0 2 4 6 8 x h(x) for n = 8 Figure 2: Sample form of function h(x) used in constructing g S,b(A) as a GNN. For the construction, we will have x = P where the matrix M is clearly full rank and invertible since it strictly positive along the diagonal and in entries above the diagonal. Our aim in the second layer is to take the sum b + P i S[c A]i and take its parity (i.e. mod 2). To achieve this via sums of Re LUs σ we can consider the function h( ) of the form below: h(x) = σ(x + b) 2σ(x + b 1) + 2σ(x + b 2) + ( 1)nσ(x + b n). (43) A sample version of this plot is in Fig. 2. Setting x = b + P i S[c A]i achieves the end goal with O(n) hidden nodes. To provide actual values for the second layer parameters, let 1S 0, 1n+1 denote the vector where entry i is equal to 1 if i S and equal to 0 otherwise. Then, in the second layer, we have W:,i = (M 1) 1S for all columns i [n + 1] a1 = 1 and ai = 2( 1)i 1 for all i {2, . . . , n + 1} vi = b i + 1 for all i [n + 1] Using the values above, we have f (2) u,v,W (h) = σ( (M 1) 1S, Mc A + b) + i=2 ( 1)i 12σ( (M 1) 1S, Mc A + b i + 1) i=2 ( 1)i 12σ i S [c A]i + b i + 1 = h ([c A]i) . (44) i S[c A]i only takes values in {0, 1, . . . , n}, the above is supported on all of its inputs and this completes the proof. E.2 INVARIANT HERMITE POLYNOMIAL Multivariate Hermite polynomials To handle the graph setting with scaling feature dimension, we first extend the technical machinery related to Hermite polynomials to the invariant settings we consider. Recall from Diakonikolas et al. (2020) that the d-variate normalized Hermite polynomial is defined for every multi-index J = (J1, . . . , Jd) Nd as: HJ : Rd R : (x1, x2, . . . , xd) 7 j=1 HJj(xj), Hi : R R : x 7 Hei(x) i! , i N, (45) Published as a conference paper at ICLR 2024 where Hei is the usual probabilist Hermite polynomial. 
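As a quick numerical sanity check on the normalization in Eq. (45) (our own sketch, not from the paper), the normalized polynomials H_i = He_i/√(i!) are orthonormal under the standard Gaussian, and the multivariate products H_J inherit this orthonormality coordinate-wise; the specific multi-indices below are arbitrary examples.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def H(i, x):
    # Normalized probabilists' Hermite polynomial He_i(x) / sqrt(i!).
    coeffs = np.zeros(i + 1)
    coeffs[i] = 1.0
    return hermeval(x, coeffs) / math.sqrt(math.factorial(i))

def H_multi(J, X):
    # H_J(x) = prod_j H_{J_j}(x_j), evaluated row-wise on X of shape (samples, d).
    vals = np.ones(X.shape[0])
    for j, Jj in enumerate(J):
        vals *= H(Jj, X[:, j])
    return vals

# Monte Carlo check of orthonormality under N(0, I_d):
# E[H_J H_K] should be close to 1 if J == K and close to 0 otherwise.
rng = np.random.default_rng(0)
d, samples = 2, 200_000
X = rng.standard_normal((samples, d))
J, K = (2, 1), (1, 2)
print("E[H_J^2] ~", np.mean(H_multi(J, X) ** 2))
print("E[H_J H_K] ~", np.mean(H_multi(J, X) * H_multi(K, X)))
```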
Graph-invariant Hermite polynomial Fix a graph G on n vertices and let A := A(G) Rn n be a graph shift operator (say the Laplacian). Define, for any multi-index J Nd: HA J : Rn d R : X 7 1 n v=1 HJ((AX)v). (46) We have the following orthogonality property: Lemma 16 (Orthogonality of invariant Hermite polynomial). For a fixed A Rn n such that Pn u=1 A2 vu = 1 for all v [n], HA J , HA J L2(N) = δJ=J c J for all J, J Nd\{0} for some c J Θ(1) that depends only on A. Proof. Let Σ = Σ(ρ) = 1 ρ ρ 1 for some ρ ( 1, 1). We will also use Mehler s formula (Kibble, 1945) which states: p N(0,Σ)(x, y) p N (x)p N (y) = m=0 ρm Hm(x)Hm(y), (47) where p N(0,Σ) is the pdf of the bivariate normal distribution with mean 0 and covariance Σ and p N is the pdf of the univariate standard normal distribution. We first show orthogonality of Hermite polynomials evaluated on dependent random variables. For all i, j N: E(x,y) N2(0,Σ)[Hi(x)Hj(y)] = Z R Hi(x)Hj(y)p N(0,Σ)(x, y)dxdy (48) R Hi(x)Hj(y)p N (x)p N (y) p N(0,Σ)(x, y) p N (x)p N (y) dxdy (49) R Hi(x)Hj(y) m=0 ρm Hm(x)Hm(y)d P(x)d P(y) (50) R Hi(x)Hm(x)d P(x) Z R Hj(y)Hm(y)d P(y) m=0 ρmδi=mδj=m (52) = ρiδi=j. (53) We are now ready to present the proof of Lemma 16. Pick J, J Nd\{0}. We have: HA J , HA J L2(N) = 1 v,v [n] EX N n d[HJ((AX)v)HJ ((AX)v )] (54) v,v [n] EZ P [HJ(zv)HJ (zv )], (55) where zv is the v-th row of Z Rn d and P is a multivariate normal distribution over Rn d with mean 0 and covariance: Cov(Zvj, Zv j ) = 0, if j = j Pn u=1 Avu Av u =: ρvv , otherwise. (56) Furthermore, for any v and any j = j , Zvj is independent of Zv j . This is because columns of X are independent from one another. Note that when v = v and j = j the variance is Pn u=1 A2 vu = 1 by our assumption on A. Published as a conference paper at ICLR 2024 By the dependence structure of P, one can write: EZ P [HJ(zv)HJ (zv )] = j=1 EZ P [HJj(Zvj)HJ j(Zv j)] (57) j=1 E(Zvj,Zv j) N(0,Σ(ρvv ))[HJj(Zvj)HJ j(Zv j)] (58) j=1 ρJi vv δJi=J i (59) = ρ|J| vv δJ=J . (60) HA J , HA J L2(N) = δJ=J 1 v,v [n] ρ|J| vv = δJ=J c J, (61) where c J = 1 v,v [n] ρ|J| vv . Remark 17. The quantity ρvv deserves an explanation. Essentially, ρvv measures how similar two nodes v and v are, in term of composition of its incoming edges. Algebraically, it is the (vv )- th entry of the gram matrix AA . We can get the correct normalization of A in the premise of Lemma 16 by starting with a graph G, adding a self-loop to each node of G and set A = (D ) 1 2 M where M is the adjacency matrix of G (with self-loops) and D is the in-degree matrix, we can ensure that ρvv [0, 1] for each v = v : u=1 Avu Av u = |Nv Nv | p deg(v) deg(v ) 1, (62) where Nv is the set of vertices with an incoming edge to v [n]. This immediately gives a bound max J c J n2. Finally, it is also worth noting that c J does not vanish under the above normalization, since adding self-loops before the normalization to G ensures that ρvv = 1 for all v [n] even if v was an isolated vertex in the original graph. Therefore, without loss of generality, we can assume that c J Θ(1). It is possible to use a different normalization, such as the Laplacian, as long as one can ensure that ρvv , c J = Θ(1). The requirement that rows of A sums to 1 is also not strictly necessary but does greatly simplify our exposition. Note that we are not claiming a complete orthonormal basis for the space L2(Rn d, N n d). However, all of the functions we care about will lie in H(d) := span HA J J Nd. 
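To illustrate the definition in Eq. (46) and the normalization discussed in Remark 17, the short sketch below (ours, not from the paper) builds a small graph with self-loops, row-normalizes its adjacency matrix so that each row of A has unit squared norm, and checks numerically that H^A_J is unchanged under a joint relabeling of the nodes; the graph size and multi-index are arbitrary choices.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)

def H(i, x):
    # Normalized probabilists' Hermite polynomial He_i / sqrt(i!).
    coeffs = np.zeros(i + 1)
    coeffs[i] = 1.0
    return hermeval(x, coeffs) / math.sqrt(math.factorial(i))

def H_A_J(A, J, X):
    # Graph-invariant Hermite polynomial: (1/sqrt(n)) * sum_v prod_j H_{J_j}((AX)_{v,j}).
    AX = A @ X
    vals = np.ones(A.shape[0])
    for j, Jj in enumerate(J):
        vals *= H(Jj, AX[:, j])
    return vals.sum() / math.sqrt(A.shape[0])

# Small undirected graph with self-loops; rows of A are normalized so that
# sum_u A_{vu}^2 = 1, matching the premise of Lemma 16.
n, d = 6, 2
M = rng.integers(0, 2, size=(n, n))
M = np.triu(M, 1)
M = M + M.T + np.eye(n, dtype=int)
A = M / np.sqrt(M.sum(axis=1, keepdims=True))

X = rng.standard_normal((n, d))
J = (1, 2)

# Invariance: relabeling the nodes (conjugating A and permuting the rows of X)
# leaves H^A_J unchanged.
perm = rng.permutation(n)
P = np.eye(n)[perm]
lhs = H_A_J(A, J, X)
rhs = H_A_J(P @ A @ P.T, J, P @ X)
print(abs(lhs - rhs) < 1e-10)
```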
For a fixed n and a function f H(d), we define ˆf J = f, HA J L2(N) and the degree m part of f: f [m](X) = P |J|=m ˆf JHA J (X). Facts about gradients of Hermite polynomial also carry over to invariant Hermite polynomials: Lemma 18. For a fixed graph G on n vertices and corresponding A, we have: 1. Xvj HA M(X) = 1 n Pn v =1 p Mj HM Ej((AX)v )Av v, for all v [n], j [d], M Nd, where Ej is the vector with 1 at the j -th position and 0 everywhere else. 2. EX N n d[ HA M(X), HA L (X) ] = δM=L |M|c M. 3. EX N n d[ l HA M(X), l HA L (X) ] = δM=L |M|(|M| 1) . . . (|M| l + 1)c M. In the above, c M = n 1 Pn v ,v =1 ρ|M| v v . Published as a conference paper at ICLR 2024 Proof. 1. We have: Xvj HA M(X) = 1 n v =1 ( HM((AX)v ))j Av v (63) Mj HM Ej((AX)v )Av v, (64) where Ej is the vector with 1 at the j -th position and 0 everywhere else. The second equality is a known fact for the multivariate Hermite polynomials (See Appendix C of Diakonikolas et al. (2020)). 2. Sum the above over all pairs v, v and take expectation to get: EX N n d[ HA M(X), HA L (X) ] (65) 1 n EX N n d v ,v =1 ( HM((AX)v ))j Av v ( HL((AX)v ))j Av v By the orthogonality property of Hermite multivariate polynomial under dependent multivariate Gaussian (equation 60), we can compute: EX N n d[HM Ej((AX)v )HL Ej((AX)v )] = ρ|M| 1 v v δM=L, (67) EX N n d[ HA M(X), HA L (X) ] = 1 v,v ,v =1 Av v Av vρ|M| 1 v v δM=L v ,v =1 ρ|M| 1 v v v=1 Av v Av v (69) v ,v =1 ρ|M| v v . (70) 3. Recursively apply the first part to get: Xv1j1, . . . , Xvljl HG M(X) (71) HM Pl p=1 ejp((AX)v ). (72) Summing indices and passing through the expectation to get: EX N n d[ l HA M(X), l HA L (X) ] (73) v=1 Av v Av v ρ|M| l v v (74) =δM=L|M|(|M| 1) . . . (|M| l) 1 v ,v =1 ρ|M| v v . (75) Published as a conference paper at ICLR 2024 Corollary 19. Fix k, d, e N. Let p, q be linear combinations of HA M : Rd R with M Nd varying among all multi-indices of size k = |M|. Then, kp(X), kq(X) = k! EX N n d[p(X)q(X)]. (76) Proof. Let p(X) = P M:|M|=k b MHA M(X) and q(X) = P K:|K|=k f KHA K(X). Then: EX N n d[p(X)q(X)] = X M:|M|=k f Mb Mc M, (77) where as before, c M = n 1 Pn v ,v =1 ρ|M| v v . From the previous lemma, kp(X), kq(X) = EX N n d M:|M|=k b M k HA M(X), X K:|K|=k f M k HA K(X) M,K:|M|=|K|=k b Mf Mk!δM=Kc M (79) M:|M|=k b Mf Mc M (80) = k! EX N n d[p(X)q(X)]. (81) E.3 PROOF OF THEOREM 4 The proof of Theorem 4 is derived from the construction in Diakonikolas et al. (2020). Hence, much of the setup will be taken from their construction. However, care must be taken to use the proper invariant Hermite polynomial basis we derived in the previous subsection. For completeness, we extend their formalism here as needed and note where proofs are directly derived from their work. We first verify that the orthogonal system of invariant Hermite polynomial subsumes 1-hidden-layer GNNs. We formally show this below to continue the proof later on in the basis of the invariant Hermite polynomials. As a reminder, we defined H(d) := span HA J Lemma 20 (Inner product decomposition). For every k N and G Gn, F d,k G H(d). Proof. Pick f F d,k G , with a corresponding W and a. Let g : Rd R : x 7 P2k c=1 acσ( x, wc ) where wc is the c-th column of W . Then we have f(X) = Pn v=1 g((AX)v) where the subscript represents the row of the corresponding matrix. Since ρ L2(N), g is in L2(N) and we can write uniquely: g(x) = X J Nd ˆg JHJ(x). (82) v=1 g((AX)v) = J Nd ˆg JHJ((AX)v) = X J Nd (ˆg J n)HA J (X) H(d). 
(83) Now we verify that the special function f G in our construction of the family of hard functions has vanishing low degree moments: Lemma 21 (Low degree moment vanishes). For any k N, for any G and appropriate A, f G H(2). Furthermore, ( ˆf G)J = 0 for every |J| < k. Published as a conference paper at ICLR 2024 Proof. The first part follows from specializing Lemma 20 for d = 2. The second part follows from the fact that the corresponding g function in the proof of Lemma 20 for f is the function fφ,σ in Diakonikolas et al. (2020), whose low degree moments vanish. It is also easy to verify that CB G F d,k G : Lemma 22. For any k, d, n N, for some activation σ that is not a low-degree polynomial, CB G F d,k G . Proof. Pick B B arbitrarily and let h(X) = f G(XB). We have: h(X) = f G(XB) = 1 n σ (A(XB)W ) a = 1 n σ (AX(BW )) a . (84) Since BW is in Rd 2k and a is in R2k, we have h F d,k G . Functions in FB k also do not vanish, as long as A does not have vanishing rank and the activation function σ is not a low-degree polynomial, by the same argument as in Diakonikolas et al. (2020). From Diakonikolas et al. (2020), we know the following facts about matrices in Rd 2: Lemma 23 (Lemma 4.7 in Diakonikolas et al. (2020)). Fix c (0, 1 2). Then there is a set S Rd 2 of size at least 2Ω(dc) such that for each A, B S, A B 2 O(dc 1 The following Lemma generalizes Lemma 4.6 in Diakonikolas et al. (2020): Lemma 24. Fix a function H(2) p : Rn 2 R and U, V Rd 2 such that U U = V V = 12. Then: p( U)p( V ) L2(N) 2 n U V 2 m p[m] 2 L2(N), (85) where p[m] is the degree m part of p. Proof. Let f, g : Rn d R be defined as f(X) = p(XU) and g(X) := p(XV ). Let f [m], g[m] and p[m] be the degree m part of the corresponding function. From Corollary 19, 1 m! mf [m](X), mg[m](X) = m=0 EX N n d[f [m](X)g[m](X)] (86) = EX N n d[f(X)g(X)] (87) = EX N n 2[p(XU)p(XV )] (88) m=0 EX N n 2[p[m](XU)p[m](XV )] (89) 1 m! m Xp[m](XU), m Xp[m](XV ) . (90) Fix m R, for each indices v1, . . . , vm [n], s1, . . . , sm [d], in Einstein notation, ( v1,s1 . . . vm,smf [m])(X)( v1,s1 . . . vm,smg[m])(X) (91) = v1,j1 . . . vm,jmp[m](XU) v1,j 1 . . . vm,j mp[m](XV )Us1j1 . . . Usmjm Vs1j 1 . . . Vsmj m (92) Taking the sum over all v1, . . . , vm [n], s1, . . . , sm [d], while still in Einstein notation, to get: | m Xp[m](XU), m Xp[m](XV ) | (94) =| v1,j1 . . . vm,jmp[m](XU) v1,j 1 . . . vm,j mp[m](XV )(U V )j1j 1 . . . (U V )jmj m| (95) 2m m Xp[m](XU) 2 2 nm U V m 2 (96) =m! EX N n d[(p[m](X))2] 2 n U V 2 m , (97) Published as a conference paper at ICLR 2024 where we used 3-variable Cauchy-Schwarz inequality |aijkbijkcijk| q a2 i j k b2 i j k c2 i j k to obtain the inequality and Corollary 19 to obtain the last line. Notice that in the previous derivation, inequality only occurs at the Cauchy-Schwarz inequality step. We can do a more careful analysis to get: Corollary 25. In the same setting as Lemma 24, we have: p( U)p( V ) L2(N) (98) v1,...,vm=1 j1,j 1,...,jm,j m=1 v1,j1 . . . vm,jmp[m](XU) v1,j 1 . . . vm,j mp[m](XV )(U V )j1j 1 . . . (U V )jmj m. As the last ingredient, we need the following lemma that connects the family of hard functions to lower bounds on CSQ algorithms: Lemma 26 (Lemma 15 of Diakonikolas et al. (2020)). Let X be the input space and D some distribution over X. For a finite set of functions C = {f : X R}, define: ρ(C) := 1/|C|2 X f,g C f, g L2(D). (100) The correlational statistical query dimension is defined as: SDA(C, D, τ) := max{m | ρ(C ) τ, C C, |C |/|C| 1/m}. 
(101) Suppose that SDA(C, D, τ) m for some C, D, τ and m, then there is a small enough ϵ such that any CSQ algorithm that queries from an oracle of f C and learn a hypothesis h C with f h L2(D) ϵ either requires Ω(m) queries or at least one query with precision more than τ. We can now proceed to the proof of the main theorem: Theorem 27. Any correlational SQ algorithm that, for every concept g CB G learns a hypothesis h such that g h L2(N) ϵ for some small ϵ > 0 requires either 2dΩ(1) queries or queries of tolerance at most d kΩ(1) + 2 dΩ(1). Proof. This proof mirrors that in Diakonikolas et al. (2020). Fix c (0, 1/2). To upper bound pairwise correlation between different functions in CB G , consider for any Bi = Bj B: f G( Bi), f G( Bj) L2(N) m=0 (2 n B i Bj 2)m f [m] G 2 L2(N) (102) m>k (2 n B i Bj 2)m f [m] G 2 L2(N) (103) (2 n B i Bj 2)k X m>k f [m] G 2 L2(N) (104) O(dk(c 1/2)) f G 2 L2(N). (105) In the above derivation, the first line uses Lemma 24; the second line uses Lemma 21; and the last two lines use Lemma 23 and the fact that we can choose an appropriate parameter to make B i Bj 2 < 1. Dividing both sides by f G L2(N) gives g Bi G , g Bj G L2(N) O(dk(c 1/2)). (106) Now, since B i Bi = 12, we have f G( Bi) L2(N) = f G L2(N) and therefore g Bi G L2(N) = 1. Published as a conference paper at ICLR 2024 We are in a position to apply Lemma 26. Let τ := max Bi =Bj g Bi G , g Bj G L2(N). Computing average pairwise correlation of CB G yields: ρ(CB G ) = 1/|B| + (|B| 1)τ/(2|B|) 1/|B| + τ. (107) Finally, setting τ := 1/|B| + τ gives SDA(CB G, N, τ ) = 2Ω(dc) and an application of Lemma 26 completes the proof for F d,k G . To get the proof for F d,k n , by law of total expectation: E(X,G) N E[fn(XBi, G)fn(XBj, G)] (108) = |EEEN E[fn(XBi, G)fn(XBj, G)|G]| (109) G Gn Pr E(G) |EN [f G(XBi)f G(XBj)]| (110) G Gn Pr E(G) max Bi =Bj B(2 n B i Bj 2)k f G 2 L2(N) (111) O(dk(c 1/2)) fn 2 L2(N E), (112) where in the third line, we use the fact that f G(Z) = fn(Z, G) and triangle inequality; in the fourth line, we use Eq. (105) and in the last line, we use another instance of total expectation to get fn L2(N E) = EG E f G L2(N). It is worth pointing out that the only instance of G for which we cannot apply Eq. (105) in the fourth line (assuming we use the normalized Laplacian as A(G)) is when G is the empty graph. However, in that case, f G L2(N) = 0 since f G 0 and the inequality in the fourth line still holds. Lastly, proceed identically to the proof for F d,k G to use Lemma 26 and get the final bound. F FRAME-AVERAGING HARDNESS PROOFS Frame averaging. Let G be a subgroup of the permutation group Sn that acts on Rn d by permuting the rows and e its identity element. Let ρ : G GL(n) with ρ(g) the representation of g as a matrix in Rn n. For any group element g G and X Rn d, we will write g X as a shorthand for ρ(g)X where the product is matrix multiplication. A frame is a function F : Rn d 2G\{ } satisfying F(g X) = g F(X) set-wise. For given frames F, we will show a hardness result for the following class of functions: f : Rn d R with f(X) = 1 p g F(X) a σ(W (g 1X))1d | W Rn k, a Rk ) for given nonlinearity σ that is not a polynomial of low degree. F.1 EXPONENTIAL LOWER BOUND FOR CONSTANT FRAME AND REYNOLDS OPERATOR Lemma 28 (Constant frame must be whole group). If F(X) is a constant function, i.e. F(X) is independent of X, then F(X) = G X. Proof. When F(X) is a constant function, say F G for all X, then the frame property mandates that F = and there is some f F. 
The frame property also tells us that since f 1 G (and G is a group), f 1F = F and thus e F. Pick any element g G and observe that since e F and g F = F, g must also be in F. As this is true for all elements g G, F must be the whole group G itself. In the following, we will therefore assume that G is a polynomial-sized (in n) subgroup of Sn. We will slightly modify Lemma 17 of (Diakonikolas et al., 2020) to show: Published as a conference paper at ICLR 2024 Lemma 29. For any 0 < c < 1/2, and small ϵ (0, 1), for any polynomial-sized subset F Sn, there exists a set B of at least 2Ω(nc)/|F|2 matrices in Rn 2 such that for each pair B, B B and for each g, g F, it holds that: 1. (g B) (g B) = I; 2. (g B) (g B ) 2 O(nc 1/2) when B = B ; and 3. when g = g , (g B) (g B))ij =: a1 b1 b2 a2 where bi O(nc 1/2) and, ai Fg,g n 1 ϵ O(nc 1/2), (114) n O(nc 1/2), (115) for i [2]. Here Fg,g is the number of fixed points of g 1g (number of cycles of length one). The proof is delayed to App. F.3 for a more streamlined read. Family of hard functions. Recall the low-dimensional function from the proof of Theorem 4: f exp : R2 d R with f(X) = (a ) σ((W ) X)1d (116) for some special parameter a R2k and W R2 2k in Eq. (11). We now define the family of hard functions, indexed by a set of matrices B Rn 2 obtained from Lemma 29: CB F = n g B : Rn d R with g B(X) = g G f exp(B g 1X) q |G| f exp N | B B Rn 2o . (117) Theorem 30. For any feature dimension d = Θ(1) and width parameter k = Θ(n). Let G be a subgroup of Sn with |G| = Θ(poly(n)). Pick a set B 4 Rn 2 of size at least 2Ω(dΩ(1))/|G|2 according to Lemma 29 and normalize each function in the hard function family CB F to have N norm 1. Let ϵ > 0 be a sufficiently small error constant. Then for any g CB F , any CSQ algorithm that queries from oracles of concept f CB F and outputs a hypothesis h with f h N ϵ requires either 2nΩ(1)/|G|2 queries or at least one query with precision |G|2 nΩ(1) + p Proof. First we review properties of the constructed objects. By Lemma 29, g B and g B are almost orthogonal for any B = B B and g, g G. Then by Theorem 27 with matrix A = Id, for any B = B B and any g, g G, noting that B (g 1X) = (g B) X since a permutation matrix is orthonormal, we have |EX N [f exp((g B) X)f exp((g B ) X)]| O(nk(c 1/2)) f exp 2 N , (118) for the same c that was chosen in the statement of Lemma 29. Now we verify that g B N = Ω(1) for each g B CB F A(G). Let B = [x y] and Bg := g B, Bg := (g )B. g B 2 N (119) = 1 f exp 2 N EX N (f exp(B g X))2 f exp((B g X)f exp(B g X) |G| =1 + 1 |G| f exp 2 N g =g G EX N f exp(B g X)f exp(B g X) (121) 4The different notation highlights the fact that we are using a slightly different set of matrices from Lemma 23 and (Diakonikolas et al., 2020). Published as a conference paper at ICLR 2024 Now we split the sum into two parts. In the first part, we examine g = g such that max(| gx, g x |, | gy, g y |) < c1 for some constant 0 < c1 < 1/ 3, and leave the remaining cases for the other part. Collect the pairs g, g in the first part into a set G1. Then 1 |G| f exp 2 N g =g G1 EX N f exp(B g X)f exp(B g X) (122) 1 |G| f exp 2 N EX N f exp(B g X)f exp(B g X) (123) g =g G B g Bg k 2, (124) where k is the number of neurons in the hidden layer of f exp. The second line comes from the triangle inequality and the last line comes from Theorem 27. B g Bg k 2 B g Bg k F (125) = 2z2 + gx, (g )x 2 + gy, (g )y 2 k/2 (126) 3k/2 1 (2z)k + | gx, (g )x |k + | gy, (g )y |k . (127) The last line uses power mean inequality. 
By our assumptions, g =g G | gx, (g )x |k < |G|(|G| 1)(c1 3 < |G|(|G| 1)(c1 and similarly 3k/2 1 P g =g G | gy, (g )y |k |G|(|G| 1)(c1 Plugging back, 1 |G| f exp 2 N g =g G1 EX N f exp(B g X)f exp(B g X) (|G| 1)(c1 Since |G| = poly(n) and k = Θ(n), the final expression is o(1) in n. We now consider the remaining part which is reproduced here: 1 |G| f exp 2 N g =g G2\G1 EX N f exp((g B) X)f exp((g B) X) (130) Zooming in, we have: X g =g G2\G1 EX N f exp(B g X)f exp(B g X) (131) v1,...,vm=1 j1,j 1,...,jm,j m=1 v1,j1 . . . vm,jm(f exp)[m]( ) v1,j 1 . . . vm,j m(f exp)[m]( )(B g Bg )j1j 1 . . . (B g Bg )jmj m (132) v1,...,vm=1 j1,j 1,...,jm,j m=1 v1,j1 . . . vm,jm(f exp)[m]( ) v1,j 1 . . . vm,j m(f exp)[m]( )(B g Bg )j1j 1 . . . (B g Bg )jmj m, (133) Published as a conference paper at ICLR 2024 where the second line comes from Corollary 25 and the last line is because all low-degree moment of f exp vanishes. Further zooming in, for each g = g G2\G1, each m k and each v1, . . . , vm [2], denote p J as v1,J1 . . . vm,Jm(f exp)[m]( ) for each J [2]m, and the corresponding vector p R2m. We have, using Einstein notation to sum over j1, . . . jm, j 1, . . . , j m: v1,j1 . . . vm,jm(f exp)[m]( ) v1,j 1 . . . vm,j m(f exp)[m]( )(B g Bg )j1j 1 . . . (B g Bg )jmj m, = p (B g Bg ) mp (134) By Corollary 43, it suffices to show that C := B g Bg satisfies z Cz 0 for all z R2. However, note that by concentration results of Lemma 29, for any small ϵ (0, 1), 1 ϵ Fg,g δx, Fg,g 1 ϵ + δx 1 ϵ Fg,g δy, Fg,g 1 ϵ + δy gx, g y = ϵ1 (137) gy, g x = ϵ2, (138) for δx, δy 0, max(δx, δy, |ϵ1|, |ϵ2|) < O(nc 1/2) and Fg,g = Fg,g /n with Fg,g denotes the number of fixed points of g 1g as in Lemma 29. Note that 1 Fg,g > c1 is bounded away from 0 (by definition of G1 and concentration result in Lemma 29). Choose ϵ small enough that 1 ϵ Fg,g =: c2, c2 δx, c2 δy are all bounded away from 0 when n is large enough. Thus, for any z R2, z Cz z2 1(c2 δx) + z2 2(c2 δy) + z1z2(ϵ1 + ϵ2) (139) = (c2 δx) z1 ϵ1 + ϵ2 c2 δy (ϵ1 + ϵ2)2 which is clearly nonnegative for large enough n as c2 is bounded away from 0. We conclude that p (B g Bg ) mp is nonnegative and thus removing these terms leads to a lower bound on g B 2 N . For pairwise correlation, we have: | g B, g B N | 1 |G| f exp 2 N g,g G f exp((g B) X)f exp((g B ) X) 1 |G| f exp 2 N EX N f exp((g B) X)f exp((g B ) X) (142) O(nk(c 1/2))|G|. (143) Now we replace each g B by its normalization to make g B 2 N = 1. Note that this only decreases (or increases by at most a constant factor) the correlation between different hard functions since g B 2 N = Ω(1). We are in position to apply Lemma 26. Let τ := max Bi =Bj | g Bi, g Bj L2(N)|. Computing average pairwise correlation of CB G yields: ρ(CB G ) = 1/|B | + (1 1/|B |)τ 2/|B | + τ. (144) Finally, setting τ := 2/|B | + τ gives SDA(CB G , N, τ ) = 2Ω(nc)/|G|2 and an application of Lemma 26 completes the proof. Published as a conference paper at ICLR 2024 F.2 SUPERPOLYNOMIAL LOWER BOUNDS Throughout, we will use the notation k to denote a subset of size k. The technique from this part comes mainly from (Goel et al., 2020). Define: f S : Rn d R, where f S(X) = 1 d w { 1,1}k χ(w)σ( w, XS / (145) where χ is the parity function χ(w) = Q i wi, and XS X k, or alternatively XS Rk d, denotes the rows of X indexed by S. Here, w, XS := Pk i=1 wi(XS)i,: = w XS and σ : Rd Rd is an activation function. Throughout, we will set k log2 n so that the above network has at most O(n) hidden nodes. 
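To make the construction in Eq. (145) concrete, here is a short NumPy sketch of ours (not the paper's code); the 1/√d prefactor, the per-column application of σ, and the specific choices of n, d, k, and S are illustrative assumptions. The check at the end previews the sign-flip identity established in Lemma 31 below.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def f_S(X, S, sigma=relu):
    # Parity-signed combination of 2^k neurons acting on the rows of X
    # indexed by S, averaged over the d feature columns (sketch of Eq. (145)).
    k = len(S)
    XS = X[list(S), :]                          # shape (k, d)
    out = 0.0
    for w in itertools.product([-1.0, 1.0], repeat=k):
        w = np.asarray(w)
        chi = np.prod(w)                        # parity chi(w)
        out += chi * sigma(w @ XS / np.sqrt(k)).sum()
    return out / np.sqrt(X.shape[1])

# Sign-flip identity: multiplying rows of X by signs z multiplies f_S by the
# parity of z restricted to S (proven as part of Lemma 31).
n, d = 16, 3
S = (0, 3, 7, 11)
X = rng.standard_normal((n, d))
z = rng.choice([-1.0, 1.0], size=n)
lhs = f_S(X * z[:, None], S)
rhs = np.prod(z[list(S)]) * f_S(X, S)
print(np.isclose(lhs, rhs))
```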
For any z { 1, 1}n, let X z be the matrix in Rn d whose v-th row is zv Xv,:. We will use the following lemma, which is a basic consequence of properties proven in Goel et al. (2020), but include the proofs here to be self-contained. Lemma 31. For all S = T k [n], g G, and z { 1, 1}n, we have: 1. f S(g (X z)) = χg 1 S(z)f S(g X) = χg 1 S(z)fg 1 S(X). 2. f S L2(N) Ω(e Θ(k)) and f S, f T L2(N) = 0. Proof. Fix S = T k [n]. 1. By definition: w { 1,1}k χ(w)σ( w, (X;,j)S / where X:,j is the j-th column of X. G acts on the rows of X, i.e. (g X)i,: = Xg 1i,:. Therefore, letting the order of S be fixed, we have (g X)S = Xg 1S,:. We have, for any g G and any z { 1, 1}n: f S(g (X z)) = w { 1,1}k χ(w)σ( w, ((g (X z));,j)S / w { 1,1}k χ(w)σ( w, ((X z);,j)g 1 S / w { 1,1}k χ(w)σ( w, (X;,j)g 1 S zg 1 S / = χg 1 S(z) w { 1,1}k χ(w zg 1 S)σ( w zg 1 S, (X;,j)g 1 S / = χg 1 S(z) w { 1,1}k χ(w)σ( w, (X;,j)g 1 S / = χg 1 S(z)fg 1 S(X) (152) = χg 1 S(z)f S(g X). (153) In the above derivation, we use the definition of f S to get the first line. The second line comes from the fact that permuting by g, then taking the subset S is equivalent to taking the subset g 1 S. The third line comes from the fact that each element of z is a scalar. The fourth line comes from the fact that χ(w) = χg 1 S(z) χg 1 S(w zg 1 S). The fifth line is a change of variable w 7 w zg 1 S. Finally, the last line comes from the definition of f S, and noting that the dependence of f S on S only comes from which row of X is selected by S; thus, fg 1 S(X) = f S(g X). Published as a conference paper at ICLR 2024 2. We have, for any S = T: f S, f T L2(N) = EX L2(N)[f S(X)f T (X)] (154) = Ez Unif({ 1,1}n)EX[f S(X z)f T (X z)] (155) = Ez Unif({ 1,1}n)EX[f S(X)f T (X)χS(z)χT (z)] (156) = EX[f S(X)f T (X)]Ez Unif({ 1,1}n)[χS(z)χT (z)] (157) by sign-invariance of N and orthogonality of parity functions under Unif({ 1, 1}n). When S = T, the same argument as Goel et al. (2020) Theorem 3.8 shows that f S does not vanish. Now, define the family of hard functions as: f S : X 7 1 p g F(X) f S(g X) | [n] k S S for some set S depending on F and G. The subsets in S such that these hard functions are orthogonal will determine the size of the family of hard functions, and therefore the CSQ dimension lower bound. Definition 32 (Sign-invariant frame). A frame F is sign-invariant if F(X) = F(X z) for almost all X (under the given data distribution) and for all z { 1, 1}n. Lemma 33. If F is any sign-invariant frame, then there exists S with |S| Ω(nΩ(log n))/M(n) such that the functions in HS,F are pairwise orthogonal, where M(n) is the size of the largest orbit of [n] m where m = Ω(log n) under G. If moreover |F(X)| = 1 for almost all x, then |S| = Ω(nΩ(log n)). Proof. We have for any S = T: f S, f T L2(N) = EX N 1 |F(X)| g,g F(X) f S(g X)f T (g X) (160) = Ez Unif{ 1,1}n EX N 1 |F(X z)| g,g F(X z) f S(g (X z))f T (g (X z)) = Ez Unif{ 1,1}n EX N 1 |F(X z)| X g,g F(X z) χg 1 S(z)fg 1 S(X)χ(g ) 1 T (z)f(g ) 1 T (X) (162) = Ez Unif{ 1,1}n EX N 1 |F(X)| X g,g F(X) χg 1 S(z)fg 1 S(X)χ(g ) 1 T (z)f(g ) 1 T (X) (163) = EX N 1 |F(X)| fg 1 S(X)f(g ) 1 T (X) X g,g F(X) Ez Unif{ 1,1}nχg 1 S(z)χ(g ) 1 T (z) Here, we have used that the frame is sign-invariant to replace F(X z) with simply F(X), and rearrange the expectations accordingly. Now, if we have no further assumptions on F, then we Published as a conference paper at ICLR 2024 require S and T to be m = Ω(log n)-sized subsets in different orbits (in other words, S = g T for all g G). 
Then, by Lemma 31, g 1S = (g ) 1T for any g and g , and the inner expectation over z (and therefore the entire expression) is 0. This yields a lower bound on |S| of Ω(nΩ(log n)))/M(n), from choosing one log n-sized subset from each orbit. If we moreover have that F has size one almost everywhere, then S = T suffices to ensure orthogonality, and we can set |S| = nm where m = Θ(log n) by letting S be all Θ(log n)-sized subsets of [n]. Note that if subsets are size Θ(log n), the hidden width of the network is polynomial and furthermore, if m log2 n then the width is O(n). Remark 34. Note that the definition of frame always allows one to choose a singleton frame F(X) for any X whose stabilizer is trivial under the action of G on Rn d. For any other X with nontrivial stabilizer, since G is a subgroup of the permutation group, X must have at least 2 equal entries and thus lies in a manifold of dimensions strictly smaller than n d. Density of N thus ensures that Pr X N [|F(X)| = 1] = 1 as long as we choose the singleton frame at every X without selfsymmetry. A practical choice of frame in this case, which is also sign-invariant, is the frame F(X) which is the set of all permutations that lexicographically sort X by absolute value. It is not hard to see that with probability 1 under N, there is only 1 permutation that sorts X. Remark 35. The case of sign-invariant frames includes F(X) = G, the Reynolds operator. Corollary 36. Let F be a sign-invariant frame. For any c > 0, any CSQ algorithm that queries from an oracle of concept f HS,F and outputs a hypothesis h with f h N Θ(1) needs at least Ω(nΩ(log n))/M(n)) queries or at least one query with precision o(n c/2). Corollary 37. Let F be a sign-invariant singleton frame. For any c > 0, any CSQ algorithm that queries from an oracle of concept f HS,F and outputs a hypothesis h with f h N Θ(1) needs at least Ω(nΩ(log n)) queries or at least one query with precision o(n c/2). Proof. These corollaries are immediate applications of Theorem 4.1 and Lemma 2.6 from Goel et al. (2020) to Lemma 33. It remains to show that the norms of the hard functions in HS,F do not vanish. We show this below for sign-invariance, almost surely polynomial frames. Lemma 38. Let HS,F be the hard functions derived in Lemma 33 from the class of hard functions HS in Eq. (145) (which comes from Goel et al. (2020), but with an extra input dimension). Let E[f 2 S] Unif(HS) denote the average squared norm of functions uniformly drawn from HS and let MHS,F = max f S HS,F E[ f S(X)2] denote the maximum squared norm of the hard functions in HS,F. If MHS,F / E[f 2 S] Unif(HS) = O(poly(n)), then there exists a constant c > 0 and subset C HS,F of size |C| = Ω(|HS,F|/ poly(n)) many functions whose norms are at least c E[f 2 S] Proof. We use a second moment method argument to prove the above statement. Let the random variable Z denote the squared norm E[( f S(X))2] of a function drawn from the distribution Unif(HS,F) which is uniform over all functions in HS,F. The Paley-Zygmund inequality states that for any given θ [0, 1] P(Z > θE[Z]) (1 θ)2 E[Z]2 E[Z2]. (165) The probability above calculates the portion of functions in Unif(HS,F) which have non-vanishing norm. Now, we calculate the first two moments of Z over the |HS,F| = n k functions in the class where k = Θ(log n) as in the construction in Goel et al. (2020). 
Lemma 38. Let H_{S,F} be the hard functions derived in Lemma 33 from the class of hard functions H_S in Eq. (145) (which comes from Goel et al. (2020), but with an extra input dimension). Let E[‖f_S‖²]_{Unif(H_S)} denote the average squared norm of a function drawn uniformly from H_S, and let M_{H_{S,F}} = max_{f̃_S ∈ H_{S,F}} E[f̃_S(X)²] denote the maximum squared norm of the hard functions in H_{S,F}. If M_{H_{S,F}} / E[‖f_S‖²]_{Unif(H_S)} = O(poly(n)), then there exist a constant c > 0 and a subset C ⊆ H_{S,F} of |C| = Ω(|H_{S,F}|/poly(n)) many functions whose squared norms are at least c · E[‖f_S‖²]_{Unif(H_S)}.

Proof. We use a second moment method. Let the random variable Z denote the squared norm E_X[f̃_S(X)²] of a function f̃_S drawn from Unif(H_{S,F}), the uniform distribution over all functions in H_{S,F}. The Paley–Zygmund inequality states that for any θ ∈ [0, 1],
P(Z > θ E[Z]) ≥ (1 − θ)² E[Z]² / E[Z²].   (165)
The probability above lower bounds the fraction of functions in H_{S,F} with non-vanishing norm. Now we calculate the first two moments of Z over the |H_{S,F}| = (n choose k) functions in the class, where k = Θ(log n) as in the construction of Goel et al. (2020).

For the first moment,
E_{X∼N}[Z] = (1/|H_{S,F}|) Σ_{f̃_S ∈ H_{S,F}} E_{X∼N}[f̃_S(X)²]   (166)
= (1/|H_{S,F}|) Σ_{S ∈ ([n] choose k)} E_X[ (1/|F(X)|) Σ_{g,g'∈F(X)} f_S(g·X) f_S(g'·X) ]
= (1/|H_{S,F}|) Σ_S E_X[ (1/|F(X)|) ( Σ_{g∈F(X)} f_S(g·X)² + Σ_{g≠g'∈F(X)} f_S(g·X) f_S(g'·X) ) ]
≥ (1/|H_{S,F}|) Σ_S E_X[ (1/|F(X)|) Σ_{g∈F(X)} f_S(g·X)² ] − (1/|H_{S,F}|) Σ_S | E_X[ (1/|F(X)|) Σ_{g,g'∈F(X): g⁻¹S ≠ (g')⁻¹S} f_{g⁻¹S}(X) f_{(g')⁻¹S}(X) ] |,
where in the last step we drop the terms with g ≠ g' but g⁻¹S = (g')⁻¹S (each such summand equals f_{g⁻¹S}(X)² ≥ 0, so dropping them can only decrease the sum) and then use the reverse triangle inequality |a + b| ≥ |a| − |b|. By Lemma 31, the first term equals
(1/|H_{S,F}|) Σ_S E_X[ (1/|F(X)|) Σ_{g∈F(X)} f_{g⁻¹S}(X)² ] = (1/|H_{S,F}|) Σ_S E_X[f_S(X)²] = E[‖f_S‖²]_{Unif(H_S)},
since for each fixed g the map S ↦ g⁻¹S permutes ([n] choose k). We now show that the second term is 0. Fix S ∈ ([n] choose k) and g, g' ∈ G with g⁻¹S ≠ (g')⁻¹S. Then
E_X[ (1_{g,g'∈F(X)}/|F(X)|) f_{g⁻¹S}(X) f_{(g')⁻¹S}(X) ]
= E_{z∼Unif({−1,1}^n)} E_X[ (1_{g,g'∈F(X⊙z)}/|F(X ⊙ z)|) f_{g⁻¹S}(X ⊙ z) f_{(g')⁻¹S}(X ⊙ z) ]
= E_X[ (1_{g,g'∈F(X)}/|F(X)|) f_{g⁻¹S}(X) f_{(g')⁻¹S}(X) ] · E_z[ χ_{g⁻¹S}(z) χ_{(g')⁻¹S}(z) ] = 0,
where we used the sign-invariance of the frame F and the decomposition result from Equation (1) of Goel et al. (2020). Thus the second term is 0 and E_X[Z] ≥ E[‖f_S‖²]_{Unif(H_S)}.

For the second moment, we have
E[Z²] = (1/|H_{S,F}|) Σ_{f̃_S ∈ H_{S,F}} E[f̃_S(X)²]² ≤ max_{f̃_S ∈ H_{S,F}} E[f̃_S(X)²]² = M²_{H_{S,F}}.
Thus, applying Paley–Zygmund together with E[Z] ≥ E[‖f_S‖²]_{Unif(H_S)},
P( Z > θ E[‖f_S‖²]_{Unif(H_S)} ) ≥ (1 − θ)² E[Z]² / E[Z²] ≥ (1 − θ)² E[‖f_S‖²]²_{Unif(H_S)} / M²_{H_{S,F}}.   (173)
Taking any fixed θ ∈ (0, 1) and setting c = θ completes the proof, since the right-hand side is Ω(1/poly(n)) by assumption.

Corollary 39. Assuming the activation is ReLU or sigmoid, the number of non-vanishing functions in H_{S,F} is superpolynomial in n.

Proof. It suffices to show that M_{H_{S,F}} / E[‖f_S‖²]_{Unif(H_S)} is O(poly(n)). Theorem 3.8 of Goel et al. (2020) guarantees that E[‖f_S‖²]_{Unif(H_S)} ≥ Ω(e^{−Θ(k)}) = Ω(poly(n)^{−1}) (since k = Θ(log n)). To upper bound M_{H_{S,F}}, we bound uniformly, for each f̃_S ∈ H_{S,F},
E_{X∼N}[f̃_S(X)²] = E_{X∼N}[ ( (1/√|F(X)|) Σ_{g∈F(X)} Σ_{w∈{−1,1}^k} χ(w) σ( ⟨w, (g·X_{:,j})_S⟩ / √k ) )² ] ≤ E_{X∼N}[ |F(X)|^{3/2} 2^{2k} d k x_{[1]}² ] ≤ poly(n) · E_X[x_{[1]}²],   (178)
where x_{[1]} is the largest entry of X in absolute value. In the above, we use the Cauchy–Schwarz inequality both for the outer sums and for the inner product, via ⟨w, (g·X_{:,j})_S⟩² ≤ k Σ_{s∈S} (g·X_{:,j})_s², together with the fact that ReLU(x)² ≤ x² for all x ∈ R. Since, with probability 1, |F(X)| is at most polynomial in n by our assumption, it suffices to show that the order statistic x_{[1]} has a small second moment when X is an i.i.d. Gaussian vector of length n·d. Bounding this moment is a classical exercise in concentration inequalities. By an inequality of Talagrand (Chatterjee, 2014; Talagrand, 1994), we know that E_x[x_{[1]}²] − (E_x[x_{[1]}])² ≤ C/log(nd) for some constant C. The first moment is also bounded by O(√(log(nd))), and thus E_x[x_{[1]}²] ≤ O(log(nd)). Thus M_{H_{S,F}} = O(poly(n)) and our conclusions follow.
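The order-statistic bound E_x[x_{[1]}²] = O(log(nd)) used in the proof above is easy to observe by Monte Carlo; the sketch below (ours, purely illustrative) compares the empirical second moment to the 2 log(nd) reference scale.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 500

# Monte Carlo estimate of E[x_[1]^2], the squared largest absolute entry of an
# i.i.d. standard Gaussian vector of length n*d, against the O(log(n*d)) scale.
for nd in [10, 100, 1_000, 10_000]:
    samples = rng.standard_normal((trials, nd))
    second_moment = np.mean(np.max(np.abs(samples), axis=1) ** 2)
    print(f"n*d = {nd:6d}   E[x_[1]^2] ~ {second_moment:6.2f}   2 log(n*d) = {2 * np.log(nd):6.2f}")
```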
F.3 PROOF OF EXTRA TOOLS

Lemma 40 (Sub-Gaussian concentration for the inner product with a permuted vector). Let G ≤ S_n have at least 2 elements and fix arbitrary g ≠ g' ∈ G. Then for any c ∈ (0, 1/2),
Pr_x[ | ⟨gx, g'x⟩/‖x‖₂ − µ_{g,g'} | ≥ Ω(n^c) ] ≤ exp(−Ω(n^{2c})),   (179)
for some µ_{g,g'} ∈ [F_{g,g'}/√(n+1), F_{g,g'}/√n], i.e. µ_{g,g'} = Θ(F_{g,g'} n^{−1/2}), where F_{g,g'} is the number of fixed points of g⁻¹g' (i.e. the number of cycles of length 1 in its cycle representation).

Proof. Let f(x) = ⟨gx, g'x⟩/‖x‖₂, where x ∼ N(0, I_n). Its gradient is
∇f(x) = (1/‖x‖₂)(x_h + x_{h⁻¹}) − (1/‖x‖₂) ⟨gx, g'x⟩ x/⟨x, x⟩,   (181)
where x_h consists of x with entries reordered by h = g⁻¹g', i.e. [x_h]_i = [x]_{h⁻¹(i)}. Via the triangle inequality (and Cauchy–Schwarz on the second term),
‖∇f(x)‖ ≤ 3‖x‖₂/‖x‖₂ = 3.   (182)
Thus, by Gaussian Lipschitz concentration (Lemma 2.2 of Pisier (1986)), we have
Pr_x[ |f(x) − E f(x)| ≥ Ω(n^c) ] ≤ exp(−Ω(n^{2c})).   (183)
Finally, we compute µ := E_x f(x). To do this, note that for every i ≠ j ∈ [n],
E_x[ x_i x_j / ‖x‖₂ ] = E_x[ (−x_i) x_j / ‖x‖₂ ] = 0,   (184)
by the change of variables x ↦ (x_1, …, x_{i−1}, −x_i, x_{i+1}, …, x_n). At the same time, for every i ≠ j ∈ [n],
E_x[ x_i² / ‖x‖₂ ] = E_x[ x_j² / ‖x‖₂ ] = (1/n) Σ_{k=1}^n E_x[ x_k² / ‖x‖₂ ] = (1/n) E_x‖x‖₂,   (185)
by a change of variables that swaps x_i and x_j, a permutation whose Jacobian has determinant ±1. Thus µ = (F_{g,g'}/n) E_x‖x‖₂, where F_{g,g'} is the number of fixed points of g⁻¹g'. We appeal to elementary analysis of Gaussian vectors (for instance, Chandrasekaran et al. (2012)) to obtain
E_x‖x‖₂ = √2 Γ((n+1)/2)/Γ(n/2) ∈ [n/√(n+1), √n],   (186)
where Γ is Euler's Gamma function. Thus µ ∈ [F_{g,g'}/√(n+1), F_{g,g'}/√n].

Lemma 41. Let x ∈ R^n be a unit vector distributed uniformly over the n-dimensional unit sphere. Let G ≤ S_n and g ≠ g' ∈ G. Let F_{g,g'} be the number of fixed points of the action of h := g⁻¹g' on [n]. Then for all c ∈ (0, 1/2) and small ϵ ∈ (0, 1),
Pr_x[ ⟨gx, g'x⟩ ≥ F_{g,g'}/((1−ϵ)n) + Ω(n^{c−1/2}) ] ≤ exp(−Ω(n^{2c})),   (188)
Pr_x[ ⟨gx, g'x⟩ ≤ (1−ϵ)F_{g,g'}/n − Ω(n^{c−1/2}) ] ≤ exp(−Ω(n^{2c})).   (189)

Proof. Recall that an alternative way to sample a unit vector uniformly from the hypersphere is to first sample z ∼ N(0, I_n) and then set x := z/‖z‖₂. Thus, we study ⟨gz, g'z⟩/‖z‖₂². The concentration of ‖z‖₂ is well studied (Proposition 2.2 and Corollary 2.3 of Barvinok (2005)): for any 0 < ϵ < 1,
Pr[ ‖z‖₂ > √(n/(1−ϵ)) ] ≤ exp(−ϵ²n/4)  and  Pr[ ‖z‖₂ < √((1−ϵ)n) ] ≤ exp(−ϵ²n/4).   (190)
The concentration of the random variable ⟨gz, g'z⟩/‖z‖₂ was established in Lemma 40:
Pr[ | ⟨gz, g'z⟩/‖z‖₂ − µ_{g,g'} | ≥ Ω(n^c) ] ≤ exp(−Ω(n^{2c})),   (191)
for some µ_{g,g'} ∈ [F_{g,g'}/√(n+1), F_{g,g'}/√n]. Since 1/√n − 1/√(n+1) ≤ O(n^{−3/2}) and F_{g,g'} ≤ n, we can also write
Pr[ ⟨gz, g'z⟩/‖z‖₂ ≥ F_{g,g'}/√n + Ω(n^c) ] ≤ exp(−Ω(n^{2c})),   (192)
Pr[ ⟨gz, g'z⟩/‖z‖₂ ≤ F_{g,g'}/√n − Ω(n^c) ] ≤ exp(−Ω(n^{2c})).   (193)
By a union bound,
Pr[ ⟨gz, g'z⟩/‖z‖₂² ≥ F_{g,g'}/((1−ϵ)n) + Ω(n^{c−1/2}) ] ≤ Pr[ ⟨gz, g'z⟩/‖z‖₂ ≥ F_{g,g'}/√n + Ω(n^c) ] + Pr[ ‖z‖₂ < √((1−ϵ)n) ]   (194)
≤ exp(−Ω(n^{2c})) + exp(−ϵ²n/4)   (195)
≤ exp(−Ω(n^{2c})).   (196)
On the other side,
Pr[ ⟨gz, g'z⟩/‖z‖₂² ≤ (1−ϵ)F_{g,g'}/n − Ω(n^{c−1/2}) ] ≤ Pr[ ⟨gz, g'z⟩/‖z‖₂ ≤ F_{g,g'}/√n − Ω(n^c) ] + Pr[ ‖z‖₂ > √(n/(1−ϵ)) ]   (197)
≤ exp(−Ω(n^{2c})) + exp(−ϵ²n/4)   (198)
≤ exp(−Ω(n^{2c})).   (199)

Lemma 42 (Kronecker product of PSD matrices is PSD). Fix m, n ∈ N, A ∈ R^{m×m} and B ∈ R^{n×n}, not necessarily symmetric. If x^T A x ≥ 0 for all x ∈ R^m and y^T B y ≥ 0 for all y ∈ R^n, then z^T (A ⊗ B) z ≥ 0 for all z ∈ R^{mn}, where ⊗ denotes the Kronecker product.

Proof. Write z = (z[1], …, z[m]) with z[i] ∈ R^n for each i ∈ [m]. Then
z^T (A ⊗ B) z = Σ_{i,i'=1}^m Σ_{j,j'=1}^n z[i]_j (A_{i,i'} B_{j,j'}) z[i']_{j'}   (200)
= Σ_{i,i'=1}^m A_{i,i'} Σ_{j,j'=1}^n z[i]_j B_{j,j'} z[i']_{j'}   (201)
= Σ_{i,i'=1}^m A_{i,i'} C_{i,i'},   (202)
where C_{i,i'} := Σ_{j,j'=1}^n z[i]_j B_{j,j'} z[i']_{j'}. Now it is not hard to see that C is symmetric. At the same time, for any w ∈ R^m,
w^T C w = Σ_{i,i'=1}^m w_i w_{i'} Σ_{j,j'=1}^n z[i]_j B_{j,j'} z[i']_{j'} = Σ_{j,j'=1}^n ( Σ_i w_i z[i]_j ) B_{j,j'} ( Σ_{i'} w_{i'} z[i']_{j'} ).   (203)
By the property of B, the right-hand side is nonnegative, and thus w^T C w is nonnegative. Therefore C is symmetric positive semi-definite and admits a spectral decomposition
C = Σ_{k=1}^r λ_k v[k] (v[k])^T,   (204)
for some vectors v[k] and nonnegative λ_k ≥ 0, k ∈ [r], for some r. Hence
Σ_{i,i'=1}^m A_{i,i'} C_{i,i'} = Σ_{i,i'=1}^m A_{i,i'} Σ_{k=1}^r λ_k v[k]_i v[k]_{i'} = Σ_{k=1}^r λ_k Σ_{i,i'} v[k]_i A_{i,i'} v[k]_{i'}.   (205)
By the property of A and the nonnegativity of the λ_k, each inner sum is nonnegative, and we obtain the conclusion.

Corollary 43 (PSD of tensor powers). For any m, n ∈ N, if A ∈ R^{m×m} is such that x^T A x > 0 for all nonzero x ∈ R^m, then A^{⊗n} satisfies z^T A^{⊗n} z ≥ 0 for all z ∈ R^{m^n}.

Proof. Apply Lemma 42 n − 1 times.
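Before turning to the proof of Lemma 29, the concentration in Lemmas 40 and 41 is easy to observe numerically. The sketch below (ours, with an arbitrary choice of the two permutations) samples unit vectors uniformly from the sphere and compares ⟨gx, g'x⟩ to F_{g,g'}/n.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 2000, 200

# Two permutations g, g' in S_n whose "difference" g^{-1}g' fixes 3/4 of the coordinates.
g = rng.permutation(n)
gp = g.copy()
moved = rng.choice(n, size=n // 4, replace=False)
gp[moved] = g[np.roll(moved, 1)]          # cyclically shuffle g on a quarter of the indices
F = np.sum(g == gp)                       # number of fixed points of g^{-1} g'

# (g x)_i = x_{g^{-1}(i)}, so as an index operation g x = x[argsort(g)].
g_inv, gp_inv = np.argsort(g), np.argsort(gp)

Z = rng.standard_normal((trials, n))
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)         # rows are uniform on the unit sphere
vals = np.einsum('ti,ti->t', X[:, g_inv], X[:, gp_inv])  # <g x, g' x> for each sample

print(f"F/n = {F / n:.3f}, max |<gx,g'x> - F/n| over {trials} draws = {np.max(np.abs(vals - F / n)):.4f}")
print(f"reference scale n^(-1/2) = {n ** -0.5:.4f}")
```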
Proof of Lemma 29. We use Corollary D.3 from Diakonikolas et al. (2017), restated below.

Lemma 44 (Corollary D.3 of Diakonikolas et al. (2017)). Let x, y ∈ R^n be two unit vectors drawn i.i.d. uniformly from the n-dimensional unit sphere. Then
Pr_{x,y}[ |⟨x, y⟩| ≥ Ω(n^{c−1/2}) ] ≤ exp(−Ω(n^{2c})),   (206)
for any c ∈ (0, 1/2).

Let g, g' ∈ F be two arbitrary permutations acting on R^n by permuting coordinates. Since the uniform distribution over the n-dimensional unit sphere is invariant to permutations of the indices, its pushforward under g or g' is still uniform. Therefore, for any g, g' ∈ F,
Pr_{x,y}[ |⟨gx, g'y⟩| ≥ Ω(n^{c−1/2}) ] ≤ exp(−Ω(n^{2c})).   (207)
Draw T = 2^{Ω(n^c)}/|F|² such i.i.d. vectors to form a set S ⊆ R^n. By a union bound,
Pr_{x_1,…,x_T}[ ∃ i ≠ j ∈ [T], g, g' ∈ F : |⟨g x_i, g' x_j⟩| ≥ Ω(n^{c−1/2}) ]   (208)
≤ Σ_{i≠j∈[T]} Σ_{g,g'∈F} Pr_{x_i,x_j}[ |⟨g x_i, g' x_j⟩| ≥ Ω(n^{c−1/2}) ]   (209)
≤ 2^{Ω(n^c)} exp(−Ω(n^{2c}))   (210)
≤ exp(−Ω(n^{2c})).   (211)
On the other hand, letting F_{g,g'} denote the number of fixed points of g⁻¹g' for g ≠ g' ∈ F, we also union bound using Lemma 41: for any small constant ϵ ∈ (0, 1),
Pr_{x_1,…,x_T}[ ∃ i ∈ [T], g ≠ g' ∈ F : ⟨g x_i, g' x_i⟩ ≥ F_{g,g'}/((1−ϵ)n) + Ω(n^{c−1/2}) ]   (212)
≤ Σ_{i∈[T]} Σ_{g≠g'∈F} Pr_{x_i}[ ⟨g x_i, g' x_i⟩ ≥ F_{g,g'}/((1−ϵ)n) + Ω(n^{c−1/2}) ]   (213)
≤ 2^{Ω(n^c)} exp(−Ω(n^{2c}))   (214)
≤ exp(−Ω(n^{2c})),   (215)
and similarly
Pr_{x_1,…,x_T}[ ∃ i ∈ [T], g ≠ g' ∈ F : ⟨g x_i, g' x_i⟩ ≤ (1−ϵ)F_{g,g'}/n − Ω(n^{c−1/2}) ] ≤ exp(−Ω(n^{2c})).
Therefore, for every small ϵ ∈ (0, 1), there exists a set S ⊆ R^n of size T = 2^{Ω(n^c)}/|F|² such that for every x ≠ y ∈ S and every g ≠ g' ∈ F,
‖gx‖₂ = 1,   (217)
max(|⟨gx, gy⟩|, |⟨gx, g'y⟩|) ≤ O(n^{c−1/2}),   (218)
⟨gx, g'x⟩ ≤ F_{g,g'}/((1−ϵ)n) + O(n^{c−1/2}),   (219)
⟨gx, g'x⟩ ≥ (1−ϵ)F_{g,g'}/n − O(n^{c−1/2}).   (220)
Pair the elements of S (as column vectors) together arbitrarily to obtain a set B of matrices in R^{n×2} of size T/2; each element of B has two columns, each a distinct vector in S. Then, for any distinct B =: (x_{1,1}, x_{1,2}), B' =: (x_{2,1}, x_{2,2}) ∈ B and any g, g' ∈ F,
‖(g·B)^T (g'·B')‖₂ ≤ ‖(g·B)^T (g'·B')‖_F = √( Σ_{i,j∈[2]} ⟨g x_{1,i}, g' x_{2,j}⟩² ) ≤ O(n^{c−1/2}).   (221)
The remaining conditions in Lemma 29 are also straightforward to check.

G LEARNING INVARIANT POLYNOMIALS

Here, we aim to extend the algorithm of Andoni et al. (2014) to the setting of invariant polynomials. This algorithm is an SQ algorithm, but not a CSQ algorithm. First, let us review the GROWING-BASIS algorithm. Assume inputs x ∈ R^n are distributed according to some product distribution D = µ_1 × ⋯ × µ_n. The goal of the algorithm is to learn the coefficients of a polynomial f*(x) = Σ_S a_S H_S(x), where S = (S_1, S_2, …, S_n) is a tuple of nonnegative integers indicating the degree of the polynomial in each variable. The monomials take the form
H_S(x) = Π_{i=1}^n H_{S_i}(x_i),   (222)
where H_{S_i} is the degree-S_i polynomial in a class of polynomials orthogonal under the distribution D: ⟨H_i, H_j⟩ = ∫ H_i(x) H_j(x) µ_i(x) dx = 0 for all i ≠ j. An example of such an orthogonal family is the Legendre polynomials, which are orthogonal under the uniform distribution over [−1, 1].

Next, to rigorously obtain a separation on invariant function classes, we need to define a few objects. We will often simplify notation or definitions for ease of readability; a fully rigorous presentation of this material can be found in standard textbooks, e.g. Derksen & Kemper (2015).

Independent generators. Let g_1, …, g_r be invariant functions and K a field, either R or C. The polynomial ring K[g_1, …, g_r] is the set of all sums Σ_{α∈N^r} c_α g^α, where g^α = Π_{i=1}^r g_i^{α_i}. Assume that {g_i}_{i=1}^r are independent in the sense that, for all i ∈ [r], there is no polynomial P ∈ K[g_1, …, g_{i−1}, g_{i+1}, …, g_r] that agrees with g_i as a function K^n → K. It is not hard to see that any member of K[g_1, …, g_r], evaluated as a function K^n → K, is itself an invariant function. The converse is not true in general, as some invariant functions may not be expressible as polynomials over a given set of generators. However, some special groups admit a nice, finite set of generators whose polynomial ring spans almost the whole space of invariant functions.
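The claim that every element of K[g_1, …, g_r] is itself invariant is easy to check numerically; below is a minimal sketch (ours) using power-sum generators for the permutation group as an example.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
x = rng.standard_normal(n)
perm = rng.permutation(n)

# Permutation-invariant generators: power sums g_i(x) = sum_j x_j^i.
def p(i, x):
    return np.sum(x ** i)

# An arbitrary element of K[g_1, g_2, g_3], e.g. q = 3 g_1^2 g_3 - g_2 + 7,
# evaluated as a function of x, is itself a permutation-invariant function.
def q(x):
    return 3.0 * p(1, x) ** 2 * p(3, x) - p(2, x) + 7.0

assert np.isclose(q(x), q(x[perm]))
```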
Denote by F_{d,r} := K[g_1, …, g_r]_{≤d} the set of all polynomials in the g_i's with degree at most d, i.e. sums Σ_{α: Σ_i α_i ≤ d} c_α g^α.

Vector space of bounded-degree polynomials generated by a finite set. A different way of thinking about F_{d,r} is as a vector space. Let V_d(g_1, …, g_r) be the (r+1)^d-dimensional vector space over the field K whose canonical basis is the set {e_I}_{I∈({0}∪[r])^d}. It is not hard to show that V_d(g_1, …, g_r) is isomorphic to K[g_1, …, g_r]_{≤d} by identifying e_I with the monomial Π_{i=1}^d g_{I_i}, where g_0 ≡ 1. This vector-space perspective provides a convenient calculus tool: let P be a product distribution over K^r (tuples of length r, each entry an element of K) and define ⟨f, g⟩_P := ∫ f g dP. Here we once again think of the g_i's as functions K^n → K, and so the elements of F_{d,r} are functions as well. Applying Gram–Schmidt orthogonalization to the basis elements of V_d(g_1, …, g_r) generates a set of elements H_1, …, H_{(r+1)^d} that are orthogonal in ⟨·, ·⟩_P (see footnote 5). Note that there are still (r+1)^d such orthogonal functions, since Gram–Schmidt preserves the dimensionality of the input and output vector spaces. This process is analogous to the construction of the Hermite and Legendre polynomials for Gaussian and uniform distributions, respectively. When P is a product distribution, we can reindex this basis as {H_v}_{v∈({0}∪[r])^d}.

Sparse polynomials. Before giving any formal results, we state some technical restrictions needed for learning to be possible. Our learning setting restricts F_{d,r} to polynomials in g_1, …, g_r with at most k terms in the expansion in the basis {H_v}_{v∈({0}∪[r])^d}; in other words, when viewed as vectors in V_d(g_1, …, g_r) under the basis {H_v}, only k entries are non-zero. Call this new set F_{d,r,k}. For technical reasons, we also restrict the outputs of the polynomials to be normalized within a bounded range, so that the function is properly normalized with respect to the error metric and SQ queries can be applied to it. In the GROWING-BASIS algorithm, one must make queries to estimate ⟨H_v, f*⟩ and ⟨H_v, (f*)²⟩. Given that any statistical query function has output bounded in [−1, +1], to measure these correlations we must choose a query function that divides by the maximum value of |H_v (f*)²| on some high-probability region D ⊆ K^r with measure P(D) ≥ 1 − o(1), and outputs zero elsewhere. This way, statistical query functions with outputs bounded in [−1, +1] can effectively compute the correlations needed by the algorithm (see footnote 6). We therefore introduce a normalization term ˆM_{d,k} = max_{x∈D, v∈({0}∪[r])^d} |H_v(x) (f*(x))²| and fold this quantity into the query tolerance to allow proper estimation of the relevant quantities. This normalization plays a role similar to the quantity M_{d,k} in Andoni et al. (2014) in their sample complexity guarantees. When the target function f* is suitably bounded, this quantity is also easily bounded: for example, when |f*| = O(1) on [−1, +1]^n, ˆM_{d,k} is at most a constant for inputs drawn from the uniform distribution over [−1, +1], since the Legendre polynomials are also appropriately bounded there. In general, functions can always be normalized by dividing by an appropriate constant.
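For concreteness, the Gram–Schmidt step above can be carried out empirically. The following minimal sketch (ours) orthonormalizes 1, t, …, t^d against sampled values of a single generator; with a uniform distribution on [−1, 1], chosen here purely for illustration, the result approximates the normalized Legendre polynomials.

```python
import numpy as np

rng = np.random.default_rng(6)
d, samples = 4, 200_000

# Sampled values of one generator g_i under the data distribution; uniform on
# [-1, 1] is used here purely for illustration, so the orthonormalized basis
# should approximate the normalized Legendre polynomials.
t = rng.uniform(-1.0, 1.0, size=samples)

# Monomials 1, t, ..., t^d evaluated on the sample, then Gram-Schmidt with
# respect to the empirical inner product <p, q> = E[p(t) q(t)].
V = np.vander(t, d + 1, increasing=True)
H = []
for col in V.T:
    v = col.astype(float)
    for h in H:
        v = v - np.mean(v * h) * h           # remove the component along h
    H.append(v / np.sqrt(np.mean(v * v)))    # normalize to unit empirical norm
H = np.stack(H, axis=1)

print(np.round(H.T @ H / samples, 2))        # empirical Gram matrix, approximately the identity
```

Under a product distribution, the multivariate basis {H_v} is then obtained by multiplying such univariate bases across generators.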
Finally, the GROWING-BASIS algorithm depends on a parameter τ_d that controls how much higher-order terms show up in the square of a polynomial. More formally, let H_i : K → K denote the i-th orthogonal polynomial for a given distribution on K. For any such orthogonal polynomial, we have the decomposition (see Equation 2.1 of Andoni et al. (2014))
H_t(x)² = 1 + Σ_{j=1}^{2t} c_{t,j} H_j(x).   (223)
Let c_t = c_{t,2t}; then τ_d = min_{t≤d} c_t. This quantity is generally bounded for common distributions; e.g., when the H_t are the Hermite or Legendre polynomials, τ_d = Ω(1) (Andoni et al., 2014).

With this setup, we are ready to state and prove our main result for this section.

Theorem 45 (Adapted from Theorems 2.1 and 2.2 of Andoni et al. (2014)). Let K be a field and let g_1, …, g_r be independent functions K^n → K that are invariant to a group G acting algebraically on K[g_1, …, g_r] as defined above. Let P be a product distribution and define the inner product ⟨p, q⟩_P := E_{g:=(g_1,…,g_r)∼P}[p(g) q(g)]. Let {H_v}_{v∈({0}∪[r])^d} be the output of Gram–Schmidt on F_{d,r} with respect to this inner product. Further assume that the polynomial quotient H_{2m,0,…,0}/(H_{m,0,…,0})² (ignoring remainders) is at least τ_d for every m ∈ [⌊d/2⌋]. Let F_{d,r,k} be the space of polynomials in F_{d,r} that are k-sparse in the H_v basis. Then, for any c > 0, any CSQ algorithm with oracle access to ⟨·, f*⟩_P for some f* ∈ F_{d,r,k} that outputs a hypothesis h ∈ F_{d,r,k} achieving mean squared error ‖h − f*‖_P < O(1) must either make at least (r+1)^d calls to the oracle or make at least one query with precision O(r^{−c}). However, GROWING-BASIS can learn h with error ‖h − f*‖_P < O(1) using only O(k r d (1/τ_d)^d) calls to the oracles ⟨·, f*⟩_P and ⟨·, (f*)²⟩_P with precision Ω(τ_d^d/(k r d ˆM_{d,k})).

Proof. The CSQ lower bound is obtained by invoking the standard correlational statistical query dimension bound from families of orthogonal functions (Diakonikolas et al., 2020; Goel et al., 2020; Reyzin, 2020; Szörényi, 2009). Specifically, by correctness of Gram–Schmidt orthonormalization, the basis elements {H_v}_{v∈({0}∪[r])^d} form a pairwise orthogonal set, each of norm 1 under ⟨·, ·⟩_P. Thus the CSQ dimension of F_{d,r,k} is immediately lower bounded by Ω(r^d), and the conclusion for CSQ algorithms follows. The SQ upper bound via GROWING-BASIS is obtained by observing that the proof of Andoni et al. (2014) goes through verbatim once the main ingredients are in place: a product distribution, an inner product with an orthonormal basis, and access to the oracle ⟨·, (f*)²⟩_P. Note that although GROWING-BASIS is presented in an error-free setting, we can allow small errors by setting the thresholds in Steps 2 and 8 of Andoni et al. (2014) equal to the error tolerance ϵ. This way, we are guaranteed to miss only terms with low correlation, and thus with bounded effect on the final mean squared error. To be more precise, every time we make an error at the cutoff of Step 8, we lose at most a fraction of the precision amounting to τ_d^d, where, as stated before, τ_d lower bounds the coefficient of H_{2n,0,…,0} in (H_{n,0,…,0})². Intuitively, when a monomial is lost in the final product, the loss can compound multiplicatively over the variables, and τ_d is the smallest possible per-variable contribution to this loss (see Lemma 2.3 of Andoni et al. (2014)). Finally, we must also divide the query tolerance by a factor of ˆM_{d,k} to account for the normalization of the target function discussed earlier.

Footnote 5: We also note that it is technically more efficient to apply this procedure to each of the generators individually, i.e. to apply Gram–Schmidt to the span of [1, g_i, …, g_i^d]. Since P is a product distribution, this constructs orthogonal polynomials in each generator, which can then be combined to construct orthogonal polynomials in K[g_1, …, g_r].
Footnote 6: We note that there are likely more effective ways to handle this normalization requirement, e.g., by composing multiple queries to compute the relevant quantities. However, we do not pursue this strengthening here for the sake of simplicity.
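The parameter τ_d can be computed directly for a concrete orthogonal family. The sketch below (ours) does so for the Legendre case, assuming the orthonormal normalization H_t = √(2t+1) P_t under Uniform[−1, 1], by reading off c_{t,2t} = E[H_t² H_{2t}] with Gauss–Legendre quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Orthonormal Legendre polynomials w.r.t. Uniform[-1, 1]: H_t = sqrt(2t + 1) * P_t,
# so that E[H_s H_t] = delta_{st}.
def H(t, x):
    c = np.zeros(t + 1)
    c[t] = 1.0
    return np.sqrt(2 * t + 1) * L.legval(x, c)

# c_{t,2t} is the coefficient of H_{2t} in H_t^2, i.e. E[H_t^2 * H_{2t}]; we compute
# the expectation exactly with Gauss-Legendre quadrature (the density is 1/2 on [-1, 1]).
def c_top(t):
    nodes, weights = L.leggauss(4 * t + 4)
    return 0.5 * np.sum(weights * H(t, nodes) ** 2 * H(2 * t, nodes))

d = 6
coeffs = [c_top(t) for t in range(1, d + 1)]
print("c_{t,2t} for t = 1..d:", np.round(coeffs, 3), "  tau_d =", round(min(coeffs), 3))
```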
Corollary 46. In the same setting as Theorem 45, suppose instead that we are interested in polynomials that are k-sparse in the canonical basis {Π_{i=1}^r g_i^{α_i} | Σ_i α_i ≤ d} (recall that F_{d,r,k} was sparse in the orthogonal basis {H_v}_{v∈({0}∪[r])^d}). Then the query complexity of GROWING-BASIS becomes at most O(k r d 2^d) with precision Ω(1/(k r d 2^d)), whenever τ_d = Ω(1) and the targets are normalized so that ˆM_{d,k} = O(1).

Proof. By the same argument as Lemma 2.1 of Andoni et al. (2014), any change of basis in the vector space of polynomials of bounded degree incurs at worst a multiplicative factor of 2^d in the sparsity parameter. Since the new function class is k-sparse in the canonical basis, it is k·2^d-sparse in the orthogonal basis. Applying Theorem 45 at this new sparsity level gives the desired result.

This separation is weaker than in the case of sparsity in the orthogonal basis. However, there is still a polynomial (in r) separation between CSQ and SQ if d and k are constants independent of r, and a superpolynomial separation when d = Θ(log(n)).

Finally, for the above theorem to give a meaningful separation, we list some example groups for which there is a simple distribution D over inputs such that the pushforward distribution P = ϕ_#D via the map ϕ : x := (x_1, …, x_n) ↦ (g_1(x), …, g_r(x)) is a product distribution.

Example 5. Consider inputs X ∈ R^{n×3}, corresponding to n points in R³. Ordered point clouds are symmetric under the action X ↦ XU for a matrix U ∈ O(3) in the orthogonal group. Let the entries of X be drawn i.i.d. Gaussian. Consider generators g_1, …, g_n with g_i(X) = ‖X_{i,:}‖², the sum of squares of the i-th row of X, which is clearly invariant to orthogonal transformations. The pushforward distribution of the mapping X ↦ [g_1(X), …, g_n(X)] is a product distribution over chi-squared random variables, χ²(3)^n (a short numerical check follows the figure captions below).

Figure 3: Replication of the experiments in Fig. 1, but with a minimal architecture consisting of a single layer of graph or cyclic convolution followed by a single-hidden-layer MLP; this is the minimal number of layers needed to learn the desired function classes for the architectures considered. Panel (a): GNN learning H_{ER,n} (train/test MSE and 0/1 error). Panel (b): CNN learning C_{B,F} (train/test error, log axis). For the CNN plot, the jumps in the train-set MSE are due to perturbations in the loss at very low values, near machine precision.

Figure 4: Replication of the experiments in Fig. 1a with different optimizers (Adam, SGD, RMSprop), reporting train/test MSE and 0/1 error. The performance of the GNN is virtually the same across the various optimizers. Performance is averaged over 10 runs. For each run, the learning rate is chosen by perturbing the default learning rate by a random multiplicative factor in the range [0.1, 10].
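As promised in Example 5, here is a short numerical check (ours) that the generators g_i(X) = ‖X_{i,:}‖² are exactly O(3)-invariant and that their pushforward behaves like a product of χ²(3) variables.

```python
import numpy as np

rng = np.random.default_rng(7)
n, samples = 5, 200_000

X = rng.standard_normal((samples, n, 3))             # i.i.d. Gaussian point clouds: n points in R^3
U = np.linalg.qr(rng.standard_normal((3, 3)))[0]     # a random orthogonal matrix in O(3)

g = np.sum(X ** 2, axis=2)                           # g_i(X) = ||X_{i,:}||^2
gU = np.sum((X @ U) ** 2, axis=2)                    # the same generators after the action X -> X U

assert np.allclose(g, gU)                            # exact O(3)-invariance of the generators
# The pushforward has chi-squared(3) coordinates (mean 3, variance 6) that are uncorrelated.
print("means:", np.round(g.mean(axis=0), 2), " variances:", np.round(g.var(axis=0), 2))
print("max |off-diagonal correlation|:", round(float(np.max(np.abs(np.corrcoef(g.T) - np.eye(n)))), 3))
```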
Example 6. Consider inputs x ∈ C^n with each entry drawn uniformly from the unit-norm complex numbers, i.e. from U[{z ∈ C : |z| = 1}]^n. The power-sum symmetric polynomials Σ_i x_i, …, Σ_i x_i^d form a generating basis. Each x_j can be parameterized as x_j = e^{iθ} with θ distributed U[0, 2π], for all j ∈ [n]; then x_j^t can likewise be parameterized as e^{iθ} with exactly the same distribution θ ∼ U[0, 2π], for all j ∈ [n] and t ∈ [d]. The distribution of (Σ_i x_i, …, Σ_i x_i^d) is therefore the product distribution (Law(Σ_{i=1}^n Z_i))^d, where Z_i ∼ U[{z ∈ C : |z| = 1}]. These generators and input distributions come from the classical text of Macdonald (1979). Here, as long as r ≤ n, r depends only on d and not on n. Plugging this r into Theorem 45 gives straightforward bounds that are independent of n (the original number of variables), but it does not give a meaningful separation in Corollary 46. As a reminder, we also have an example in the main text for the sign group (Lim et al., 2022).

H EXPERIMENTAL DETAILS

GNN experiments were performed using the PyTorch Geometric library for implementing geometric architectures (Fey & Lenssen, 2019). The hard functions were drawn from H_{ER,n} (see Eq. (7)), where the subset S ⊆ [n] was drawn uniformly from all possible subsets of size n/2 and b was either 0 or 1 with equal probability. For our experiments, we set n = 15 and trained on n² = 225 datapoints. The overparameterized GNN used during training consisted of 3 layers of graph convolution, followed by a node-aggregation average pooling layer and a two-layer ReLU MLP of width 64. The graph convolution layers used 32 channels.

For the CNN experiments, we constructed the hard functions in Eq. (17) with n = 50 and k = 10. Here, the matrices B ∈ B were drawn randomly from all possible n×2 matrices with orthonormal columns, by taking a QR decomposition of a random n×2 matrix with i.i.d. Gaussian entries. For convenience, we did not divide by the norm of the function. The CNN architecture consisted of three layers of fully parameterized cyclic convolution, each with 100 output channels, followed by a global average pooling layer and a two-layer ReLU MLP of width 100. The network was given 10n = 500 training samples and was trained with the Adam optimizer with batch size 32.

Experiments in the main text considered architectures that were overparameterized in the number of layers with respect to the target function. For completeness, Fig. 3 reports results for merely sufficiently parameterized networks, consisting of a single convolution layer, pooling, and a single-hidden-layer MLP (a minimal sketch of such an architecture appears below). The results in Fig. 3 are consistent with those observed in Fig. 1. Additionally, to further confirm that these results are robust to hyperparameter choices for the optimizer, we repeated the GNN learning experiments with the optimizer varying between Adam, SGD, and RMSprop (Paszke et al., 2019). Fig. 4 shows the results for these optimizers. For each optimizer, 10 simulations were performed, where for each simulation the learning rate was chosen by multiplying the default learning rate by a random number in the range [0.1, 10]. All experiments were run using PyTorch on a single GPU (Paszke et al., 2019). Default initialization schemes were used for the initial network weights.
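For reference, the following is a minimal sketch of the kind of "sufficiently parameterized" GNN described above, written with PyTorch Geometric. The specific convolution type (GCNConv) and the exact hyperparameters are our assumptions for illustration, not a reproduction of the experimental code.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class MinimalGNN(nn.Module):
    """One graph-convolution layer, average pooling over nodes, then a two-layer ReLU MLP."""
    def __init__(self, in_dim=1, channels=32, mlp_width=64):
        super().__init__()
        self.conv = GCNConv(in_dim, channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, mlp_width), nn.ReLU(), nn.Linear(mlp_width, 1)
        )

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv(x, edge_index))   # a single message-passing step
        h = global_mean_pool(h, batch)             # permutation-invariant node aggregation
        return self.mlp(h).squeeze(-1)

model = MinimalGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```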
In our experiments, we used the Adam optimizer and tuned the learning rate within the range [0.0001, 0.003]. For the CNN experiments, to increase the stability of training in later stages, we added a scheduler that halved the learning rate every 200 epochs. Plots were created by combining and averaging five random realizations of each experiment.
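The learning-rate schedule described above corresponds to a standard step scheduler; a minimal sketch (with a placeholder model, not the actual experimental setup) is:

```python
import torch

model = torch.nn.Linear(10, 1)                      # placeholder for the CNN described above
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

for epoch in range(1000):
    # ... one training epoch over the 10n = 500 samples with batch size 32 ...
    optimizer.step()      # stands in for the per-batch updates of a real epoch
    scheduler.step()      # halves the learning rate every 200 epochs
```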