META-LEARNING SYMMETRIES BY REPARAMETERIZATION
Allan Zhou, Tom Knowles, Chelsea Finn
Dept of Computer Science, Stanford University
{ayz,tknowles,cbfinn}@stanford.edu
Published as a conference paper at ICLR 2021

ABSTRACT
Many successful deep learning architectures are equivariant to certain transformations in order to conserve parameters and improve generalization: most famously, convolution layers are equivariant to shifts of the input. This approach only works when practitioners know the symmetries of the task and can manually construct an architecture with the corresponding equivariances. Our goal is an approach for learning equivariances from data, without needing to design custom task-specific architectures. We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data. Our method can provably represent equivariance-inducing parameter sharing for any finite group of symmetry transformations. Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks. We provide our experiment code at https://github.com/AllanYangZhou/metalearning-symmetries.

1 INTRODUCTION
In deep learning, the convolutional neural network (CNN) (LeCun et al., 1998) is a prime example of exploiting equivariance to a symmetry transformation to conserve parameters and improve generalization. In image classification (Russakovsky et al., 2015; Krizhevsky et al., 2012) and audio processing (Graves and Jaitly, 2014; Hannun et al., 2014) tasks, we may expect the layers of a deep network to learn feature detectors that are translation equivariant: if we translate the input, the output feature map is also translated. Convolution layers satisfy translation equivariance by definition, and produce remarkable results on these tasks. The success of convolution's built-in inductive bias suggests that we can similarly exploit other equivariances to solve machine learning problems.

However, there are substantial challenges with building in inductive biases. Identifying the correct biases to build in is challenging, and even if we do know the correct biases, it is often difficult to build them into a neural network. Practitioners commonly avoid this issue by training in desired equivariances (usually the special case of invariances) using data augmentation. However, data augmentation can be challenging in many problem settings and we would prefer to build the equivariance into the network itself. For example, robotics sim2real transfer approaches train agents that are robust to varying conditions by varying the simulated environment dynamics (Song et al., 2020). But this type of augmentation is not possible once the agent leaves the simulator and is trying to learn or adapt to a new task in the real world. Additionally, building in incorrect biases may actually be detrimental to final performance (Liu et al., 2018b).

In this work we aim for an approach that can automatically learn and encode equivariances into a neural network. This would free practitioners from having to design custom equivariant architectures for each task, and allow them to transfer any learned equivariances to new tasks. Neural network layers can achieve various equivariances through parameter sharing patterns, such as the spatial parameter sharing of standard convolutions. In this paper we reparameterize network layers to learnably represent sharing patterns.
We leverage meta-learning to learn the sharing patterns that help a model generalize on new tasks. The primary contribution of this paper is an approach to automatically learn equivariance-inducing parameter sharing, instead of using custom-designed equivariant architectures. We show theoretically that reparameterization can represent networks equivariant to any finite symmetry group. Our experiments show that meta-learning can recover various convolutional architectures from data, and learn invariances to common data augmentation transformations.

2 RELATED WORK
A number of works have studied designing layers with equivariances to certain transformations such as permutation, rotation, reflection, and scaling (Gens and Domingos, 2014; Cohen and Welling, 2016; Zaheer et al., 2017; Worrall et al., 2017; Cohen et al., 2019; Weiler and Cesa, 2019; Worrall and Welling, 2019). These approaches focus on manually constructing layers analogous to standard convolution, but for other symmetry groups. Rather than building symmetries into the architecture, data augmentation (Beymer and Poggio, 1995; Niyogi et al., 1998) trains a network to satisfy them. Diaconu and Worrall (2019) use a hybrid approach that pre-trains a basis of rotated filters in order to define roto-translation equivariant convolution. Unlike these works, we aim to automatically build in symmetries by acquiring them from data.

Our approach is motivated in part by theoretical work characterizing the nature of equivariant layers for various symmetry groups. In particular, the analysis of our method as learning a certain kind of convolution is inspired by Kondor and Trivedi (2018), who show that under certain conditions all linear equivariant layers are (generalized) convolutions. Shawe-Taylor (1989) and Ravanbakhsh et al. (2017) analyze the relationship between desired symmetries in a layer and symmetries of the weight matrix. Ravanbakhsh et al. (2017) show that we can make a layer equivariant to the permutation representation of any discrete group through a corresponding parameter sharing pattern in the weight matrix. From this perspective, our reparameterization is a way of representing possible parameter sharing patterns, and the training procedure aims to learn the correct parameter sharing pattern that achieves a desired equivariance.

Prior work on automatically learning symmetries includes methods for learning invariances in Gaussian processes (van der Wilk et al., 2018) and learning symmetries of physical systems (Greydanus et al., 2019; Cranmer et al., 2020). Another very recent line of work has shown that more general Transformer (Vaswani et al., 2017) style architectures can match or outperform traditional CNNs on image tasks, without baking in translation symmetry (Dosovitskiy et al., 2020). Their results suggest that Transformer architectures can automatically learn symmetries and other inductive biases from data, but typically only with very large training datasets. One can also consider automatic data augmentation strategies (Cubuk et al., 2018; Lorraine et al., 2019) as a way of learning symmetries, though the symmetries are not embedded into the network in a transferable way. Concurrent work by Benton et al. (2020) aims to learn invariances from data by learning distributions over transformations of the input, similar to learned data augmentation. Our method aims to learn parameter sharing of the layer weights, which induces equivariance.
Additionally, our objective for learning symmetries is driven directly by generalization error (in a meta-learning framework), while the objective in Benton et al. (2020) adds a regularizer to the training loss to encourage symmetry learning.

Our work is related to neural architecture search (Zoph and Le, 2016; Brock et al., 2017; Liu et al., 2018a; Elsken et al., 2018), which also aims to automate part of the model design process. Although architecture search methods are varied, they are generally not designed to exploit symmetry or learn equivariances. Evolutionary methods for learning both network weights and topology (Stanley and Miikkulainen, 2002; Stanley et al., 2009) are also not motivated by symmetry considerations.

Our method learns to exploit symmetries that are shared by a collection of tasks, a form of meta-learning (Thrun and Pratt, 2012; Schmidhuber, 1987; Bengio et al., 1992; Hochreiter et al., 2001). We extend gradient based meta-learning (Finn et al., 2017; Li et al., 2017; Antoniou et al., 2018) to separately learn parameter sharing patterns (which enforce equivariance) and actual parameter values. Separately representing network weights in terms of a sharing pattern and parameter values is a form of reparameterization. Prior work has used weight reparameterization in order to warp the loss surface (Lee and Choi, 2018; Flennerhag et al., 2019) and to learn good latent spaces (Rusu et al., 2018) for optimization, rather than to encode equivariance. HyperNetworks (Ha et al., 2016; Schmidhuber, 1992) generate network layer weights using a separate smaller network, which can be viewed as a nonlinear reparameterization, albeit not one that encourages learning equivariances. Modular meta-learning (Alet et al., 2018) is a related technique that aims to achieve combinatorial generalization on new tasks by stacking meta-learned modules, each of which is a neural network. This can be seen as parameter sharing by re-using and combining modules, rather than using our layerwise reparameterization.

3 PRELIMINARIES
In Sec. 3.1, we review gradient based meta-learning, which underlies our algorithm. Sections 3.2 and 3.3 build up a formal definition of equivariance and group convolution (Cohen and Welling, 2016), a generalization of standard convolution which defines equivariant operations for other groups such as rotation and reflection. These concepts are important for a theoretical understanding of our work as a method for learning group convolutions in Sec. 4.2.

3.1 GRADIENT BASED META-LEARNING
Our method is a gradient-based meta-learning algorithm that extends MAML (Finn et al., 2017), which we briefly review here. Suppose we have some task distribution p(T), where each task's dataset is split into training and validation datasets {D^tr_i, D^val_i}. For a model with parameters θ, loss L, and learning rate α, the inner loop updates θ on the task's training data: θ' = θ − α∇_θ L(θ, D^tr). In the outer loop, MAML meta-learns a good initialization θ by minimizing the loss of θ' on the task's validation data, with updates of the form θ ← θ − η (d/dθ) L(θ', D^val). Although MAML focuses on meta-learning the inner loop initialization θ, one can extend this idea to meta-learning other things such as the inner learning rate α. In our method, we meta-learn a parameter sharing pattern at each layer that maximizes performance across the task distribution.
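To make these updates concrete, here is a minimal sketch of one MAML meta-training step on a single task in PyTorch-style Python. It is illustrative only: the functional loss_fn, the parameter list, and the datasets are placeholder assumptions, not the implementation used in this paper.

import torch

def maml_step(theta, loss_fn, D_tr, D_val, alpha, eta):
    # theta: list of parameter tensors; loss_fn(params, data) evaluates the model
    # functionally with the given parameters (a placeholder for any model).
    # Inner loop: adapt a copy of the initialization on the task's training data.
    grads = torch.autograd.grad(loss_fn(theta, D_tr), theta, create_graph=True)
    theta_prime = [p - alpha * g for p, g in zip(theta, grads)]
    # Outer loop: evaluate the adapted parameters on validation data and
    # differentiate through the inner step to improve the initialization theta.
    meta_grads = torch.autograd.grad(loss_fn(theta_prime, D_val), theta)
    return [(p - eta * g).detach().requires_grad_(True)
            for p, g in zip(theta, meta_grads)]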
3.2 GROUPS AND GROUP ACTIONS
Symmetry and equivariance are usually studied in the context of groups and their actions on sets; refer to Dummit and Foote (2004) for more comprehensive coverage. A group G is a set closed under some associative binary operation, where there is an identity element and each element has an inverse. Consider the group (Z, +) (the set of integers with addition): we can add any two integers to obtain another, each integer has an additive inverse, and 0 is the additive identity.

Figure 1: Convolution as translating filters. Left: Standard 1-D convolution slides a filter w along the length of input x. This operation is translation equivariant: translating x will translate y. Right: Standard convolution is equivalent to a fully connected layer with a parameter sharing pattern: each row contains translated copies of the filter. Other equivariant layers will have their own sharing patterns.

A group G can act on a set X through some action ρ : G → Aut(X) which maps each g ∈ G to some transformation on X. ρ must be a homomorphism, i.e. ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G, and Aut(X) is the set of automorphisms on X (bijective homomorphisms from X to itself). As a shorthand we write gx := ρ(g)(x) for any x ∈ X. Any group can act on itself by letting X = G: for (Z, +), we define the action gx = g + x for any g, x ∈ Z.

The action of a group G on a vector space V is called a representation, which we denote π : G → GL(V). Recall GL(V) is the set of invertible linear maps on V. Assume the vectors v ∈ V are discrete, with components v[i]. If we already have G's action on the indices, a natural corresponding representation is defined (π(g)v)[i] := v[g^{-1}i]. As a concrete example, consider the representation of G = (Z, +) for infinite length vectors. The indices are also integers, so the group is acting on itself as defined above. Then (π(g)v)[i] = v[g^{-1}i] = v[i − g] for any g, i ∈ Z. Hence this representation of Z shifts vectors by translating their indices by g spaces.

3.3 EQUIVARIANCE AND CONVOLUTION
A function (like a neural network layer) is equivariant to some transformation if transforming the function's input is the same as transforming its output. To be more precise, we must define what those transformations of the input and output are. Consider a neural network layer φ : R^n → R^m. Assume we have two representations π_1, π_2 of group G on R^n and R^m, respectively. For each g ∈ G, π_1(g) transforms the input vectors, while π_2(g) transforms the output vectors. The layer φ is G-equivariant with respect to these transformations if φ(π_1(g)v) = π_2(g)φ(v), for any g ∈ G, v ∈ R^n. If we choose π_2 ≡ id we get φ(π_1(g)v) = φ(v), showing that invariance is a type of equivariance.

Deep networks contain many layers, but function composition preserves equivariance. So if we achieve equivariance in each individual layer, the whole network will be equivariant. Pointwise nonlinearities such as ReLU and sigmoid are already equivariant to any permutation of the input and output indices, which includes translation, reflection, and rotation. Hence we are primarily focused on enforcing equivariance in the linear layers. Prior work (Kondor and Trivedi, 2018) has shown that a linear layer φ is equivariant to the action of some group if and only if it is a group convolution, which generalizes standard convolutions to arbitrary groups. For a specific G, we call the corresponding group convolution G-convolution to distinguish it from standard convolution.
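Before turning to group convolutions in detail, a quick numerical check of the definition above may help. The sketch below is our own illustration, not from the paper: it verifies that a layer whose weight matrix has the translated-copy sharing pattern of Fig. 1 (in its cyclic form) satisfies φ(π_1(g)v) = π_2(g)φ(v) when both representations are cyclic index shifts.

import numpy as np

n = 6
rng = np.random.default_rng(0)
filt = np.concatenate([rng.normal(size=3), np.zeros(n - 3)])
# Weight matrix with the sharing pattern of Fig. 1 (cyclic version):
# each row is the same filter, translated by one index.
W = np.stack([np.roll(filt, i) for i in range(n)])
phi = lambda v: W @ v
shift = lambda v, g: np.roll(v, g)   # the representation pi(g) for G = (Z, +)

v = rng.normal(size=n)
g = 2
# Equivariance: shifting the input and then applying the layer equals
# applying the layer and then shifting the output.
assert np.allclose(phi(shift(v, g)), shift(phi(v), g))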
Intuitively, G-convolution transforms a filter according to each g ∈ G, then computes a dot product between the transformed filter and the input. In standard convolution, the filter transformations correspond to translation (Fig. 1). G-equivariant layers convolve an input v ∈ R^n with a filter ψ ∈ R^n. Assume the group G = {g_1, ..., g_m} is finite:

φ(v)[j] = (v ⋆ ψ)[j] = Σ_i v[i] (π(g_j)ψ)[i] = Σ_i v[i] ψ[g_j^{-1} i]    (1)

In this work, we present a method that represents and learns parameter sharing patterns for existing layers, such as fully connected layers. These sharing patterns can force the layer to implement various group convolutions, and hence equivariant layers.

4 ENCODING AND LEARNING EQUIVARIANCE
To learn equivariances automatically, our method introduces a flexible representation that can encode possible equivariances, and an algorithm for learning which equivariances to encode. Here we describe this method, which we call Meta-learning Symmetries by Reparameterization (MSR).

4.1 LEARNABLE PARAMETER SHARING
As Fig. 1 shows, a fully connected layer can implement standard convolution if its weight matrix is constrained with a particular sharing pattern, where each row contains a translated copy of the same underlying filter parameters. This idea generalizes to equivariant layers for other transformations like rotation and reflection, but the sharing pattern depends on the transformation. Since we do not know the sharing pattern a priori, we reparameterize fully connected weight matrices to represent them in a general and flexible fashion. A fully connected layer φ : R^n → R^m with weight matrix W ∈ R^{m×n} is defined for input x by φ(x) = Wx. We can optionally incorporate biases by appending a dimension with value 1 to the input x. We factorize W as the product of a "symmetry matrix" U and a vector v of k "filter parameters":

vec(W) = Uv,  v ∈ R^k,  U ∈ R^{mn×k}    (2)

For fully connected layers, we reshape¹ the vector vec(W) ∈ R^{mn} into a weight matrix W ∈ R^{m×n}. Intuitively, U encodes the pattern by which the weights W will share the filter parameters v. Crucially, we can now separate the problem of learning the sharing pattern (learning U) from the problem of learning the filter parameters v. In Sec. 4.3, we discuss how to learn U from data.

The symmetry matrix for each layer has mnk entries, which can become too expensive in larger layers. Kronecker factorization is a common approach for approximating a very large matrix with smaller ones (Martens and Grosse, 2015; Park and Oliva, 2019). In Appendix A we describe how we apply Kronecker approximation to Eq. 2, and analyze memory and computation efficiency.

In practice, there are certain equivariances that are expensive to meta-learn, but that we know to be useful: for example, standard 2D convolutions for image data. However, there may still be other symmetries of the data (i.e., rotation, scaling, reflection, etc.) that we still wish to learn automatically.

¹We will use the atypical convention that vec stacks matrix entries row-wise, not column-wise.

Figure 2: We reparameterize the weights of each layer in terms of a symmetry matrix U that can enforce equivariant sharing patterns of the filter parameters v. Here we show a U that enforces permutation equivariance. More technically, the layer implements group convolution on the permutation group S2: U's block submatrices π(e), π(g) define the action of each permutation on filter v. Note that U need not be binary in general.
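The following is a small sketch of Eq. 2 and the construction depicted in Fig. 2, with toy dimensions of our own choosing: U stacks the two permutation representations of S2, so that any filter v yields a weight matrix whose rows are permuted copies of v, i.e. an S2 group convolution.

import torch

n = 2                                        # filter size; output size m = |S2| = 2
pi_e = torch.eye(n)                          # identity permutation
pi_g = torch.tensor([[0., 1.], [1., 0.]])    # the swap permutation
U = torch.cat([pi_e, pi_g], dim=0)           # symmetry matrix, shape (m*n, n)

v = torch.randn(n, requires_grad=True)       # filter parameters (learned per task)
W = (U @ v).reshape(2, n)                    # vec(W) = U v, reshaped row-wise
# Row i of W is pi(g_i) applied to v, so the sharing pattern holds for every v.
assert torch.allclose(W[0], v) and torch.allclose(W[1], v.flip(0))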
Figure 3: For each task, the inner loop updates the filter parameters v to the task using the inner loop loss. Note that the symmetry matrix U does not change in the inner loop, and is only updated by the outer loop.

Algorithm 1: MSR: Meta-Training
input: {T_j}_{j=1}^N ∼ p(T): Meta-training tasks
input: {U, v}: Randomly initialized symmetry matrices and filters.
input: α, η: Inner and outer loop step sizes.
while not done do
    sample minibatch {T_i}_{i=1}^n ⊂ {T_j}_{j=1}^N;
    forall T_i ∈ {T_i}_{i=1}^n do
        {D^tr_i, D^val_i} ← T_i ;                 // task data
        δ_i ← ∇_v L(U, v, D^tr_i);
        v' ← v − α δ_i ;                          // inner step
        G_i ← (d/dU) L(U, v', D^val_i) ;          // outer gradient
    U ← U − η Σ_i G_i ;                           // outer step

This suggests a hybrid approach, where we bake in equivariances we know to be useful, and learn the others. Indeed, we can directly reparameterize a standard convolution layer by reshaping vec(W) into a convolution filter bank rather than a weight matrix. By doing so we bake in translational equivariance, but we can still learn things like rotation equivariance from data.

4.2 PARAMETER SHARING AND GROUP CONVOLUTION
By properly choosing the symmetry matrix U of Eq. 2, we can force the layer to implement arbitrary group convolutions (Eq. 1) by filter v. Recall that group convolutions generalize standard convolution to define operations that are equivariant to other transformations, such as rotation. Hence by choosing U properly we can enforce various equivariances, which will be preserved regardless of the value of v.

Proposition 1. Suppose G is a finite group {g_1, ..., g_m}. There exists a U_G ∈ R^{mn×n} such that for any v ∈ R^n, the layer with weights vec(W) = U_G v implements G-convolution on input x ∈ R^n. Moreover, with this fixed choice of U_G, any G-convolution can be represented by a weight matrix vec(W) = U_G v for some v ∈ R^n.

Intuitively, U can store the symmetry transformations π(g) for each g ∈ G, thus capturing how the filters should transform during G-convolution. For example, Fig. 2 shows how U can implement convolution on the permutation group S2. We present a proof in Appendix B. Subject to having a correct U_G, v is precisely the convolution filter in a G-convolution. This will motivate the notion of separately learning the convolution filter v and the symmetry structure U in the inner and outer loops of a meta-learning process, respectively.

Synthetic Problems MSE (lower is better)
                      Small train dataset                Large train dataset
Method                k = 1       k = 2       k = 5      k = 1       k = 2       k = 5
MAML-FC               3.4 ± .60   2.1 ± .35   1.0 ± .10  3.4 ± .49   2.0 ± .27   1.1 ± .11
MAML-LC               2.9 ± .53   1.8 ± .24   .87 ± .08  2.9 ± .42   1.6 ± .23   .89 ± .08
MAML-Conv             .00 ± .00   .43 ± .09   .41 ± .04  .00 ± .00   .53 ± .08   .49 ± .04
MTSR-FC (Ours)        3.2 ± .49   1.4 ± .17   .86 ± .06  .12 ± .03   .07 ± .02   .07 ± .01
MSR-Joint-FC (Ours)   .25 ± .16   .12 ± .04   .21 ± .03  .01 ± .00   .08 ± .02   .12 ± .02
MSR-FC (Ours)         .07 ± .02   .07 ± .02   .16 ± .02  .00 ± .00   .05 ± .01   .09 ± .01

Table 1: Meta-test MSE of different methods on synthetic data with (partial) translation symmetry. Small vs large train dataset refers to the number of examples per training task. Among methods with non-convolutional architectures, MSR-FC is closest to matching actual convolution (MAML-Conv) performance on translation equivariant (k = 1) data. On data with less symmetry (k = 2, 5), MSR-FC outperforms MAML-Conv and other MAML approaches. MSR-Joint is an ablation of MSR where both U and v of Eq. 2 are updated on task train data, rather than just v. MTSR is an ablation of MSR where we train the reparameterization using multi-task learning, rather than meta-learning. Results are shown with 95% confidence intervals over test tasks.
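To make the inner/outer separation concrete before the formal description in Sec. 4.3, here is a minimal sketch of one meta-training step in the spirit of Alg. 1 and Fig. 3, for a single reparameterized layer with toy shapes and a squared-error loss; the sizes, task format, and hypothetical tasks list are our own assumptions, not the paper's implementation.

import torch

m, n = 8, 8
U = torch.randn(m * n, n, requires_grad=True)   # symmetry matrix: outer loop only
v = torch.randn(n, requires_grad=True)          # filter parameters: inner loop
alpha, eta = 0.02, 1e-3

def loss(U, v, data):
    x, y = data                                 # x: (batch, n), y: (batch, m)
    W = (U @ v).reshape(m, n)                   # vec(W) = U v
    return ((x @ W.T - y) ** 2).mean()

def outer_grad(task):
    D_tr, D_val = task
    # Inner step: adapt only the filter v on task training data (U stays fixed).
    g_v = torch.autograd.grad(loss(U, v, D_tr), v, create_graph=True)[0]
    v_prime = v - alpha * g_v
    # Outer gradient: validation loss of the adapted filter, differentiated w.r.t. U.
    return torch.autograd.grad(loss(U, v_prime, D_val), U)[0]

# Outer step, averaged over a minibatch of tasks (tasks is a hypothetical list
# of (D_tr, D_val) pairs):
# U.data -= eta * torch.stack([outer_grad(t) for t in tasks]).mean(0)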
4.3 META-LEARNING EQUIVARIANCES
Meta-learning generally applies when we want to learn and exploit some shared structure in a distribution of tasks p(T). In this case, we assume the task distribution has some common underlying symmetry: i.e., models trained for each task should satisfy some set of shared equivariances. We extend gradient based meta-learning to automatically learn those equivariances. Suppose we have an L-layer network. We collect each layer's symmetry matrices and filter parameters: U, v ≡ {U^1, ..., U^L}, {v^1, ..., v^L}. Since we aim to learn equivariances that are shared across p(T), the symmetry matrices should not change with the task. Hence, for any T_i ∼ p(T) the inner loop fixes U and only updates v using the task training data:

v' ← v − α ∇_v L(U, v, D^tr_i)    (3)

where L is simply the supervised learning loss, and α is the inner loop step size. During meta-training, the outer loop updates U by computing the loss on the task's validation data using v':

U ← U − η (d/dU) L(U, v', D^val_i)    (4)

We illustrate the inner and outer loop updates in Fig. 3. Note that in addition to meta-learning the symmetry matrices, we can also still meta-learn the filter initialization v as in prior work. In practice we also take outer updates averaged over mini-batches of tasks, as we describe in Alg. 1.

After meta-training is complete, we freeze the symmetry matrices U. On a new test task T_k ∼ p(T), we use the inner loop (Eq. 3) to update only the filter v. The frozen U enforces meta-learned parameter sharing in each layer, which improves generalization by reducing the number of task-specific inner loop parameters. For example, the sharing pattern of standard convolution makes the weight matrix constant along any diagonal, reducing the number of per-task parameters (see Fig. 1).

5 CAN WE RECOVER CONVOLUTIONAL STRUCTURE?
We now introduce a series of synthetic meta-learning problems, where each problem contains regression tasks that are guaranteed to have some symmetries, such as translation, rotation, or reflection. We combine meta-learning methods with general architectures not designed with these symmetries in mind to see whether each method can automatically meta-learn these equivariances.

5.1 LEARNING (PARTIAL) TRANSLATION SYMMETRY
Our first batch of synthetic problems contains tasks with translational symmetry: we generate outputs by feeding random input vectors through a 1-D locally connected (LC) layer with filter size 3 and no bias. Each task corresponds to different values of the LC filter, and the meta-learner must minimize mean squared error (MSE) after observing a single input-output pair. For each problem we constrain the LC filter weights with a rank k ∈ {1, 2, 5} factorization, resulting in partial translation symmetry (Elsayed et al., 2020). In the case where rank k = 1, the LC layer is equivalent to convolution (ignoring the biases) and thus generates exactly translation equivariant task data. We apply both MSR and MAML to this problem using a single fully connected layer (MSR-FC and MAML-FC), so these models have no translation equivariance built in and must meta-learn it to solve the tasks efficiently. For comparison, we also train convolutional and locally connected models with MAML (MAML-Conv and MAML-LC). Since MAML-Conv has built in translation equivariance, we expect it to at least perform well on the rank k = 1 problem.
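As a rough illustration of how such tasks can be generated (a sketch under our own assumptions about sizes and normalization; the exact recipe used in the paper is given in Appendix D.1): each task samples a rank-k locally connected filter bank, and examples pair random inputs with the layer's outputs.

import torch

def sample_lc_task(n=70, filt_width=3, k=1, num_examples=2):
    # Rank-k local connectivity: the width-3 filter at each output position is a
    # random linear combination of k shared base filters (Elsayed et al., 2020).
    base = torch.randn(k, filt_width)
    coeffs = torch.randn(n - filt_width + 1, k)
    filters = coeffs @ base                      # one filter per output position
    def apply_lc(x):
        windows = x.unfold(0, filt_width, 1)     # sliding windows of the input
        return (windows * filters).sum(-1)
    xs = torch.randn(num_examples, n)
    ys = torch.stack([apply_lc(x) for x in xs])
    return xs, ys

# Example: one task with a single observed input-output pair.
x_task, y_task = sample_lc_task(k=1, num_examples=1)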
Figure 4: After observing translation equivariant data, MSR enforces convolutional parameter sharing on the weight matrix. An example weight matrix is shown above.

We also ran two ablations of MSR that use the same reparameterization (Eq. 2) but vary the training procedure. In MSR-Joint, we allow U and v to be jointly updated in the inner loop, instead of only updating v in the inner loop. Hence MSR-Joint is trained identically to MAML, but with reparameterized weights. MTSR is an ablation of MSR that trains using multi-task learning instead of meta-learning. Given data from training task T_i, MTSR jointly optimizes U (shared symmetry matrix) and v^(i) (task specific filter parameters) using the MSE loss. For a new test task we freeze the optimized U and optimize a newly initialized filter v using the test task's training data, then evaluate MSE on held out data. Even though the true filter that generates the data has width 3, for MSR and MTSR we initialize the learned filter v to be the same size as the input, per Prop. 1. In principle, these methods should automatically meta-learn that the true filter is sparse, and to ignore the extra dimensions in v. Appendix D.1 further explains the experimental setup.

Table 1 shows how each method performs on each of the synthetic problems, with columns denoting the rank k of the problem's data. The small vs large train dataset results differ only in that the latter contains 5 or 10 times more examples per training task, depending on k. On fully translation equivariant data (k = 1), MAML-Conv performs best due to its architecture having built in translation equivariance. MSR-FC is the only non-convolutional architecture to perform comparably to MAML-Conv for k = 1. Fig. 4 shows that MSR-FC has learned to produce weight matrices with convolutional parameter sharing structure, indicating it has learned convolution from the data. Appendix C.1 visualizes the meta-learned U, which we find implements convolution as Sec. 4.2 predicted. Meanwhile, MAML-FC and MAML-LC perform significantly worse as they are unable to meta-learn this structure. On partially symmetric data (k = 2, k = 5), MSR-FC performs well due to its ability to flexibly meta-learn even partial symmetries. MAML-Conv performs worse here since the convolution assumption is overly restrictive, while MAML-FC and MAML-LC are not able to meta-learn much structure. MSR-Joint-FC performs comparably to or worse than MSR-FC across the board. Note that following prior work (Li et al., 2017), all methods use meta-learned inner learning rates on parameters that change in the inner loop. For MSR-Joint-FC we observe that the meta-learned inner loop learning rates corresponding to U are significantly smaller than the inner learning rates corresponding to v, suggesting that U is changing relatively little in the inner loop (see Appendix Table 4). MTSR-FC performs significantly worse than MSR-FC with small training datasets, but performs comparably with large datasets. This indicates that although our reparameterization can be trained by either multi-task learning or meta-learning, the meta-learning approach (Alg. 1) is more efficient at learning from less data.
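One simple diagnostic for the structure in Fig. 4 (our own, not part of the paper's evaluation) is to measure how much a generated weight matrix varies along its diagonals; exact convolutional sharing means every diagonal is constant.

import torch

def diagonal_variation(W):
    # Mean standard deviation along each diagonal of W; near zero indicates the
    # translated-copy sharing pattern of Figs. 1 and 4.
    m, n = W.shape
    stds = [torch.diagonal(W, offset=o).std(unbiased=False)
            for o in range(-(m - 1), n)]
    return torch.stack(stds).mean()

v = torch.randn(6)
W_conv = torch.stack([torch.roll(v, i) for i in range(6)])  # convolution-like matrix
print(diagonal_variation(W_conv).item(), diagonal_variation(torch.randn(6, 6)).item())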
5.2 LEARNING EQUIVARIANCE TO ROTATIONS AND FLIPS

Rotation/Flip Equivariance MSE
Method             Rot     Rot+Flip
MAML-Conv          .504    .507
MSR-Conv (Ours)    .004    .001

Table 2: MSR learns rotation and flip equivariant parameter sharing on top of a standard convolution model, and thus achieves much better generalization error on meta-test tasks compared to MAML on rotation and flip equivariant data.

We also created synthetic problems with 2-D synthetic image inputs and outputs, in order to study rotation and flip equivariance. We generate task data by passing randomly generated inputs through a single layer E(2)-equivariant steerable CNN (Weiler and Cesa, 2019) configured to be equivariant to combinations of translations, discrete rotations by increments of 45°, and reflections. Hence our synthetic task data contains rotation and reflection in addition to translation symmetry. Each task corresponds to different values of the data-generating network's weights. We apply MSR and MAML to a single standard convolution layer, which guarantees translation equivariance. Each method must still meta-learn rotation and reflection (flip) equivariance from the data. Table 2 shows that MSR easily learns rotation and rotation+reflection equivariance on top of the convolutional model's built in translational equivariance. Appendix C.2 visualizes the filters MSR produces, which we see are rotated and/or flipped versions of the same filter.

6 CAN WE LEARN INVARIANCES FROM AUGMENTED DATA?

Algorithm 2: Augmentation Meta-Training
input: {T_i}_{i=1}^N: Meta-training tasks
input: META-TRAIN: Any meta-learner
input: AUGMENT: Data augmenter
forall T_i ∈ {T_i}_{i=1}^N do
    {D^tr_i, D^val_i} ← T_i ;        // task data split
    D̂^val_i ← AUGMENT(D^val_i) ;
    T̂_i ← {D^tr_i, D̂^val_i}
META-TRAIN({T̂_i}_{i=1}^N)

Practitioners commonly use data augmentation to train their models to have certain invariances. Since invariance is a special case of equivariance, we can also view data augmentation as a way of learning equivariant models. The downside is that we need augmented data for each task. While augmentation is often possible during meta-training, there are many situations where it is impractical at meta-test time. For example, in robotics we may meta-train a robot in simulation and then deploy (meta-test) in the real world, a kind of sim2real transfer strategy (Song et al., 2020). During meta-training we can augment data using the simulated environment, but we cannot do the same at meta-test time in the real world. Can we instead use MSR to learn equivariances from data augmentation at training time, and encode those learned equivariances into the network itself? This way, the network would preserve learned equivariances on new meta-test tasks without needing any additional data augmentation.

Alg. 2 describes our approach for meta-learning invariances from data augmentation, which wraps around any meta-learning algorithm using generic data augmentation procedures. Recall that each task is split into training and validation data T_i = {D^tr_i, D^val_i}. We use the data augmentation procedure to only modify the validation data, producing a new validation dataset D̂^val_i for each task. We re-assemble each modified task T̂_i ← {D^tr_i, D̂^val_i}. So for each task, the meta-learner observes unaugmented training data, but must generalize to augmented validation data. This forces the model to be invariant to the augmentation transforms without actually seeing any augmented training data.
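A minimal rendering of Alg. 2 in code (ours; the meta-learner and the augmentation function are treated as black-box placeholders): the support set of each task is left untouched, and only the query set is replaced with augmented copies.

def augmentation_meta_training(tasks, meta_train, augment):
    # tasks: iterable of (D_tr, D_val) pairs, where each dataset is a list of
    # (input, label) examples; augment: transforms a single input;
    # meta_train: any meta-learner (e.g. MAML, ANIL, ProtoNets, or MSR).
    augmented_tasks = []
    for D_tr, D_val in tasks:
        D_val_hat = [(augment(x), y) for x, y in D_val]   # augment ONLY the query set
        augmented_tasks.append((D_tr, D_val_hat))
    return meta_train(augmented_tasks)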
We apply this augmentation strategy to Omniglot (Lake et al., 2015) and MiniImagenet (Vinyals et al., 2016) few shot classification to create the Aug-Omniglot and Aug-MiniImagenet benchmarks. Our data augmentation function contains a combination of random rotations, flips, and resizes (rescaling), which we only apply to task validation data as described above. The problem is set up analogously to Finn et al. (2017): for each task, the model must classify images into one of either 5 or 20 classes (n-way) and receives either 1 or 5 examples of each class in the task training data (k-shot). Unlike Finn et al. (2017), our Aug-Omniglot and Aug-MiniImagenet benchmarks contain transformed task validation data.

We tried combining Alg. 2 with our MSR method and three other meta-learning algorithms: MAML (Finn et al., 2017), ANIL (Raghu et al., 2019), and Prototypical Networks (ProtoNets) (Snell et al., 2017). While the latter three methods all have the potential to learn equivariant features through Alg. 2, we hypothesize that since MSR enforces learned equivariance through its symmetry matrices it should outperform these feature meta-learning methods. We also paired MAML with a model that has built in equivariance to the group D8 (45°-increment rotations and reflections) using the E2-CNN library (Weiler and Cesa, 2019). We call this baseline "MAML+D8". Appendix D.3 describes the experimental setup and methods' implementations in more detail.

                 Aug-Omniglot                                        Aug-MiniImagenet
                 5 way                     20 way                    5 way
Method           1-shot      5-shot        1-shot      5-shot        1-shot      5-shot
MAML             87.3 ± 0.5  93.6 ± 0.3    67.0 ± 0.4  79.9 ± 0.3    42.5 ± 1.1  61.5 ± 1.0
MAML (Big)       89.3 ± 0.4  94.8 ± 0.3    69.6 ± 0.4  83.2 ± 0.3    37.2 ± 1.1  63.2 ± 1.0
ANIL             86.4 ± 0.5  93.2 ± 0.3    67.5 ± 3.5  79.8 ± 0.3    43.0 ± 1.1  62.3 ± 1.0
ProtoNets        92.9 ± 0.4  97.4 ± 0.2    85.1 ± 0.3  94.3 ± 0.2    34.6 ± 0.5  54.5 ± 0.6
MAML + D8        94.6 ± 0.4  96.4 ± 0.3    82.6 ± 0.3  85.1 ± 0.3    44.9 ± 1.2  56.8 ± 1.1
MSR (Ours)       95.3 ± 0.3  97.7 ± 0.2    84.3 ± 0.2  92.6 ± 0.2    45.5 ± 1.1  65.2 ± 1.0

Table 3: Meta-test accuracies on Aug-Omniglot and Aug-MiniImagenet few-shot classification. These benchmarks test generalization to augmented validation data from un-augmented training data. MSR performs comparably to or better than other methods under this augmented regime. Results are shown with 95% confidence intervals over test tasks.

Table 3 shows each method's meta-test accuracies on both benchmarks. Across different settings MSR performs either comparably to the best method, or the best. MAML and ANIL perform similarly to each other, and usually worse than MSR, suggesting that learning equivariant or invariant features is not as helpful as learning equivariant layer structures. ProtoNets perform well on the easier Aug-Omniglot benchmark, but evidently struggle with learning a transformation invariant metric space on the harder Aug-MiniImagenet problems. MSR even outperforms the architecture with built in rotation and reflection symmetry (MAML+D8) across the board. MSR's advantage may be due to the additional presence of scaling transformations in the image data; we are not aware of architectures that build in rotation, reflection, and scaling equivariance at the time of writing. Note that MSR's reparameterization increases the number of meta-learned parameters at each layer, so MSR models contain more total parameters than corresponding MAML models. The MAML (Big) results show MAML performance with very large models containing more total parameters than the corresponding MSR models.
The results show that MSR also outperforms these larger MAML models despite having fewer total parameters.

7 DISCUSSION AND FUTURE WORK
We introduce a method for automatically meta-learning equivariances in neural network models, by encoding learned equivariance-inducing parameter sharing patterns in each layer. On new tasks, these sharing patterns reduce the number of task-specific parameters and improve generalization. Our experiments show that this method can improve few-shot generalization on task distributions with shared underlying symmetries. We also introduce a strategy for meta-training invariances into networks using data augmentation, and show that it works well with our method. By encoding equivariances into the network as a parameter sharing pattern, our method has the benefit of preserving learned equivariances on new tasks so it can learn more efficiently. Machine learning thus far has benefited from exploiting human knowledge of problem symmetries, and we believe this work presents a step towards learning and exploiting symmetries automatically.

This work leads to numerous directions for future investigation. In addition to generalization benefits, standard convolution is practical since it exploits the parameter sharing structure to improve computational efficiency, relative to a fully connected layer of the same input/output dimensions. While MSR can improve computational efficiency by reparameterizing standard convolution layers, it does not exploit learned structure to further optimize its computation. Can we automatically learn or find efficient implementations of these more structured operations? Additionally, MSR is focused on learning finite symmetry groups, while approximating infinite ones (e.g., learning 45°-increment rotation symmetry as an approximation to continuous rotation symmetry). Unfortunately, the number of parameters increases with the resolution of the approximation, so further research would be useful in discovering more scalable methods of approximating and learning continuous symmetries. Finally, our method is best for learning symmetries which are shared across a distribution of tasks. Further research on quickly discovering symmetries which are particular to a single task would make deep learning methods significantly more useful on many difficult real world problems.

ACKNOWLEDGEMENTS
We would like to thank Sam Greydanus, Archit Sharma, and Yiding Jiang for reviewing and critiquing earlier drafts of this paper. This work was supported in part by Google. CF is a CIFAR Fellow.

REFERENCES
F. Alet, T. Lozano-Pérez, and L. P. Kaelbling. Modular meta-learning. arXiv preprint arXiv:1806.10166, 2018.
A. Antoniou, H. Edwards, and A. Storkey. How to train your MAML. arXiv preprint arXiv:1810.09502, 2018.
S. Bengio, Y. Bengio, J. Cloutier, and J. Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, volume 2. Univ. of Texas, 1992.
G. Benton, M. Finzi, P. Izmailov, and A. G. Wilson. Learning invariances in neural networks. arXiv preprint arXiv:2010.11882, 2020.
D. Beymer and T. Poggio. Face recognition from one example view. In Proceedings of IEEE International Conference on Computer Vision, pages 500–507. IEEE, 1995.
A. Brock, T. Lim, J. M. Ritchie, and N. Weston. SMASH: One-shot model architecture search through hypernetworks. arXiv preprint arXiv:1708.05344, 2017.
T. Cohen and M. Welling. Group equivariant convolutional networks.
In International conference on machine learning, pages 2990–2999, 2016.
T. S. Cohen, M. Weiler, B. Kicanaoglu, and M. Welling. Gauge equivariant convolutional networks and the icosahedral CNN. arXiv preprint arXiv:1902.04615, 2019.
M. Cranmer, S. Greydanus, S. Hoyer, P. Battaglia, D. Spergel, and S. Ho. Lagrangian neural networks. arXiv preprint arXiv:2003.04630, 2020.
E. D. Cubuk, B. Zoph, D. Mané, V. Vasudevan, and Q. V. Le. AutoAugment: Learning augmentation policies from data. CoRR abs/1805.09501 (2018). arXiv preprint arXiv:1805.09501, 2018.
T. Deleu, T. Würfl, M. Samiei, J. P. Cohen, and Y. Bengio. Torchmeta: A Meta-Learning library for PyTorch, 2019. URL https://arxiv.org/abs/1909.06576. Available at: https://github.com/tristandeleu/pytorch-meta.
N. Diaconu and D. E. Worrall. Learning to convolve: A generalized weight-tying approach. arXiv preprint arXiv:1905.04663, 2019.
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
D. S. Dummit and R. M. Foote. Abstract algebra, volume 3. Wiley Hoboken, 2004.
G. F. Elsayed, P. Ramachandran, J. Shlens, and S. Kornblith. Revisiting spatial invariance with low-rank local connectivity. arXiv preprint arXiv:2002.02959, 2020.
T. Elsken, J. H. Metzen, and F. Hutter. Neural architecture search: A survey. arXiv preprint arXiv:1808.05377, 2018.
C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR.org, 2017.
S. Flennerhag, A. A. Rusu, R. Pascanu, H. Yin, and R. Hadsell. Meta-learning with warped gradient descent. arXiv preprint arXiv:1909.00025, 2019.
R. Gens and P. M. Domingos. Deep symmetry networks. In Advances in neural information processing systems, pages 2537–2545, 2014.
A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In International conference on machine learning, pages 1764–1772, 2014.
E. Grefenstette, B. Amos, D. Yarats, P. M. Htut, A. Molchanov, F. Meier, D. Kiela, K. Cho, and S. Chintala. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727, 2019.
S. Greydanus, M. Dzamba, and J. Yosinski. Hamiltonian neural networks. In Advances in Neural Information Processing Systems, pages 15353–15363, 2019.
D. Ha, A. Dai, and Q. V. Le. HyperNetworks. arXiv preprint arXiv:1609.09106, 2016.
A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM review, 51(3):455–500, 2009.
R. Kondor and S. Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups, 2018.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks.
In Advances in neural information processing systems, pages 1097–1105, 2012.
B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Y. Lee and S. Choi. Gradient-based meta-learning with learned layerwise metric and subspace. arXiv preprint arXiv:1801.05558, 2018.
Z. Li, F. Zhou, F. Chen, and H. Li. Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017.
H. Liu, K. Simonyan, and Y. Yang. DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018a.
R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems, pages 9605–9616, 2018b.
J. Lorraine, P. Vicol, and D. Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. arXiv preprint arXiv:1911.02590, 2019.
J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International conference on machine learning, pages 2408–2417, 2015.
P. Niyogi, F. Girosi, and T. Poggio. Incorporating prior information in machine learning by creating virtual examples. Proceedings of the IEEE, 86(11):2196–2209, 1998.
E. Park and J. B. Oliva. Meta-curvature. In Advances in Neural Information Processing Systems, pages 3309–3319, 2019.
A. Raghu, M. Raghu, S. Bengio, and O. Vinyals. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. arXiv preprint arXiv:1909.09157, 2019.
S. Ravanbakhsh, J. Schneider, and B. Poczos. Equivariance through parameter-sharing. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2892–2901. JMLR.org, 2017.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell. Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960, 2018.
J. Schmidhuber. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1:2, 1987.
J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
J. Shawe-Taylor. Building symmetries into feedforward networks. In 1989 First IEE International Conference on Artificial Neural Networks, (Conf. Publ. No. 313), pages 158–162. IET, 1989.
J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in neural information processing systems, pages 4077–4087, 2017.
X. Song, Y. Yang, K. Choromanski, K. Caluwaerts, W. Gao, C. Finn, and J. Tan. Rapidly adaptable legged robots via evolutionary meta-learning. arXiv preprint arXiv:2003.01239, 2020.
K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99–127, 2002.
K. O. Stanley, D. B. D'Ambrosio, and J. Gauci.
A hypercube-based encoding for evolving large-scale neural networks. Artificial life, 15(2):185–212, 2009.
S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 2012.
M. van der Wilk, M. Bauer, S. John, and J. Hensman. Learning invariances using the marginal likelihood. In Advances in Neural Information Processing Systems, pages 9938–9948, 2018.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016.
M. Weiler and G. Cesa. General E(2)-equivariant steerable CNNs. In Advances in Neural Information Processing Systems, pages 14334–14345, 2019.
D. Worrall and M. Welling. Deep scale-spaces: Equivariance over scale. In Advances in Neural Information Processing Systems, pages 7364–7376, 2019.
D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028–5037, 2017.
M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep sets. In Advances in neural information processing systems, pages 3391–3401, 2017.
B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.

A APPROXIMATION AND TRACTABILITY

A.1 FULLY CONNECTED
From Eq. 2 we see that for a layer with m output units, n input units, and k filter parameters, the symmetry matrix U has mnk entries. This is too expensive for larger layers, so in practice we need a factorized reparameterization to reduce memory and compute requirements when k is large. For fully connected layers, we use a Kronecker factorization to scalably reparameterize each layer. First, we assume that the filter parameters v ∈ R^{kl} can be arranged in a matrix V ∈ R^{k×l}. Then we reparameterize each layer's weight matrix W similar to Eq. 2, but assume the symmetry matrix is the Kronecker product of two smaller matrices:

vec(W) = (U_1 ⊗ U_2) vec(V),  U_1 ∈ R^{n×l},  U_2 ∈ R^{m×k}    (5)

Since we only store the two Kronecker factors U_1 and U_2, we reduce the memory requirements of U from mnkl to mk + nl. In our experiments we generally choose V ∈ R^{m×n}, so U_1 ∈ R^{n×n} and U_2 ∈ R^{m×m}. Then the actual memory cost of each reparameterized layer (including both U and v) is m² + n² + mn, compared to mn for a standard fully connected layer. So in the case where m ≈ n, MSR increases memory cost by roughly a constant factor of 3.

After approximation, MSR also increases computation time (forward and backward passes) by roughly a constant factor of 3 compared to MAML. A standard fully connected layer requires a single matrix-matrix multiply Y = WX in the forward pass (here Y and X are matrices since inputs and outputs are in batches). Applying the Kronecker-vec trick to Eq. 5 gives:

W = U_1 V U_2^T  ⟺  vec(W) = (U_1 ⊗ U_2) vec(V)    (6)

So rather than actually forming the (possibly large) symmetry matrix U_1 ⊗ U_2, we can directly construct W using 2 additional matrix-matrix multiplies, W = U_1 V U_2^T. Again assuming V ∈ R^{m×n} and m ≈ n, each matrix in the preceding expression is approximately the same size as W.
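A short sketch of the factorized fully connected layer described above, with square factors and identity initialization as one natural choice (our own assumptions); note the multiplication below is written as W = U2 V U1ᵀ purely so the shapes conform with V ∈ R^{m×n}, and it plays the same role as Eq. 6.

import torch

m, n = 32, 64
U1 = torch.eye(n, requires_grad=True)        # Kronecker factor acting on columns
U2 = torch.eye(m, requires_grad=True)        # Kronecker factor acting on rows
V = torch.randn(m, n, requires_grad=True)    # filter parameters, V in R^{m x n}

def reparam_fc(x):
    # Builds W from the Kronecker-factored symmetry matrix without ever forming
    # the mn x mn matrix U1 (kron) U2: two extra matrix multiplies suffice.
    W = U2 @ V @ U1.T
    return x @ W.T

y = reparam_fc(torch.randn(5, n))            # shape (5, m)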
A.2 2D CONVOLUTION
When reparameterizing 2-D convolutions, we need to produce a filter (a rank-4 tensor W ∈ R^{C_o×C_i×H×W}). We assume the filter parameters are stored in a rank-3 tensor V ∈ R^{p×q×s}, and factorize the symmetry matrix U into three separate matrices U_1 ∈ R^{C_o×p}, U_2 ∈ R^{C_i×q}, and U_3 ∈ R^{HW×s}. A similar Kronecker product approximation gives:

W̃ = V ×_1 U_1 ×_2 U_2 ×_3 U_3,  W̃ ∈ R^{C_o×C_i×HW}    (7)
W = reshape(W̃),  W ∈ R^{C_o×C_i×H×W}    (8)

where ×_n represents n-mode tensor multiplication (Kolda and Bader, 2009). Just as in the fully connected case, this convolution reparameterization is equivalent to a Kronecker factorization of the symmetry matrix U. An analysis of the memory and computation requirements of reparameterized convolution layers proceeds similarly to the above analysis for the fully connected case. As we describe below, in our augmented experiments using convolutional models each MSR outer step takes roughly 30%–40% longer than a MAML outer step. In practice, for any experiment where we reparameterize a standard 2-D convolution with weights W ∈ R^{C_o×C_i×H×W}, we choose p = C_o, q = C_i, and s = HW. Equivalently, we choose V ∈ R^{C_o×C_i×HW}. Although not necessary, this choice conveniently makes the matrices U_1, U_2 and U_3 into square matrices, which we can initialize to identity matrices at the start of meta-learning.

B PROOF OF PROPOSITION 1
To show the connection with the existing literature, we first present a slightly generalised definition of G-convolution that is more common in the existing literature. We instead model an input signal as a function f : X → R on some underlying space X. We then consider a finite group G = {g_1, ..., g_n} of symmetries acting transitively on X, over which we desire G-equivariance. Many (but not all) of the groups discussed in (Weiler and Cesa, 2019) are finite groups of this form. It is proven by (Kondor and Trivedi, 2018) that a function φ is equivariant to G if and only if it is a G-convolution on this space. In the domain of finite groups, we can consider a slight simplification of this notion: a finite G cross-correlation of f with a filter ψ : X → R. This is defined by (Cohen and Welling, 2016) as:

[φ(f)](g) = (f ⋆ ψ)(g) = Σ_{x∈X} f(x) ψ(g^{-1}x).²    (9)

Figure 5: The theoretical convolutional weight symmetry matrix for the group G = C4, whose elements g are Nπ/2-radian rotations of a 3x3 image (N ∈ {0, 1, 2, 3}). Notice that the image is flattened into a length 9 vector. The matrix π(g) describes the action of an Nπ/2-radian rotation on this image.

We can now connect this notion with the linear layer, as described in our paper. First, in order for a fully connected layer's weight matrix W to act on function f, we must first assume that f has finite support {x_1, ..., x_s}, i.e. f(x) is only non-zero at these s points within X. This means that f can be represented as a dual vector f ∈ R^s given by f_i = f(x_i), on which W can act.³ We aim to show a certain value of U_G ∈ R^{ns×s} allows arbitrary G cross-correlations and only G cross-correlations to be represented by fully connected layers with weight matrices of the form

vec(W) = U_G v,    (10)

where v ∈ R^s is any arbitrary vector of appropriate dimension. The reshape specifically gives W ∈ R^{n×s}, which transforms the vector f ∈ R^s. With this in mind, we first use that the action of the group can be represented as a matrix transformation on this vector space, using the matrix representation π:

[π(g)f]_i = f(g^{-1}x_i)    (11)

where notably π(g) ∈ R^{s×s}. We consider U_G ∈ R^{ns×s}, and v ∈ R^s.
Since v ∈ R^s, we can also treat v as the dual vector of a function v̂ : X → R with support {x_1, ..., x_s}, described by v̂(x_i) = v_i. We can interpret v̂ as a convolutional filter, just like ψ in Eq. 9. W then acts on v just as it acts on f, namely:

[π(g)v]_i = v̂(g^{-1}x_i).    (12)

Now, we define U_G by stacking the matrix representations of the g_i ∈ G:

U_G = [π(g_1); π(g_2); ...; π(g_n)]    (13)

which implies the following value of W:

W = reshape(U_G v), whose ith row is (π(g_i)v)^T.    (14)

²This definition avoids notions such as lifting of X to G and the possibility of more general group representations, for the sake of simplicity. We recommend Kondor and Trivedi (2018) for a more complete theory of G-convolutions.
³This is using the natural linear algebraic dual of the free vector space on {x_1, ..., x_s}.

Meta-learned LRs on synthetic problems
            Small train dataset           Large train dataset
Variable    k = 1     k = 2     k = 5     k = 1     k = 2     k = 5
U           0.017     0.011     0.052     0.009     0.021     0.039
v           +0.241    +0.326    +0.401    +0.241    +0.307    +0.312

Table 4: In the ablation MSR-Joint-FC of Sec. 4 we jointly updated U and v in the inner loop with meta-learned inner loop learning rates for each. This is in contrast with standard MSR, where only v is updated in the inner loop (also with a meta-learned learning rate), and U is only updated in the outer loop. The inner learning rates were initialized at 0.02 for all variables. The table shows the inner loop learning rates at the end of training. The relative magnitudes suggest that v is being updated significantly more than U in the inner loop.

This then grants that the output of the fully connected layer with weights W is:

(Wf)_i = Σ_{j=1}^{s} (π(g_i)v)_j f_j.    (15)

Using that f has finite support {x_1, ..., x_s}, and that (π(g_i)v)_j = v̂(g_i^{-1}x_j), we have that:

(Wf)_i = Σ_{j=1}^{s} v̂(g_i^{-1}x_j) f(x_j) = Σ_{x∈X} v̂(g_i^{-1}x) f(x).    (16)

Lastly, we can interpret Wf as a function φ_G(f) mapping each g_i ∈ G to its ith component:

[φ_G(f)](g_i) = (Wf)_i = Σ_{x∈X} v̂(g_i^{-1}x) f(x)    (17)

which is precisely the cross-correlation as described in Eq. 9 with filter ψ = v̂. This implies that φ_G must be equivariant with respect to G. Moreover, all such G-equivariant functions are G cross-correlations parameterized by v, so with U_G fixed as in Eq. 13, we have that the weight matrix defined by vec(W) = U_G v can represent all G-equivariant functions. This means that if v is chosen to have the same dimension as the input, and the weight symmetry matrix is sufficiently large, any equivariance to a finite group can be meta-learned using this approach. Moreover, in this case the symmetry matrix has a very natural and interpretable structure, containing a representation of the group in block submatrices; this structure is seen in practice in our synthetic experiments. Lastly, notice that v corresponds (dually) to the convolutional filter, justifying the notion that we learn the convolutional filter in the inner loop, and the group action in the outer loop.

In the above proof, we've used the original definition of group convolution (Cohen and Welling, 2016) for the sake of simplicity. It is useful to note that a slight generalization of the proof applies for more general equivariance between representations, as defined in Sec. 3.3 (i.e. the case when π(g) is an arbitrary linear transformation, and not necessarily of the form π(g)f(x) = f(g^{-1}x)). This is subject to a unitarity condition on the group representation (Worrall and Welling, 2019). Without any modification to the method, arbitrary linear approximations to group convolution can be learnt when the representation is not a permutation of the indices.
For example, non-axis-aligned rotations can be easily approximated through both bilinear and bicubic interpolation, whereby the value of a pixel x after rotation is a linear interpolation of the 4 or 16 pixels nearest to the true value of this pixel before rotation, g^{-1}x. Practically, this allows us to approximate equivariance to 45-degree rotations of 2D images, for which there don't exist representations of the form in Eq. 12.

C FURTHER SYNTHETIC EXPERIMENT RESULTS

C.1 VISUALIZING TRANSLATION EQUIVARIANT SYMMETRY MATRICES
Fig. 6 visualizes the actual symmetry matrix U that MSR-FC meta-learns from translation equivariant data. Each column is one of the submatrices π(i) corresponding to the action of the discrete translation group element i ∈ Z on the filter v. In other words, MSR automatically meta-learned U to contain these submatrices π(i) such that each π(i) translates the filter by i spaces, effectively meta-learning standard convolution! In the actual symmetry matrix the submatrices are stacked on top of each other as in Eq. 13, but we display each submatrix side-by-side for easy visualization. The figure is also cropped for space: there are a total of 68 submatrices but we show only the first 20, and each submatrix is cropped from 70×3 to 22×3.

Figure 6: The submatrices of the meta-learned symmetry matrix of MSR-FC on the translation equivariant problem (Sec. 5.1). Intensity corresponds to each entry's absolute value. We see that the symmetry matrix has been meta-learned to implement standard convolution: each π(i) translates the filter v ∈ R^3 by i spaces. Note that in actuality the submatrices are stacked on top of each other in U as in Eq. 13, but we display them side-by-side for visualization.

C.2 VISUALIZING ROTATION AND FLIP EQUIVARIANT FILTERS
In Sec. 5.2 we ran three experiments reparameterizing convolution layers to meta-learn 90° rotation, 45° rotation, and 45° rotation+flip equivariance, respectively. Figure 7 shows that MSR produces rotated and flipped versions of filters in order to make the convolution layers equivariant to the corresponding rotation or flip transformations.

D EXPERIMENTAL DETAILS
Throughout this work we implemented all gradient based meta-learning algorithms in PyTorch using the Higher (Grefenstette et al., 2019) library.

D.1 TRANSLATION SYMMETRY SYNTHETIC PROBLEMS
For the (partial) translation symmetry problems we generated regression data using a single locally connected layer. Each task corresponds to different weights of the data generating network, whose entries we sample independently from a standard normal distribution. For rank k locally connected filters we sampled k width-3 filters and then set the filter value at each spatial location to be a random linear combination of those k filters. Table 5 shows how many distinct training and test tasks we generated data for.

Synthetic problem data quantity
                                 k = 1   k = 2   k = 5   Rot    Rot+flip
No. train tasks                  400     800     800     8000   8000
No. test tasks                   100     200     200     2000   2000
Examples/train task (Small)      2       2       4       20     20
Examples/train task (Large)      20      20      20      -      -
Train examples/test task         1       1       1       1      1

Table 5: The amount of training and test data provided to each method in the synthetic experiments of Table 1 and Table 2. The last row indicates that on the test tasks, all methods were expected to solve each problem using a single example from that task.
MSR and MAML training: During meta-training we trained each method for 1,000 outer steps on task batches of size 32, enough for the training loss to converge for every method on every problem. We used the Adam (Kingma and Ba, 2014) optimizer in the outer loop with learning rate 0.0005. Like most meta-learning methods, MAML and MSR split each task's examples into a support set (task training data) and a query set (task validation data). On training tasks MAML and MSR used 3 SGD steps on the support data before computing the meta-training objective on the query data, while using 9 SGD steps on the support data of test tasks. We also used meta-learned per-layer learning rates initialized to 0.02. At meta-test time we evaluated average performance and error bars on held-out tasks.

MTSR training: We reparameterize fully connected layers into a symmetry matrix U and filter v, similar to MSR. MTSR maintains a single shared U, but initializes a separate filter v(i) for each training task T_i. Given example data D_i from T_i, we jointly optimize {U, v(i)} using the loss L(U, v(i), D_i). In practice each update step updates U and all {v(i)} in parallel using the full batch of training tasks. Given a test task, we initialize a new filter v alongside the already-trained U. We then update v on training examples from the test task before evaluating on held-out examples from that task. We use 500 gradient steps for each task at both training and test time, again using the Adam optimizer with learning rate 0.001.

We ran all experiments on a single machine with a single NVIDIA RTX 2080 Ti GPU. Our MSR-FC experiments ran at about 9.5 (outer loop) steps per second, while our MSR-Conv experiments ran at about 2.8 (outer loop) steps per second.

D.2 ROTATION+FLIP SYMMETRY SYNTHETIC PROBLEMS

The setup of the rotation and rotation+flip symmetry problems is very similar to that of the translation symmetry problems. Here we generated regression data using a single E(2)-steerable (Weiler and Cesa, 2019) layer. Each task again corresponds to a particular setting of the weights of this data-generating network, whose entries are sampled from a standard normal distribution for each task. We generate examples for each task similarly to above, and Table 5 shows the quantity of data available for training and test tasks. The MAML and MSR training setups here are similar to the translation setups, but we reparameterize the filter of a standard convolution layer to build in translation symmetry and focus on learning rotation/flip symmetry. Unlike the translation experiments, here we use 1 SGD step in the inner loop for both train and test tasks, and initialize the learned learning rates to 0.1.

D.3 AUGMENTATION EXPERIMENTS

To create Aug-Omniglot and Aug-MiniImagenet, we extended the Omniglot and MiniImagenet benchmarks from Torchmeta (Deleu et al., 2019). Each task in these benchmarks is split into support (train) and query (validation) datasets. For the augmented benchmarks we applied data augmentation to only the query dataset of each task; the augmentation consisted of randomly resized crops, reflections, and rotations by up to 30°. Using the torchvision library, the augmentation function is:

    from PIL import Image
    from torchvision.transforms import (Compose, RandomResizedCrop, RandomRotation,
                                        RandomHorizontalFlip, RandomVerticalFlip)

    # Data augmentation applied to ONLY the query set.
    size = 28  # Omniglot image size; 84 for MiniImagenet.
    augment_fn = Compose([
        RandomResizedCrop(size, scale=(0.8, 1.0)),
        RandomVerticalFlip(p=0.5),
        RandomHorizontalFlip(p=0.5),
        RandomRotation(30, resample=Image.BILINEAR),
    ])
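For illustration only (the function and variable names below are ours, not from the released code), applying this transform to just the query half of a task might look like:

    # Hypothetical helper: augment only the query images of a task.
    # support_imgs and query_imgs are assumed to be lists of PIL images.
    def split_and_augment(support_imgs, query_imgs, augment_fn):
        augmented_query = [augment_fn(img) for img in query_imgs]
        return support_imgs, augmented_query   # support set is left untouched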
For the augmented Omniglot and MiniImagenet 1-shot experiments, MAML used exactly the same convolutional architecture (same number of layers, number of channels, filter sizes, etc.) as prior work on Omniglot and MiniImagenet (Vinyals et al., 2016; Finn et al., 2017). For MSR we reparameterize each layer's weight matrix or convolutional filter using the Kronecker approximation (Appendix A) such that the reparameterized layer has the same number of input and output neurons as the corresponding layer in the MAML model.

For MiniImagenet 5-shot, we experimented with increasing architecture size via more channels and/or larger filters, which yielded better accuracies on meta-validation tasks. For MSR, MAML, and ANIL we increased the number of output channels from 32 to 128 and increased the kernel size from 3 to 5 in the first 3 convolution layers. We then inserted a 1×1 convolution layer with 64 output channels right before the linear output layer. For the ProtoNet architecture we similarly increased the output channels at each layer from 32 to 128, but found that keeping the kernel size at 3 worked best.

For the MAML (Big) experiments we increased the architecture size of the MAML model to exceed the number of meta-parameters (symmetry matrices + filter parameters) in the corresponding MSR model. For MiniImagenet 5-shot we inserted an additional linear layer with 3840 output units before the final linear layer. For MiniImagenet 1-shot we increased the number of output channels at each of the 3 convolution layers from 32 to 64, then inserted an additional linear layer with 1920 output units before the final linear layer. For the Omniglot experiments we increased the number of output channels at each of the 3 convolution layers to 150.

For all experiments and gradient-based methods we trained for 60,000 (outer) steps using the Adam optimizer with learning rate 0.0005 for MiniImagenet 5-shot and 0.001 for all other experiments. In the inner loop we used SGD with meta-learned per-layer learning rates initialized to 0.4 for Omniglot and 0.05 for MiniImagenet. We meta-trained using a single inner loop step in all experiments, and used 3 inner loop steps at meta-test time. Although MAML originally meta-trained with 5 inner loop steps on MiniImagenet, we found that this destabilized meta-training on our augmented version. We hypothesize that this is due to the discrepancy between support and query data in our augmented problems. During meta-training we used a task batch size of 32 for Omniglot and 10 for MiniImagenet. At meta-test time we evaluated average performance and error bars using 1,000 held-out meta-test tasks.

We ran all experiments on a machine with a single NVIDIA Titan RTX GPU. For Aug-Omniglot, we ran two experiments simultaneously on the same machine, which likely slowed each individual experiment down. Our MSR method took about 0.6 steps per second, whereas the MAML baseline took about 0.86 steps per second. For Aug-MiniImagenet we ran one experiment per machine. MSR took 4.2 steps per second, while MAML took 5.6 steps per second on these experiments.
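For reference, the inner/outer loop structure described above can be sketched with the higher library roughly as follows. This is a simplified illustration under our own assumptions (a placeholder task_sampler, a single fixed inner learning rate instead of the meta-learned per-layer rates, and no MSR reparameterization), not the exact training code.

    import torch
    import higher

    def meta_train(model, task_sampler, n_outer_steps=60000, task_batch_size=10,
                   outer_lr=5e-4, inner_lr=0.05, n_inner_steps=1):
        # Outer loop: Adam over the meta-parameters.
        outer_opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(n_outer_steps):
            outer_opt.zero_grad()
            for _ in range(task_batch_size):
                # task_sampler is a placeholder yielding support/query tensors for one task.
                (xs, ys), (xq, yq) = task_sampler()
                inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
                # higher yields a differentiable copy of the model and optimizer, so the
                # inner-loop adaptation stays part of the outer-loop computation graph.
                with higher.innerloop_ctx(model, inner_opt,
                                          copy_initial_weights=False) as (fmodel, diffopt):
                    for _ in range(n_inner_steps):
                        diffopt.step(loss_fn(fmodel(xs), ys))      # adapt on support data
                    # Meta-objective: post-adaptation loss on the (augmented) query data.
                    (loss_fn(fmodel(xq), yq) / task_batch_size).backward()
            outer_opt.step()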
Figure 7: MSR-produced convolution filters after meta-learning 90° rotation, 45° rotation, and 45° rotation+flip equivariance in the Sec. 5.2 experiments. Notice that MSR learns to achieve the corresponding equivariance by producing rotated/flipped versions of the same filter.