# Unsupervised Co-Learning on G-Manifolds Across Irreducible Representations

Yifeng Fan¹, Tingran Gao², Zhizhen Zhao¹
¹University of Illinois at Urbana-Champaign  ²University of Chicago
{yifengf2, zhizhenz}@illinois.edu, tingrangao@galton.uchicago.edu

We introduce a novel co-learning paradigm for manifolds naturally admitting an action of a transformation group $G$, motivated by recent developments on learning a manifold from attached fibre bundle structures. We utilize a representation theoretic mechanism that canonically associates multiple independent vector bundles over a common base manifold, which provides multiple views of the geometry of the underlying manifold. The consistency across these fibre bundles provides a common basis for performing unsupervised manifold co-learning through the redundancy created artificially across irreducible representations of the transformation group. We demonstrate the efficacy of the proposed algorithmic paradigm through drastically improved robust nearest neighbor identification in cryo-electron microscopy image analysis and improved clustering accuracy in community detection.

1 Introduction

Fighting the curse of dimensionality by leveraging low-dimensional intrinsic structures has become an important guiding principle in modern data science. Apart from the classical structural assumptions commonly employed in sparsity or low-rank models in high-dimensional statistics [63, 11, 12, 49, 2, 64, 67], it has recently become of interest to leverage more intricate properties of the underlying geometric model, motivated by techniques from algebraic or differential geometry, for efficient learning and inference from massive complex datasets [15, 16, 44, 46, 8]. The assumption that high-dimensional datasets lie approximately on a low-dimensional manifold, known as the manifold hypothesis, has been the cornerstone for the development of manifold learning [62, 52, 18, 3, 4, 5, 17, 57, 66] over the past few decades.

In many real applications, the low-dimensional manifold underlying a dataset of high ambient dimensionality admits additional structures that can be fully leveraged to gain deeper insights into the geometry of the data. One class of such examples arises in scientific fields such as cryo-electron microscopy (cryo-EM), where large numbers of random projections of a three-dimensional molecule generate massive collections of images that can be determined only up to in-plane rotations [59, 72]. Another source of examples is applications in computer vision and robotics, where a major challenge is to recognize and compare three-dimensional spatial configurations up to the action of Euclidean or conformal groups [28, 10]. In these examples, the dataset of interest consists of images or shapes of potentially high spatial resolution, and admits a natural group action $g \in G$ that plays the role of a nuisance or latent variable that needs to be quotiented out before useful information is revealed. In geometric terms, on top of a differentiable manifold $M$ underlying the dataset of interest, as assumed in the manifold hypothesis, we also assume the manifold admits a smooth right action of a Lie group $G$, in the sense that there is a smooth map $\varphi: G \times M \to M$ satisfying $\varphi(e, m) = m$ and $\varphi(g_2, \varphi(g_1, m)) = \varphi(g_1 g_2, m)$ for all $m \in M$ and $g_1, g_2 \in G$, where $e$ is the identity element of $G$. A left action can be defined similarly.
Such a group action reflects abundant information about the symmetry of the underlying manifold, with which one can study geometric and topological properties of the underlying manifold through the lens of orbits, stabilizers, or induced finite- or infinite-dimensional representations of $G$. In the modern differential and symplectic geometry literature, a smooth manifold $M$ admitting the action of a Lie group $G$ is often referred to as a G-manifold (see e.g. [40, 6], [50, 1, 33] and references therein), and this transformation-centered methodology has proven fruitful [42, 53, 40, 30] across several generations of geometers and topologists.

Recent developments in manifold learning have started to digest and incorporate the additional information encoded in G-actions on the low-dimensional manifold underlying high-dimensional data. In [36], the authors constructed a steerable graph Laplacian on the manifold of images, modeled as a rotationally invariant manifold (a U(1)-manifold in geometric terms), which serves the role of the graph Laplacian in manifold learning but with rotational invariance built in by construction. In [38], the authors proposed a principal bundle model for image denoising, which achieved state-of-the-art performance by combining patch-based image analysis with rotationally invariant distances in microscopy [47]. A major contribution of this paper is to provide deeper insights into the success of these group-transformation-based manifold learning techniques from the perspective of multi-view learning [56, 60, 37] or co-training [7], and to propose a family of new methods that systematically utilize this additional information by exploiting the inherent consistency across representation theoretic patterns. Motivated by the recent line of research bridging manifold learning with principal and associated fibre bundles [57, 58, 22, 20, 19], we point out that a G-manifold admitting a principal bundle structure is naturally associated with as many vector bundles as there are distinct irreducible representations of the transformation group $G$, and each of these vector bundles provides a separate view for unveiling the geometry of the common base manifold on which all the fibre bundles reside.

Specifically, the main contributions of this paper are summarized as follows: (1) We propose a new unsupervised co-learning paradigm on G-manifolds together with an optimal alignment affinity measure for high-dimensional data points that lie on or close to a lower-dimensional G-manifold, using both the local cycle consistency of group transformations on the manifold (graph) and the algebraic consistency of the unitary irreducible representations of the transformations; (2) We introduce invariant-moment affinities in order to bypass the computationally intensive pairwise optimal alignment search and efficiently learn the underlying local neighborhood structure; and (3) We empirically demonstrate that the new framework is extremely robust to noise and apply it to improve cryo-EM image analysis and the clustering accuracy in community detection. Code is available at https://github.com/frankfyf/G-manifold-learning.
2 Related Work

Manifold Learning: After the ground-breaking works of [62, 52], [5, 56, 41] provided reproducing kernel Hilbert space frameworks for scalar- and vector-valued kernels and interpreted the manifold assumption as a specific type of regularization; [3, 4, 14] used the estimated eigenfunctions of the Laplace-Beltrami operator to parametrize the underlying manifold; [24, 25, 59] investigated the representation theoretic pattern of an integral operator acting on certain complex line bundles over the unit two-sphere naturally arising in cryo-EM image analysis; [57, 58, 22] demonstrated the benefit of using differential operators defined on fibre bundles over the manifold, instead of the Laplace-Beltrami operator on the manifold itself, in manifold learning tasks. Recently, [20, 19, 23] proposed to utilize the consistency across multiple irreducible representations of a compact Lie group to improve spectral decomposition based algorithms.

Co-training and Multi-view Learning: In their seminal work [7], Blum and Mitchell demonstrated both in theory and in practice that distinct views of a dataset can be combined to improve the performance of learning tasks, through their complementary yet consistent predictions for unlabelled data. Similar ideas exploiting the consistency of the information contained in different sets of features have long existed in the statistical literature, for example canonical correlation analysis [29]. Since then, multi-view learning has remained a powerful idea percolating through aspects of machine learning ranging from supervised and semi-supervised learning to active learning and transfer learning [21, 43, 61, 13, 55, 56, 34, 35]. See the surveys [60, 69, 70, 37] for more detailed accounts.

3 Geometric Motivation

We first provide a brief overview of the key concepts from elementary group representation theory used in this paper. Interested readers are referred to [54, 9] for more details.

Groups and Representations: A group $G$ is a set with an operation $G \times G \to G$ obeying the following axioms: (1) $\forall g_1, g_2 \in G$, $g_1 g_2 \in G$; (2) $\forall g_1, g_2, g_3 \in G$, $g_1(g_2 g_3) = (g_1 g_2)g_3$; (3) there is a unique $e \in G$, called the identity of $G$, such that $eg = ge = g$ for all $g \in G$; (4) $\forall g \in G$, there is a corresponding element $g^{-1} \in G$, called the inverse of $g$, such that $g g^{-1} = g^{-1} g = e$. A $d_\rho \times d_\rho$-dimensional representation of a group $G$ over a field $F$ is a matrix-valued function $\rho: G \to F^{d_\rho \times d_\rho}$ such that $\rho(g_1)\rho(g_2) = \rho(g_1 g_2)$ for all $g_1, g_2 \in G$. In this paper we assume $F = \mathbb{C}$. A representation $\rho$ is said to be unitary if $\rho(g^{-1}) = \rho(g)^*$ for every $g \in G$, and $\rho$ is said to be reducible if it can be decomposed into a direct sum of lower-dimensional representations as $\rho(g) = Q^{-1}(\rho_1(g) \oplus \rho_2(g))Q$ for some invertible matrix $Q \in \mathbb{C}^{d_\rho \times d_\rho}$, where the symbol $\oplus$ denotes the direct sum; otherwise $\rho$ is irreducible. For a compact group, there exists a complete set of inequivalent irreducible representations (irreps for brevity), and any representation can be reduced into a direct sum of irreps.

Fourier Transform: In many applications of interest the Lie group is compact and thus always admits irreps, and the concept of irreps allows generalizing the Fourier transform to any compact group. By the renowned Peter-Weyl theorem, any square integrable function $f \in L^2(G)$ can be decomposed as
$$f(g) = \sum_{k=0}^{\infty} d_k \operatorname{Tr}\left[F_k \rho_k(g)\right], \qquad F_k = \int_G f(g)\,\rho_k^*(g)\,d\mu_g, \qquad (1)$$
where each $\rho_k: G \to \mathbb{C}^{d_k \times d_k}$ is a unitary irrep of $G$ with dimension $d_k \in \mathbb{N}$. This is the compact Lie group analogue of the standard Fourier series over the unit circle.
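For concreteness, when $G = SO(2)$ the irreps are the one-dimensional characters $\rho_k(\alpha) = e^{\imath k\alpha}$ indexed by $k \in \mathbb{Z}$, and (1) reduces to the classical Fourier series. The minimal sketch below (in Python/NumPy; the grid size and the test function are illustrative choices, not taken from the paper) computes the coefficients $F_k$ by discretizing the Haar integral and verifies the reconstruction formula:

```python
import numpy as np

# Generalized Fourier transform (1) specialized to G = SO(2): d_k = 1,
# rho_k(alpha) = exp(i*k*alpha), Haar measure d(alpha)/(2*pi).
n_grid = 512                                   # illustrative grid size
alpha = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)

def fourier_coeff(f_vals, k):
    """F_k = int_G f(g) rho_k*(g) dmu_g, approximated on the uniform grid."""
    return np.mean(f_vals * np.exp(-1j * k * alpha))

def reconstruct(coeffs):
    """f(g) = sum_k d_k Tr[F_k rho_k(g)]; for SO(2) this is sum_k F_k exp(i*k*alpha)."""
    f_rec = np.zeros_like(alpha, dtype=complex)
    for k, Fk in coeffs.items():
        f_rec += Fk * np.exp(1j * k * alpha)
    return f_rec

# A band-limited test function on SO(2); its Fourier series is recovered exactly.
f_vals = 1.0 + 0.5 * np.cos(2 * alpha) + 0.2 * np.sin(3 * alpha)
coeffs = {k: fourier_coeff(f_vals, k) for k in range(-5, 6)}
assert np.allclose(reconstruct(coeffs).real, f_vals)
```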
The generalized Fourier coefficient $F_k$ in (1) is defined by an integral taken with respect to the Haar measure on $G$.

Motivation: Motivated by [38, 36], we consider principal bundle structures on a G-manifold $M$. Below we state the definitions of fibre bundle and principal bundle for convenience; see [6] for more details. Briefly speaking, a fibre bundle is a manifold that is locally diffeomorphic to a product space, and a principal fibre bundle is a fibre bundle with a natural group action on its fibres.

Definition 1 (Fibre Bundle). Let $M$, $B$, $F$ be three differentiable manifolds, and let $\pi: M \to B$ denote a smooth surjective map between $M$ and $B$. We say that $M \xrightarrow{\pi} B$ (or just $M$ for short) is a fibre bundle with typical fibre $F$ over $B$ if $B$ admits an open cover $\mathcal{U}$ such that $\pi^{-1}(U)$ is diffeomorphic to the product space $U \times F$ for any open set $U \in \mathcal{U}$. For any $x \in B$, we denote $F_x := \pi^{-1}(x)$ and call it the fibre over $x$.

Definition 2 (Principal Bundle). Let $M$ be a fibre bundle, and $G$ a Lie group. We call $M$ a principal $G$-bundle if (1) $M$ is a fibre bundle, (2) $M$ admits a right action of $G$ that preserves the fibres of $M$, in the sense that for any $m \in M$ we have $\pi(m) = \pi(g \cdot m)$, and (3) for any two points $p, q \in M$ on the same fibre of $M$, there exists a group element $g \in G$ satisfying $p \cdot g = q$.

If $M$ is a principal $G$-bundle over $B$, any representation $\rho$ of $G$ on a vector space $V$ induces an associated vector bundle over $B$ with typical fibre $V$, denoted $M \times_\rho V$, defined as the quotient space $M \times_\rho V := (M \times V)/\!\sim$, where the equivalence relation is given by $(m \cdot g, v) \sim (m, \rho(g)v)$ for all $m \in M$, $g \in G$, and $v \in V$. This construction gives rise to as many different associated vector bundles as there are distinct representations of the Lie group $G$. It allows us to study the G-manifold $M$, as a principal $G$-bundle, through tools developed for learning an unknown manifold from attached vector bundle structures, such as vector diffusion maps (VDM) [57, 58]. We consider each of these associated vector bundles as a distinct view of the unknown data manifold $M$, as the representations inducing these vector bundles are different. In the rest of this paper, we illustrate with several examples how to design learning and inference algorithms that exploit the inherent consistency among these associated vector bundles by representation theoretic machinery. Unlike the co-training setting, where the consistency is induced from the labelled samples onto the unlabelled samples, in our unsupervised setting no labelled training data is provided and the consistency is induced purely from the geometry of the G-manifold.

Problem Setup: Given a collection of $n$ data points $\{x_1, \ldots, x_n\} \subset \mathbb{R}^l$, we assume they lie on or close to a low-dimensional smooth manifold $M$ of intrinsic dimension $d \ll l$, and that $M$ is a G-manifold admitting the structure of a principal $G$-bundle with a compact Lie group $G$. The data space $M$ is closed under the action of $G$; that is, $g \cdot x \in M$ for all group transformations $g \in G$ and data points $x \in M$, where $\cdot$ denotes the group action. As an example, in a cryo-EM image dataset each image is a projection of a macromolecule at a random orientation, so $M = SO(3)$, the 3-D rotation group, and $G = SO(2)$, the in-plane rotation of images. The $G$-invariant distance $d_{ij}$ between two data points $x_i$ and $x_j$ is defined as
$$d_{ij} = \min_{g \in G} \|x_i - g \cdot x_j\|, \qquad g_{ij} = \arg\min_{g \in G} \|x_i - g \cdot x_j\|, \qquad (2)$$
where $\|\cdot\|$ is the Euclidean distance on the ambient space $\mathbb{R}^l$ and $g_{ij}$ is the associated alignment, which is assumed to be unique.
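To make (2) concrete, the sketch below computes the invariant distance and optimal alignment when $G = SO(2)$ is discretized to cyclic shifts of a sampled signal (a common surrogate for in-plane image rotation). A brute-force search over the discretized group is used for clarity; practical implementations rely on FFT-based correlation or steerable bases. The signal length and noise level are illustrative assumptions.

```python
import numpy as np

# G-invariant distance (2) with G = SO(2) discretized to n_theta cyclic shifts.
def invariant_distance(x_i, x_j):
    """Return (d_ij, alpha_ij): min_g ||x_i - g.x_j|| and the minimizing angle."""
    n_theta = len(x_j)
    dists = np.array([np.linalg.norm(x_i - np.roll(x_j, s)) for s in range(n_theta)])
    s_star = int(np.argmin(dists))
    return dists[s_star], 2.0 * np.pi * s_star / n_theta

# Example: x_j is a rotated (cyclically shifted) noisy copy of x_i.
rng = np.random.default_rng(0)
x_i = rng.standard_normal(360)
x_j = np.roll(x_i, -25) + 0.05 * rng.standard_normal(360)
d_ij, alpha_ij = invariant_distance(x_i, x_j)   # alpha_ij is close to 25 * 2*pi/360
```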
Then we build an undirected graph $\mathcal{G} = (V, E)$ whose nodes are the data points; edges are determined from $d_{ij}$ using either the $\epsilon$-neighborhood criterion, i.e. $(i, j) \in E$ iff $d_{ij} \le \epsilon$, or the $\kappa$-nearest neighbor criterion, i.e. $(i, j) \in E$ iff $j$ is one of the $\kappa$ nearest neighbors of $i$. The edge weights $w_{ij}$ are defined by applying a kernel function to $d_{ij}$, $w_{ij} = K_\sigma(d_{ij})$. The resulting graph $\mathcal{G}$ is defined on the quotient space $B := M/G$ and is invariant to the group transformations $g_{ij}$ between data points; for example, for the viewing angles of cryo-EM images, $B = SO(3)/SO(2) = S^2$. In a noiseless world, $\mathcal{G}$ would be a neighborhood graph that only connects data points on $B$ with small $d_{ij}$. In many applications, however, noise in the observational data severely degrades the estimates of the G-invariant distances $d_{ij}$ and optimal alignments $g_{ij}$. This leads to errors in the edge connections of $\mathcal{G}$, which link distant data points on $B$ whose underlying geodesic distances are large. Given the noisy graph, we consider the problem of removing the wrong connections and recovering the underlying clean graph structure on $B$, especially under high levels of noise. We propose a robust, unsupervised co-learning framework to address this. It has two steps: it first builds a series of adjacency matrices with different irreps and filters the original noisy graph for denoising, and it then checks the affinity between node pairs to identify true neighbors in the clean graph. The main intuition is to systematically explore the consistency of the group transformations of the principal bundle across all irreps of $G$, resulting in a robust affinity measurement (see Fig. 1).

Figure 1: Illustration of our co-learning paradigm. Given a graph with irreps $\rho_k$ for $k = 1, \ldots, k_{\max}$, we identify neighbors by investigating the following consistencies: within the graph of a single irrep $\rho_k$, if nodes $i$, $j$ and $s$ are neighbors, the cycle consistency of the group transformation holds, $\rho_k(g_{js})\rho_k(g_{si})\rho_k(g_{ij}) \approx I_{d_k \times d_k}$; across different irreps, if $i$, $j$ are neighbors, the transformation $g_{ij}$ should be consistent algebraically (along the orange lines connecting the blue dots).

Weight Matrices Using Irreps: We start by building a series of weight matrices using multiple irreps of the compact Lie group $G$. Given the graph $\mathcal{G} = (V, E)$ with $n$ nodes and the group transformations $g \in G$, we assign a weight to each edge $(i, j) \in E$ by taking into account both the scalar edge connection weight $w_{ij}$ and the associated alignment $g_{ij}$, using the unitary irreps $\rho_k$ for $k = 1, \ldots, k_{\max}$. The resulting graph is described by a set of weight matrices $W_k$:
$$W_k(i, j) = \begin{cases} w_{ij}\,\rho_k(g_{ij}) & (i, j) \in E, \\ 0 & \text{otherwise}, \end{cases} \qquad (3)$$
where $w_{ij} = w_{ji}$ and $\rho_k(g_{ji}) = \rho_k^*(g_{ij})$ for all $(i, j) \in E$. Recall that the unitary irrep $\rho_k(g_{ij}) \in \mathbb{C}^{d_k \times d_k}$ is a $d_k \times d_k$ matrix, so $W_k$ is a block matrix with $n \times n$ blocks of size $d_k \times d_k$. The corresponding degree matrix $D_k$ is also block diagonal, with $(i, i)$-block
$$D_k(i, i) = \deg(i)\, I_{d_k \times d_k}, \qquad \deg(i) := \sum_{j:(i,j) \in E} w_{ij}. \qquad (4)$$
The Hilbert space $\mathcal{H}$, as a unitary representation of the compact Lie group $G$, admits an isotypic decomposition $\mathcal{H} = \bigoplus_k \mathcal{H}_k$, where $f$ belongs to $\mathcal{H}_k$ if and only if $f(x \cdot g) = \rho_k(g) f(x)$. Then for each irrep $\rho_k$, we construct a normalized matrix $A_k = D_k^{-1} W_k$, which is an averaging operator for vector fields in $\mathcal{H}_k$; that is, for any vector $z_k \in \mathcal{H}_k$,
$$(A_k z_k)(i) = \frac{1}{\deg(i)} \sum_{j:(i,j) \in E} w_{ij}\, \rho_k(g_{ij})\, z_k(j). \qquad (5)$$
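For $G = SO(2)$ each irrep is one-dimensional, so (3)-(5) reduce to ordinary $n \times n$ complex matrices. A minimal sketch (assuming a symmetric scalar weight matrix `w` that is zero off the edge set, and antisymmetric alignment angles `alpha` with `alpha[j, i] = -alpha[i, j]`):

```python
import numpy as np

def build_weight_matrix(w, alpha, k):
    """Eq. (3) for G = SO(2): W_k(i, j) = w_ij * exp(i*k*alpha_ij).
    W_k is Hermitian because w is symmetric and alpha is antisymmetric."""
    return w * np.exp(1j * k * alpha)

def averaging_operator(w, alpha, k):
    """Eq. (5): A_k = D_k^{-1} W_k, with deg(i) = sum_j w_ij as in (4)."""
    Wk = build_weight_matrix(w, alpha, k)
    deg = w.sum(axis=1)
    return Wk / deg[:, None]
```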
Notice that $A_k$ is similar to a Hermitian matrix $\widetilde{A}_k$:
$$\widetilde{A}_k = D_k^{1/2} A_k D_k^{-1/2} = D_k^{-1/2} W_k D_k^{-1/2}, \qquad (6)$$
which has real eigenvalues and orthonormal eigenvectors $\{\lambda_l^{(k)}, u_l^{(k)}\}_{l=1}^{nd_k}$, with all eigenvalues in $[-1, 1]$. For simplicity, we assume the data points are uniformly distributed on $B$; if not, the normalization proposed in [17] can be applied to $W_k$. Now suppose there is a random walk on $\mathcal{G}$ with transition matrix $A_0$ built from the trivial representation $\rho_0(g) = 1$ for all $g \in G$; then $A_0^{2t}(i, j)$ is the transition probability from $i$ to $j$ in $2t$ steps. Due to the use of $\rho_0(g_{ij})$, $A_0^{2t}(i, j)$ not only takes into account the connectivity between the nodes $i$ and $j$, but also checks the consistency of transformations along all length-$2t$ paths between $i$ and $j$. More generally, for $k \ge 1$, $A_k^{2t}(i, j)$ is a sub-block matrix that still encodes such consistencies. Intuitively, if $i$, $j$ are true neighbors on $\mathcal{G}$, their transformations should be in agreement and we expect $\|A_k^{2t}(i, j)\|_{\mathrm{HS}}^2$ or $\|\widetilde{A}_k^{2t}(i, j)\|_{\mathrm{HS}}^2$ to be large, where $\|\cdot\|_{\mathrm{HS}}$ is the Hilbert-Schmidt norm. Vector diffusion maps (VDM) [57, 58] considers $k = 1$ and defines the pairwise affinity as $\|\widetilde{A}_1^{2t}(i, j)\|_{\mathrm{HS}}^2$.

Weight Matrices Filtering: To denoise the graph, we generalize the VDM framework by first computing the filtered and normalized weight matrices $\widetilde{W}_{k,t} = \eta_t(\widetilde{A}_k)$ for all irreps $\rho_k$, where $\eta_t(\cdot)$ denotes a spectral filter acting on the eigenvalues, for example $\eta_t(\lambda) = \lambda^{2t}$ as in VDM. Moreover, since the small eigenvalues of $\widetilde{A}_k$ are more sensitive to noise, a truncation is applied by keeping only the top $m_k d_k$ eigenvalues and eigenvectors. Specifically, we divide each $u_l^{(k)}$ of length $nd_k$ into $n$ equal blocks and denote the $i$th block by $u_l^{(k)}(i)$. In this way, we define a G-equivariant mapping
$$\psi_t^{(k)}: i \mapsto \left[\eta_t(\lambda_1)^{1/2} u_1^{(k)}(i), \ldots, \eta_t(\lambda_{m_k d_k})^{1/2} u_{m_k d_k}^{(k)}(i)\right] \in \mathbb{C}^{d_k \times m_k d_k}, \qquad i = 1, 2, \ldots, n. \qquad (7)$$
It can be further normalized to ensure that the diagonal blocks of $\widetilde{W}_{k,t}$ are identity matrices, i.e. $\widetilde{W}_{k,t}(i, i) = I_{d_k \times d_k}$ for all nodes $i$. The steps for weight matrix filtering are detailed in Alg. 1.

Algorithm 1: Weight Matrices Filtering
Input: Initial graph $\mathcal{G} = (V, E)$ with $n$ nodes; for each $(i, j) \in E$, the scalar weight $w_{ij}$ and alignment $g_{ij}$; maximum frequency $k_{\max}$; cutoff parameters $m_k$ for $k = 1, \ldots, k_{\max}$; and spectral filter $\eta_t$.
Output: The filtered weight matrices $\widetilde{W}_{k,t}$ for $k = 1, \ldots, k_{\max}$.
1. for $k = 1, \ldots, k_{\max}$ do
2.   Construct the block weight matrix $W_k$ of size $nd_k \times nd_k$ in (3) and the normalized symmetric matrix $\widetilde{A}_k$ in (6).
3.   Compute the largest $m_k d_k$ eigenvalues $\lambda_1^{(k)} \ge \lambda_2^{(k)} \ge \cdots \ge \lambda_{m_k d_k}^{(k)}$ of $\widetilde{A}_k$ and the corresponding eigenvectors $\{u_l^{(k)}\}_{l=1}^{m_k d_k}$.
4.   for $i = 1, \ldots, n$ do
5.     Construct the G-equivariant mapping $\psi_t^{(k)}(i)$ as in (7).
6.     (Optional) Compute the SVD $\psi_t^{(k)}(i) = U\Sigma V^*$ and the normalized mapping $\widetilde{\psi}_t^{(k)}(i) = UV^*$.
7.   end for
8.   Vertically concatenate $\widetilde{\psi}_t^{(k)}(i)$ or $\psi_t^{(k)}(i)$ to form the matrix $\Psi_t^{(k)}$ of size $nd_k \times m_k d_k$.
9.   Construct the filtered and normalized weight matrix $\widetilde{W}_{k,t} = \Psi_t^{(k)} \Psi_t^{(k)*}$.
10. end for

The resulting denoised $\widetilde{W}_{k,t}$ is then used for building our affinity measures.
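A minimal sketch of Alg. 1 for $G = SO(2)$ ($d_k = 1$), using the VDM filter $\eta_t(\lambda) = \lambda^{2t}$ and omitting the optional SVD normalization step; `w` and `alpha` are as in the previous sketch, and `m_k`, `t` are illustrative defaults:

```python
import numpy as np

def filtered_weight_matrix(w, alpha, k, m_k=10, t=1):
    """Return the filtered, normalized W~_{k,t} = Psi Psi^* for a single irrep k."""
    deg = w.sum(axis=1)                                        # assumes every node has an edge
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    Wk = w * np.exp(1j * k * alpha)                            # eq. (3), d_k = 1
    Ak_tilde = d_inv_sqrt[:, None] * Wk * d_inv_sqrt[None, :]  # eq. (6), Hermitian
    evals, evecs = np.linalg.eigh(Ak_tilde)                    # real eigenvalues in [-1, 1]
    idx = np.argsort(evals)[::-1][:m_k]                        # keep the largest m_k (step 3)
    lam, U = evals[idx], evecs[:, idx]
    Psi = U * np.sqrt(lam ** (2 * t))[None, :]                 # eta_t(lambda)^(1/2) u_l(i), eq. (7)
    return Psi @ Psi.conj().T                                  # step 9: W~_{k,t}
```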
Optimal Alignment Affinity: At each irrep $\rho_k$, the filtered $\widetilde{W}_{k,t}$ encodes the transformation consistency of the graph represented by $W_k$ and can itself serve as an affinity measure. As in unsupervised multi-view learning, it is advantageous to boost this by coupling the information from different irreps to achieve a more accurate measurement (see Fig. 1). Furthermore, notice that if $i$ and $j$ are true neighbors, for each irrep $\rho_k$ the block $\widetilde{W}_{k,t}(i, j)$ should encode the same associated alignment $g_{ij}$. Therefore, applying the algebraic relation among the $\widetilde{W}_{k,t}$ across all irreps, we define the optimal alignment affinity according to the generalized Fourier transform in (1) and the definition of the weight matrices in (3):
$$S_t^{\mathrm{OA}}(i, j) := \max_{g \in G} \frac{1}{k_{\max}} \sum_{k=1}^{k_{\max}} \operatorname{Tr}\left[\widetilde{W}_{k,t}(i, j)\, \rho_k^*(g)\right], \qquad (8)$$
which can be evaluated using generalized FFTs [39]. Here both the cycle consistency within each graph and the algebraic relation across different irreps in Fig. 1 are taken into account.

Power Spectrum Affinity: Searching for the optimal alignment over all transformations as above can be computationally challenging and extremely time consuming. Invariant features can therefore be used to speed up the computation. We first consider the power spectrum, the Fourier transform of the auto-correlation, defined as $P_f(k) = F_k F_k^*$ according to the convolution theorem. It is transformation invariant: under the right action of $g \in G$, the Fourier coefficients transform as $F_k \mapsto F_k \rho_k(g)$ and $P_{f \cdot g}(k) = F_k \rho_k(g) \rho_k(g)^* F_k^* = P_f(k)$. Hence, for each $k$ we compute the power spectrum $P_k$ of $\widetilde{W}_{k,t}$ and combine them into the power spectrum affinity
$$S_t^{\mathrm{power\,spec}}(i, j) = \frac{1}{k_{\max}} \sum_{k=1}^{k_{\max}} \operatorname{Tr}\left[P_k(i, j)\right], \qquad P_k(i, j) = \widetilde{W}_{k,t}(i, j)\, \widetilde{W}_{k,t}(i, j)^*, \qquad (9)$$
which does not require the search for the optimal alignment and is thus computationally efficient. Recently, multi-frequency vector diffusion maps (MFVDM) [20] considered $G = SO(2)$ and summed the power spectrum over different irreps as its affinity; here we extend this to a general compact Lie group.
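For $G = SO(2)$ every block $\widetilde{W}_{k,t}(i, j)$ is a single complex number $c_k$, and both affinities become simple operations on the vector $(c_1, \ldots, c_{k_{\max}})$. The sketch below evaluates (9) as a mean of squared moduli and (8) on a uniform grid of rotation angles with one FFT; taking the real part of the correlation and the grid size are illustrative assumptions:

```python
import numpy as np

def power_spectrum_affinity(blocks):
    """Eq. (9) for d_k = 1: blocks = [c_1, ..., c_kmax], Tr[P_k] = |c_k|^2."""
    blocks = np.asarray(blocks)
    return float(np.mean(np.abs(blocks) ** 2))

def optimal_alignment_affinity(blocks, n_grid=1024):
    """Eq. (8) for d_k = 1: max over alpha of (1/kmax) sum_k Re[c_k exp(-i*k*alpha)],
    evaluated on n_grid angles with a single FFT (a generalized FFT for SO(2))."""
    blocks = np.asarray(blocks)
    k_max = len(blocks)
    coeffs = np.zeros(n_grid, dtype=complex)     # requires n_grid > k_max
    coeffs[1:k_max + 1] = blocks                 # place c_k at frequency k = 1, ..., k_max
    corr = np.fft.fft(coeffs)                    # corr[m] = sum_k c_k * exp(-i*k*alpha_m)
    return float(np.max(corr.real)) / k_max
```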
Bispectrum Affinity: Although the power spectrum affinity combines the information at different irreps, it does not couple them, and it loses the relative phase information, i.e. the transformation across different irreps $\rho_k$ (see Fig. 1). Consequently, the affinity may be inaccurate under high levels of noise. In order to systematically impose the algebraic consistency without solving the optimization problem in (8), we consider another invariant feature called the bispectrum, which is the Fourier transform of the triple correlation and has been used in several fields [32, 27, 72, 31]. Formally, consider two unitary irreps $\rho_{k_1}$ and $\rho_{k_2}$ of the compact Lie group $G$ on finite dimensional vector spaces $\mathcal{H}_{k_1}$ and $\mathcal{H}_{k_2}$. There is a unique decomposition of $\rho_{k_1} \otimes \rho_{k_2}$ into a direct sum of unitary irreps $\rho_k$, $k \in \mathbb{N}$, where $\otimes$ is the Kronecker product of matrices and $\oplus$ denotes the direct sum. There exist G-equivariant maps from $\mathcal{H}_{k_1} \otimes \mathcal{H}_{k_2}$ to $\bigoplus_k \mathcal{H}_k$, encoded by the generalized Clebsch-Gordan coefficients $C_{k_1,k_2}$ of $G$, which satisfy
$$\rho_{k_1}(g) \otimes \rho_{k_2}(g) = C_{k_1,k_2} \left[\bigoplus_{k \in k_1 \otimes k_2} \rho_k(g)\right] C_{k_1,k_2}^*. \qquad (10)$$
Using (10) and the fact that $C_{k_1,k_2}$ and the $\rho_k$ are unitary matrices, we have
$$\left[\rho_{k_1}(g) \otimes \rho_{k_2}(g)\right] C_{k_1,k_2} \left[\bigoplus_{k \in k_1 \otimes k_2} \rho_k^*(g)\right] C_{k_1,k_2}^* = I_{d_{k_1} d_{k_2} \times d_{k_1} d_{k_2}}. \qquad (11)$$
In particular, the triple correlation of a function $f$ on $G$ is defined as $a_{3,f}(g_1, g_2) = \int_G f^*(g) f(g g_1) f(g g_2)\, d\mu_g$, and the bispectrum is defined as the Fourier transform of $a_{3,f}$:
$$B_f(k_1, k_2) = \left[F_{k_1} \otimes F_{k_2}\right] C_{k_1,k_2} \left[\bigoplus_{k \in k_1 \otimes k_2} F_k^*\right] C_{k_1,k_2}^*. \qquad (12)$$
Under the action of $g$, the Fourier coefficients of $f$ satisfy: (1) $F_k \mapsto F_k \rho_k(g)$, and (2) $F_{k_1} \otimes F_{k_2} \mapsto (F_{k_1}\rho_{k_1}(g)) \otimes (F_{k_2}\rho_{k_2}(g)) = (F_{k_1} \otimes F_{k_2})(\rho_{k_1}(g) \otimes \rho_{k_2}(g))$. Therefore $B_f$ is G-invariant according to (11) and (12).

By combining the bispectrum at different $k_1$ and $k_2$, we establish the bispectrum affinity
$$S_t^{\mathrm{bispec}}(i, j) = \frac{1}{k_{\max}^2} \sum_{k_1=1}^{k_{\max}} \sum_{k_2=1}^{k_{\max}} \operatorname{Tr}\left[B_{k_1,k_2,t}(i, j)\right], \qquad (13)$$
$$B_{k_1,k_2,t}(i, j) = \left[\widetilde{W}_{k_1,t}(i, j) \otimes \widetilde{W}_{k_2,t}(i, j)\right] C_{k_1,k_2} \left[\bigoplus_{k \in k_1 \otimes k_2} \widetilde{W}_{k,t}^*(i, j)\right] C_{k_1,k_2}^*. \qquad (14)$$

Figure 2: Histograms of $\arccos\langle v_i, v_j \rangle$ between estimated nearest neighbors on $M = SO(3)$, $G = SO(2)$ and $B = S^2$ at different SNRs ($p = 0.5, 0.1, 0.09, 0.08$). The clean histogram should peak at small angles. The curves of the bispectrum and the optimal alignment affinities almost overlap in all these plots. We set $k_{\max} = 6$, $m_k = 10$ for all $k$, and $t = 1$.

If the transformations are consistent across different $k$, the trace of $B_{k_1,k_2,t}$ in (14) should be large. Therefore, this affinity not only takes into account the consistency of the transformation at each irrep, but also exploits the algebraic consistency across different irreps.

Higher Order Invariant Moments: The power spectrum and bispectrum are second-order and third-order invariants; it is certainly possible to design affinities using higher order invariant features. For example, we can define the order-$(d+1)$ G-invariant features as
$$M_{k_1,\ldots,k_d} = \left[F_{k_1} \otimes \cdots \otimes F_{k_d}\right] C_{k_1,\ldots,k_d} \left[\bigoplus_{k \in k_1 \otimes \cdots \otimes k_d} F_k^*\right] C_{k_1,\ldots,k_d}^*,$$
where $C_{k_1,\ldots,k_d}$ is the extension of the Clebsch-Gordan coefficients. However, using higher order spectra dramatically increases the computational complexity. In practice, the bispectrum is sufficient to check the consistency of the group transformations between nodes and across all irreps.

Computational Complexity: Filtering the normalized weight matrices involves computing the top $m_k d_k$ eigenvectors of the sparse Hermitian matrices $\widetilde{A}_k$ for $k = 1, \ldots, k_{\max}$, which can be done efficiently using the block Lanczos method [51] at a cost of $O(n m_k d_k^2 (m_k + l_k))$, where $l_k$ is the average number of non-zero elements per row of $\widetilde{A}_k$. We compute the spectral decompositions for different $k$ in parallel. Computing the power spectrum affinity for all pairs takes $O(n^2 \sum_{k=1}^{k_{\max}} d_k^2)$ flops. The computational complexity of evaluating the bispectrum affinity is $O(n^2 \sum_{k_1=0}^{k_{\max}} \sum_{k_2=0}^{k_{\max}} d_{k_1}^2 d_{k_2}^2)$. For the optimal alignment affinity, the complexity depends on the cost $C_a$ of the optimal alignment search, and the total cost is $O(n^2 C_a)$. For certain group structures, where FFTs are available, the optimal alignment affinity can be efficiently and accurately approximated; however, $C_a$ is still larger than the cost of computing the invariants.

Examples with $G = SO(2)$ and $SO(3)$: If the group transformation is a 2-D in-plane rotation, i.e. $G = SO(2)$, the unitary irreps are $\rho_k(\alpha) = e^{\imath k \alpha}$, where $\alpha \in (0, 2\pi]$ is the rotation angle. The dimensions of the irreps are $d_k = 1$, and $k_1 \otimes k_2 = k_1 + k_2$. The generalized Clebsch-Gordan coefficient is 1 for all $(k_1, k_2)$ pairs. If $G$ is the 3-D rotation group, i.e. $G = SO(3)$, the unitary irreps are the Wigner D-matrices for $\omega \in SO(3)$ [68]. The dimensions of the irreps are $d_k = 2k + 1$, and $k_1 \otimes k_2 = \{|k_1 - k_2|, \ldots, k_1 + k_2\}$. The Clebsch-Gordan coefficients for all $(k_1, k_2)$ pairs can be numerically precomputed [26]. These two classical examples are frequently encountered in practice and are the ones investigated in our experiments.
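Using the $SO(2)$ specialization above ($d_k = 1$, Clebsch-Gordan coefficients equal to 1, and $k_1 \otimes k_2 = k_1 + k_2$), the bispectrum affinity (13)-(14) for one node pair reduces to sums of triple products of the scalar blocks $c_k$. A minimal sketch, with the real part taken as in the optimal alignment sketch and pairs exceeding the maximum frequency skipped:

```python
import numpy as np

def bispectrum_affinity(blocks, k_pair_max):
    """Eqs. (13)-(14) for G = SO(2): B_{k1,k2} = c_{k1} * c_{k2} * conj(c_{k1+k2}).
    blocks = [c_1, ..., c_kmax]; pairs with k1 + k2 > kmax are skipped."""
    blocks = np.asarray(blocks)
    total = 0.0
    for k1 in range(1, k_pair_max + 1):
        for k2 in range(1, k_pair_max + 1):
            if k1 + k2 <= len(blocks):
                b = blocks[k1 - 1] * blocks[k2 - 1] * np.conj(blocks[k1 + k2 - 1])
                total += b.real
    return total / k_pair_max ** 2
```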
5 Experiments

We evaluate our paradigm on three examples: (1) nearest neighbor (NN) search on the 2-sphere $S^2$ with $G = SO(2)$; (2) nearest viewing angle search for cryo-EM images; (3) spectral clustering with $G = SO(2)$ or $G = SO(3)$ transformations. We compare with the baseline vector diffusion maps (VDM) [57]. In particular, since the greatest advantage of our paradigm is its robustness to noise, we demonstrate this on datasets contaminated by extremely high levels of noise. The hyper-parameter settings, e.g. $k_{\max}$ and $m_k$, are given in the figure captions; we point out that our algorithm is not sensitive to the choice of parameters. The experiments are conducted in MATLAB on a computer with an Intel i7 7th generation quad-core CPU.

NN Search for $M = SO(3)$, $G = SO(2)$, $B = S^2$: We simulate $n = 10^4$ points uniformly distributed over $M = SO(3)$ according to the Haar measure. Each point can be represented by a $3 \times 3$ orthogonal matrix $R = [R^1, R^2, R^3]$ with determinant equal to 1. The vector $v = R^3$ can then be realized as a point on the unit 2-sphere (i.e. $B = S^2$), and the first two columns $R^1$ and $R^2$ span the tangent plane of the sphere at $v$. Given two points $i$ and $j$, there exists a rotation angle $\alpha_{ij}$ that optimally aligns the tangent bundles $[R_j^1, R_j^2]$ to $[R_i^1, R_i^2]$ as in (2). Therefore, the manifold $M$ is a G-manifold with $G = SO(2)$. We then build a clean neighborhood graph $\mathcal{G} = (V, E)$ by connecting nodes with $\langle v_i, v_j \rangle \ge 0.97$, and add noise following a random rewiring model [59]: with probability $p$, we keep the existing edge $(i, j) \in E$; with probability $1 - p$, we remove it and link $i$ to another vertex drawn uniformly at random from the remaining vertices that are not already connected to $i$. For the rewired edges, the alignments are uniformly distributed over $[0, 2\pi]$ according to the Haar measure. In this way, the probability $p$ controls the signal-to-noise ratio (SNR): $p = 1$ indicates the clean case, while $p = 0$ is fully random. For each node, we identify its 50 NNs based on the three proposed affinities and the affinity in VDM. In Fig. 2 we plot the histograms of $\arccos\langle v_i, v_j \rangle$ of the identified NNs under different SNRs. For $p = 0.08$ to $p = 0.1$ (over 90% of the edges corrupted), the bispectrum and optimal alignment affinities achieve similar results and outperform the power spectrum affinity and VDM. This indicates that the proposed affinities are able to recover the underlying clean graph even at an extremely high noise level.
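The random rewiring model used throughout the experiments is easy to reproduce. A minimal sketch for $G = SO(2)$ (the edge list and alignment dictionary are assumed data structures, not the authors' implementation):

```python
import numpy as np

def rewire_graph(n, edges, align, p, rng):
    """Keep each clean edge (i, j) with probability p; otherwise rewire i to a random
    non-neighbor and draw a Haar-uniform SO(2) alignment for the new edge."""
    nbrs = {i: set() for i in range(n)}
    for (i, j) in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    new_edges, new_align = [], {}
    for (i, j) in edges:
        if rng.random() < p:                               # keep the edge and its alignment
            new_edges.append((i, j))
            new_align[(i, j)] = align[(i, j)]
        else:                                              # rewire to a vertex not connected to i
            choices = [v for v in range(n) if v != i and v not in nbrs[i]]
            j_new = int(rng.choice(choices))
            nbrs[i].add(j_new)
            nbrs[j_new].add(i)
            e = (min(i, j_new), max(i, j_new))
            new_edges.append(e)
            new_align[e] = rng.uniform(0.0, 2.0 * np.pi)   # Haar-uniform rotation
    return new_edges, new_align
```

With p = 1 the graph is unchanged; decreasing p corrupts an increasing fraction of edges and alignments, as in Fig. 2.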
Nearest Viewing Angle Search for Cryo-EM Images: One important application of the NN search above is cryo-EM image analysis. Given a series of projection images of a macromolecule with unknown random orientations and extremely low SNR (see Fig. 3), we aim to identify images with similar projection directions and perform local rotational alignment; the image SNR can then be boosted by averaging the aligned images. Each projection image can be viewed as a point lying on the 2-sphere (i.e. $B = S^2$), and the group transformation is the in-plane rotation of an image (i.e. $G = SO(2)$). In our experiments, we simulate $n = 10^4$ projection images from a 3-D electron density map of the 70S ribosome; the orientations of all projections are uniformly distributed over $SO(3)$ and the images are contaminated by additive white Gaussian noise (see Fig. 3 for noisy samples). As preprocessing, we build the initial graph $\mathcal{G}$ using fast steerable PCA (sPCA) [71] and rotationally invariant features [72] to initially identify images with similar views and the corresponding in-plane rotational alignments. As in the previous example, we compute the affinities for NN identification.

Figure 3: Nearest viewing angle search for cryo-EM images. Left: clean and noisy (SNR = 0.01) projection image samples, and the reference volume of the 70S ribosome. Right: histograms of $\arccos\langle v_i, v_j \rangle$ between estimated nearest neighbors at SNR = 0.01, 0.007, 0.005, 0.003; sPCA is the initial noisy input to our graph construction. The curves of the power spectrum and bispectrum affinities almost overlap in all these plots. We set $k_{\max} = 20$, $m_k = 20$ for all $k$, and $t = 1$.

In Fig. 3, we display the histograms of $\arccos\langle v_i, v_j \rangle$ of the identified NNs under different SNRs. The results show that all proposed affinities outperform VDM. The power spectrum and bispectrum affinities achieve similar results and outperform the optimal alignment affinity. This differs from the previous example with the random rewiring model on $S^2$ because the two examples have different noise models: the random rewiring model has independent noise on edges, whereas the cryo-EM example has independent noise on nodes and therefore dependent noise on edges.

Spectral Clustering with SO(2) or SO(3) Transformations: We also apply our framework to spectral clustering. In particular, we assume there is a group transformation $g_{ij} \in G$ in addition to the scalar weight $w_{ij}$ between members (nodes) of a network. Formally, given $n$ data points in $K$ equal-sized clusters, for each point $i$ we uniformly assign an in-plane rotation angle $\alpha_i \in [0, 2\pi)$ or a 3-D rotation $\omega_i \in SO(3)$. The optimal alignment is then $\alpha_{ij} = \alpha_i - \alpha_j$ or $\omega_{ij} = \omega_i \omega_j^{-1}$. We build the clean graph by fully connecting nodes within each cluster; the noisy graph is then obtained from the random rewiring model with rewiring probability $p$. We perform clustering by using our proposed affinities as the input to spectral clustering, and compare with traditional spectral clustering [45, 65], which only takes into account the scalar edge connections, and VDM [57], which defines the affinity from the transformation consistency at a single representation. In Tab. 1, we use the Rand index [48] to measure performance (larger is better). Our three affinities achieve similar accuracy and they outperform traditional spectral clustering (scalar) and VDM. The results reported in Tab. 1 are averaged over 50 trials for SO(2) and 10 trials for SO(3), respectively.

Table 1: Rand index (larger is better, mean ± std) of spectral clustering with SO(2) or SO(3) group transformations, for K = 2 clusters (left) and K = 10 clusters (right). For K = 10 with SO(3), each cluster has 25 points; otherwise each cluster has 50 points. We set $m_k = K$, $k_{\max} = 10$ and $t = 1$ for all cases.

| G | Method | K = 2, p = 0.16 | K = 2, p = 0.20 | K = 2, p = 0.25 | K = 10, p = 0.16 | K = 10, p = 0.20 | K = 10, p = 0.25 |
|---|---|---|---|---|---|---|---|
| SO(2) | Scalar | 0.569 ± 0.069 | 0.705 ± 0.092 | 0.837 ± 0.059 | 0.868 ± 0.010 | 0.948 ± 0.015 | 0.981 ± 0.013 |
| SO(2) | VDM | 0.526 ± 0.036 | 0.644 ± 0.076 | 0.857 ± 0.057 | 0.892 ± 0.010 | 0.963 ± 0.011 | 0.994 ± 0.008 |
| SO(2) | Power spec. (ours) | 0.670 ± 0.065 | 0.899 ± 0.051 | 0.981 ± 0.021 | 0.975 ± 0.010 | 0.991 ± 0.011 | 0.998 ± 0.006 |
| SO(2) | Opt (ours) | 0.687 ± 0.011 | 0.912 ± 0.009 | 0.986 ± 0.007 | 0.976 ± 0.012 | 0.994 ± 0.008 | 0.997 ± 0.005 |
| SO(2) | Bispec. (ours) | 0.664 ± 0.073 | 0.901 ± 0.062 | 0.983 ± 0.019 | 0.967 ± 0.014 | 0.997 ± 0.003 | 1 ± 0 |
| SO(3) | Scalar | 0.572 ± 0.061 | 0.666 ± 0.095 | 0.862 ± 0.056 | 0.838 ± 0.003 | 0.838 ± 0.007 | 0.909 ± 0.019 |
| SO(3) | VDM | 0.600 ± 0.048 | 0.840 ± 0.056 | 0.974 ± 0.023 | 0.850 ± 0.011 | 0.919 ± 0.013 | 0.965 ± 0.014 |
| SO(3) | Power spec. (ours) | 0.921 ± 0.038 | 0.986 ± 0.016 | 1 ± 0 | 0.874 ± 0.011 | 0.939 ± 0.011 | 0.981 ± 0.017 |
| SO(3) | Bispec. (ours) | 0.911 ± 0.043 | 0.990 ± 0.010 | 1 ± 0 | 0.869 ± 0.012 | 0.943 ± 0.009 | 0.979 ± 0.011 |
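The clustering step itself is standard once an affinity matrix has been formed from any of the proposed measures. A minimal sketch of normalized spectral clustering in the style of [45] (assuming scikit-learn for the k-means step; `S` is an $n \times n$ nonnegative affinity matrix):

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(S, n_clusters, seed=0):
    """Normalized spectral clustering on a symmetric affinity matrix S."""
    S = 0.5 * (S + S.T)                                       # symmetrize, just in case
    np.fill_diagonal(S, 0.0)                                  # no self-affinity, as in Fig. 4
    d = S.sum(axis=1) + 1e-12
    L_sym = np.eye(len(S)) - (S / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    evals, evecs = np.linalg.eigh(L_sym)
    U = evecs[:, :n_clusters]                                 # bottom eigenvectors of L_sym
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12     # row-normalize [45]
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(U)
```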
Figure 4: Spectral clustering for K = 2 clusters with SO(3) group transformation. The underlying clean graph is corrupted according to the random rewiring model. Left (a): plot of the affinity matrix for the different approaches. The clusters are of equal size and form two diagonal blocks in the clean affinity matrix (see the scalar column at p = 1); the affinity of each node with itself is not included, so the diagonal entries are 0. Right (b): plot of the bispectrum affinity components $|\operatorname{Tr}[B_{k_1,k_2,t}(i, j)]|$ at different $k_1$, $k_2$, for $p = 0.16$.

For a better understanding, we visualize the $n \times n$ affinity matrices produced by the different approaches in Fig. 4a for $K = 2$ and $G = SO(3)$. We observe that at high noise levels, such as $p = 0.16$ or $0.2$, the underlying 2-cluster structure is visually easier to identify through our proposed affinities. In particular, since the bispectrum affinity in (13) is a combination of the bispectrum coefficients $B_f(k_1, k_2)$, Fig. 4b shows the components $|\operatorname{Tr}[B_{k_1,k_2,t}(i, j)]|$ at different $k_1$, $k_2$. Visually, the 2-cluster structure appears in each $(k_1, k_2)$ component, with some variation across components; combining this information yields a more robust classifier.

6 Conclusion

In this paper, we propose a novel mathematical and computational framework for unsupervised co-learning on G-manifolds across multiple unitary irreps for robust nearest neighbor search and spectral clustering. The algorithm has two stages: in the first stage, the graph adjacency matrices are individually denoised through spectral filtering, using the local cycle consistency of the group transformations; the second stage checks the algebraic consistency across different irreps, for which we propose three different ways to combine the information across all irreps. Using invariant moments bypasses the pairwise alignment and is computationally more efficient than the affinity based on the optimal alignment search. Experimental results show the efficacy of the framework compared to state-of-the-art methods that either do not take the transformation group into account or use only a single representation.

Acknowledgement: This work is supported in part by the National Science Foundation under grants DMS-185479 and DMS-1854831.

References

[1] Andrey V. Alekseevsky and Dmitry V. Alekseevsky. Riemannian G-manifolds with one-dimensional orbit space. Annals of Global Analysis and Geometry, 11(3):197–211, 1993.
[2] Derek Bean, Peter J. Bickel, Noureddine El Karoui, and Bin Yu. Optimal M-estimation in high-dimensional regression. Proceedings of the National Academy of Sciences, 110(36):14563–14568, 2013.
[3] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, 2002.
[4] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 2003.
[5] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.
[6] Nicole Berline, Ezra Getzler, and Michèle Vergne. Heat Kernels and Dirac Operators (Grundlehren Text Editions). Springer, 1992 edition, 2003.
[7] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92–100. ACM, 1998.
[8] Paul Breiding, Sara Kališnik, Bernd Sturmfels, and Madeleine Weinstein. Learning algebraic varieties from samples. Revista Matemática Complutense, 31(3):545–593, 2018.
[9] Theodor Bröcker and Tammo Tom Dieck. Representations of Compact Lie Groups, volume 98. Springer Science & Business Media, 2013.
[10] Alexander Bronstein, Michael Bronstein, and Ron Kimmel. Numerical Geometry of Non-Rigid Shapes. Springer Publishing Company, Incorporated, 1st edition, 2008.
[11] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717, 2009.
[12] Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, and Alan S. Willsky. Sparse and low-rank matrix decompositions. IFAC Proceedings Volumes, 42(10):1493–1498, 2009.
[13] Minmin Chen, Kilian Q. Weinberger, and John C. Blitzer. Co-training for domain adaptation. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11, pages 2456–2464, USA, 2011. Curran Associates Inc.
[14] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[15] Ronald R. Coifman, Stephane Lafon, Ann B. Lee, Mauro Maggioni, Boaz Nadler, Frederick Warner, and Steven W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: diffusion maps. Proceedings of the National Academy of Sciences of the United States of America, 102(21):7426–31, 2005.
[16] Ronald R. Coifman, Stephane Lafon, Ann B. Lee, Mauro Maggioni, Boaz Nadler, Frederick Warner, and Steven W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: multiscale methods. Proceedings of the National Academy of Sciences of the United States of America, 102(21):7432–7, 2005.
[17] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006. Special Issue: Diffusion Maps and Wavelets.
[18] David L. Donoho and Carrie Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences, 100(10):5591–5596, 2003.
[19] Yifeng Fan and Zhizhen Zhao. Cryo-electron microscopy image analysis using multi-frequency vector diffusion maps. arXiv preprint arXiv:1904.07772, 2019.
[20] Yifeng Fan and Zhizhen Zhao. Multi-frequency vector diffusion maps. In ICML, 2019.
[21] Jason Farquhar, David Hardoon, Hongying Meng, John S. Shawe-Taylor, and Sándor Szedmák. Two view learning: SVM-2K, theory and practice. In Y. Weiss, B. Schölkopf, and J. C. Platt, editors, Advances in Neural Information Processing Systems 18, pages 355–362. MIT Press, 2006.
[22] Tingran Gao. The diffusion geometry of fibre bundles: Horizontal diffusion maps. arXiv preprint arXiv:1602.02330, 2016.
[23] Tingran Gao and Zhizhen Zhao. Multi-frequency phase synchronization. In ICML, 2019.
[24] Ronny Hadani and Amit Singer.
Representation theoretic patterns in three dimensional cryo-electron microscopy I: The intrinsic reconstitution algorithm. Annals of Mathematics, 174(2):1219–1241, 2011.
[25] Ronny Hadani and Amit Singer. Representation theoretic patterns in three-dimensional cryo-electron microscopy II: The class averaging problem. Foundations of Computational Mathematics, 11(5):589–616, 2011.
[26] Brian Hall. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, volume 222. Springer, 2015.
[27] Ramakrishna Kakarala and Dansheng Mao. A theory of phase-sensitive rotation invariance with spherical harmonic and moment-based representations. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 105–112. IEEE, 2010.
[28] David G. Kendall. A survey of the statistical theory of shape. Statistical Science, 4(2):87–99, 1989.
[29] Jon R. Kettenring. Canonical analysis of several sets of variables. Biometrika, 58(3):433–451, 1971.
[30] Shoshichi Kobayashi. Transformation Groups in Differential Geometry. Springer Science & Business Media, 2012.
[31] Imre Risi Kondor. Group theoretical methods in machine learning. PhD thesis, Columbia University, 2008.
[32] Risi Kondor. A novel set of rotationally and translationally invariant features for images based on the non-commutative bispectrum. arXiv preprint cs/0701127, 2007.
[33] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In International Conference on Machine Learning, pages 2752–2760, 2018.
[34] Abhishek Kumar and Hal Daume III. A co-training approach for multi-view spectral clustering. In Proceedings of the 28th International Conference on Machine Learning, ICML'11, pages 393–400, USA, 2011. Omnipress.
[35] Abhishek Kumar, Piyush Rai, and Hal Daume. Co-regularized multi-view spectral clustering. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 1413–1421. Curran Associates, Inc., 2011.
[36] Boris Landa and Yoel Shkolnisky. The steerable graph Laplacian and its application to filtering image datasets. SIAM Journal on Imaging Sciences, 11(4):2254–2304, 2018.
[37] Yingming Li, Ming Yang, and Zhongfei Mark Zhang. A survey of multi-view representation learning. IEEE Transactions on Knowledge and Data Engineering, pages 1–1, 2018.
[38] Chen-Yun Lin, Arin Minasian, Xin Jessica Qi, and Hau-Tieng Wu. Manifold learning via the principle bundle approach. Frontiers in Applied Mathematics and Statistics, 4:21, 2018.
[39] David K. Maslen and Daniel N. Rockmore. Generalized FFTs - a survey of some recent results. In Groups and Computation II, volume 28, pages 183–287. American Mathematical Society, 1997.
[40] Peter W. Michor. Topics in Differential Geometry, volume 93. American Mathematical Society, 2008.
[41] Hà Quang Minh, Loris Bazzani, and Vittorio Murino. A unifying framework in vector-valued reproducing kernel Hilbert spaces for manifold regularization and co-regularized multi-view learning. Journal of Machine Learning Research, 17(25):1–72, 2016.
[42] David Mumford, John Fogarty, and Frances Kirwan. Geometric Invariant Theory, volume 34. Springer Science & Business Media, 1994.
[43] Ion Muslea, Steven Minton, and Craig A. Knoblock. Active learning with multiple views. Journal of Artificial Intelligence Research, 27(1):203–233, 2006.
[44] Boaz Nadler, Stephane Lafon, Ioannis Kevrekidis, and Ronald R. Coifman.
Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators. In Advances in Neural Information Processing Systems, pages 955–962, 2006.
[45] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856, 2002.
[46] Greg Ongie, Rebecca Willett, Robert D. Nowak, and Laura Balzano. Algebraic variety models for high-rank matrix completion. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2691–2700, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
[47] Pawel A. Penczek, Jun Zhu, and Joachim Frank. A common-lines based method for determining orientations for n > 3 particle projections simultaneously. Ultramicroscopy, 63(3-4):205–218, 1996.
[48] William M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850, 1971.
[49] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax-optimal rates for high-dimensional sparse additive models over kernel classes. Journal of Machine Learning Research, 13:281–319, 2012.
[50] Ken Richardson. The transverse geometry of G-manifolds and Riemannian foliations. Illinois Journal of Mathematics, 45(2):517–535, 2001.
[51] Vladimir Rokhlin, Arthur Szlam, and Mark Tygert. A randomized algorithm for principal component analysis. SIAM Journal on Matrix Analysis and Applications, 31(3):1100–1124, 2009.
[52] Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[53] Alexander H. W. Schmitt. Geometric Invariant Theory and Decorated Principal Bundles, volume 11. European Mathematical Society, 2008.
[54] Jean-Pierre Serre. Linear Representations of Finite Groups, volume 42. Springer, 1977.
[55] Vikas Sindhwani and Partha Niyogi. A co-regularized approach to semi-supervised learning with multiple views. In Proceedings of the ICML Workshop on Learning with Multiple Views, 2005.
[56] Vikas Sindhwani and David S. Rosenberg. An RKHS for multi-view learning and manifold co-regularization. In Proceedings of the 25th International Conference on Machine Learning, pages 976–983. ACM, 2008.
[57] Amit Singer and Hau-Tieng Wu. Vector diffusion maps and the connection Laplacian. Communications on Pure and Applied Mathematics, 65(8):1067–1144, 2012.
[58] Amit Singer and Hau-Tieng Wu. Spectral convergence of the connection Laplacian from random samples. Information and Inference: A Journal of the IMA, 6(1):58–123, 2016.
[59] Amit Singer, Zhizhen Zhao, Yoel Shkolnisky, and Ronny Hadani. Viewing angle classification of cryo-electron microscopy images using eigenvectors. SIAM Journal on Imaging Sciences, 4(2):723–759, 2011.
[60] Shiliang Sun. A survey of multi-view machine learning. Neural Computing and Applications, 23(7):2031–2038, 2013.
[61] Shiliang Sun and David R. Hardoon. Active learning with extremely sparse labeled examples. Neurocomputing, 73(16):2980–2988, 2010. 10th Brazilian Symposium on Neural Networks (SBRN 2008).
[62] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[63] Robert Tibshirani, Martin Wainwright, and Trevor Hastie. Statistical Learning with Sparsity: The Lasso and Generalizations. Chapman and Hall/CRC, 2015.
[64] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
[65] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[66] Elif Vural and Christine Guillemot. A study of the classification of low-dimensional data with supervised manifold learning. Journal of Machine Learning Research, 18:1–55, 2018.
[67] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
[68] Eugene Paul Wigner. Gruppentheorie und ihre Anwendung auf die Quantenmechanik der Atomspektren. Monatshefte für Mathematik und Physik, 39(1):A51, 1932.
[69] Chang Xu, Dacheng Tao, and Chao Xu. A survey on multi-view learning. arXiv preprint arXiv:1304.5634, 2013.
[70] Jing Zhao, Xijiong Xie, Xin Xu, and Shiliang Sun. Multi-view learning overview: Recent progress and new challenges. Information Fusion, 38:43–54, 2017.
[71] Zhizhen Zhao, Yoel Shkolnisky, and Amit Singer. Fast steerable principal component analysis. IEEE Transactions on Computational Imaging, 2(1):1–12, 2016.
[72] Zhizhen Zhao and Amit Singer. Rotationally invariant image representation for viewing direction classification in cryo-EM. Journal of Structural Biology, 186(1):153–166, 2014.