Graph as Point Set

Xiyuan Wang 1, Pan Li 2, Muhan Zhang 1

1 Institute for Artificial Intelligence, Peking University. 2 Georgia Institute of Technology. Correspondence to: Muhan Zhang. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

Abstract

Graph is a fundamental data structure to model interconnections between entities. Set, on the contrary, stores independent elements. To learn graph representations, current Graph Neural Networks (GNNs) primarily use message passing to encode the interconnections. In contrast, this paper introduces a novel graph-to-set conversion method that bijectively transforms interconnected nodes into a set of independent points and then uses a set encoder to learn the graph representation. This conversion method holds dual significance. Firstly, it enables using set encoders to learn from graphs, thereby significantly expanding the design space of GNNs. Secondly, for Transformer, a specific set encoder, we provide a novel and principled approach to inject graph information losslessly, improving upon many previous positional/structural encoding methods. To demonstrate the effectiveness of our approach, we introduce Point Set Transformer (PST), a transformer architecture that accepts a point set converted from a graph as input. Theoretically, PST exhibits superior expressivity for both short-range substructure counting and long-range shortest path distance tasks compared to existing GNNs and graph transformers. Extensive experiments further validate PST's outstanding real-world performance. Besides Transformer, we also devise a DeepSet-based set encoder, which achieves performance comparable to representative GNNs, affirming the versatility of our graph-to-set method.

1. Introduction

Graph, composed of interconnected nodes, has a wide range of applications and has been extensively studied. In graph machine learning, a central focus is to effectively leverage node connections. Various architectures have arisen for graph tasks, exhibiting significant divergence in their approaches to utilizing adjacency information. Two primary paradigms have evolved for encoding adjacency information. The first paradigm involves message passing between nodes via edges. Notable methods in this category include Message Passing Neural Network (MPNN) (Gilmer et al., 2017), a foundational framework for GNNs such as GCN (Kipf & Welling, 2017), GIN (Xu et al., 2019a), and GraphSAGE (Hamilton et al., 2017). Subgraph-based GNNs (Zhang & Li, 2021; Huang et al., 2023b; Bevilacqua et al., 2022; Qian et al., 2022; Frasca et al., 2022; Zhao et al., 2022; Zhang et al., 2023a) select subgraphs from the whole graph and run an MPNN within each subgraph. These models aggregate messages from neighbors to update the central nodes' representations. Additionally, Graph Transformers (GTs) integrate adjacency information into the attention matrix (Mialon et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Dwivedi & Bresson, 2020; Ying et al., 2021; Shirzad et al., 2023) (note that some early GTs have the option not to use the adjacency matrix and rely only on positional encodings, but the performance is significantly worse (Dwivedi & Bresson, 2020)). Some recent GTs even directly incorporate message-passing layers into their architectures (Rampášek et al., 2022; Kim et al., 2021). In summary, this paradigm relies on adjacency relationships to facilitate information exchange among nodes.
The second paradigm designs permutation-equivariant neural networks that directly take adjacency matrices as input. This category includes high-order Weisfeiler-Leman tests (Maron et al., 2019a), invariant graph networks (Maron et al., 2019b), and relational pooling (Chen et al., 2020). Additionally, various studies have explored manual feature extraction from the adjacency matrix, including random walk structural encoding (Dwivedi et al., 2022a; Li et al., 2020), Laplacian matrix eigenvectors (Wang et al., 2022; Lim et al., 2023; Huang et al., 2023a; Ma et al., 2023a), and shortest path distances (Li et al., 2020). However, these approaches typically serve as data augmentation steps for other models, rather than constituting an independent paradigm.

Both paradigms heavily rely on pairwise features in graph encoding. In contrast, this paper explores whether we can give up the adjacency matrix in graph models while achieving competitive performance.

Figure 1. Our method converts the input graph to a point set first and encodes it with a set encoder. O(r) denotes the set of r-dimensional orthogonal transformations.

As shown in Figure 1, our novel graph-to-set method converts interconnected nodes into independent points, subsequently encoded by a set encoder. Leveraging our symmetric rank decomposition, we break down the augmented adjacency matrix A + D into QQ^T, where Q is a column-full-rank matrix whose rows serve as node coordinates. This representation enables us to express the presence of edges as inner products of coordinate vectors (Q_i and Q_j). Consequently, interlinked nodes can be transformed into independent points with supplementary coordinates without information loss. Theoretically, two graphs are isomorphic iff the two converted point sets are equal up to an orthogonal transformation (because for any QQ^T = A + D, QR is also a solution, where R is any orthogonal matrix). This equivalence empowers us to encode the set with coordinates in an orthogonal-transformation-equivariant manner, akin to E(3)-equivariant models designed for 3D geometric deep learning. Importantly, our approach is versatile, allowing the use of any equivariant set encoder, thereby significantly expanding the design space of GNNs. Furthermore, for Transformer, a specific set encoder, our method offers a novel and principled way to inject graph information. Appendix D additionally shows that it unifies various structural/positional encodings in previous GTs, including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampášek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang et al., 2023b).

To instantiate our method, we introduce an orthogonal-transformation-equivariant Transformer, namely Point Set Transformer (PST), to encode the point set. PST provably surpasses existing models in long-range and short-range expressivity. Extensive experiments verify these claims across synthetic datasets, molecule datasets, and long-range graph benchmarks. Specifically, PST outperforms all baselines on the QM9 (Wu et al., 2017) dataset. Moreover, our graph-to-set method is not constrained to Transformer. We also propose a DeepSet (Zaheer et al., 2017)-based model, which outperforms GIN (Xu et al., 2019b) on our datasets.
Differences from eigendecomposition. Note that our graph-to-set method is distinct from previous approaches that decompose adjacency matrices for positional encodings (Dwivedi et al., 2023; Wang et al., 2022; Lim et al., 2023; Bo et al., 2023). The key difference is that previous methods primarily rely on eigendecomposition (EVD), whereas our method is based on symmetric rank decomposition (SRD). Their differences are as follows: SRD enables a practical conversion of graph problems into set problems. The SRD of a matrix is unique up to a single orthogonal transformation, while EVD is unique up to a combination of orthogonal transformations within each eigenspace. This difference allows SRD-based models to easily maintain symmetry, ensuring consistent predictions for isomorphic graphs, while EVD-based methods (Lim et al., 2023) struggle because they need to deal with each eigenspace individually, making them less suitable for graph-level tasks where eigenspaces vary between graphs. Due to this advantage of SRD, we can utilize set encoders with coordinates to capture graph structure, thus expanding the design space of GNNs. Moreover, our method provides a principled way to add graph information to Transformers. Note that previous GTs usually require multiple heuristic encodings together. Besides node positional encodings, they also use adjacency matrices: Grit (Ma et al., 2023b) and GraphiT (Mialon et al., 2021) use the random walk matrix (normalized adjacency) as relative positional encoding (RPE). Graph Transformer (Dwivedi & Bresson, 2020), Graphormer (Ying et al., 2021), and SAN (Kreuzer et al., 2021) use the adjacency matrix as RPE. Dwivedi & Bresson (2020)'s ablation shows that adjacency is crucial. GPS (Rampášek et al., 2022), Exphormer (Shirzad et al., 2023), higher-order Transformer (Kim et al., 2021), and Graph ViT/MLP-Mixer (He et al., 2023) even directly incorporate message passing blocks, which use the adjacency matrix to guide message passing between nodes.

In summary, this paper introduces a novel approach to graph representation learning by converting interconnected graphs into independent points and subsequently encoding them using an orthogonal-transformation-equivariant set encoder like our Point Set Transformer. This innovative approach outperforms existing methods in both long- and short-range tasks, as validated by comprehensive experiments.

2. Preliminary

For a matrix Z ∈ R^{a×b}, we define Z_i ∈ R^b as its i-th row (as a column vector) and Z_{ij} ∈ R as its (i, j) element. For a vector Λ ∈ R^a, diag(Λ) ∈ R^{a×a} is the diagonal matrix with Λ as its diagonal elements. For S ∈ R^{a×a}, diagonal(S) ∈ R^a represents the vector of its diagonal elements. Let G = (V, E, X) denote an undirected graph. Here, V = {1, 2, 3, ..., n} is the set of n nodes, E ⊆ V × V is the set of edges, and X ∈ R^{n×d} is the node feature matrix, whose v-th row X_v is the feature of node v. The edge set E can also be represented using the adjacency matrix A ∈ R^{n×n}, where A_{uv} is 1 if the edge exists (i.e., (u, v) ∈ E) and 0 otherwise. A graph G can also be represented by the pair (V, A, X) or (A, X). The degree matrix D is a diagonal matrix with the node degrees (row sums of A) as its diagonal elements. Given a permutation function π : {1, 2, 3, ..., n} → {1, 2, 3, ..., n}, the permuted graph is π(G) = (π(A), π(X)), where π(A) ∈ R^{n×n} with π(A)_{π(u)π(v)} = A_{uv}, and π(X) ∈ R^{n×d} with π(X)_{π(v)} = X_v for all u, v ∈ V. Essentially, the permutation π reindexes each node v to π(v) while preserving the original graph structure and node features.
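To make this notation concrete, the following minimal NumPy sketch (our illustration, not code from the paper; the 4-node graph and the permutation are arbitrary choices) applies a permutation π to a graph (A, X) by reindexing.

```python
# Illustrative sketch of the permutation action pi(G) = (pi(A), pi(X)) defined above.
# The example graph and permutation are arbitrary, not from the paper.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # adjacency matrix of a 4-node graph
X = np.arange(4, dtype=float).reshape(4, 1)  # node features, one scalar per node

pi = np.array([2, 0, 3, 1])                  # node v is reindexed to pi[v]

A_perm = np.empty_like(A)
X_perm = np.empty_like(X)
A_perm[np.ix_(pi, pi)] = A                   # pi(A)_{pi(u), pi(v)} = A_{uv}
X_perm[pi] = X                               # pi(X)_{pi(v)} = X_v

# Reindexing preserves the structure: same number of edges and same degree multiset.
assert A_perm.sum() == A.sum()
assert sorted(A_perm.sum(axis=1)) == sorted(A.sum(axis=1))
```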
Two graphs are isomorphic iff they can be mapped to each other through a permutation.

Definition 2.1. Graphs G_1 = (A_1, X_1) and G_2 = (A_2, X_2) are isomorphic, denoted as G_1 ≅ G_2, if there exists a permutation π such that π(A_1) = A_2 and π(X_1) = X_2.

Isomorphic graphs can be transformed into each other by merely reindexing their nodes. In graph tasks, models should assign the same prediction to isomorphic graphs.

Symmetric Rank Decomposition (SRD). Decomposing a matrix into the product of two full-rank matrices is well known (Puntanen et al., 2011). We further use the fact that a positive semi-definite matrix can be decomposed into a single full-column-rank matrix and its transpose.

Definition 2.2 (Symmetric Rank Decomposition, SRD). Given a (symmetric) positive semi-definite matrix L ∈ R^{n×n} of rank r, its SRD is Q ∈ R^{n×r}, where L = QQ^T.

As L = QQ^T, rank(Q) = rank(L) = r, which implies that Q must have full column rank. Moreover, two SRDs of the same matrix are equal up to an orthogonal transformation. Let O(r) denote the set of orthogonal matrices in R^{r×r}.

Proposition 2.3. Matrices Q_1 and Q_2 in R^{n×r} are SRDs of the same matrix iff there exists R ∈ O(r) such that Q_1 = Q_2 R.

SRD is closely related to eigendecomposition. Let L = U diag(Λ) U^T denote the eigendecomposition of L, where Λ ∈ R^r is the vector of non-zero eigenvalues and U ∈ R^{n×r} is the matrix whose columns are the corresponding eigenvectors. Then Q = U diag(Λ^{1/2}) yields an SRD of L, where the superscript 1/2 denotes the element-wise square root.

3. Graph as Point Set

In this section, we present our method for converting graphs into sets of points. We first show that Symmetric Rank Decomposition (SRD) can theoretically achieve this transformation: two graphs are isomorphic iff the sets of coordinates generated by SRD are equal up to orthogonal transformations. Additionally, we parameterize SRD for better real-world performance. Proof details are in Appendix A.

3.1. Symmetric Rank Decomposition for Coordinates

A natural approach to breaking down the interconnections between nodes is to decompose the adjacency matrix. While previous methods often used eigendecomposition outputs as supplementary node features, these features are not unique. Consequently, models relying on them fail to provide consistent predictions for isomorphic graphs, ultimately leading to poor generalization. To address this, we show that Symmetric Rank Decomposition (SRD) can convert graph-level tasks into set-level tasks with perfect alignment. Since SRD only applies to positive semi-definite matrices, we use the augmented adjacency matrix D + A, which is always positive semi-definite (proof in Appendix A.2).

Theorem 3.1. Given two graphs G = (V, A, X) and G′ = (V′, A′, X′) with respective degree matrices D and D′, G ≅ G′ iff ∃R ∈ O(r), {{(X_v, RQ_v) | v ∈ V}} = {{(X′_v, Q′_v) | v ∈ V′}}, where Q and Q′ are the SRDs of D + A and D′ + A′ respectively, and r is the rank of Q.

In this theorem, the graph G = (V, A, X) is converted to a set of points {(X_v, Q_v) | v ∈ V}, where X_v is the original node feature of v, and Q_v, the v-th row of the SRD of D + A, is the r-dimensional coordinate of node v. Consequently, two graphs are isomorphic iff their point sets are equal up to an orthogonal transformation. Intuitively, we can imagine that the graph is mapped into an r-dimensional space, where each node has a coordinate, and the inner product between two coordinates represents edge existence. This mapping is not unique, since we can freely rotate the coordinates through an orthogonal transformation without changing inner products.
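As a concrete illustration of this conversion, here is a minimal NumPy sketch (our own toy example, not the paper's implementation): the coordinates below form an SRD of D + A, their inner products recover D + A, and any orthogonal transformation of them is another valid SRD, as stated in Proposition 2.3.

```python
# Minimal sketch: SRD-based coordinates for a toy graph (illustrative only).
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)      # a 4-cycle
D = np.diag(A.sum(axis=1))
L = D + A                                      # augmented adjacency, positive semi-definite

lam, U = np.linalg.eigh(L)                     # eigendecomposition L = U diag(lam) U^T
keep = lam > 1e-8                              # keep the non-zero eigenvalues (rank r)
Q = U[:, keep] * np.sqrt(lam[keep])            # SRD: Q = U diag(lam^{1/2}); rows = coordinates

assert np.allclose(Q @ Q.T, L)                 # inner products of coordinates recover D + A

# Any orthogonal transformation of the coordinates gives another valid SRD (Prop. 2.3).
r = Q.shape[1]
R, _ = np.linalg.qr(np.random.randn(r, r))     # R is a random orthogonal matrix
assert np.allclose((Q @ R) @ (Q @ R).T, L)
```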
This conversion can be loosely likened to the reverse process of constructing a molecular graph from atoms' 3D coordinates, where Euclidean distances between atoms determine node connections in the graph. Leveraging Theorem 3.1, we can convert a graph into a set and employ a set encoder to encode it. Our method consistently produces identical representations for isomorphic graphs when the encoder is orthogonal-transformation-invariant. The method's expressivity hinges on the set encoder's ability to differentiate non-equal sets, with greater encoder power enhancing overall performance on graph tasks.

3.2. Parameterized Coordinates

In this section, we enhance SRD's practical performance through parameterization. As shown in Section 2, SRD can be implemented via eigendecomposition: Q = U diag(Λ^{1/2}), where Λ ∈ R^r denotes the non-zero eigenvalues of the decomposed matrix and U ∈ R^{n×r} denotes the corresponding eigenvectors. To parameterize SRD, we replace the element-wise square root with a function f : R^r → R^r. This alteration further eliminates the constraint of non-negativity on eigenvalues and enables the use of various symmetric matrices containing adjacency information to generate coordinates. Additionally, for model flexibility, the coordinates can include multiple channels, with each channel corresponding to a distinct eigenvalue function.

Definition 3.2 (Parameterized SRD, PSRD). With a d-channel eigenvalue function f : R^r → R^{r×d} and an adjacency function Z : R^{n×n} → R^{n×n} producing symmetric matrices, the PSRD coordinate of a graph G = (V, A, X) is Q(Z(A), f) ∈ R^{n×r×d}, whose i-th channel is U diag(f_i(Λ)) ∈ R^{n×r}, where Λ ∈ R^r and U ∈ R^{n×r} are the non-zero eigenvalues and corresponding eigenvectors of Z(A), and f_i : R^r → R^r is the i-th channel of f.

In the definition, Z maps the adjacency matrix to variants like the Laplacian matrix, and f transforms the eigenvalues. Q(Z(A), f)_u ∈ R^{r×d} is node u's coordinate. Similar to SRD, PSRD can also convert graph isomorphism problems to set equality problems.

Theorem 3.3. Given a permutation-equivariant adjacency function Z, for graphs G = (V, A, X) and G′ = (V′, A′, X′): (1) If the eigenvalue function f is permutation-equivariant and G ≅ G′, then the two point sets with PSRD coordinates are equal up to an orthogonal transformation, i.e., ∃R ∈ O(r), {{(X_v, RQ(Z(A), f)_v) | v ∈ V}} = {{(X′_v, Q(Z(A′), f)_v) | v ∈ V′}}, where r is the rank of the coordinates. (2) If Z is injective, then for all d ≥ 2 there exists a continuous permutation-equivariant function f : R^r → R^{r×d} such that, if the two point sets with PSRD coordinates are equal up to an orthogonal transformation, then G ≅ G′.

Given permutation-equivariant f and Z, the point sets with PSRD coordinates are equal up to an orthogonal transformation for isomorphic graphs. Moreover, there exists an f making the reverse direction true. Therefore, we can safely employ permutation-equivariant eigenvalue functions, ensuring consistent predictions for isomorphic graphs. An expressive eigenvalue function also allows for the lossless conversion of graph-level tasks into set problems. In implementation, we utilize DeepSet (Zaheer et al., 2017) due to its universal expressivity for permutation-equivariant set functions. The detailed architecture is shown in Figure 3 in Appendix G.

In summary, we use SRD and its parameterized generalization to decompose the adjacency matrix or its variants into coordinates. Thus, we transform a graph into a point set where each point represents a node and includes both the original node feature and the coordinates as its features.
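The following is a minimal PyTorch sketch of PSRD coordinates in the spirit of Definition 3.2. The DeepSet used as the eigenvalue function, the layer sizes, and the choice Z(A) = D + A are illustrative assumptions on our part rather than the paper's exact configuration.

```python
# Minimal PyTorch sketch of PSRD coordinates (Definition 3.2). The DeepSet eigenvalue
# function and the choice Z(A) = D + A are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class EigenDeepSet(nn.Module):
    """Permutation-equivariant map f: R^r -> R^{r x d} applied to the eigenvalue vector."""
    def __init__(self, hidden: int = 16, d: int = 4):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, d))

    def forward(self, lam: torch.Tensor) -> torch.Tensor:    # lam: (r,)
        h = self.phi(lam.unsqueeze(-1))                      # (r, hidden), element-wise
        pooled = h.sum(dim=0, keepdim=True).expand_as(h)     # (r, hidden), set-level context
        return self.rho(torch.cat([h, pooled], dim=-1))      # (r, d): f_i(lam) per channel i

def psrd_coordinates(A: torch.Tensor, f: nn.Module) -> torch.Tensor:
    """Q(Z(A), f) in R^{n x r x d} with Z(A) = D + A (an illustrative choice)."""
    Z = torch.diag(A.sum(dim=1)) + A
    lam, U = torch.linalg.eigh(Z)                 # Z = U diag(lam) U^T
    keep = lam.abs() > 1e-6
    lam, U = lam[keep], U[:, keep]                # r non-zero eigenvalues
    F = f(lam)                                    # (r, d) transformed eigenvalues
    # channel i of the coordinates is U diag(F[:, i]); stack the channels in the last dim
    return torch.einsum('nr,rd->nrd', U, F)

A = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # toy triangle graph
coords = psrd_coordinates(A, EigenDeepSet())
print(coords.shape)   # torch.Size([3, 3, 4])
```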
4. Point Set Transformer

Our method, as depicted in Figure 1, comprises two steps: converting the graph into a set of independent points and encoding the set. Section 3 demonstrates the bijective transformation of the graph into a set. To encode this point set, we introduce a novel architecture, Point Set Transformer (PST), designed to maintain orthogonality invariance and deliver remarkable expressivity. Additionally, to highlight our method's versatility, we propose a DeepSet (Zaheer et al., 2017)-based set encoder in Appendix K. PST's architecture is depicted in Figure 4 in Appendix G.

PST operates with two types of representations for each point: scalars, which remain invariant to orthogonal transformations of the coordinates, and vectors, which change equivariantly with the coordinates. For a point i, its scalar representation is s_i ∈ R^d, and its vector representation is v_i ∈ R^{r×d}, where d is the hidden dimension and r is the rank of the coordinates. s_i and v_i are initialized with the input node feature X_i and the PSRD coordinates (detailed in Section 3.2) containing graph structure information, respectively. Similar to conventional transformers, PST comprises multiple layers. Each layer incorporates two key components.

Scalar-Vector Mixer. This component, akin to the feed-forward network in Transformer, transforms point features individually. To enable information exchange between vectors and scalars, we employ the following architecture:

s'_i ← MLP_1(s_i ∥ diagonal(W_1 v_i^T v_i W_2^T)),   (1)
v'_i ← v_i diag(MLP_2(s_i)) W_3 + v_i W_4.   (2)

Here, W_1, W_2, W_3, W_4 ∈ R^{d×d} are learnable matrices for mixing different channels of the vector features. Additionally, MLP_1 : R^{2d} → R^d and MLP_2 : R^d → R^d are two multi-layer perceptrons transforming scalar representations, and ∥ denotes concatenation. The operation diagonal(W_1 v_i^T v_i W_2^T) takes the diagonal elements of a matrix, which translates vectors to scalars, while v_i diag(MLP_2(s_i)) transforms scalar features into vectors. As (Rv_i)^T(Rv_i) = v_i^T v_i for all R ∈ O(r), the scalar update is invariant to orthogonal transformations of the coordinates. Similarly, the vector update is equivariant to O(r).

Attention Layer. Akin to ordinary attention layers, this component computes pairwise attention scores to linearly combine point representations:

Atten_{ij} = MLP_3((W_q^s s_i ⊙ W_k^s s_j) ∥ diagonal(W_q^v v_i^T v_j W_k^v)).   (3)

Here, W_q^s and W_q^v denote the linear transformations for scalar and vector queries, respectively, while W_k^s and W_k^v are for keys, and ⊙ denotes the element-wise product. The equation computes the inner products of queries and keys, similar to standard attention mechanisms. It is easy to see that Atten_{ij} is also invariant to O(r). We then linearly combine point representations with the attention scores as coefficients:

s_i ← Σ_j Atten_{ij} s'_j,   v_i ← Σ_j Atten_{ij} v'_j.   (4)

Each transformer layer has time complexity O(n^2 r) and space complexity O(n^2 + nr).

Pooling. After several layers, we pool all points' scalar representations into the set representation s:

s ← Pool({s_i | i ∈ V}),   (5)

where Pool is a pooling function like sum, mean, or max.
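Below is a simplified, single-head PyTorch sketch of one PST layer following Equations (1)-(4). The hidden sizes, the single attention head, the dense pairwise intermediate tensor, and the parameter initialization are our own simplifications; this is not the paper's exact implementation.

```python
# Simplified single-head sketch of one PST layer (Eqs. 1-4); illustrative, not the paper's code.
import torch
import torch.nn as nn

class PSTLayer(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.W1, self.W2, self.W3, self.W4 = (nn.Parameter(torch.randn(d, d) / d**0.5) for _ in range(4))
        self.mlp1 = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
        self.mlp2 = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.Wqs, self.Wks = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.Wqv, self.Wkv = (nn.Parameter(torch.randn(d, d) / d**0.5) for _ in range(2))
        self.mlp3 = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, s, v):
        # s: (n, d) scalar features, v: (n, r, d) vector features
        # Scalar-vector mixer (Eqs. 1-2)
        gram = torch.einsum('nri,nrj->nij', v, v)                            # v_i^T v_i, (n, d, d)
        diag = torch.diagonal(self.W1 @ gram @ self.W2.T, dim1=-2, dim2=-1)  # (n, d)
        s2 = self.mlp1(torch.cat([s, diag], dim=-1))
        v2 = torch.einsum('nrd,nd->nrd', v, self.mlp2(s)) @ self.W3 + v @ self.W4
        # Attention (Eqs. 3-4); scores are invariant to O(r) acting on the coordinates
        cross = torch.einsum('nri,mrj->nmij', v, v)                          # v_i^T v_j, (n, n, d, d)
        vec_term = torch.diagonal(self.Wqv @ cross @ self.Wkv, dim1=-2, dim2=-1)   # (n, n, d)
        sca_term = self.Wqs(s).unsqueeze(1) * self.Wks(s).unsqueeze(0)             # (n, n, d)
        atten = self.mlp3(torch.cat([sca_term, vec_term], dim=-1)).squeeze(-1)     # (n, n)
        return atten @ s2, torch.einsum('nm,mrd->nrd', atten, v2)

# Usage: the scalar output does not change under a random rotation of the coordinates.
n, r, d = 5, 3, 8
layer = PSTLayer(d)
s, v = torch.randn(n, d), torch.randn(n, r, d)
R, _ = torch.linalg.qr(torch.randn(r, r))            # random orthogonal transformation
s_out, _ = layer(s, v)
s_rot, _ = layer(s, torch.einsum('ij,njd->nid', R, v))
print(torch.allclose(s_out, s_rot, atol=1e-4))       # True: scalars are O(r)-invariant
```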
5. Expressivity

In this section, we delve into the theoretical expressivity of our methods. Our PSRD coordinates and the PST architecture exhibit strong long-range expressivity, allowing for efficient computation of distance metrics between nodes, as well as short-range expressivity, enabling the counting of paths and cycles rooted at each node. Therefore, our model is more expressive than many existing models, including GIN (equivalent to the 1-WL test) (Xu et al., 2019b), PPGN (equivalent to the 2-FWL test, more expressive in some cases) (Maron et al., 2019a), GPS (Rampášek et al., 2022), and Graphormer (Ying et al., 2021) (two representative graph transformers). More details are in Appendix B.

5.1. Long-Range Expressivity

This section demonstrates that the inner products of PSRD coordinates exhibit strong long-range expressivity, which PST inherits by utilizing inner products in its attention layers. When assessing a model's capacity to capture long-range interactions (LRI), a key measure is its ability to compute the shortest path distance (spd) between nodes. Since formally characterizing LRI can be challenging, we focus on analyzing models' performance concerning this specific measure. We observe that existing models vary significantly in their capacity to calculate spd. Moreover, we find an intuitive explanation for these differences: spd between nodes can be expressed as spd(i, j, A) = arg min_k {k | A^k_{ij} > 0}, so the ability to compute A^K, the K-th power of the adjacency matrix A, serves as a straightforward indicator. Different models need different numbers of layers to compute A^K.

PSRD coordinates. PSRD coordinates can capture arbitrarily large shortest path distances through their inner products in one step. To illustrate this, we decompose the adjacency matrix as A = U diag(Λ) U^T and employ U and U diag(Λ^K) as coordinates. Their inner products are

U diag(Λ^K) U^T = A^K   (one step).   (6)

Theorem 5.1. There exist permutation-equivariant functions f_k, k = 0, 1, 2, ..., K, such that for all graphs G = (A, X), the shortest path distance between nodes i and j is a function of ⟨Q(A, f_0)_i, Q(A, f_k)_j⟩, k = 0, 1, 2, ..., K, where Q(A, f) is the PSRD coordinate defined in Section 3.2 and K is the maximum shortest path distance between nodes.

2-FWL. A powerful graph isomorphism test, the 2-Folklore-Weisfeiler-Leman test (2-FWL), and its neural network version PPGN (Maron et al., 2019a) produce node pair representations in a matrix X ∈ R^{n×n}. X is initialized with A, and each layer updates X with XX. So intuitively, computing A^K takes log_2 K layers:

A → A^2 = AA → A^4 = A^2 A^2 → ... → A^K = A^{K/2} A^{K/2}   (log_2 K layers).   (7)

Theorem 5.2. Let c^k(G)_{ij} denote the color of node tuple (i, j) of graph G at iteration k. Given graphs G = (A, X) and G′ = (A′, X′), for all K ∈ N^+, if two node tuples (i, j) in G and (i′, j′) in G′ have spd(i, j, A) < spd(i′, j′, A′) ≤ 2^K, then c^K(G)_{ij} ≠ c^K(G′)_{i′j′}. Moreover, for all L > 2^K, there exist i, j, i′, j′ such that spd(i, j, A) > spd(i′, j′, A′) ≥ L while c^K(G)_{ij} = c^K(G′)_{i′j′}.

In other words, K iterations of 2-FWL can distinguish pairs of nodes with different spds, as long as those distances are at most 2^K. Moreover, K-iteration 2-FWL cannot differentiate all tuples with spd > 2^K from other tuples with different spds, which indicates that K-iteration 2-FWL is effective in counting shortest path distances only up to a maximum of 2^K.

MPNN. Intuitively, each MPNN layer uses AX to update the node representations X. However, this operation in general cannot compute A^K unless the initial node feature is X = I:

X → AX → A^2 X = A(AX) → ... → A^K X = A(A^{K-1} X)   (K layers).   (8)

Theorem 5.3. There exists a pair of graphs that MPNNs cannot differentiate, but whose sets of all-pair spds are different.

If MPNNs could compute spd between node pairs, they would be able to distinguish this graph pair from the sets of spds. However, no MPNN can distinguish the pair, thus proving that MPNNs cannot compute spd.
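A small NumPy sketch (illustrative only; the path graph is an arbitrary example) contrasts the computations above: A^K from eigenvalue powers in a single step, as PSRD inner products do; A^(2^t) after t squarings, the 2-FWL picture; and spd recovered as the smallest k with (A^k)_{ij} > 0.

```python
# Illustrative comparison of the computations discussed above (toy example, not paper code).
import numpy as np

n = 6
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:   # a path graph on 6 nodes
    A[u, v] = A[v, u] = 1

# One step via eigenvalues: A^K = U diag(lam^K) U^T, i.e., inner products of the
# coordinates U and U diag(lam^K).
K = 5
lam, U = np.linalg.eigh(A)
A_K = U @ np.diag(lam ** K) @ U.T
assert np.allclose(A_K, np.linalg.matrix_power(A, K))

# 2-FWL picture: each iteration squares the reachable power, so t iterations give A^(2^t).
t = 3
P = A.copy()
for _ in range(t):
    P = P @ P
assert np.allclose(P, np.linalg.matrix_power(A, 2 ** t))

def spd(i, j, A):
    """Shortest path distance as argmin_k { k : (A^k)_ij > 0 } (inf if unreachable)."""
    P = np.eye(A.shape[0])
    for k in range(A.shape[0] + 1):
        if P[i, j] > 0:
            return k
        P = P @ A
    return np.inf

assert spd(0, 5, A) == 5
```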
Graph Transformers (GTs) are known for their strong long-range capacities (Dwivedi et al., 2022b), as they can aggregate information from the entire graph to update each node's representation. However, aggregating information from the entire graph is not equivalent to capturing the distance between nodes, and some GTs also fail to compute spd between nodes. Details are in Appendix C. Note that this slightly counter-intuitive result arises because we take a new perspective on studying long-range interactions, rather than showing that GTs are weak in long-range capacity.

Besides shortest path distances, our PSRD coordinates also enable the unification of various structure encodings (distance metrics between nodes), including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampášek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang & Li, 2021; Zhang et al., 2023b). Further insights and details are shown in Table 5 in Appendix D.

5.2. Short-Range Expressivity

This section shows PST's expressivity in representative short-range tasks: path and cycle counting.

Theorem 5.4. A one-layer PST can count paths of length 1 and 2, a two-layer PST can count paths of length 3 and 4, and a four-layer PST can count paths of length 5 and 6. Here, "count" means that the (i, j) element of the attention matrix in the last layer can express the number of paths between nodes i and j.

Therefore, with enough layers, our PST models can count the number of paths of length up to 6 between nodes. Furthermore, PST can also count cycles.

Theorem 5.5. A one-layer PST can count cycles of length 3, a three-layer PST can count cycles of length 4 and 5, and a five-layer PST can count cycles of length 6 and 7. Here, "count" means that the representation of node i in the last layer can express the number of cycles involving node i.

Therefore, with enough layers, PST can count the number of cycles of length up to 7. As 2-FWL is also restricted to counting cycles up to length 7 (Fürer, 2017), the cycle counting power of PST is at least on par with 2-FWL.
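To make the counting targets in Theorems 5.4 and 5.5 concrete, here is a hypothetical brute-force reference (our own helper, not part of PST or the benchmark code), assuming an unweighted simple undirected graph stored in networkx.

```python
# Hypothetical ground-truth generator for the path/cycle counting targets above.
# Brute force only; intended to define the quantities, not to be efficient.
import networkx as nx

def count_paths(G: nx.Graph, i, j, length: int) -> int:
    """Number of simple paths with `length` edges from node i to node j."""
    # all_simple_paths yields node sequences; a path with `length` edges has length+1 nodes.
    return sum(1 for p in nx.all_simple_paths(G, i, j, cutoff=length)
               if len(p) == length + 1)

def count_cycles_at(G: nx.Graph, i, length: int) -> int:
    """Number of simple cycles with `length` edges (>= 3) that pass through node i."""
    total = 0
    for j in G.neighbors(i):
        # a cycle through i = a simple path i -> j with length-1 edges plus the edge (j, i);
        # each undirected cycle is found twice (once per traversal direction)
        total += count_paths(G, i, j, length - 1)
    return total // 2

if __name__ == "__main__":
    G = nx.cycle_graph(6)            # a 6-cycle
    print(count_paths(G, 0, 3, 3))   # 2: two simple 3-edge paths between opposite nodes
    print(count_cycles_at(G, 0, 6))  # 1: the single 6-cycle through node 0
```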
6. Related Work

Graph Neural Networks with Eigendecomposition. Our approach employs coordinates derived from the symmetric rank decomposition (SRD) of adjacency or related matrices, differing from prior studies that primarily rely on eigendecomposition (EVD). While both approaches have similarities, SRD transforms the graph isomorphism problem into a set problem bijectively, which is challenging for EVD, because the SRD of a matrix is unique up to a single orthogonal transformation, while EVD is unique up to multiple orthogonal transformations in different eigenspaces. This key theoretical difference has profound implications for model design. Early efforts, like Dwivedi et al. (2023), introduce eigenvectors into MPNNs' input node features (Gilmer et al., 2017), and subsequent works, such as Graph Transformers (GTs) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), incorporate eigenvectors as node positional encodings. However, due to the non-uniqueness of eigenvectors, these models produce varying predictions for isomorphic graphs, limiting their generalization. Lim et al. (2023) partially solve the non-uniqueness problem. However, their solutions are limited to cases with constant eigenvalue multiplicity in graph tasks due to the properties of EVD. On the other hand, approaches like Wang et al. (2022), Bo et al. (2023), and Huang et al. (2024) completely solve non-uniqueness and even apply permutation-equivariant functions to eigenvalues, similar to our PSRD. However, these methods aim to enhance existing MPNNs and GTs with heuristic features. In contrast, we perfectly align graph-level tasks with set-level tasks through SRD, allowing us to convert orthogonal-transformation-equivariant set encoders into graph encoders and to inject graph structure information into Transformers in a principled way.

Equivariant Point Cloud and 3D Molecule Neural Networks. Equivariant point cloud and 3D molecule tasks share resemblances: both involve unordered sets of 3D coordinate points as input and require models to produce predictions invariant/equivariant to orthogonal transformations and translations of the coordinates. Several works (Chen et al., 2021; Winkels & Cohen, 2018; Cohen et al., 2018; Gasteiger et al., 2021) introduce specialized equivariant convolution operators to preserve prediction symmetry, yet are later surpassed by models that learn both invariant and equivariant representations for each point, transmitting these representations between nodes. Notably, certain models (Satorras et al., 2021; Schütt et al., 2021; Deng et al., 2021; Wang & Zhang, 2022) directly utilize vectors mirroring input coordinate changes as equivariant features, while others (Thomas et al., 2018; Batzner et al., 2022; Fuchs et al., 2020; Hutchinson et al., 2021; Worrall et al., 2017; Weiler et al., 2018) incorporate high-order irreducible representations of the orthogonal group, achieving proven universal expressivity (Dym & Maron, 2021). Our Point Set Transformer (PST) similarly learns both invariant and equivariant point representations. However, due to the specific conversion of point sets from graphs, PST's architecture differs from existing models. While translation invariance characterizes point clouds and molecules, graph properties are sensitive to coordinate translations in our method. Hence, we adopt inner products of coordinates. Additionally, these prior works center on 3D point spaces, whereas our coordinates exist in high-dimensional space, rendering existing models and theoretical expressivity results based on high-order irreducible representations incompatible with our framework.

7. Experiments

In our experiments, we evaluate our model across three dimensions: substructure counting for short-range expressivity, real-world graph property prediction for practical performance, and the Long-Range Graph Benchmark (Dwivedi et al., 2022b) to assess long-range interactions. Our primary model, Point Set Transformer (PST) with PSRD coordinates derived from the Laplacian matrix, performs well on all tasks. Moreover, our graph-to-set method is adaptable to various configurations. In the ablation study (see Appendix H), another set encoder, Point Set DeepSet (PSDS, introduced in Appendix K), SRD coordinates instead of PSRD, and coordinates decomposed from the adjacency matrix and the normalized adjacency matrix all demonstrate good performance, highlighting the versatility of our approach. Although PST has higher time complexity than existing Graph Transformers and is slower on large graphs, it shows similar scalability to our baselines on real-world graph property prediction datasets (see Appendix I). Our PST uses fewer or comparable numbers of parameters than baselines across all datasets. Dataset details, experiment settings, and hyperparameters are provided in Appendices E and F.
Table 1. Normalized MAE (↓) on substructure counting tasks. Following Huang et al. (2023b), a model can count a structure if its test loss is at most 10 units, measured on a scale of 10^-3. TT: Tailed Triangle. CC: Chordal Cycle. TR: Triangle-Rectangle.

| Method | 2-Path | 3-Path | 4-Path | 5-Path | 6-Path | 3-Cycle | 4-Cycle | 5-Cycle | 6-Cycle | 7-Cycle | TT | CC | TR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MPNN | 1.0 | 67.3 | 159.2 | 235.3 | 321.5 | 351.5 | 274.2 | 208.8 | 155.5 | 169.8 | 363.1 | 311.4 | 297.9 |
| IDGNN | 1.9 | 1.8 | 27.3 | 68.6 | 78.3 | 0.6 | 2.2 | 49 | 49.5 | 49.9 | 105.3 | 45.4 | 62.8 |
| NGNN | 1.5 | 2.1 | 24.4 | 75.4 | 82.6 | 0.3 | 1.3 | 40.2 | 43.9 | 52.2 | 104.4 | 39.2 | 72.9 |
| GNNAK | 4.5 | 40.7 | 7.5 | 47.9 | 48.8 | 0.4 | 4.1 | 13.3 | 23.8 | 79.8 | 4.3 | 11.2 | 131.1 |
| I2-GNN | 1.5 | 2.6 | 4.1 | 54.4 | 63.8 | 0.3 | 1.6 | 2.8 | 8.2 | 39.9 | 1.1 | 1.0 | 1.3 |
| PPGN | 0.3 | 1.7 | 4.1 | 15.1 | 21.7 | 0.3 | 0.9 | 3.6 | 7.1 | 27.1 | 2.6 | 1.5 | 14.4 |
| PSDS | 2.2±0.1 | 2.6±0.4 | 4.9±0.8 | 9.9±0.5 | 15.8±0.2 | 0.6±0.7 | 2.2±0.3 | 5.8±0.6 | 25.1±0.7 | 57.7±0.3 | 6.0±1.3 | 29.8±3.0 | 56.4±4.7 |
| PST | 0.7±0.1 | 1.1±0.1 | 1.5±0.1 | 2.2±0.1 | 3.3±0.3 | 0.8±0.1 | 1.9±0.2 | 3.1±0.3 | 4.9±0.3 | 8.6±0.5 | 3.0±0.1 | 4.0±0.7 | 9.2±0.9 |

7.1. Graph Substructure Counting

As Chen et al. (2020) highlight, the ability to count substructures is a crucial metric for assessing expressivity. We evaluate our model's substructure counting capabilities on synthetic graphs following Huang et al. (2023b). The considered substructures include paths of lengths 2 to 6, cycles of lengths 3 to 7, and other substructures like tailed triangles (TT), chordal cycles (CC), and triangle-rectangles (TR). Our task involves predicting the number of paths originating from each node and the number of cycles and other substructures in which each node participates. We compare our Point Set Transformer (PST) with expressive GNN models, including ID-GNN (You et al., 2021), NGNN (Zhang & Li, 2021), GNNAK+ (Zhao et al., 2022), I2-GNN (Huang et al., 2023b), and PPGN (Maron et al., 2019a). Baseline results are from Huang et al. (2023b), where uncertainties are unknown. Results are in Table 1. Following Huang et al. (2023b), a model can count a substructure if its normalized test Mean Absolute Error (MAE) is below 10^-2 (10 units in the table). Remarkably, our PST counts all listed substructures, which aligns with our Theorem 5.4 and Theorem 5.5, while the second-best model, I2-GNN, counts only 10 out of 13 substructures. PSDS can also count 8 out of 13 substructures, showcasing the versatility of our graph-to-set method.

7.2. Graph Property Prediction

We conduct experiments on four real-world graph datasets: QM9 (Wu et al., 2017), ZINC, ZINC-full (Gómez-Bombarelli et al., 2016), and ogbg-molhiv (Hu et al., 2020). PST excels in performance, and PSDS performs comparably to GIN (Xu et al., 2019a). PST also outperforms all baselines on TU datasets (Ivanov et al., 2019) (see Appendix J). For the QM9 dataset, we compare PST with various expressive GNNs, including models considering Euclidean distances (1-GNN, 1-2-3-GNN (Morris et al., 2019), DTNN (Wu et al., 2017), PPGN (Maron et al., 2019a)) and those focusing solely on graph structure (Deep LRP (Chen et al., 2020), NGNN (Zhang & Li, 2021), I2-GNN (Huang et al., 2023b), 2-DRFWL(2) GNN (Zhou et al., 2023)). For a fair comparison, we introduce two versions of our model: PST without Euclidean distance (PST) and PST with Euclidean distance (PST*). Results in Table 2 show that PST outperforms all baseline models without Euclidean distance on 11 out of 12 targets, with an average 11% reduction in loss compared to the strongest baseline, 2-DRFWL(2) GNN.
PST* outperforms all Euclidean-distance-based baselines on 8 out of 12 targets, with an average 4% reduction in loss compared to the strongest baseline, 1-2-3-GNN. Both models rank second in performance for the remaining targets. PSDS without Euclidean distance also outperforms baselines on 6 out of 12 targets.

For the ZINC, ZINC-full, and ogbg-molhiv datasets, we evaluate PST and PSDS against a range of expressive GNNs and graph transformers (GTs). This set of models includes expressive MPNNs and subgraph GNNs: GIN (Xu et al., 2019b), SUN (Frasca et al., 2022), SSWL (Zhang et al., 2023a), 2-DRFWL(2) GNN (Zhou et al., 2023), CIN (Bodnar et al., 2021), and NGNN (Zhang & Li, 2021); and GTs: Graphormer (Ying et al., 2021), GPS (Rampášek et al., 2022), Graph MLP-Mixer (He et al., 2023), Specformer (Bo et al., 2023), SignNet (Lim et al., 2023), and Grit (Ma et al., 2023b). Performance results for the expressive GNNs are sourced from Zhou et al. (2023), while those for the graph transformers are extracted from He et al. (2023), Ma et al. (2023b), and Lim et al. (2023). The comprehensive results are presented in Table 3. Notably, our PST outperforms all baseline models on the ZINC-full dataset, achieving an 18% reduction in loss. On the ogbg-molhiv dataset, our PST also delivers competitive results, with only CIN and Graphormer surpassing it. Overall, PST demonstrates exceptional performance across these four diverse datasets, and PSDS also performs comparably to representative GNNs like GIN (Xu et al., 2019a).

Table 2. MAE (↓) on the QM9 dataset. * denotes models with 3D coordinates or features as input. LRP: Deep LRP (Chen et al., 2020). DF: 2-DRFWL(2) GNN (Zhou et al., 2023). 1GNN: 1-GNN. 123: 1-2-3-GNN (Morris et al., 2019).

| Target | Unit | LRP | NGNN | I2GNN | DF | PSDS | PST | 1GNN* | DTNN* | 123* | PPGN* | PST* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| μ | 10^-1 D | 3.64 | 4.28 | 4.28 | 3.46 | 3.53±0.05 | 3.19±0.04 | 4.93 | 2.44 | 4.76 | 2.31 | 0.23±0.01 |
| α | 10^-1 a_0^3 | 2.98 | 2.90 | 2.30 | 2.22 | 2.05±0.02 | 1.89±0.04 | 7.80 | 9.50 | 2.70 | 3.82 | 0.78±0.05 |
| ε_HOMO | 10^-2 meV | 6.91 | 7.21 | 7.10 | 6.15 | 6.56±0.03 | 5.98±0.09 | 8.73 | 10.56 | 9.17 | 7.51 | 2.98±0.08 |
| ε_LUMO | 10^-2 meV | 7.54 | 8.08 | 7.27 | 6.12 | 6.31±0.05 | 5.84±0.08 | 9.66 | 13.93 | 9.55 | 7.81 | 2.20±0.07 |
| Δε | 10^-2 meV | 9.61 | 10.34 | 10.34 | 8.82 | 9.13±0.04 | 8.46±0.07 | 13.33 | 30.48 | 13.06 | 11.05 | 4.47±0.09 |
| R^2 | a_0^2 | 19.30 | 20.50 | 18.64 | 15.04 | 14.35±0.02 | 13.08±0.16 | 34.10 | 17.00 | 22.90 | 16.07 | 0.93±0.03 |
| ZPVE | 10^-2 meV | 1.50 | 0.54 | 0.38 | 0.46 | 0.41±0.02 | 0.39±0.01 | 3.37 | 4.68 | 0.52 | 17.42 | 0.26±0.01 |
| U_0 | meV | 11.24 | 8.03 | 5.74 | 4.24 | 3.53±0.11 | 3.46±0.17 | 63.13 | 66.12 | 1.16 | 6.37 | 3.33±0.19 |
| U | meV | 11.24 | 9.82 | 5.61 | 4.16 | 3.49±0.05 | 3.55±0.10 | 56.60 | 66.12 | 3.02 | 6.37 | 3.26±0.05 |
| H | meV | 11.24 | 8.30 | 7.32 | 3.95 | 3.47±0.04 | 3.49±0.20 | 60.68 | 66.12 | 1.14 | 6.23 | 3.29±0.21 |
| G | meV | 11.24 | 13.31 | 7.10 | 4.24 | 3.56±0.14 | 3.55±0.17 | 52.79 | 66.12 | 1.28 | 6.48 | 3.25±0.15 |
| C_v | 10^-2 cal/mol/K | 12.90 | 17.40 | 7.30 | 9.01 | 8.35±0.09 | 7.77±0.15 | 27.00 | 243.00 | 9.44 | 18.40 | 3.63±0.13 |
Table 3. Results on graph property prediction tasks.

| Model | ZINC (MAE) | ZINC-full (MAE) | molhiv (AUC) |
|---|---|---|---|
| GIN | 0.163±0.004 | 0.088±0.002 | 77.07±1.49 |
| GNN-AK+ | 0.080±0.001 | - | 79.61±1.19 |
| ESAN | 0.102±0.003 | 0.029±0.003 | 78.25±0.98 |
| SUN | 0.083±0.003 | 0.024±0.003 | 80.03±0.55 |
| SSWL | 0.083±0.003 | 0.022±0.002 | 79.58±0.35 |
| DRFWL | 0.077±0.002 | 0.025±0.003 | 78.18±2.19 |
| CIN | 0.079±0.006 | 0.022±0.002 | 80.94±0.57 |
| NGNN | 0.111±0.003 | 0.029±0.001 | 78.34±1.86 |
| Graphormer | 0.122±0.006 | 0.052±0.005 | 80.51±0.53 |
| GPS | 0.070±0.004 | - | 78.80±1.01 |
| GMLP-Mixer | 0.077±0.003 | - | 79.97±1.02 |
| SAN | 0.139±0.006 | - | 77.75±0.61 |
| Specformer | 0.066±0.003 | - | 78.89±1.24 |
| SignNet | 0.084±0.006 | 0.024±0.003 | - |
| Grit | 0.059±0.002 | 0.024±0.003 | - |
| PSDS | 0.162±0.007 | 0.049±0.002 | 74.92±1.18 |
| PST | 0.063±0.003 | 0.018±0.001 | 80.32±0.71 |

7.3. Long-Range Graph Benchmark

To assess the long-range capacity of our Point Set Transformer (PST), we conducted experiments on the Long-Range Graph Benchmark (Dwivedi et al., 2022b). Following He et al. (2023), we compared our model to a range of baseline models, including GCN (Kipf & Welling, 2017), GINE (Xu et al., 2019a), Gated GCN (Bresson & Laurent, 2017), SAN (Kreuzer et al., 2021), Graphormer (Ying et al., 2021), GMLP-Mixer, Graph ViT (He et al., 2023), and Grit (Ma et al., 2023b). PST outperforms all baselines on the PascalVOC-SP and Peptides-Func datasets and achieves the third-highest performance on the Peptides-Struct dataset. PSDS consistently outperforms GCN and GINE. These results showcase the remarkable long-range interaction capturing abilities of our methods across various benchmark datasets. Note that a contemporary work (Tönshoff et al., 2023) points out that even vanilla MPNNs can achieve performance similar to Graph Transformers on LRGB with better hyperparameters, which implies that LRGB is not a rigorous benchmark. However, for comparison with previous work, we maintain the original settings on the LRGB datasets.

Table 4. Results on the Long-Range Graph Benchmark. * means using Random Walk Structural Encoding (Dwivedi et al., 2022a), and ** means Laplacian Eigenvector Encoding (Dwivedi et al., 2023).

| Model | PascalVOC-SP (F1 score) | Peptides-Func (AP) | Peptides-Struct (MAE) |
|---|---|---|---|
| GCN | 0.1268±0.0060 | 0.5930±0.0023 | 0.3496±0.0013 |
| GINE | 0.1265±0.0076 | 0.5498±0.0079 | 0.3547±0.0045 |
| Gated GCN | 0.2873±0.0219 | 0.5864±0.0077 | 0.3420±0.0013 |
| Gated GCN* | 0.2860±0.0085 | 0.6069±0.0035 | 0.3357±0.0006 |
| Transformer** | 0.2694±0.0098 | 0.6326±0.0126 | 0.2529±0.0016 |
| SAN* | 0.3216±0.0027 | 0.6439±0.0075 | 0.2545±0.0012 |
| SAN** | 0.3230±0.0039 | 0.6384±0.0121 | 0.2683±0.0043 |
| GraphGPS | 0.3748±0.0109 | 0.6535±0.0041 | 0.2500±0.0005 |
| Exphormer | 0.3975±0.0037 | 0.6527±0.0043 | 0.2481±0.0007 |
| GMLP-Mixer | - | 0.6970±0.0080 | 0.2475±0.0015 |
| Graph ViT | - | 0.6942±0.0075 | 0.2449±0.0016 |
| Grit | - | 0.6988±0.0082 | 0.2460±0.0012 |
| PSDS | 0.2134±0.0050 | 0.5965±0.0064 | 0.2621±0.0036 |
| PST | 0.4010±0.0072 | 0.6984±0.0051 | 0.2470±0.0015 |

8. Conclusion

We introduce a novel approach employing symmetric rank decomposition to transform the interconnected nodes in a graph into independent points with coordinates. Additionally, we propose the Point Set Transformer to encode the point set. Our approach demonstrates remarkable theoretical expressivity and excels in real-world performance, addressing both short-range and long-range tasks effectively. It extends the design space of GNNs and provides a principled way to inject graph structural information into Transformers.

9. Limitations

PST's scalability is still constrained by the Transformer architecture. To overcome this, acceleration techniques such as sparse attention and linear attention could be explored, which will be our future work.

Impact Statement

This paper presents work whose goal is to advance the field of graph representation learning and will improve the design of graph generation and prediction models.
There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. Acknowledgement Xiyuan Wang and Muhan Zhang are partially supported by the National Key R&D Program of China (2022ZD0160300), the National Key R&D Program of China (2021ZD0114702), the National Natural Science Foundation of China (62276003), and Alibaba Innovative Research Program. Pan Li is supported by the National Science Foundation award IIS-2239565. Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In SIGKDD, 2019. Batzner, S., Musaelian, A., Sun, L., Geiger, M., Mailoa, J. P., Kornbluth, M., Molinari, N., Smidt, T. E., and Kozinsky, B. E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature communications, 13(1):1 11, 2022. Bevilacqua, B., Frasca, F., Lim, D., Srinivasan, B., Cai, C., Balamurugan, G., Bronstein, M. M., and Maron, H. Equivariant subgraph aggregation networks. In ICLR, 2022. Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. In ICLR, 2023. Bodnar, C., Frasca, F., Otter, N., Wang, Y., Li o, P., Mont ufar, G. F., and Bronstein, M. M. Weisfeiler and lehman go cellular: CW networks. In Neur IPS, 2021. Bouritsas, G., Frasca, F., Zafeiriou, S., and Bronstein, M. M. Improving graph neural network expressivity via subgraph isomorphism counting. TPAMI, 45(1), 2023. Bresson, X. and Laurent, T. Residual gated graph convnets, 2017. Chen, H., Liu, S., Chen, W., Li, H., and Jr., R. W. H. Equivariant point network for 3d point cloud analysis. In CVPR, 2021. Chen, Z., Chen, L., Villar, S., and Bruna, J. Can graph neural networks count substructures? In Neur IPS, 2020. Cohen, T. S., Geiger, M., K ohler, J., and Welling, M. Spherical cnns. In ICLR, 2018. Deng, C., Litany, O., Duan, Y., Poulenard, A., Tagliasacchi, A., and Guibas, L. J. Vector neurons: A general framework for so(3)-equivariant networks. In ICCV, 2021. Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs, 2020. Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Graph neural networks with learnable structural and positional representations. In ICLR, 2022a. Dwivedi, V. P., Ramp asek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph benchmark. In Neur IPS, 2022b. Dwivedi, V. P., Joshi, C. K., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. J. Mach. Learn. Res., 24:43:1 43:48, 2023. Dym, N. and Maron, H. On the universality of rotation equivariant point cloud networks. In ICLR, 2021. Feng, J., Chen, Y., Li, F., Sarkar, A., and Zhang, M. How powerful are k-hop message passing graph neural networks. In Neur IPS, 2022. Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. Co RR, abs/1903.02428, 2019. Frasca, F., Bevilacqua, B., Bronstein, M. M., and Maron, H. Understanding and extending subgraph gnns by rethinking their symmetries. In Neur IPS, 2022. Fuchs, F., Worrall, D., Fischer, V., and Welling, M. Se(3)- transformers: 3d roto-translation equivariant attention networks. Neur IPS, 2020. F urer, M. On the combinatorial power of the weisfeilerlehman algorithm. In CIAC, volume 10236, pp. 260 271, 2017. Gasteiger, J., Becker, F., and G unnemann, S. Gemnet: Universal directional graph neural networks for molecules. In Neur IPS, 2021. Gilmer, J., Schoenholz, S. S., Riley, P. 
F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In ICML, 2017. G omez-Bombarelli, R., Duvenaud, D., Hern andez-Lobato, J. M., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules, 2016. Hamilton, W. L., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In Neur IPS, 2017. Graph As Point Set He, X., Hooi, B., Laurent, T., Perold, A., Le Cun, Y., and Bresson, X. A generalization of vit/mlp-mixer to graphs. In ICML, 2023. Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. In Neur IPS, 2020. Huang, Y., Lu, W., Robinson, J., Yang, Y., Zhang, M., Jegelka, S., and Li, P. On the stability of expressive positional encodings for graph neural networks. ar Xiv preprint ar Xiv:2310.02579, 2023a. Huang, Y., Peng, X., Ma, J., and Zhang, M. Boosting the cycle counting power of graph neural networks with i$ˆ2$-gnns. In ICLR, 2023b. Huang, Y., Lu, W., Robinson, J., Yang, Y., Zhang, M., Jegelka, S., and Li, P. On the stability of expressive positional encodings for graph neural networks. ICLR, 2024. Hutchinson, M. J., Lan, C. L., Zaidi, S., Dupont, E., Teh, Y. W., and Kim, H. Lietransformer: Equivariant selfattention for lie groups. In ICML, 2021. Ivanov, S., Sviridov, S., and Burnaev, E. Understanding isomorphism bias in graph data sets. Co RR, abs/1910.12091, 2019. Kim, J., Oh, S., and Hong, S. Transformers generalize deepsets and can be extended to graphs & hypergraphs. In Neur IPS, 2021. Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In ICLR, 2017. Kreuzer, D., Beaini, D., Hamilton, W. L., L etourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. In Neur IPS, 2021. Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: Design provably more powerful neural networks for graph representation learning. In Neur IPS, 2020. Lim, D., Robinson, J. D., Zhao, L., Smidt, T. E., Sra, S., Maron, H., and Jegelka, S. Sign and basis invariant networks for spectral graph representation learning. In ICLR, 2023. Ma, G., Wang, Y., and Wang, Y. Laplacian canonization: A minimalist approach to sign and basis invariant spectral embedding. In Neur IPS, 2023a. Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P. H. S., and Lim, S. Graph inductive biases in transformers without message passing. In ICML, 2023b. Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. In Neur IPS, 2019a. Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. Invariant and equivariant graph networks. In IGN, 2019b. Mialon, G., Chen, D., Selosse, M., and Mairal, J. Graphit: Encoding graph structure in transformers, 2021. Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, 2019. Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., K opf, A., Yang, E. Z., De Vito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Neur IPS, pp. 8024 8035, 2019. Perepechko, S. and Voropaev, A. The number of fixed length cycles in an undirected graph. 
explicit formulae in case of small lengths. MMCP, 148, 2009. Puntanen, S., Styan, G. P., and Isotalo, J. Matrix tricks for linear statistical models: our personal top twenty. Springer, 2011. Qian, C., Rattan, G., Geerts, F., Niepert, M., and Morris, C. Ordered subgraph aggregation networks. In Neur IPS, 2022. Ramp asek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. In Neur IPS, 2022. Satorras, V. G., Hoogeboom, E., and Welling, M. E (n) equivariant graph neural networks. In ICML, 2021. Sch utt, K., Unke, O., and Gastegger, M. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In ICML, 2021. Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., and Borgwardt, K. M. Weisfeiler-lehman graph kernels. JMLR, 2011. Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. In ICML, 2023. Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., and Riley, P. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. ar Xiv preprint ar Xiv:1802.08219, 2018. Graph As Point Set T onshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? reassessing the long-range graph benchmark. Co RR, abs/2309.00367, 2023. Wang, H., Yin, H., Zhang, M., and Li, P. Equivariant and stable positional encoding for more powerful graph neural networks. In ICLR, 2022. Wang, X. and Zhang, M. Graph neural network with local frame for molecular potential energy surface. Lo G, 2022. Weiler, M., Geiger, M., Welling, M., Boomsma, W., and Cohen, T. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In Neur IPS, 2018. Wijesinghe, A. and Wang, Q. A new perspective on how graph neural networks go beyond weisfeiler-lehman? . In ICLR, 2022. Winkels, M. and Cohen, T. S. 3d g-cnns for pulmonary nodule detection, 2018. Worrall, D. E., Garbin, S. J., Turmukhambetov, D., and Brostow, G. J. Harmonic networks: Deep translation and rotation equivariance. In CVPR, 2017. Wu, Z., Ramsundar, B., Feinberg, E. N., Gomes, J., Geniesse, C., Pappu, A. S., Leswing, K., and Pande, V. S. Moleculenet: A benchmark for molecular machine learning, 2017. Wu, Z., Jain, P., Wright, M. A., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. Representing long-range context for graph neural networks with global attention. In Neur IPS, 2021. Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In ICLR, 2019a. Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In ICLR, 2019b. Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T. Do transformers really perform badly for graph representation? In Neur IPS, 2021. You, J., Selman, J. M. G., Ying, R., and Leskovec, J. Identity-aware graph neural networks. In AAAI, 2021. Zaheer, M., Kottur, S., Ravanbakhsh, S., P oczos, B., Salakhutdinov, R., and Smola, A. J. Deep sets. In Neur IPS, 2017. Zhang, B., Feng, G., Du, Y., He, D., and Wang, L. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests. In ICML, 2023a. Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of gnns via graph biconnectivity. In ICLR, 2023b. Zhang, M. and Li, P. Nested graph neural networks. In Neur IPS, 2021. Zhang, M., Cui, Z., Neumann, M., and Chen, Y. 
An end-toend deep learning architecture for graph classification. In AAAI, 2018. Zhao, L., Jin, W., Akoglu, L., and Shah, N. From stars to subgraphs: Uplifting any GNN with local structure awareness. In ICLR, 2022. Zhou, J., Feng, J., Wang, X., and Zhang, M. Distancerestricted folklore weisfeiler-leman gnns with provable cycle counting power, 2023. Graph As Point Set A.1. Proof of Proposition 2.3 The matrices QT 1 Q1 and QT 2 Q2 in Rr r are full rank and thus invertible. This allows us to derive the following equations: L = Q1QT 1 = Q2QT 2 (9) Q1QT 1 = Q2QT 2 QT 1 Q1QT 1 = QT 1 Q2QT 2 (10) QT 1 = (QT 1 Q1) 1QT 1 Q2QT 2 (11) R Rm m, QT 1 = RQT 2 (12) R Rm m, Q1 = Q2R (13) Q1QT 1 = Q2RRT QT 2 = Q2QT 2 QT 2 Q2RRT QT 2 Q2 = QT 2 Q2QT 2 Q2 (14) RRT = (QT 2 Q2) 1QT 2 Q2QT 2 Q2(QT 2 Q2) 1 = I (15) Since R is orthogonal, any two full rank Q matrices are connected by an orthogonal transformation. Furthermore, if there exists an orthogonal matrix R where RRT = I, then Q1 = Q2R, and L = Q1QT 1 = Q2RRT QT 2 = Q2QT 2 . A.2. Matrix D+A is Always Positive Semi-Definite x T (D + A)x = X (i,j) E xixj + X j V Aij)x2 i (16) (i,j) E xixj + 1 (i,j) E x2 i + 1 (i,j) E x2 j (17) (i,j) E (xi + xj)2 0 (18) Therefore, D + A is always positive semi-definite. A.3. Proof of Theorem 3.1 We restate the theorem here: Theorem A.1. Given two graphs G = (V, A, X) and G = (V , A , X ) with degree matrices D and D , respectively, the two graphs are isomorphic (G G ) if and only if R O(r), {{(Xv, RQv)| v V }} = {{(Xv, Q v)|v V }}, where r denotes the rank of matrix Q, and Q and Q are the symmetric rank decompositions of D + A and D + A respectively. Proof. Two graphs are isomorphic π Πn, π(A) = A and π(X) = X . Now we prove that π Πn, π(A) = A and π(X) = X R O(r), {{(Xv, RQv)|v V }} = {{(Xv, Q v)|v V }}. When π Πn, π(A) = A and π(X) = X , as π(Q)π(Q)T = π(A + D) = A + D = Q Q T , (19) according to Proposition 2.3, R O(r), π(Q)RT = Q . Moreover, π(X) = X , so {{(Xv, RQv)|v V }} = {{(X v, Q v)|v V }} (20) When R O(r), {{(Xv, RQv)|v V }} = {{(X v, Q v)|v V }}, there exists permutation π Πn, π(X) = X , π(Q)RT = Q . Therefore, π(A + D) = π(Q)π(Q)T = π(Q)RT Rπ(Q)T = Q Q T = A + D (21) Graph As Point Set As A = D + A 1 2diag((D + A) 1), A = D + A 1 2diag((D + A) 1), where 1 Rn is an vector with all elements = 1. π(A) = A (22) A.4. Proof of Theorem 3.3 Now we restate the theorem. Theorem A.2. Given two graphs G = (V, A, X) and G = (V , A , X ), and an injective permutation-equivariant function Z mapping adjacency matrix to a symmetric matrix: (1) For all permutation-equivariant function f, if G G , then the two sets of PSRD coordinates are equal up to an orthogonal transformation, i.e., R O(r), {{Xv, RQ(Z(A), f)v|v V }} = {{X v, Q(Z(A ), f)v|v V }}, where r is the rank of A, Q, Q are the PSRD coordinates of A and A respectively. (2) There exists a continuous permutation-equivariant function f : Rr Rr 2, such that G G if R O(r), i = 1, 2, ..., d, {{(Xv, RQ(Z(A), f1)v, RQ(Z(A), f2)v)|v V }} = {{(X v, Q(Z(A ), f1)v, Q(Z(A ), f2)v)|v V }}, where f1 : Rr Rr and f2 : Rr Rr are two output channels of f. Proof. First, as Z is an injective permutation equivariant function, forall permutation π Πn π(Z(A)) = Z(A ) Z(π(A)) = Z(A ) π(A) = A . (23) Therefore, two matrix are isomorphic π Πn, π(X) = X , π(Z) = Z , where Z, Z denote Z(A), Z(A ) respectively. In this proof, we denote eigendecomposition as Z = Udiag(Λ)U T and Z = U diag(Λ )U T , where elements in Λ and Λ are sorted in ascending order. 
Let the multiplicity of eigenvalues in Z be r1, r2, ..., rl, corresponding to eigenvalues λ1, λ2, ..., λi. (1) If G G , there exists a permutation π Πn, π(X) = X , π(Z) = Z . π(Z) = Z Z = π(U)diag(Λ)π(U)T = U diag(Λ )U T . (24) π(U)diag(Λ)π(U)T is also an eigendecomposition of Z , so Λ = Λ as they are both sorted in ascending order. Moreover, since π(U), U are both matrices of eigenvectors, they can differ only in the choice of bases in each eigensubspace. So there exists a block diagonal matrix V with orthogonal matrix V1 O(r1), V2 O(r2), ..., Vl O(rl) as diagonal blocks that π(U)V = U . As f is a permutation equivariant function, Λi = Λj π Πr, π(i) = j, π(j = i), π(Λ) = Λ (25) π Πr, π(i) = j, π(j = i), π(f(Λ)) = f(π(Λ)) = f(Λ) (26) f(Λ)i = f(Λ)j (27) Therefore, f will produce the same value on positions with the same eigenvalue. Therefore, f can be consider as a block diagonal matrix with f1Ir1, f2Ir2, ..., fl Irl as diagonal blocks, where fi R is f(Λ)pi, pi is a position that Λpi = λi, and Ir is an identity matrix Rr r. Therefore, V diag(f(Λ)) = diag(f1V1, f2V2, ..., fl Vl) = diag(f(Λ))V. (28) π(Q(Z, f))V = π(U)diag(f(Λ))V (29) = π(U)V diag(f(Λ)) (30) = U diag(f(Λ )) (31) = Q(Z , f) (32) As V V T = I, V O(r), R O(r), {{Xv, RQ(Z(A), f)v|v V }} = {{X v, Q(Z(A ), f)v|v V }} (33) Graph As Point Set (2) We simply define f1 is element-wise abstract value and square root p |.|, f1 is element-wise abstract value and square root multiplied with its sign sgn(|.|) p |.|. Therefore, f1, f2 are continuous and permutation equivariant. {{Xv, RQ(Z(A), f1)v, RQ(Z(A), f2)v|v V }} = {{X v, Q(Z(A ), f1)v, Q(Z(A ), f2)v|v V }}. then there exist π Πn, so that π(X) = X (34) π(U)diag(f1(Λ))RT = U diag(f1(Λ )) (35) π(U)diag(f2(Λ))RT = U diag(f2(Λ )). (36) π(Z) = π(U)diag(f1(Λ))diag(f2(Λ))π(U )T (37) = π(U)diag(f1(Λ))RRT diag(f2(Λ))π(U )T (38) = U diag(f1(Λ ))diag(f2(Λ ))U T (39) As π(Z) = Z , π(X) = X , two graphs are isomorphic. A.5. Proof of Theorem 5.3 Let Hl denote a circle of l nodes. Let Gl denote a graph of two connected components, one is H l/2 and the other is H l/2 . Obviously, there exists a node pair in Gl with shortest path distance equals to infinity, while Hl does not have such a node pair. So the multiset of shortest path distance is easy to distinguish them. However, they are regular graphs with node degree all equals to 2, so MPNN cannot distinguish them: Lemma A.3. For all nodes v, u in Gl, Hl, they have the same representation produced by k-layer MPNN, forall k N. Proof. We proof it by induction. k = 0. Initialization, all node with trivial node feature and are the same. Assume k 1-layer MPNN still produce representation h for all node. At the k-th layer, each node s representation will be updated with its own representation and two neighbors representations as follows. h, {{h, h}} (41) So all nodes still have the same representation. A.6. Proof of Theorem 5.2 and B.3 Given two function f, g, f can be expressed by g means that there exists a function ϕ ϕ g = f, which is equivalent to given arbitrary input H, G, f(H) = f(G) g(H) = g(G). We use f g to denote that f can be expressed with g. If both f g and g f, there exists a bijective mapping between the output of f to the output of g, denoted as f g. Here are some basic rule. g h f g f h. g h, f s f g s h. f is bijective, f g g Graph As Point Set 2-Folklore-Weisfeiler-Leman test (2-FWL) produce a color ht ij for each node pair (i, j) at t-th iteration. It updates the color as follows, ht+1 ij = hash(ht ij, {{(ht ik, ht kj)|k V }}). 
A.5. Proof of Theorem 5.3

Let $H_l$ denote a cycle of $l$ nodes. Let $G_l$ denote a graph with two connected components, one being $H_{\lceil l/2\rceil}$ and the other $H_{\lfloor l/2\rfloor}$. Obviously, there exists a node pair in $G_l$ whose shortest path distance equals infinity, while $H_l$ has no such pair, so the multiset of shortest path distances easily distinguishes them. However, both are regular graphs in which every node has degree 2, so MPNNs cannot distinguish them:

Lemma A.3. For all nodes $v, u$ in $G_l, H_l$, they have the same representation produced by a $k$-layer MPNN, for all $k\in\mathbb{N}$.

Proof. We prove it by induction. For $k = 0$: at initialization, all nodes carry the same trivial node feature, so their representations are identical. Assume a $(k-1)$-layer MPNN still produces the same representation $h$ for every node. At the $k$-th layer, each node's representation is updated from its own representation and its two neighbors' representations,
$h, \{\{h, h\}\},$    (41)
so all nodes still share the same representation.

A.6. Proof of Theorem 5.2 and B.3

Given two functions $f, g$, "$f$ can be expressed by $g$" means that there exists a function $\phi$ with $\phi\circ g = f$, which is equivalent to: for arbitrary inputs $H, G$, $g(H) = g(G) \Rightarrow f(H) = f(G)$. We use $f\preceq g$ to denote that $f$ can be expressed with $g$. If both $f\preceq g$ and $g\preceq f$, there exists a bijective mapping between the outputs of $f$ and $g$, denoted as $f\simeq g$. Some basic rules: if $g\simeq h$, then $f\preceq g \Leftrightarrow f\preceq h$; if $f\preceq s$ and $g\preceq h$, then $(f, g)\preceq (s, h)$; and if $f$ is bijective, then $f\circ g\simeq g$.

The 2-Folklore-Weisfeiler-Leman test (2-FWL) produces a color $h^t_{ij}$ for each node pair $(i, j)$ at the $t$-th iteration. It updates the colors as
$h^{t+1}_{ij} = \mathrm{hash}(h^t_{ij}, \{\{(h^t_{ik}, h^t_{kj})\,|\,k\in V\}\}).$    (42)
The color of the whole graph is
$h^t_G = \mathrm{hash}(\{\{h^t_{ij}\,|\,(i, j)\in V\times V\}\}).$    (43)
Initially, the tuple color hashes the node features and the edge between the node pair, $h^0_{ij} \simeq (\delta_{ij}, A_{ij}, X_i, X_j)$. We are going to prove:

Lemma A.4. For all $t\in\mathbb{N}$, $h^t_{ij}$ can express $A^k_{ij}$, $k = 0, 1, 2, ..., 2^t$, where $A$ is the adjacency matrix of the input graph.

Proof. We prove it by induction on $t$. When $t = 0$, $h^0_{ij}\succeq A_{ij}, I_{ij}$ by initialization. If $t > 0$, assume that for all $t' < t$, $h^{t'}_{ij}\succeq A^k_{ij}$, $k = 0, 1, 2, ..., 2^{t'}$. For all $k = 0, 1, ..., 2^t$,
$h^t_{ij} \succeq \mathrm{hash}(h^{t-1}_{ij}, \{\{(h^{t-1}_{iv}, h^{t-1}_{vj})\,|\,v\in V\}\})$    (44)
$\succeq \mathrm{hash}(h^{t-1}_{ij}, \{\{(A^{\lceil k/2\rceil}_{iv}, A^{\lfloor k/2\rfloor}_{vj})\,|\,v\in V\}\})$    (45)
$\succeq \sum_{v\in V} A^{\lceil k/2\rceil}_{iv}A^{\lfloor k/2\rfloor}_{vj} = A^k_{ij}.$    (46)

To prove that $t$-iteration 2-FWL cannot compute shortest path distances larger than $2^t$, we construct an example.

Lemma A.5. Let $H_l$ denote a cycle of $l$ nodes. Let $G_l$ denote a graph with two connected components, one being $H_{\lceil l/2\rceil}$ and the other $H_{\lfloor l/2\rfloor}$. For all $K\in\mathbb{N}^+$, $K$-iteration 2-FWL cannot distinguish $H_{l_K}$ and $G_{l_K}$, where $l_K = 2\cdot 2^{(2K)}$. However, $G_{l_K}$ contains node tuples with shortest path distance $2^K + 1$ between them while $H_{l_K}$ does not; any model that can count shortest path distances up to $2^K + 1$ can detect this.

Proof. Given a fixed $t$, we enumerate the iterations of 2-FWL. Given the two graphs $H_{l_K}, G_{l_K}$, we partition all tuples in each graph according to the shortest path distance between their nodes: $c_0, c_1, ..., c_d, ..., c_{2^K}$, where $c_d$ contains all tuples whose shortest path distance is $d$, and $c_{>2^K}$ contains all tuples whose shortest path distance is larger than $2^K$. We are going to prove that at the $k$-th layer ($k\le K$), all $c_i$ tuples with $i\le 2^k$ have the same representation (denoted $h^k_i$), and the $c_{2^k+1}, c_{2^k+2}, ..., c_{2^K}, c_{>2^K}$ tuples all have the same representation (denoted $h^k_{2^k+1}$).

Initially, all $c_0$ tuples have representation $h^0_0$ and all $c_1$ tuples have the same representation $h^0_1$ in both graphs, and all other tuples have the same representation $h^0_2$. Assume that at the $k$-th layer, all $c_i$ tuples with $i\le 2^k$ have the same representation $h^k_i$, and the $c_{2^k+1}, c_{2^k+2}, ..., c_{2^K}, c_{>2^K}$ tuples all have the same representation $h^k_{2^k+1}$. At the $(k+1)$-th layer, each representation is updated as
$h^{k+1}_{ij} \simeq \big(h^k_{ij}, \{\{(h^k_{iv}, h^k_{vj})\,|\,v\in V\}\}\big).$    (48)
For every tuple, the multiset has $l_K$ elements in total. For $c_0$ tuples, the multiset contains: one $(h^k_0, h^k_0)$ for $v = i$, two $(h^k_t, h^k_t)$ for each $t = 1, 2, ..., 2^k$ (as $v$ ranges over the two nodes at distance $t$ from $i$), and all remaining elements equal to $(h^k_{2^k+1}, h^k_{2^k+1})$ (as $v$ lies outside the $2^k$-hop neighborhood of $i$).

For $c_t$ tuples with $t = 1, 2, ..., 2^k$: the multiset contains one $(h^k_a, h^k_{t-a})$ for each $a = 0, 1, 2, ..., t$ (as $v$ lies on the shortest path between $i$ and $j$), one $(h^k_a, h^k_{2^k+1})$ for each $a = 1, 2, ..., 2^k$, one $(h^k_{2^k+1}, h^k_a)$ for each $a = 1, 2, ..., 2^k$, and all other elements equal to $(h^k_{2^k+1}, h^k_{2^k+1})$.

For $c_t$ tuples with $t = 2^k+1, 2^k+2, ..., 2^{k+1}$: the multiset contains one $(h^k_a, h^k_{t-a})$ for each $a = t-2^k, t-2^k+1, ..., 2^k$ (as $v$ lies on the shortest path between $i$ and $j$), one $(h^k_a, h^k_{t-a})$ for each $a\in\{0, 1, 2, ..., t-2^k-1\}\cup\{2^k+1, 2^k+2, ..., 2^{k+1}\}$ (with the convention $h^k_m = h^k_{2^k+1}$ for $m > 2^k$), one $(h^k_a, h^k_{2^k+1})$ for each $a = 1, 2, ..., 2^k$, one $(h^k_{2^k+1}, h^k_a)$ for each $a = 1, 2, ..., 2^k$, and all other elements equal to $(h^k_{2^k+1}, h^k_{2^k+1})$.

For $c_t$ tuples with $t = 2^{k+1}+1, ..., 2^K$ and for $c_{>2^K}$ tuples: the multisets are all the same, namely two $(h^k_a, h^k_{2^k+1})$ and two $(h^k_{2^k+1}, h^k_a)$ for each $a = 1, 2, 3, ..., 2^k$, with all other elements equal to $(h^k_{2^k+1}, h^k_{2^k+1})$. Hence, in both graphs the refined colors again depend only on the class of the tuple, which completes the induction; after $K$ iterations the two graphs receive identical multisets of tuple colors, so $K$-iteration 2-FWL cannot distinguish them.
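The cycle counterexamples used in A.5 and A.6 are easy to reproduce. The sketch below (our own illustration) builds the cycle $H_8$ and the two-component graph $G_8$, checks that every node has degree 2 (so 1-WL/MPNN colors coincide), and computes shortest path distances from powers of $A$, as in Appendix A.7 below, revealing the unreachable pairs that only $G_8$ contains.

```python
import numpy as np

def cycle_adj(l):
    """Adjacency matrix of a cycle with l nodes."""
    A = np.zeros((l, l))
    idx = np.arange(l)
    A[idx, (idx + 1) % l] = 1
    A[(idx + 1) % l, idx] = 1
    return A

def block_diag(A, B):
    n, m = len(A), len(B)
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[n:, n:] = A, B
    return M

def spd_from_powers(A, max_k=None):
    """spd(i, j) = min{k : (A^k)_ij > 0}; unreachable pairs stay at +inf."""
    n = len(A)
    max_k = max_k or n
    spd = np.full((n, n), np.inf)
    np.fill_diagonal(spd, 0)
    P = np.eye(n)
    for k in range(1, max_k + 1):
        P = P @ A
        spd[(P > 0) & np.isinf(spd)] = k
    return spd

H8 = cycle_adj(8)                              # one 8-cycle
G8 = block_diag(cycle_adj(4), cycle_adj(4))    # two 4-cycles

# Both are 2-regular, so message passing from identical initial features
# keeps all node representations equal (Lemma A.3).
print(H8.sum(1), G8.sum(1))                    # all degrees are 2

# The multisets of pairwise distances differ: G8 has unreachable pairs.
dH, dG = spd_from_powers(H8), spd_from_powers(G8)
print(np.isinf(dH).sum(), np.isinf(dG).sum())  # 0 vs 32
```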
A.7. Proof of Theorem 5.1

We can simply choose $f_k(\Lambda) = \Lambda^k$. Then $\langle Q(A, f_0)_i, Q(A, f_k)_j\rangle = A^k_{ij}$, and the shortest path distance is
$\mathrm{spd}(i, j, A) = \arg\min_k\,\{k\in\mathbb{N}\,|\,A^k_{ij} > 0\}.$    (49)

A.8. Proof of Theorem 5.4 and 5.5

This section assumes that the input graph is undirected and unweighted with no self-loops. Let $A$ denote the adjacency matrix of the graph. Note that $A^T = A$ and $A\odot A = A$. In this section, we define $d(X) = \mathrm{diag}(\mathrm{diagonal}(X))$, which sets all non-diagonal elements of $X$ to zero and keeps the diagonal.

An $L$-path is a sequence of edges $[(i_1, i_2), (i_2, i_3), ..., (i_L, i_{L+1})]$ in which all nodes are distinct. An $L$-cycle is an $L$-path except that $i_1 = i_{L+1}$. Two paths/cycles are considered equivalent if their sets of edges are equal. The count of $L$-paths from node $u$ to $v$ is the number of non-equivalent paths with $i_1 = u, i_{L+1} = v$. The count of $L$-cycles rooted at node $u$ is the number of non-equivalent cycles involving node $u$. Perepechko & Voropaev (2009) show that the number of paths can be expressed as a polynomial of $A$, where $A$ is the adjacency matrix of the input unweighted graph. Specifically, let $P_L$ denote the path matrix whose $(u, v)$ element is the number of $L$-paths from $u$ to $v$; Perepechko & Voropaev (2009) provide formulas expressing $P_L$ with $A$ for small $L$.

This section considers a weakened version of the Point Set Transformer. Each layer still consists of an sv-mixer and multi-head attention. However, the attention matrix uses the scalar and vector features before the sv-mixer for $Q, K$, and uses the features after the sv-mixer for $V$. At the $k$-th layer,

sv-mixer:
$s'_i \leftarrow \mathrm{MLP}_1(s_i \,\|\, \mathrm{diagonal}(W_1v_i^Tv_iW_2))$    (50)
$v'_i \leftarrow v_i\,\mathrm{diag}(\mathrm{MLP}_2(s'_i))W_3 + v_iW_4$    (51)

Attention layer:
$Y_{ij} = \mathrm{MLP}_3(K_{ij}),\quad K_{ij} = (W^s_qs_i\odot W^s_ks_j)\,\|\,\mathrm{diagonal}(W^v_qv_i^Tv_jW^v_k).$    (52)
As $s'_i$ and $v'_i$ can express $s_i$ and $v_i$, the weakened version can be expressed by the original version.
$s_i \leftarrow \mathrm{MLP}_4(s'_i \,\|\, \textstyle\sum_j \mathrm{Atten}_{ij}s'_j)$    (53)
$v_i \leftarrow (v'_i \,\|\, \textstyle\sum_j \mathrm{Atten}_{ij}v'_j)W_5$    (54)

Let $Y^k$ denote the attention matrix at the $k$-th layer. $Y^k$ is a learnable function of $A$. Let $\mathcal{Y}_k$ denote the space of polynomials of $A$ that $Y^k$ can express; each element of it is a function from $\mathbb{R}^{n\times n}$ to $\mathbb{R}^{n\times n}$. We now prove some lemmas about $\mathcal{Y}_k$.

Lemma A.6. $\mathcal{Y}_k\subseteq\mathcal{Y}_{k+1}$.

Proof. Due to the residual connections, the scalar and vector representations at layer $k+1$ can always contain those of layer $k$, so the attention matrix at layer $k+1$ can always express that of layer $k$.

Lemma A.7. If $y_1, y_2, ..., y_s\in\mathcal{Y}_k$, their Hadamard product $y_1\odot y_2\odot\cdots\odot y_s\in\mathcal{Y}_k$.

Proof. As $(y_1\odot y_2\odot\cdots\odot y_s)_{ij} = \prod_{l=1}^s(y_l)_{ij}$ is an element-wise polynomial on a compact domain, there exists an MLP (denoted $g$) that takes the $(i, j)$ elements of $y_1, y_2, ..., y_s$ and produces the corresponding element of their Hadamard product. Let $g_0$ be the $\mathrm{MLP}_3$ in Equation 52 that produces the concatenation of $y_1, y_2, ..., y_s$; using $g\circ g_0$ (the composition of the two MLPs) as $\mathrm{MLP}_3$ in Equation 52 produces the Hadamard product.

Lemma A.8. If $y_1, y_2, ..., y_s\in\mathcal{Y}_k$, their linear combination $\sum_{l=1}^s a_ly_l\in\mathcal{Y}_k$, where $a_l\in\mathbb{R}$.

Proof. As $(\sum_{l=1}^s a_ly_l)_{ij} = \sum_{l=1}^s a_l(y_l)_{ij}$ is an element-wise linear map (denoted $g$), let $g_0$ be the $\mathrm{MLP}_3$ in Equation 52 that produces the concatenation of $y_1, y_2, ..., y_s$; using $g\circ g_0$ as $\mathrm{MLP}_3$ in Equation 52 produces the linear combination.

Lemma A.9. For all $s > 0$, $A^s\in\mathcal{Y}_1$.

Proof. As shown in Section 5.1, the inner products of coordinates can produce $A^s$.

Lemma A.10. For all $y_1, y_2, y_3\in\mathcal{Y}_k$ and $s\in\mathbb{N}^+$: $d(y_1)y_2$, $y_2d(y_1)$, $d(y_1)y_2d(y_3)$, $y_1A^s$, $A^sy_1$, $y_1A^sy_2\in\mathcal{Y}_{k+1}$.

Proof. According to Equations 50 and 52, $s'_i$ at the $k$-th layer can express $y_{ii}$ for all $y\in\mathcal{Y}_k$.
Therefore, at k + 1-th layer in Equation 52, MLP3 can first compute element (i, j) (y2)ij from si, sj, vi, vj, then multiply (y2)ij with (y1)ii from si, (y3)jj from sj and thus produce d(y1)y2, y2d(y1), d(y1)y2d(y3). Moreover, according to Equation 53, vi at k+1-th layer can express P k(y1)ikvk, P k(y2)ikvk. So at k+1-th layer, the (i, j) element can express P k(y1)ikvk, vj , vi, P k(y1)jkvk , (y1)ikvk, P k(y2)jkvk , corresponds to y1As, Asy2, y1Asy2, respectively. Lemma A.11. s > 0, l > 0, ai > 0, l i=1Aai Y1 s1, s2 > 0, l > 0, As1d( l i=1Aai), d( l i=1Aai)As1, d( l1 i=1Abi)As1d( l2 i=1Abi) Y2. s1, s2, s3 > 0, As1d( l i=1Aai) Therefore, we come to our main theorem. Theorem A.12. The attention matrix of 1-layer PST can express P2, 2-layer PST can express P3, 3-layer PST can express P4, P5, 5-layer PST can express P6. Proof. As shown in (Perepechko & Voropaev, 2009), P2 = A2 (55) Only one kind basis l i=1Aai. 1-layer PST can express it. P3 = A3 + A Ad(A2) d(A2)A (56) Three kind of basis l i=1Aai(A3, A), As1d( l i=1Aai)(Ad(A2)), and d( l i=1Aai)As1. 2-layer PST can express it. P4 = A4 + A2 + 3A A2 d(A3)A d(A2)A2 Ad(A3) A2d(A2) Ad(A2)A (57) Four kinds of basis l i=1Aai (A4, A2, A A2), As1d( l i=1Aai) (Ad(A3), A2d(A2)), d( l i=1Aai)As1 (d(A3)A, d(A2)A2), and As1d( l i=1Aai)As3 (Ad(A2)A). 3-layer PST can express it. Graph As Point Set P5 = A5 + 3A3 + 4A (58) + 3A2 A2 A + 3A A3 4A A2 (59) d(A4)A d(A3)A2 d(A2)A3 + 2d(A2)2A 2d(A2)A 4d(A2)A (60) Ad(A4) A2d(A3) A3d(A2) + 2Ad(A2)2 2Ad(A2) 4Ad(A2) (61) + d(A2)Ad(A2) (62) + 3(A A2)A (63) + 3A(A A2) (64) Ad(A3)A Ad(A2)A2 A2d(A2)A (65) + d(Ad(A2)A)A (66) Basis are in l i=1Aai: A5, A3, A, A2 A2 A, A A3, A A2 As1d( l i=1Aai):Ad(A4), A2d(A3), A3d(A2), Ad(A2)2, Ad(A2), Ad(A2) d( l i=1Aai)As1: Ad(A4), A2d(A3), A3d(A2), Ad(A2)2, Ad(A2), Ad(A2) d( l1 i=1Aai)As1d( l1 i=1Abi): d(A2)Ad(A2) As1( l i=1Aai): A(A A2) ( l i=1Aai)As1: (A A2)A As Y2: Ad(A3)A, Ad(A2)A2, A2d(A2)A d(Y2)As: d(Ad(A2)A)A 3-layer PST can express it. Graph As Point Set Formula for 6-path matrix is quite long. 
P6 = A6 + 4A4 + 12A2 (67) + 3A A4 + 6A A2 A3 + A2 A2 A2 4A2 A2 + 44A A2 (68) d(A5)A d(A4)A2 d(A3)A3 5d(A3)A d(A2)A4 7d(A2)A2 (69) + 2d(A2)2A2 + 4(d(A2) d(A3))A (70) Ad(A5) A4d(A2) A3d(A3) 5Ad(A3) A2d(A4) 7A2d(A2) (71) + 2A2d(A2)2 + 4A(d(A2) d(A3)) (72) + d(A2)Ad(A3) + d(A3)Ad(A2) + d(A2)A2d(A2) (73) + 2(A A3)A + 2(A A2 A2)A + (A2 A2 A)A 3(A A2)A + (A A3)A (74) + (A A2)A2 + 2(A A2)A2 (A A2)A (75) + 2A(A A3) + 2A(A A2 A2) + A(A2 A2 A) 3A(A A2) + A(A A3) (76) + A2(A A2) + 2A2(A A2) A(A A2) (77) 8A (A(A A2)) 8A ((A A2)A) (78) 12d(A2)(A A2) 12(A A2)d(A2) (79) Ad(A4)A Ad(A2)A3 A3d(A2)A Ad(A3)A2 A2d(A3)A A2d(A2)A2 (80) 10Ad(A2)A + 2Ad(A2)2A (81) + d(A2)Ad(A2)A + Ad(A2)Ad(A2) (82) 3A (Ad(A2)A) (83) 4Ad((A A2)A) 4Ad(A(A A2)) (84) + 3A(A A2)A (85) 4d(A(A A2))A 4d((A A2)A)A (86) + d(Ad(A3)A)A + d(Ad(A2)A)A2 + d(Ad(A2)A2)A + d(A2d(A2)A)A (87) + Ad(Ad(A3)A) + A2d(Ad(A2)A) + Ad(A2d(A2)A) + Ad(Ad(A2)A2) (88) + Ad(Ad(A2)A)A (89) Basis are in l i=1Aai: A6, A4, A2, A A4, A A2 A3, A2 A2 A2, A2 A2, A A2 As1d( l i=1Aai):Ad(A5), A4d(A2), A3d(A3), Ad(A3), A2d(A4), A2d(A2), A2d(A2)2, A(d(A2) d(A3)) d( l i=1Aai)As1: d(A5)A, d(A4)A2, d(A3)A3, d(A3)A, d(A2)A4, d(A2)A2, d(A2)2A2, (d(A2) d(A3))A d( l1 i=1Aai)As1d( l1 i=1Abi): d(A2)Ad(A3), d(A3)Ad(A2), d(A2)A2d(A2) As1( l i=1Aai): A(A A3), A(A A2 A2), A(A2 A2 A), A(A A2), A(A A3), A2(A A2), A2(A A2), A(A A2) ( l i=1Aai)As1: (A A3)A, (A A2 A2)A, (A2 A2 A)A, (A A2)A, (A A3)A, (A A2)A2, (A A2)A2, (A A2)A Y2 Y2: A ((A A2)A), A ((A A2)A) d(Y1)Y1: d(A2)(A A2) Y1d(Y1): (A A2)d(A2) Graph As Point Set As Y2: Ad(A4)A, Ad(A2)A3, A3d(A2)A, Ad(A3)A2, A2d(A3)A, A2d(A2)A2, Ad(A2)A, Ad(A2)2A, Ad(A2)Ad(A2), A(A A2)A Y2As: d(A2)Ad(A2)A Y3 Y3: A (Ad(A2)A) d(Y2)Y2: d(Ad(A2)A)A,d(A(A A2))A,d((A A2)A)A Y2d(Y2):Ad((A A2)A),Ad(A(A A2)) d(Y3)Y3: d(Ad(A3)A)A, d(Ad(A2)A)A2, d(Ad(A2)A2)A, d(A2d(A2)A)A Y3d(Y3): Ad(Ad(A3)A), A2d(Ad(A2)A), Ad(A2d(A2)A), Ad(Ad(A2)A2) As1d(Y3)As2:Ad(Ad(A2)A)A 5-layer PST can express it. Count cycle is closely related to counting path. A L + 1 cycle contains edge (i, j) can be decomposed into a L-path from i to j and edge (i, j). Therefore, the vector of count of cycles rooted in each node CL+1 = diagonal(APL) Theorem A.13. The diagonal elements of attention matrix of 2-layer PST can express C3, 3-layer PST can express C4, 4-layer PST can express C5, C6, 6-layer PST can express C7. Proof. It is a direct conjecture of Theorem A.12 as CL+1 = diagonl(APL) and k, PL Yk APL Yk+1 B. Expressivity Comparision with Other Models Algorithm A is considered more expressive than algorithm B if it can differentiate between all pairs of graphs that algorithm B can distinguish. If there is a pair of links that algorithm A can distinguish while B cannot and A is more expressive than B, we say that A is strictly more expressive than B. We will first demonstrate the greater expressivity of our model by using PST to simulate other models. Subsequently, we will establish the strictness of our model by providing a concrete example. Our transformer incorporates inner products of coordinates, which naturally allows us to express shortest path distances and various node-to-node distance metrics. These concepts are discussed in more detail in Section 5.1. This naturally leads to the following theorem, which compares our PST with GIN (Xu et al., 2019a). Theorem B.1. A k-layer Point Set Transformer is strictly more expressive than a k-layer GIN. Proof. We first prove that one PST layer can simulate an GIN layer. Given node features si and vi. 
Without loss of generality, we can assume that one channel of $v_i$ contains $(U\mathrm{diag}(\Lambda^{1/2}))_i$. The sv-mixer can simulate an MLP applied to $s_i$, leading to $s'_i$. A GIN layer will then update node representations by aggregating over neighbors,
$\sum_{j\in N(i)} s'_j.$    (90)
By inner products of coordinates, the attention matrix can express the adjacency matrix. Setting $W^s_q, W^s_k = 0$ and letting $W^v_q, W^v_k$ be diagonal matrices whose only nonzero diagonal elements are at the row corresponding to the channel containing $U\mathrm{diag}(\Lambda^{1/2})$,
$K_{ij} = (W^s_qs_i\odot W^s_ks_j)\,\|\,\mathrm{diagonal}(W^v_qv_i^Tv_jW^v_k) \succeq \langle\mathrm{diag}(\Lambda^{1/2})U_i, \mathrm{diag}(\Lambda^{1/2})U_j\rangle = A_{ij}.$    (91)
Letting the MLP express an identity function,
$\mathrm{Atten}_{ij} = \mathrm{MLP}(K_{ij}) \succeq A_{ij}.$    (92)
The attention layer will produce
$s_i \leftarrow \sum_j A_{ij}s'_j = \sum_{j\in N(i)} s'_j,$    (93)
and with the residual connection, the layer can express GIN:
$s_i \leftarrow s'_i + s_i = s'_i + \sum_{j\in N(i)}s'_j.$    (94)
Moreover, as shown in Theorem 5.3, MPNN cannot compute shortest path distances, while PST can. So PST is strictly more expressive.

Moreover, our transformer is strictly more expressive than some representative graph transformers, including Graphormer (Ying et al., 2021) and GPS with RWSE as structural encoding (Rampášek et al., 2022).

Theorem B.2. A $k$-layer Point Set Transformer is strictly more expressive than a $k$-layer Graphormer and a $k$-layer GPS.

Proof. We first prove that a $k$-layer Point Set Transformer is more expressive than a $k$-layer Graphormer and a $k$-layer GPS. At initialization, besides the original node features, Graphormer further adds node degree features and GPS further utilizes RWSE. Our PST can add these features with the first sv-mixer layer:
$s'_i \leftarrow \mathrm{MLP}_1(s_i \,\|\, \mathrm{diagonal}(W_1v_i^Tv_iW_2^T)).$    (95)
Here, $\mathrm{diagonal}(W_1v_i^Tv_iW_2^T)$ adds coordinate inner products, which can express RWSE (the diagonal elements of the random walk matrix) and degree (see Appendix D), to the node feature.

Then we prove that one PST layer can express one GPS layer and one Graphormer layer. PST's attention matrix is
$\mathrm{Atten}_{ij} = \mathrm{MLP}(K_{ij}),\quad K_{ij} = (W^s_qs_i\odot W^s_ks_j)\,\|\,\mathrm{diagonal}(W^v_qv_i^Tv_jW^v_k) \succeq \langle\mathrm{diag}(\Lambda^{1/2})U_i, \mathrm{diag}(\Lambda^{1/2})U_j\rangle.$    (96)
The Hadamard product $(W^s_qs_i\odot W^s_ks_j)$ followed by an MLP can express the inner products of node representations used in Graphormer and GPS. The inner products of coordinates can express the adjacency matrix used in GPS and Graphormer and the shortest path distance used in Graphormer. Therefore, the PST attention matrix can express the attention matrices of GPS and Graphormer.

To prove strictness, Figure 2(c) in (Zhang et al., 2023b) provides an example. As PST can capture resistance distance and simulate 1-WL, it can differentiate the two graphs according to Theorem 4.2 in (Zhang et al., 2023b). However, Graphormer cannot distinguish the two graphs, as proved in (Zhang et al., 2023b). For GPS, the two graphs in Figure 2(c) have the same RWSE:
$\mathrm{RWSE} = \mathrm{diagonal}(\hat{U}\mathrm{diag}(\hat{\Lambda}^k)\hat{U}^T),\quad k = 1, 2, 3, ...,$    (97)
where $\hat{U}, \hat{\Lambda}$ form the eigendecomposition of the normalized adjacency matrix $D^{-1/2}AD^{-1/2}$. By computation, we find that the two graphs share the same $\hat{\Lambda}$. Moreover, $\mathrm{diagonal}(\hat{U}\mathrm{diag}(\hat{\Lambda}^k)\hat{U}^T)$ is equal in the two graphs for $k = 0, 1, 2, ..., 9$, where 9 is the number of nodes in the graphs. $\hat{\Lambda}^k$ with larger $k$ is only a linear combination of $\hat{\Lambda}^k, k = 0, 1, ..., 9$, and thus so is $\mathrm{diagonal}(\hat{U}\mathrm{diag}(\hat{\Lambda}^k)\hat{U}^T)$. So the RWSE in the two graphs is equal and is equivalent to simply assigning a feature $h_1$ to the center node and a feature $h_2$ to the other nodes in both graphs. Then GPS simply runs a model that is a submodule of Graphormer on the graph and thus cannot differentiate the two graphs either.
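The key step in the two proofs above, recovering $A_{ij}$ from inner products of coordinates and then aggregating neighbor features with the resulting attention matrix, can be checked numerically. The sketch below (our own illustration, using the two PSRD channels $f_1, f_2$ from Appendix A.4 rather than the paper's training code) recovers $A$ exactly and reproduces a GIN-style neighborhood sum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 7, 16
A = (rng.random((n, n)) < 0.35).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T            # simple undirected graph

# Two PSRD coordinate channels whose cross inner products give A (Appendix A.4).
lam, U = np.linalg.eigh(A)
Q1 = U * np.sqrt(np.abs(lam))                   # f1 = |lambda|^{1/2}
Q2 = U * (np.sign(lam) * np.sqrt(np.abs(lam)))  # f2 = sgn(lambda) |lambda|^{1/2}

# "Attention" scores from coordinate inner products recover the adjacency matrix.
Atten = Q1 @ Q2.T
assert np.allclose(Atten, A)

# With these scores, one attention layer plus a residual acts like a GIN-style
# aggregation (epsilon = 0): s_i <- s_i + sum_{j in N(i)} s_j.
S = rng.normal(size=(n, d))                     # node scalar features
gin_like = S + Atten @ S
reference = S + A @ S
assert np.allclose(gin_like, reference)
```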
Even against a highly expressive method such as 2-FWL, our models can surpass it in expressivity with a limited number of layers:

Theorem B.3. For all $K > 0$, there exists a pair of graphs that a $K$-iteration 2-FWL method fails to distinguish, while a 1-layer Point Set Transformer can.

Proof. It is a direct corollary of Theorem 5.2.

C. Some Graph Transformers Fail to Compute Shortest Path Distance

First, we demonstrate that computing inner products of node representations alone cannot accurately determine the shortest path distance when the node representations are permutation-equivariant. Consider Figure 2 as an illustration. When node representations are permutation-equivariant, nodes $v_2$ and $v_3$ share identical representations. Consequently, the pairs $(v_1, v_2)$ and $(v_1, v_3)$ yield the same inner products of node representations, despite having different shortest path distances. It therefore becomes evident that the attention matrices of some Graph Transformers are incapable of accurately computing the shortest path distance.

Theorem C.1. GPS with RWSE (Rampášek et al., 2022) and Graphormer without shortest path distance encoding cannot compute shortest path distance with the elements of their attention matrices.

Proof. Their attention matrix elements are functions of the inner products of node representations and the adjacency matrix,
$\mathrm{Atten}_{ij} = \langle s_i, s_j\rangle \,\|\, A_{ij}.$    (98)
This element is equal for the node pairs $(v_1, v_2)$ and $(v_1, v_3)$ in Figure 2. However, the two pairs have different shortest path distances.

Furthermore, while Graph Transformers gather information from the entire graph, they may not have the capacity to emulate multiple MPNN layers with just a single transformer layer. To address this, we introduce the concept of a vanilla Graph Transformer, which applies a standard Transformer to nodes using the adjacency matrix as relative positional encoding. This leads us to the following theorem.

Theorem C.2. For all $k\in\mathbb{N}$, there exists a pair of graphs that a $(k+1)$-layer MPNN can differentiate while a $k$-layer MPNN and a $k$-layer vanilla Graph Transformer cannot.

Proof. Let $H_l$ denote a cycle of $l$ nodes. Let $G_l$ denote a graph with two components, one being $H_{\lceil l/2\rceil}$ and the other $H_{\lfloor l/2\rfloor}$. Let $H'_l$ denote $H_l$ with a unique feature 1 added to one node (as all nodes are symmetric for even $l$, the choice of node does not matter), and let $G'_l$ denote $G_l$ with a unique feature 1 added to one node. All other nodes have feature 0. Now we prove:

Lemma C.3. For all $K\in\mathbb{N}$, a $(K+1)$-layer MPNN can distinguish $H'_{4(K+1)}$ and $G'_{4(K+1)}$, while a $K$-layer MPNN and a $K$-layer vanilla Graph Transformer cannot.

Given $H'_{4(K+1)}$ and $G'_{4(K+1)}$, we assign each node a color according to its distance to the node with the extra label 1: $c_0$ (the labeled node itself), $c_1$ (the two nodes adjacent to the labeled node), $c_2$ (the two nodes whose shortest path distance to the labeled node is 2), ..., $c_K$ (the two nodes whose shortest path distance to the labeled node is $K$), and $c_{>K}$ (the nodes whose shortest path distance to the labeled node is larger than $K$). By simulating the MPNN, we prove that at the $k$-th layer ($k\le K$), the $c_i$ nodes with $i\le k$ have the same representation (denoted $h^k_i$), respectively, and the $c_{k+1}, c_{k+2}, ..., c_K, c_{>K}$ nodes all have the same representation (denoted $h^k_{k+1}$).

Initially, all $c_0$ nodes have representation $h^0_0$ and all other nodes have representation $h^0_1$ in both graphs. Assume that at the $k$-th layer, the $c_i$ nodes with $i\le k$ have the same representation $h^k_i$, respectively, and the $c_{k+1}, c_{k+2}, ..., c_K, c_{>K}$ nodes all have the same representation $h^k_{k+1}$.
At the $(k+1)$-th layer, the $c_0$ node has two neighbors with representation $h^k_1$; each $c_i$ node with $1 < i\le k$ has two neighbors with representations $h^k_{i-1}$ and $h^k_{i+1}$, respectively; each $c_{k+1}$ node has two neighbors with representations $h^k_k$ and $h^k_{k+1}$; and all other nodes have two neighbors with representation $h^k_{k+1}$. So the $c_i$ nodes with $i\le k+1$ have the same representation (denoted $h^{k+1}_i$), respectively, and the $c_{k+2}, ..., c_K, c_{>K}$ nodes all have the same representation (denoted $h^{k+1}_{k+2}$). The same induction also holds for a $K$-layer vanilla graph transformer. However, in the $(K+1)$-th message passing layer, only one node in $G'_{4(K+1)}$ has shortest path distance $K+1$ to the labeled node (the node antipodal to it within its component), and both of its neighbors have representation $h^K_K$. No such node exists in $H'_{4(K+1)}$, where every node at distance $K+1$ has one neighbor at distance $K$ and one at distance $K+2$. So a $(K+1)$-layer MPNN can distinguish them.

The issue with a vanilla Graph Transformer is that, although it collects information from all nodes in the graph, it can only determine the presence of features in 1-hop neighbors. It lacks the ability to recognize features in higher-order neighbors, such as those in 2-hop or 3-hop neighborhoods. A straightforward solution to this problem is to manually include the shortest path distance as a feature. However, our analysis highlights that aggregating information from the entire graph is insufficient for capturing long-range interactions.

Figure 2. The failure of using inner products of permutation-equivariant node representations to predict shortest path distance. $v_2$ and $v_3$ have equal node representations due to symmetry. Therefore, $(v_1, v_2)$ and $(v_1, v_3)$ have the same inner products of node representations but different shortest path distances.

D. Connection with Structural Embeddings

We show the equivalence between existing structural embeddings and the inner products of our PSRD coordinates in Table 5. The inner products of PSRD coordinates can unify a wide range of positional encodings, including random walk (Li et al., 2020; Dwivedi et al., 2023; Rampášek et al., 2022), heat kernel (Mialon et al., 2021), and resistance distance (Zhang & Li, 2021; Zhang et al., 2023b).

Table 5. Connection between existing structural embeddings and our parameterized coordinates. The eigendecompositions are $\hat{A} = \hat{U}\mathrm{diag}(\hat{\Lambda})\hat{U}^T$, $D - A = \tilde{U}\mathrm{diag}(\tilde{\Lambda})\tilde{U}^T$, and $A = U\mathrm{diag}(\Lambda)U^T$; $d_i$ denotes the degree of node $i$.

Method | Description | Connection
Random walk matrix (Li et al., 2020; Dwivedi et al., 2023; Rampášek et al., 2022) | The $k$-step random walk matrix is $(D^{-1}A)^k$, whose element $(i, j)$ is the probability that a $k$-step random walk starting from node $i$ ends at node $j$. | $(D^{-1}A)^k_{ij} = (D^{-1/2}\hat{A}^kD^{1/2})_{ij} = \sqrt{d_j/d_i}\,\langle\hat{U}_i, \mathrm{diag}(\hat{\Lambda}^k)\hat{U}_j\rangle$
Heat kernel matrix (Mialon et al., 2021) | The heat kernel is a solution of the heat equation. Its element $(i, j)$ represents how much heat diffuses from node $i$ to node $j$. | $(I + \tilde{U}(\mathrm{diag}(\exp(-t\tilde{\Lambda})) - I)\tilde{U}^T)_{ij} = \delta_{ij} + \langle\tilde{U}_i, (\mathrm{diag}(\exp(-t\tilde{\Lambda})) - I)\tilde{U}_j\rangle$
Resistance distance (Zhang & Li, 2021; Zhang et al., 2023b) | Its element $(i, j)$ is the resistance between nodes $i, j$ when the graph is viewed as an electrical network. It can be computed from the pseudo-inverse of the Laplacian matrix $L$. | $(\tilde{U}\mathrm{diag}(\tilde{\Lambda}^{-1})\tilde{U}^T)_{ij} = \langle\tilde{U}_i, \mathrm{diag}(\tilde{\Lambda}^{-1})\tilde{U}_j\rangle$
Equivariant and stable Laplacian PE (Wang et al., 2022) | The encoding of node pair $(i, j)$ is $\|1_K\odot(\tilde{U}_i - \tilde{U}_j)\|$, where $1_K$ is a vector in $\mathbb{R}^r$ whose elements indicate the $K$ largest eigenvalues of $L$. | $\|1_K\odot(\tilde{U}_i - \tilde{U}_j)\|^2 = \langle\tilde{U}_i, \mathrm{diag}(1_K)\tilde{U}_i\rangle + \langle\tilde{U}_j, \mathrm{diag}(1_K)\tilde{U}_j\rangle - 2\langle\tilde{U}_i, \mathrm{diag}(1_K)\tilde{U}_j\rangle$
Degree and number of triangles (Bouritsas et al., 2023) | $d_i$ is the number of edges incident to node $i$, and $t_i$ is the number of triangles rooted at node $i$. | $d_i = \langle U_i, \mathrm{diag}(\Lambda^2)U_i\rangle$, $t_i = \langle U_i, \mathrm{diag}(\Lambda^3)U_i\rangle$
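The first row of Table 5 can be verified numerically. The sketch below (our own illustration) checks that the $k$-step random walk matrix equals $\sqrt{d_j/d_i}\,\langle\hat{U}_i, \mathrm{diag}(\hat{\Lambda}^k)\hat{U}_j\rangle$, where $\hat{U}, \hat{\Lambda}$ come from the normalized adjacency matrix, so RWSE-style encodings are indeed inner products of (rescaled) coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T
A += np.diag(A.sum(1) == 0)               # add a self-loop to any isolated node

d = A.sum(1)
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A @ D_inv_sqrt       # normalized adjacency matrix
lam_hat, U_hat = np.linalg.eigh(A_hat)

# k-step random walk matrix (D^{-1} A)^k.
RW = np.linalg.matrix_power(np.diag(1.0 / d) @ A, k)

# Same matrix from coordinate inner products (Table 5, first row).
coords = U_hat * (lam_hat ** k)           # diag(Lambda^k) folded into one factor
inner = U_hat @ coords.T                  # <U_i, diag(Lambda^k) U_j>
scale = np.sqrt(np.outer(1.0 / d, d))     # sqrt(d_j / d_i)
assert np.allclose(RW, scale * inner)

# The diagonal of RW is exactly the k-step RWSE used by GPS-style models.
print(np.diag(RW))
```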
E. Datasets

We summarize the statistics of our datasets in Table 6. Synthetic is the dataset used in the substructure counting tasks provided by Huang et al. (2023b); it consists of random graphs with substructure counts as node labels. QM9 (Wu et al., 2017), ZINC (Gómez-Bombarelli et al., 2016), and ogbg-molhiv are three molecular datasets. QM9 uses 13 quantum chemistry properties as graph labels and provides both the graph and the coordinates of each atom. ZINC provides graph structure only and aims to predict constrained solubility. Ogbg-molhiv is an Open Graph Benchmark dataset whose task is to predict from the graph structure whether a molecule can inhibit HIV virus replication. We further use MUTAG, PTC-MR, PROTEINS, and IMDB-BINARY from the TU database (Ivanov et al., 2019). MUTAG comprises 188 mutagenic aromatic and heteroaromatic nitro compounds. PROTEINS represents secondary structure elements as nodes, with edges between neighbors in the amino-acid sequence or in 3D space. PTC involves 344 chemical compounds, classifying carcinogenicity for rats. IMDB-BINARY features ego-networks of actors/actresses in movie collaborations, classifying the movie genre of each graph. We also use three datasets from the Long Range Graph Benchmark (Dwivedi et al., 2022b), which consist of larger graphs. Pascal VOC-SP comes from the computer vision domain; each node represents a superpixel, and the task is to predict the semantic segmentation label of each node.

Table 6. Statistics of the datasets. #Nodes and #Edges denote the number of nodes and edges per graph. In the split column, "fixed" means the dataset uses the split provided in the original release; otherwise it is of the form training/validation/test ratio.

Dataset | #Graphs | #Nodes | #Edges | Task | Metric | Split
Synthetic | 5,000 | 18.8 | 31.3 | Node regression | MAE | 0.3/0.2/0.5
QM9 | 130,831 | 18.0 | 18.7 | Regression | MAE | 0.8/0.1/0.1
ZINC | 12,000 | 23.2 | 24.9 | Regression | MAE | fixed
ZINC-full | 249,456 | 23.2 | 24.9 | Regression | MAE | fixed
ogbg-molhiv | 41,127 | 25.5 | 27.5 | Binary classification | AUC | fixed
MUTAG | 188 | 17.9 | 39.6 | Classification | Accuracy | 10-fold cross validation
PTC-MR | 344 | 14.3 | 14.7 | Classification | Accuracy | 10-fold cross validation
PROTEINS | 1113 | 39.1 | 145.6 | Classification | Accuracy | 10-fold cross validation
IMDB-BINARY | 1000 | 19.8 | 193.1 | Classification | Accuracy | 10-fold cross validation
Pascal VOC-SP | 11,355 | 479.4 | 2710.5 | Node classification | Macro F1 | fixed
Peptides-func | 15,535 | 150.9 | 307.3 | Classification | AP | fixed
Peptides-struct | 15,535 | 150.9 | 307.3 | Regression | MAE | fixed

Table 7. Hyperparameters of our model for each dataset. #warm is the number of warmup epochs, #cos the number of cosine annealing epochs, wd the weight decay, gn the magnitude of the Gaussian noise injected into the point coordinates, hiddim the hidden dimension, bs the batch size, lr the learning rate, and #epoch the number of training epochs.

Dataset | #warm | #cos | wd | gn | #layer | hiddim | bs | lr | #epoch | #param
Synthetic | 10 | 15 | 6e-4 | 1e-6 | 9 | 96 | 16 | 0.0006 | 300 | 961k
QM9 | 1 | 40 | 1e-1 | 1e-5 | 8 | 128 | 256 | 0.001 | 150 | 1587k
ZINC | 17 | 17 | 1e-1 | 1e-4 | 6 | 80 | 128 | 0.001 | 800 | 472k
ZINC-full | 40 | 40 | 1e-1 | 1e-6 | 8 | 80 | 512 | 0.003 | 400 | 582k
ogbg-molhiv | 5 | 5 | 1e-1 | 1e-6 | 6 | 96 | 24 | 0.001 | 300 | 751k
MUTAG | 20 | 1 | 1e-7 | 1e-4 | 2 | 48 | 64 | 2e-3 | 70 | 82k
PTC-MR | 25 | 1 | 1e-1 | 1e-3 | 4 | 16 | 64 | 3e-3 | 70 | 15k
PROTEINS | 25 | 1 | 1e-7 | 3e-3 | 2 | 48 | 8 | 1.5e-3 | 80 | 82k
IMDB-BINARY | 35 | 1 | 1e-7 | 1e-5 | 3 | 48 | 64 | 3e-3 | 80 | 100k
Pascal VOC-SP | 5 | 5 | 1e-1 | 1e-5 | 4 | 96 | 6 | 0.001 | 40 | 527k
Peptides-func | 40 | 20 | 1e-1 | 1e-6 | 6 | 128 | 2 | 0.0003 | 80 | 1337k
Peptides-struct | 40 | 20 | 1e-1 | 1e-6 | 6 | 128 | 2 | 0.0003 | 40 | 1337k
Peptides-func and Peptides-struct are peptide molecular graphs. The task in Peptides-func is to predict the peptide function; Peptides-struct is to predict 3D properties of the peptide. PTC is a collection of 344 chemical compounds represented as graphs, labeled by carcinogenicity for rats; there are 19 node labels.

F. Experimental Details

Our code is available at https://github.com/GraphPKU/GraphAsSet. Our code is primarily based on PyTorch (Paszke et al., 2019) and PyG (Fey & Lenssen, 2019). All experiments are conducted on NVIDIA RTX 3090 GPUs on a Linux server. We use L1 loss for regression tasks and cross-entropy loss for classification tasks. We select the hyperparameters by running Optuna (Akiba et al., 2019) to optimize the validation score. We run each experiment with 8 different seeds, reporting the averaged results at the epoch achieving the best validation metric. For optimization, we use the AdamW optimizer and a cosine annealing scheduler. Hyperparameters for the datasets are shown in Table 7. All PST and PSDS models (except those in the ablation study) decompose the Laplacian matrix to produce coordinates.

ZINC, ZINC-full, Pascal VOC-SP, Peptides-func, and Peptides-struct have 500k parameter budgets. Other datasets have no parameter limit. Graphormer (Ying et al., 2021) takes 47,000k parameters on ogbg-molhiv, and 1-2-3-GNN takes 929k parameters on QM9. Our PST follows these budgets on ZINC, ZINC-full, and Pascal VOC-SP. However, on the Peptides-func and Peptides-struct datasets, we find that the hidden dimension is quite crucial for good performance, so we use a comparable hidden dimension and number of transformer layers. This leads to about 1M parameters because our PST employs two sets of parameters (one for scalars and one for vectors), doubling the parameters at the same hidden dimension and depth. We therefore also run the baselines with larger hidden dimensions on these two datasets; the results are shown in Table 8. When PST and the baselines all use the 1M parameter budget, our PST outperforms the baselines, and our method remains effective on the two datasets. The other datasets we explored do not have explicit parameter constraints, and it is worth noting that our PST has fewer parameters than representative baselines in those cases.

Table 8. Results on the Peptides-func and Peptides-struct datasets with a 1M parameter budget.

Model | Peptides-func (AP) | Peptides-struct (MAE)
Graph-MLPMixer | 0.6970 ± 0.0080 | 0.2475 ± 0.0015
GraphGPS | 0.6535 ± 0.0041 | 0.2500 ± 0.0005
PST | 0.6984 ± 0.0051 | 0.2470 ± 0.0015

Hyperparameter Configuration: Our experiments involved tuning seven hyperparameters: depth (4, 8), hidden dimension (64, 256), learning rate (0.0001, 0.003), number of warmup epochs (0, 40), number of cosine annealing epochs (1, 20), magnitude of Gaussian noise added to the input (1e-6, 1e-4), and weight decay (1e-6, 1e-1). We observed that [1] also used seven hyperparameters in their setup. Batch size was determined based on GPU memory constraints and was not a parameter that we optimized.

G. Architecture

The architecture of the parameterized SRD is shown in Figure 3. As illustrated in Section 3.2, it first performs eigendecomposition to obtain the non-zero eigenvalues and the corresponding eigenvectors, then uses a DeepSet (Zaheer et al., 2017) to process the eigenvalues, leading to coordinates with multiple channels. The architecture of PST is shown in Figure 4. As illustrated in Section 4, it is composed of scalar-vector mixers and attention layers.

Figure 3. The pipeline of parameterized SRD. We first decompose the Laplacian matrix (or another matrix) into its non-zero eigenvalues and the corresponding eigenvectors. The eigenvalues are then transformed with a DeepSet (Zaheer et al., 2017). Multiplying the transformed eigenvalues with the eigenvectors yields the coordinates. (Figure panels show: Laplacian matrix eigendecomposition; eigenvectors $V$ and eigenvalues $\Lambda$; a DeepSet transform of the eigenvalues; a batch matrix multiplication producing rank-$r$, $d$-channel coordinates.)
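A compact PyTorch sketch of the Figure 3 pipeline follows (our own paraphrase of the description above, not the released implementation; module sizes and names are ours): eigendecompose the Laplacian, keep the non-zero eigenpairs, transform the eigenvalues with a small DeepSet, and multiply back to obtain multi-channel coordinates.

```python
import torch
import torch.nn as nn

class PSRDCoordinates(nn.Module):
    """Parameterized SRD: coordinates Q = U diag(f(Lambda)) with a learnable,
    permutation-equivariant f implemented as a DeepSet over the eigenvalues."""

    def __init__(self, channels: int, hidden: int = 32):
        super().__init__()
        self.elementwise = nn.Sequential(nn.Linear(1, hidden), nn.SiLU())
        self.combine = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.SiLU(),
                                     nn.Linear(hidden, channels))

    def forward(self, A: torch.Tensor) -> torch.Tensor:
        L = torch.diag(A.sum(-1)) - A                 # Laplacian (any symmetric matrix works)
        lam, U = torch.linalg.eigh(L)
        keep = lam.abs() > 1e-6                       # drop (near-)zero eigenvalues
        lam, U = lam[keep], U[:, keep]                # U: (n, r), lam: (r,)

        # DeepSet over the eigenvalue multiset: per-element MLP plus a sum context.
        h = self.elementwise(lam.unsqueeze(-1))       # (r, hidden)
        ctx = h.sum(0, keepdim=True).expand_as(h)     # (r, hidden)
        f_lam = self.combine(torch.cat([h, ctx], -1))  # (r, channels)

        # Coordinates: one (n, r) coordinate matrix per channel.
        return U.unsqueeze(-1) * f_lam.unsqueeze(0)   # (n, r, channels)

# Usage: coordinates for a toy 4-cycle.
A = torch.tensor([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
Q = PSRDCoordinates(channels=8)(A)
print(Q.shape)   # torch.Size([4, 3, 8])
```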
Figure 4. Architecture of Point Set Transformer (PST). (a) PST contains several layers, each composed of a scalar-vector (sv)-mixer and an attention layer. (b) The architecture of the sv-mixer. (c) The architecture of the attention layer. $s_i$ and $s'_i$ denote the scalar representations of node $i$, and $v_i$ and $v'_i$ denote the vector representations. $x_i$ is the initial feature of node $i$, and $Q_i$ denotes the point coordinates of node $i$ produced by the parameterized SRD in Section 3.2.

Table 9. Ablation study on the QM9 dataset.

Target | µ | α | ε_homo | ε_lumo | ∆ε | R² | ZPVE | U0 | U | H | G | Cv
Unit | 10⁻¹ D | 10⁻¹ a₀³ | 10⁻² meV | 10⁻² meV | 10⁻² meV | a₀² | 10⁻² meV | meV | meV | meV | meV | 10⁻² cal/mol/K
PST | 3.19 ± 0.04 | 1.89 ± 0.04 | 5.98 ± 0.09 | 5.84 ± 0.08 | 8.46 ± 0.07 | 13.08 ± 0.16 | 0.39 ± 0.01 | 3.46 ± 0.17 | 3.55 ± 0.10 | 3.49 ± 0.20 | 3.55 ± 0.17 | 7.77 ± 0.15
PST-onelayer | 3.72 ± 0.02 | 2.25 ± 0.05 | 6.62 ± 0.11 | 6.67 ± 0.07 | 9.37 ± 0.15 | 15.95 ± 0.29 | 0.55 ± 0.01 | 3.46 ± 0.06 | 3.50 ± 0.14 | 3.50 ± 0.03 | 3.45 ± 0.07 | 9.62 ± 0.24
PST-gc | 3.34 ± 0.02 | 1.93 ± 0.03 | 6.08 ± 0.11 | 6.10 ± 0.10 | 8.65 ± 0.10 | 13.71 ± 0.12 | 0.40 ± 0.01 | 3.38 ± 0.13 | 3.43 ± 0.12 | 3.33 ± 0.08 | 3.29 ± 0.11 | 8.04 ± 0.15
PST-adj | 3.16 ± 0.02 | 1.86 ± 0.01 | 6.31 ± 0.06 | 6.10 ± 0.05 | 8.84 ± 0.01 | 13.60 ± 0.09 | 0.39 ± 0.01 | 3.59 ± 0.12 | 3.73 ± 0.08 | 3.65 ± 0.06 | 3.60 ± 0.016 | 7.62 ± 0.21
PST-normadj | 3.22 ± 0.04 | 1.85 ± 0.02 | 5.97 ± 0.23 | 6.15 ± 0.07 | 8.79 ± 0.04 | 13.42 ± 0.15 | 0.41 ± 0.01 | 3.36 ± 0.25 | 3.41 ± 0.24 | 3.46 ± 0.18 | 3.38 ± 0.23 | 8.10 ± 0.12
PSDS | 3.53 ± 0.05 | 2.05 ± 0.02 | 6.56 ± 0.03 | 6.31 ± 0.05 | 9.13 ± 0.04 | 14.35 ± 0.02 | 0.41 ± 0.02 | 3.53 ± 0.11 | 3.49 ± 0.05 | 3.47 ± 0.04 | 3.56 ± 0.14 | 8.35 ± 0.09
DF | 3.46 | 2.22 | 6.15 | 6.12 | 8.82 | 15.04 | 0.46 | 4.24 | 4.16 | 3.95 | 4.24 | 9.01

H. Ablation

To assess the design choices made in our Point Set Transformer (PST), we conducted ablation experiments. First, we replace the PSRD coordinates (see Section 3.2) with SRD coordinates, resulting in a reduced version referred to as the PST-gc model. Additionally, we introduce a variant called PST-onelayer, which differs from PST in that it only computes the attention matrix once and does not combine information in scalar and vector features. Furthermore, PST decomposes the Laplacian matrix by default to produce coordinates; PST-adj and PST-normadj use the adjacency matrix and the normalized adjacency matrix instead, respectively. Similar to PST, PSDS takes node coordinates as input, but it uses a DeepSet (Zaheer et al., 2017) rather than a transformer as the set encoder. For better comparison, we also include our strongest baseline on the QM9 dataset, DF (Zhou et al., 2023). The results of the ablation study conducted on the QM9 dataset are summarized in Table 9. Notably, PST-gc exhibits only a slight increase in test loss compared to PST, and even outperforms PST on 4 out of 12 target metrics, highlighting the effectiveness of the Graph as Point Set approach with vanilla symmetric rank decomposition. In contrast, PST-onelayer performs significantly worse, underscoring the advantages of PST over previous methods that augment adjacency matrices with spectral features. PST-adj and PST-normadj achieve similar performance to PST, illustrating that the choice of matrix to decompose does not matter much.
PSDS performs worse than PST, but it still outperforms our strongest baseline DF, showing the potential of combining set encoders other than the transformer with our conversion from graph to set. On the Long Range Graph Benchmark (Table 10), PST maintains a significant performance edge over PST-onelayer. However, it is worth noting that the gap between PST and PST-gc widens, further confirming the effectiveness of gc in modeling long-range interactions.

Table 10. Ablation study on the Long Range Graph Benchmark datasets.

Model | Pascal VOC-SP | Peptides-func | Peptides-struct
PST | 0.4010 ± 0.0072 | 0.6984 ± 0.0051 | 0.2470 ± 0.0015
PST-onelayer | 0.3229 ± 0.0051 | 0.6517 ± 0.0076 | 0.2634 ± 0.0019
PST-gc | 0.4007 ± 0.0039 | 0.6439 ± 0.0342 | 0.2564 ± 0.0120

I. Scalability

We present training time per epoch and GPU memory consumption in Table 11 and Table 12. Due to its architecture, PST has higher time complexity than existing Graph Transformers and does not scale well on large graphs such as the Pascal VOC-SP dataset. However, on the ZINC dataset, PST ranks as the second fastest model, and its memory consumption is comparable to existing models with strong expressivity, such as SUN and SSWL, and notably lower than PPGN.

Table 11. Training time per epoch and GPU memory consumption on the ZINC dataset with batch size 128.

 | PST | SUN | SSWL | PPGN | Graphormer | GPS | SAN-GPS
Time/s | 15.20 | 20.93 | 45.30 | 20.21 | 123.79 | 11.70 | 79.08
Memory/GB | 4.08 | 3.72 | 3.89 | 20.37 | 0.07 | 0.25 | 2.00

J. Results on TU Datasets

Following the setting of Feng et al. (2022), we test our PST on four TU datasets (Ivanov et al., 2019). The results are shown in Table 13. Baselines include the WL subtree kernel (Shervashidze et al., 2011), GIN (Xu et al., 2019a), DGCNN (Zhang et al., 2018), GraphSNN (Wijesinghe & Wang, 2022), GNN-AK+ (Zhao et al., 2022), and three variants of KP-GNN (Feng et al., 2022): KP-GCN, KP-GraphSAGE, and KP-GIN. We use 10-fold cross-validation, where 9 folds are used for training and 1 fold for testing, and report the average test accuracy. Our PST consistently outperforms the baselines.

K. Point Set DeepSet

Besides the Transformer, we also propose a DeepSet (Zaheer et al., 2017)-based set encoder for point sets, Point Set DeepSet (PSDS), to illustrate the versatility of our graph-to-set method. Similar to PST, PSDS operates with points carrying two types of representations: scalars, which remain invariant to orthogonal transformations of the coordinates, and vectors, which change equivariantly with the coordinates. For a point $i$, its scalar representation is denoted by $s_i\in\mathbb{R}^d$ and its vector representation by $v_i\in\mathbb{R}^{r\times d}$, where $d$ is the hidden dimension and $r$ is the rank of the coordinates. $s_i$ is initialized with the input node feature $X_i$, and $v_i$ is initialized with the parameterized coordinates containing graph structural information, as detailed in Section 3.2. Similar to DeepSet, PSDS transforms point representations individually, aggregates them to produce global features, and combines the global features with individual point representations to update the point representations.

Scalar-Vector Mixer. This component transforms point representations individually. To enable information exchange between vector and scalar features, we design the following mixer architecture:
$s'_i \leftarrow \mathrm{MLP}_1(s_i \,\|\, \mathrm{diagonal}(W_1v_i^Tv_iW_2^T)),$    (99)
$v'_i \leftarrow v_i\,\mathrm{diag}(\mathrm{MLP}_2(s_i))W_3 + v_iW_4.$    (100)
Here, $W_1, W_2, W_3, W_4\in\mathbb{R}^{d\times d}$ are learnable matrices for mixing different channels of the vector features. Additionally, $\mathrm{MLP}_1: \mathbb{R}^{2d}\to\mathbb{R}^d$ and $\mathrm{MLP}_2: \mathbb{R}^d\to\mathbb{R}^d$ are two multi-layer perceptrons transforming scalar representations.
The operation $\mathrm{diagonal}(W_1v_i^Tv_iW_2)$ takes the diagonal elements of a matrix, which translates vectors to scalars, while $\mathrm{diag}(\mathrm{MLP}_2(s_i))$ applied to $v_i$ transforms scalar features into vectors. As $v_i^TR^TRv_i = v_i^Tv_i$ for all $R\in O(r)$, the scalar update is invariant to orthogonal transformations of the coordinates. Similarly, the vector update is equivariant to $O(r)$.

Aggregator. This component aggregates individual point representations into global features $s, v, v_{sq}$:
$s \leftarrow \sum_{i\in V}\mathrm{MLP}_3(s_i)$    (101)
$v \leftarrow \sum_{i\in V}v_iW_5$    (102)
$v_{sq} \leftarrow \sum_{i\in V}v_iW_6W_7v_i^T$    (103)
Here, $W_5, W_6, W_7\in\mathbb{R}^{d\times d}$ denote linear transformations of the vectors, and $\mathrm{MLP}_3: \mathbb{R}^d\to\mathbb{R}^d$ is an MLP transforming scalars. The global feature $s\in\mathbb{R}^d$ is a scalar feature, $v\in\mathbb{R}^{r\times d}$ is a vector feature, and $v_{sq}\in\mathbb{R}^{r\times r}$ is the sum of squares of the vectors.

Point Representation Update. Each point representation is updated by combining it with the global features:
$s_i \leftarrow \mathrm{MLP}_4(s_i + s)$    (104)
$v_i \leftarrow v_{sq}v_i + vW_8$    (105)
$s_i$ is combined with the global scalar $s$ and transformed with an MLP $\mathrm{MLP}_4: \mathbb{R}^d\to\mathbb{R}^d$. $v_i$ is combined with $v_{sq}$ and $v$ through a linear layer $W_8\in\mathbb{R}^{d\times d}$.

Table 12. Training time per epoch and GPU memory consumption on the Pascal VOC-SP dataset with batch size 6.

 | PST | SUN | SSWL | PPGN | Graphormer | GPS | SAN-GPS
Time/s | 15.20 | 20.93 | 45.30 | 20.21 | 123.79 | 11.70 | 79.08
Memory/GB | 4.08 | 3.72 | 3.89 | 20.37 | 0.07 | 0.25 | 2.00

Table 13. TU dataset evaluation results.

Method | MUTAG | PTC-MR | PROTEINS | IMDB-B
WL | 90.4 ± 5.7 | 59.9 ± 4.3 | 75.0 ± 3.1 | 73.8 ± 3.9
GIN | 89.4 ± 5.6 | 64.6 ± 7.0 | 75.9 ± 2.8 | 75.1 ± 5.1
DGCNN | 85.8 ± 1.7 | 58.6 ± 2.5 | 75.5 ± 0.9 | 70.0 ± 0.9
GraphSNN | 91.24 ± 2.5 | 66.96 ± 3.5 | 76.51 ± 2.5 | 76.93 ± 3.3
GIN-AK+ | 91.30 ± 7.0 | 68.20 ± 5.6 | 77.10 ± 5.7 | 75.60 ± 3.7
KP-GCN | 91.7 ± 6.0 | 67.1 ± 6.3 | 75.8 ± 3.5 | 75.9 ± 3.8
KP-GraphSAGE | 91.7 ± 6.5 | 66.5 ± 4.0 | 76.5 ± 4.6 | 76.4 ± 2.7
KP-GIN | 92.2 ± 6.5 | 66.8 ± 6.8 | 75.8 ± 4.6 | 76.6 ± 4.2
PST | 94.4 ± 3.5 | 68.8 ± 4.6 | 80.7 ± 3.5 | 78.9 ± 3.6

Pooling. After several layers, we pool all points' scalar representations into the set representation $s$:
$s \leftarrow \mathrm{Pool}(\{s_i\,|\,i\in V\}),$    (106)
where $\mathrm{Pool}$ is a pooling function such as sum, mean, or max.
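Putting Appendix K together, here is a compact PyTorch sketch of one PSDS layer plus pooling (our own rendering of Equations 99-106; layer sizes and names are ours, and training details are omitted):

```python
import torch
import torch.nn as nn

class PSDSLayer(nn.Module):
    """One Point Set DeepSet layer: scalar-vector mixer, aggregator, update (Eqs. 99-105)."""

    def __init__(self, d: int):
        super().__init__()
        self.W1, self.W2, self.W3, self.W4, self.W5, self.W6, self.W7, self.W8 = (
            nn.Parameter(torch.randn(d, d) / d ** 0.5) for _ in range(8))
        self.mlp1 = nn.Sequential(nn.Linear(2 * d, d), nn.SiLU(), nn.Linear(d, d))
        self.mlp2 = nn.Sequential(nn.Linear(d, d), nn.SiLU(), nn.Linear(d, d))
        self.mlp3 = nn.Sequential(nn.Linear(d, d), nn.SiLU(), nn.Linear(d, d))
        self.mlp4 = nn.Sequential(nn.Linear(d, d), nn.SiLU(), nn.Linear(d, d))

    def forward(self, s, v):
        # s: (n, d) scalars, v: (n, r, d) vectors.
        # Scalar-vector mixer (Eqs. 99-100).
        gram = ((v @ self.W1.T) * (v @ self.W2.T)).sum(1)     # diagonal(W1 v^T v W2^T), (n, d)
        s_mix = self.mlp1(torch.cat([s, gram], dim=-1))
        v_mix = (v * self.mlp2(s).unsqueeze(1)) @ self.W3 + v @ self.W4
        # Aggregator (Eqs. 101-103).
        g_s = self.mlp3(s_mix).sum(0)                                          # (d,)
        g_v = (v_mix @ self.W5).sum(0)                                         # (r, d)
        v_sq = torch.einsum('nrd,nsd->rs', v_mix @ self.W6 @ self.W7, v_mix)   # (r, r)
        # Point representation update (Eqs. 104-105).
        s_out = self.mlp4(s_mix + g_s)
        v_out = torch.einsum('rs,nsd->nrd', v_sq, v_mix) + (g_v @ self.W8).unsqueeze(0)
        return s_out, v_out

# Usage: scalars come from node features, vectors from PSRD coordinates (Section 3.2);
# summing the final scalars gives the set representation (Eq. 106).
n, r, d = 5, 3, 16
s, v = torch.randn(n, d), torch.randn(n, r, d)
s, v = PSDSLayer(d)(s, v)
graph_repr = s.sum(0)
print(graph_repr.shape)   # torch.Size([16])
```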