Published in Transactions on Machine Learning Research (03/2024)

Optimal Inference in Contextual Stochastic Block Models

O. Duranthon and L. Zdeborová
Statistical Physics of Computation laboratory, École polytechnique fédérale de Lausanne (EPFL), Switzerland
firstname.lastname@epfl.ch

Reviewed on OpenReview: https://openreview.net/forum?id=Pe6hldOUkw

The contextual stochastic block model (CSBM) was proposed for unsupervised community detection on attributed graphs where both the graph and the high-dimensional node information correlate with node labels. In the context of machine learning on graphs, the CSBM has been widely used as a synthetic dataset for evaluating the performance of graph neural networks (GNNs) for semi-supervised node classification. We consider a probabilistic Bayes-optimal formulation of the inference problem and we derive a belief-propagation-based algorithm for the semi-supervised CSBM; we conjecture it is optimal in the considered setting and we provide its implementation. We show that there can be a considerable gap between the accuracy reached by this algorithm and the performance of the GNN architectures proposed in the literature. This suggests that the CSBM, along with the comparison to the performance of the optimal algorithm, readily accessible via our implementation, can be instrumental in the development of more performant GNN architectures.

1 Introduction

In this paper we are interested in the inference of a latent community structure given the observation of a sparse graph along with high-dimensional node covariates, correlated with the same latent communities. With the same interest, the authors of Yan & Sarkar (2021); Deshpande et al. (2018) introduced the contextual stochastic block model (CSBM) as an extension of the well-known and broadly studied stochastic block model (SBM) for community detection.
The CSBM accounts for the presence of node covariates; it models them as a high-dimensional Gaussian mixture where cluster labels coincide with the community labels and where the centroids are latent variables. Along the lines of theoretical results established in the past decade for the SBM, see e.g. the review Abbé (2017) and references therein, the authors of Deshpande et al. (2018) and later Lu & Sen (2020) study the detectability threshold in this model. Our motivation to study the CSBM is due to the interest this model has recently received in the community developing and analyzing graph neural networks (GNNs). Indeed, this model provides an idealized synthetic dataset on which graph neural networks can be conveniently evaluated and benchmarked. It has been used, for instance, in Baranwal et al. (2021); Fountoulakis et al. (2022); Javaloy et al. (2023); Shi et al. (2023) to establish and test theoretical results on graph convolutional networks or graph-attention neural networks. In Wu et al. (2023) the CSBM was used to study over-smoothing of GNNs and in Wei et al. (2022) to study the role of non-linearity. As a synthetic dataset the CSBM has also been utilized in Cong et al. (2021) for supporting theoretical results on depth in graph convolutional networks and in Chien et al. (2021); Fu et al. (2021); Lei et al. (2022) for evaluating new GNN architectures. Some of the above works study the CSBM in the unsupervised case; however, more often they study it in the semi-supervised case where on top of the network and covariates we observe the membership of a fraction of the nodes. While many of the above-cited works use the CSBM as a benchmark and evaluate GNNs on it, they do not compare to the optimal performance that is tractably achievable in the CSBM. A similar situation happened in the past for the stochastic block model. Many works were published proposing novel community detection algorithms and evaluating them against each other, see e.g.
the review Fortunato (2010) and references therein. The work of Decelle et al. (2011) changed that situation by conjecturing that a specific variant of the belief propagation algorithm provides the optimal performance achievable tractably in the large size limit. A line of work followed where new algorithms, including early GNNs (Chen et al., 2020), were designed to approach or match this predicted theoretical limit. The goal of the present work is to provide a belief-propagation-based algorithm, which we call AMP BP, for the semi-supervised CSBM. It is able to deal with noisy labels; it can estimate the parameters of the model, and we propose a version of it for multiple unbalanced communities. We conjecture AMP BP has optimal performance among tractable algorithms in the limit of large sizes. We provide a simple-to-use implementation of the algorithm (attached in the Supplementary Material) so that researchers in GNNs who use the CSBM as a benchmark can readily compare to this baseline and get an idea of how far from optimality their methods are. We also provide a numerical section illustrating this, where we compare the optimal inference in the CSBM with the performance of some state-of-the-art GNNs. We conclude that indeed there is still a considerable gap between the two, and we hope the existence of this gap will inspire follow-up work in GNNs aiming to close it.

Related work Previous works deal with optimal inference in the CSBM. In a setting with low-dimensional features, Braun et al. (2022) propose an unsupervised multi-community algorithm; it does not require hyperparameter tuning but it focuses on exact recovery when the graph signal-to-noise ratio grows with the number of nodes. Abbé et al. (2022) propose an optimal spectral algorithm for the unsupervised binary CSBM in the case of growing degrees. There is also Baranwal et al.
(2023), which determines a locally optimal semi-supervised classifier and shows it can be interpreted as a GNN. In the same high-dimensional setting as ours, there are Lu & Sen (2020) and the belief-propagation-based algorithm of Deshpande et al. (2018); they are unsupervised and need the right parameters of the model; we discuss them in more detail further on. Works on optimality in related models of SBMs with features include Duranthon & Zdeborová (2023), where the group memberships are a generic function of the features. We consider the CSBM in a semi-supervised high-dimensional sparse setting. Semi-supervision is necessary because we want to compare to empirical risk minimizers such as GNNs; they require a fraction of train labels to be trained. In the regime where the features are high-dimensional, many quantities of interest converge to deterministic values and allow a precise analysis, as opposed to low-dimensional features. We choose a setting with a sparse graph because this is closer to what researchers in GNNs use in practice. The setting where the graph is dense is theoretically easier to deal with (e.g. it has been analyzed in Deshpande et al. (2018) and Shi et al. (2023)) and we provide the corresponding optimal algorithm in the appendix.

2.1 Contextual stochastic block model (CSBM)

We consider a set V of |V| = N nodes and a graph G(V, E) on those nodes. Each of the nodes belongs to one of two groups: u_i ∈ {−1, +1} for i = 1, ..., N. We draw their memberships independently, and we consider two balanced groups: u_i is Rademacher. We make this choice following previous papers that used the CSBM to study graph neural networks. We note, however, that multiple communities or arbitrary sizes can be readily considered, as done for the SBM in Decelle et al. (2011) and for the high-dimensional Gaussian mixture e.g. in Lesieur et al. (2017). In appendix D we define the unbalanced multi-community setup and provide the corresponding AMP BP algorithm.
The graph is generated according to a stochastic block model (SBM):

P(A_ij = 1 | u_i, u_j) = c_i/N if u_i = u_j, and c_o/N if u_i ≠ u_j,   (1)

and A_ij = 0 otherwise. c_i ∈ R and c_o ∈ R are the two affinity coefficients common to the SBM. We stack them in the matrix C = (c_i, c_o; c_o, c_i).

We also consider features/attributes/covariates B ∈ R^{P×N} of dimension P on the N nodes. They are generated according to a high-dimensional Gaussian mixture model:

B_βi = √(µ/N) v_β u_i + Z_βi   (2)

for β = 1, ..., P, where v_β ~ N(0, 1) determines the randomly drawn centroids, Z_βi is standard Gaussian noise and µ ∈ R is the strength of the signal. We emphasize that there are two centroids: the features of the nodes in the +1 group are centered at √(µ/N) v while the features of the −1 group are centered at −√(µ/N) v. The edges A and the feature matrix B are observed. We aim to retrieve the groups u_i. We work in the sparse limit of the SBM: the average degree of the graph A is d = O(1). We parameterize the SBM via the signal-to-noise ratio λ:

c_i = d + λ√d ;  c_o = d − λ√d .   (3)

We further work in the high-dimensional limit of the CSBM. We take both N and P going to infinity with α = N/P = O(1) and µ = O(1). We define Ξ as the set of revealed training nodes, whose memberships are observed. We set ρ = |Ξ|/N; ρ = 0 for unsupervised learning. We assume Ξ is drawn independently of the group memberships. We define P_{U,i}, an additional node-dependent prior, used to inject information about the memberships of the observed nodes:

P_{U,i}(s) = δ_{s,u_i} if i ∈ Ξ, and 1/2 if i ∉ Ξ.   (4)

2.2 Bayes-optimal estimation

We use a Bayesian framework to infer optimally the group membership u from the observations A, B, Ξ. The posterior distribution over the nodes u = (u_i)_i is

P(u | A, B, Ξ) = (1/Z(A, B, Ξ)) P(A | u, B, Ξ) P(B | u, Ξ) P(u | Ξ) ,   (5)

where P(u | Ξ) = ∏_i P_{U,i}(u_i) and Z(A, B, Ξ) is the normalization. In this high-dimensional limit the relevant order parameters concentrate: for any ϵ > 0, the probability that they deviate from their typical value by more than ϵ goes to 0 as N → ∞, with N/P = α = O(1) and all other parameters being of O(1).
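To make the setup concrete, the generative model of eqs. (1)-(4) can be sampled in a few lines. The following numpy sketch is ours (illustrative function and variable names, not the authors' released code); it uses the parameterization c_i = d + λ√d, c_o = d − λ√d of eq. (3), and generalizes the prior of eq. (4) with a label-keeping probability q, anticipating the noisy-label variant discussed later:

```python
import numpy as np

def generate_csbm(N=2000, alpha=10.0, d=5.0, lam=1.0, mu=2.0, rho=0.1, seed=None):
    """Sample one CSBM instance following eqs. (1)-(4).

    Returns labels u (+/-1), adjacency A, features B (P x N) and the
    set Xi of revealed training nodes.
    """
    rng = np.random.default_rng(seed)
    P = int(N / alpha)
    u = rng.choice([-1, 1], size=N)                        # Rademacher memberships
    ci, co = d + lam * np.sqrt(d), d - lam * np.sqrt(d)    # eq. (3)
    probs = np.where(np.equal.outer(u, u), ci / N, co / N)
    A = np.triu(rng.random((N, N)) < probs, k=1)           # eq. (1), no self-loops
    A = (A | A.T).astype(np.int8)
    v = rng.standard_normal(P)                             # latent centroids
    B = np.sqrt(mu / N) * np.outer(v, u) + rng.standard_normal((P, N))  # eq. (2)
    Xi = rng.choice(N, size=int(rho * N), replace=False)   # revealed nodes
    return u, A, B, Xi

def node_priors(u, Xi, q=1.0):
    """Per-node priors P_{U,i} of eq. (4), as an N x 2 table of
    (P(u_i = +1), P(u_i = -1)).

    Unrevealed nodes get the uniform prior 1/2; a revealed node keeps
    its observed label with probability q (q = 1 recovers eq. (4)).
    """
    prior = np.full((len(u), 2), 0.5)
    for i in Xi:
        p_plus = q if u[i] == 1 else 1.0 - q
        prior[i] = (p_plus, 1.0 - p_plus)
    return prior
```

Sampling the whole dense probability matrix is O(N²) and only meant for small illustrative instances; at large N one would draw the O(Nd) edges directly.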
Detectability threshold and the effective signal-to-noise ratio Previous works on inference in the CSBM (Deshpande et al., 2018; Lu & Sen, 2020) established the detectability threshold in the unsupervised case, ρ = 0, to be

λ² + µ²/α = 1 ,   (13)

meaning that for a signal-to-noise ratio smaller than this, it is information-theoretically impossible to obtain any correlation with the ground truth communities. On the other hand, for snr larger than this, the works Deshpande et al. (2018); Lu & Sen (2020) demonstrate algorithms that are able to obtain a positive correlation with the ground truth communities. This detectability threshold also intuitively quantifies the interplay between the parameters: the graph-related snr λ and the covariates-related snr µ²/α. Small µ²/α generates a benchmark where the graph structure carries most of the information, while small λ generates a benchmark where the information from the covariates dominates; if we want both sources to contribute equally, we take λ and µ²/α comparable. The combination from eq. (13) plays the role of an overall effective snr and thus allows tuning the benchmarks between regions where getting good performance is challenging or easy. The threshold (13) reduces to the one well known in the pure SBM (Decelle et al., 2011) when µ = 0 and to the one well known in the unsupervised high-dimensional Gaussian mixture (Lesieur et al., 2016) when λ = 0.

3 The AMP BP Algorithm

We derive the AMP BP algorithm starting from the factor graph representation of the posterior (7). (Figure: the factor graph of the CSBM, with variable nodes for u and v and messages χ and ψ.)

The factor graph has two kinds of variable nodes, one kind for v and the other one for u. The factors are of two types: those including information about the covariates B, which form a fully connected bipartite graph between all the components of u and v, and those corresponding to the adjacency matrix A, which form a fully connected graph between the components of u.
We write the belief-propagation (BP) algorithm for this graphical model (Yedidia et al., 2003; Mézard & Montanari, 2009). It iteratively updates the so-called messages χ and ψ. These messages can be interpreted as probability distributions on the variables u_i and v_β conditioned on the absence of the target node in the graphical model. The iterative equations read (Yedidia et al., 2003; Mézard & Montanari, 2009)

χ_{β→i}(v_β) ∝ P_V(v_β) ∏_{j≠i} ψ_{j→β}(v_β) ,   ψ_{β→i}(u_i) ∝ ∫ dv_β χ_{β→i}(v_β) e^{−(B_βi − w_βi)²/2} ,   (14)

χ_{i→j}(u_i) ∝ P_{U,i}(u_i) ∏_β ψ_{β→i}(u_i) ∏_{k≠i,j} ψ_{k→i}(u_i) ,   ψ_{i→j}(u_j) ∝ Σ_{u_i} χ_{i→j}(u_i) P(A_ij | u_i, u_j) ,   (15)

χ_{j→γ}(u_j) ∝ P_{U,j}(u_j) ∏_{β≠γ} ψ_{β→j}(u_j) ∏_{k≠j} ψ_{k→j}(u_j) ,   ψ_{j→γ}(v_γ) ∝ Σ_{u_j} χ_{j→γ}(u_j) e^{−(B_γj − w_γj)²/2} ,   (16)

where w_βi = √(µ/N) v_β u_i for all i and β, and where the proportionality sign means up to the normalization factor that ensures the message sums to one over its lower index. We conjecture that the BP algorithm is asymptotically exact for the CSBM. BP is exact on graphical models that are trees, which the one of the CSBM clearly is not. The graphical model of the CSBM, however, falls into the category of graphical models for which the BP algorithm for Bayes-optimal inference is conjectured to provide asymptotically optimal performance, in the sense that, in the absence of first-order phase transitions, the algorithm iterated from random initialization reaches a fixed point whose marginals are equal to the true marginals of the posterior to leading order in N. This conjecture is supported by previous literature. The posterior (7) of the CSBM is composed of two parts that are independent of each other conditionally on the variables u: the SBM part depending on A, and the Gaussian mixture part depending on B. Previous literature proved the asymptotic optimality of the corresponding AMP for the Gaussian mixture part in Dia et al. (2016). As to the SBM part, the asymptotic optimality of BP (Decelle et al., 2011) was proven for the binary semi-supervised SBM in Yu & Polyanskiy (2022).
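For two balanced groups, the SBM-side update (15) reduces to a scalar log-likelihood ratio per edge; the sketch below (our illustrative reduction, with names of our choosing) evaluates this term under the parameterization c_i = d + λ√d, c_o = d − λ√d:

```python
import numpy as np

def sbm_edge_term(chi_plus, lam, d):
    """Log-likelihood ratio carried by one edge message.

    chi_plus is the incoming message chi_{k->i,+}, i.e. the probability
    that the sending node is in the +1 group.  The numerator (resp.
    denominator) is the probability of the edge given that the receiving
    node is +1 (resp. -1), up to the common 1/N factor that cancels.
    """
    ci = d + lam * np.sqrt(d)
    co = d - lam * np.sqrt(d)
    num = chi_plus * ci + (1 - chi_plus) * co   # receiving node in +1
    den = chi_plus * co + (1 - chi_plus) * ci   # receiving node in -1
    return np.log(num / den)
```

An uninformative incoming message (chi_plus = 1/2) yields a zero contribution, and the term changes sign under chi_plus → 1 − chi_plus, as the symmetry of the two groups requires.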
Because of the conditional independence, the optimality is expected to be preserved when we concatenate the two parts into the CSBM. For the sparse standard SBM in the unsupervised case the conjecture remains mathematically open. The above BP equations can be simplified to leading order in N to obtain the AMP BP algorithm. The details of this derivation are given in appendix A. This is done by expanding in w in the part accounting for the high-dimensional Gaussian mixture side of the graphical model. This is standard in the derivation of the AMP algorithm; see e.g. Lesieur et al. (2017). On the SBM side, the contributions of the non-edges are concatenated into an effective field, just as is done for the BP on the standard SBM in Decelle et al. (2011). The AMP BP algorithm then reads as in Algorithm 1. A version of the algorithm for an unbalanced multi-community setup is given in section D of the appendix. To give some intuition, we explain the variables AMP BP employs. The variable v̂_β is an estimate of the posterior mean of v_β, and σ_V of its variance. The variable û_i is an estimate of the posterior mean of u_i, and σ_U of its variance. Next, B^β_U is a proxy for estimating the mean of v_β in the absence of the Gaussian mixture part, and A_U for its variance; B^i_V is a proxy for estimating the mean of u_i in the absence of the SBM part, and A_V for its variance. Further, h_u can be interpreted as an external field that prevents all nodes from being assigned to the same group; χ_{i→j,+} is a marginal distribution on u_i (these variables are the messages of a sum-product message-passing algorithm); and χ_{i,+} is the marginal probability that node i is in group +1, which is the quantity we are interested in. The AMP BP algorithm can be implemented very efficiently: it takes O(NP) in time and memory, which is the minimum needed to read the input matrix B. Empirically, the number of steps to converge does not depend on N and is of order ten, as shown in Fig. 6 in appendix F.
Algorithm 1: The AMP BP algorithm.

Input: features B ∈ R^{P×N}, observed graph G, affinity matrix C, prior information P_{U,i}.
Initialization: for (ij) ∈ G, χ^{(0)}_{i→j,+} = P_{U,i}(+) + ϵ_{i→j}, û^{(0)}_i = 2P_{U,i}(+) − 1 + ϵ_i, v̂^{(0)}_β = ϵ_β, t = 0, where ϵ_{i→j}, ϵ_i and ϵ_β are small centered random variables in R.
Repeat until convergence:
  σ^i_U = 1 − (û^{(t)}_i)²
  AMP estimation of v̂:
    A_U = (µ/N) Σ_i (û^{(t)}_i)²
    B^β_U = √(µ/N) Σ_i B_βi û^{(t)}_i − (µ/N) Σ_i σ^i_U v̂^{(t)}_β
    v̂^{(t+1)}_β = B^β_U/(1 + A_U) ;  σ_V = 1/(1 + A_U)
  AMP estimation of û:
    B^i_V = √(µ/N) Σ_β B_βi v̂^{(t+1)}_β − µ α σ_V û^{(t)}_i
  Estimation of the field h:
    h_u = (1/N) Σ_i Σ_t C_{u,t} (1 + t û^{(t)}_i)/2
    h^i_u = −h_u + log P_{U,i}(u) + u B^i_V
    C_{u,t} being the affinity between groups u and t.
  BP update of the messages χ_{i→j} for (ij) ∈ G and of the marginals χ_i:
    χ^{(t+1)}_{i→j,+} = σ( h^i_+ − h^i_− + Σ_{k∈∂i∖j} log[ (c_o + 2λ√d χ^{(t)}_{k→i,+}) / (c_i − 2λ√d χ^{(t)}_{k→i,+}) ] )
    χ_{i,+} = σ( h^i_+ − h^i_− + Σ_{k∈∂i} log[ (c_o + 2λ√d χ^{(t)}_{k→i,+}) / (c_i − 2λ√d χ^{(t)}_{k→i,+}) ] )
    where σ(x) = 1/(1 + e^{−x}) is the sigmoid and ∂i denotes the nodes connected to i.
  BP estimation of û:
    û^{(t+1)}_i = 2χ_{i,+} − 1
  Update time t ← t + 1.
Output: estimated groups sign(û_i).

We provide a fast implementation of AMP BP, written in Python, in the supplementary material and in our repository.¹ The algorithm can be implemented in terms of fast vectorized operations as to the AMP part; and, as to the BP part, vectorization is possible thanks to an encoding of the sparse graph in an O(N) × O(d) array with a padding node. Computationally, running the code for a single experiment with N = 3·10⁴, α = 1 and d = 5 takes around one minute on one CPU core. We cross-check the validity of the derived AMP BP algorithm by independent Monte Carlo simulations. We sample the posterior distribution (7) with the Metropolis algorithm; the estimates of the communities are then sign(Σ_t u^t_i), u^t being the different samples. As shown in Fig. 7 in appendix F, the agreement with AMP BP is very good except at small q_U, i.e. close to the phase transition, where the Markov chain seems to take a long time to converge.
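The padding-node encoding of the sparse graph mentioned above can be sketched as follows (our minimal illustration, assuming a dense 0/1 adjacency matrix as input; the released implementation may differ):

```python
import numpy as np

def neighbor_table(A):
    """Pack the sparse graph into an N x d_max index table.

    Row i lists the neighbors of node i; short rows are filled with the
    index N of a fictitious padding node.  Messages stored at the padding
    index can be kept neutral, so the BP updates become row-wise
    vectorized gathers instead of per-node Python loops.
    """
    N = A.shape[0]
    neighbors = [np.flatnonzero(A[i]) for i in range(N)]
    d_max = max((len(n) for n in neighbors), default=0)
    table = np.full((N, d_max), N, dtype=int)  # N = padding index
    for i, n in enumerate(neighbors):
        table[i, :len(n)] = n
    return table
```

Since the graph is sparse with average degree d = O(1), the table has O(N) rows of O(d) columns, matching the stated memory footprint.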
Related work on message passing algorithms in the CSBM The AMP BP algorithm was stated for the unsupervised CSBM in section 6 of Deshpande et al. (2018), where it was numerically verified that it indeed presents the information-theoretic threshold (13). In that paper, little attention was given to the performance of this algorithm besides checking its detectability threshold. In particular, the authors did not comment on the asymptotic optimality of the accuracy achieved by this algorithm. Rather, they linearized it and studied the detectability threshold of this simplified linearized version, which is amenable to analysis via random matrix theory. This threshold matches the information-theoretic detectability threshold that was later established in Lu & Sen (2020). The linearized version of the AMP BP algorithm is a spectral algorithm; it has sub-optimal accuracy, as we illustrate below in section 3. We also note that the work Lu & Sen (2020) considered another algorithm, based on self-avoiding walks. It reaches the threshold but it is not optimal in terms of the overlap in the detectable phase or in the semi-supervised case, nor in terms of efficiency, since it is quasi-polynomial. The authors of Deshpande et al. (2018); Lu & Sen (2020) did not consider the semi-supervised case of the CSBM, whereas that is the case that has mostly been used as a benchmark in the more recent GNN literature.

¹gitlab.epfl.ch/spoc-idephics/csbm

Figure 1: Convergence to the high-dimensional limit. Overlap q_U of the fixed point of AMP BP vs the snr λ for several system sizes N. Left: unsupervised case, ρ = 0. Right: semi-supervised, ρ = 0.1. The other parameters are α = 10, µ² = 4, d = 5. We run ten experiments per point.

Bayes-optimal performance We run AMP BP and show the performance it achieves.
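The overlap q_U reported throughout measures the agreement between the estimated and true memberships. The sketch below uses one common convention (our illustration, not the paper's verbatim definition): accuracy rescaled so that 0 is chance and 1 is perfect recovery, invariant under the global sign flip that the unsupervised case cannot resolve:

```python
import numpy as np

def overlap(u_true, u_hat):
    """Rescaled overlap between two +/-1 label vectors.

    Returns 0 for random guessing and 1 for perfect recovery; the global
    relabeling of the two groups is quotiented out, since without
    revealed labels the CSBM only identifies communities up to a flip.
    """
    acc = np.mean(u_true == u_hat)
    return 2 * max(acc, 1 - acc) - 1
```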
Since the conjecture of optimality of AMP BP applies to the considered high-dimensional limit, we first check how fast the performance converges to this limit. In Fig. 1, we report the achieved overlap when increasing the size N towards +∞ while keeping the other stated parameters fixed. We conclude that taking N = 3·10⁴ is already close to the limit; finite-size effects are relatively small. Fig. 2 shows the performance for several different values of the ratio α = N/P between the size of the graph N and the dimensionality of the covariates P. Its left panel shows the transition from a non-informative fixed point q_U = 0 to an informative fixed point q_U > 0, which becomes sharp in the limit of large sizes. It occurs in the unsupervised regime ρ = 0 for α large enough. The transition is located at the critical threshold λ_c given by eq. (13). This threshold is shared by AMP BP and the spectral algorithm of Deshpande et al. (2018) in the unsupervised case. The transition is of 2nd order, meaning the overlaps vary continuously with respect to λ. As expected from statistical physics, the finite-size effects are stronger close to the threshold; this means that the variability from one experiment to another is larger close to λ_c. The limit α → +∞, in our notation, leads back to the standard SBM, and the phase transition is at λ = 1 in that limit. Taking α < µ² or adding supervision ρ > 0 (Fig. 2 right) makes the 2nd-order transition in the optimal performance disappear. The spectral algorithm given by Deshpande et al. (2018) is sub-optimal. In the unsupervised case, it is a linear approximation of AMP BP, and the performances of the two are relatively close. In the semi-supervised case, a significant gap appears because the spectral algorithm does not naturally use the additional information given by the revealed labels; it performs as if ρ = 0.

Dense limit We can consider the dense limit of the CSBM where the average degree d goes to infinity.
In this limit the SBM is equivalent to a low-rank matrix factorization problem and can be rigorously analyzed. The Bayes-optimality of the belief-propagation-based algorithm is then provable.

Figure 2: Performances of AMP BP and of the spectral algorithm of Deshpande et al. (2018), sec. 4. Overlap q_U of the fixed point of the algorithms vs the snr λ for a range of ratios α. Left: unsupervised, ρ = 0; right: semi-supervised, ρ = 0.1. Vertical dashed lines on the left: theoretical thresholds λ_c to partial recovery, eq. (13). N = 3·10⁴, µ² = 4, d = 5. We run ten experiments per point.

The dense limit is defined by c_i and c_o of order N with c_i − c_o = ν√N. The adjacency matrix A can then be approximated by a noisy rank-one matrix (Lesieur et al., 2017; Deshpande et al., 2018),

Ã_ij = √(ν/N) u_i u_j + Ξ_ij ,   (17)

where the Ξ_ij are standard independent normals. The CSBM is then a joint uu and uv matrix factorization problem. The BP on the SBM is approximated by an AMP algorithm, and one can glue the two AMPs together. One can prove that the resulting AMP AMP algorithm is Bayes-optimal. We state in appendix C the AMP AMP algorithm for the CSBM and provide its state evolution (SE) equations. The rank-one approximation is valid for average degrees d moderately large. Numerical simulations show that d ≈ 20 is enough at N = 10⁴ (Duranthon & Zdeborová, 2023; Shi et al., 2023).

Parameter estimation and Bethe free entropy In case the parameters θ = (c_i, c_o, µ) of the CSBM are not known, they can be estimated using expectation-maximization (EM). This was proposed in Decelle et al. (2011) for the affinity coefficients and the group sizes of the SBM. In the Bayesian framework, one has to find the most probable value of θ. This is equivalent to maximizing the free entropy φ (8) over θ.
The exact free entropy φ is not easily computable because it requires integrating over all configurations. It can however be computed thanks to AMP BP: at a fixed point of the algorithm, φ can be expressed from the values of the variables. It is then called the Bethe free entropy φ_Bethe in the literature. The derivation is presented in appendix B. φ_Bethe converges in probability to φ in the large-N limit: for any ϵ > 0, P(|φ − φ_Bethe| > ϵ) → 0 as N → ∞. For compactness, we write χ_{i→j,−} = 1 − χ_{i→j,+} and pack the connectivity coefficients in the matrix C. We have

N φ_Bethe = Σ_i log Σ_u e^{h^i_u} ∏_{k∈∂i} Σ_t C_{u,t} χ_{k→i,t} − Σ_{(ij)∈G} log Σ_{u,t} C_{u,t} χ_{i→j,u} χ_{j→i,t} − Nd/2
  + Σ_β [ (B^β_U)²/(2(1 + A_U)) − (1/2) log(1 + A_U) ]
  + √(µ/N) Σ_{β,i} B_βi v̂_β û_i − (µ/2N) Σ_{β,i} ( v̂²_β σ^i_U + û²_i σ_V + (1/2) v̂²_β û²_i ) ,   (18)

where h^i_u, A_U and B^β_U are given by the algorithm. One can then estimate the parameters θ by numerically maximizing φ_Bethe(θ), or more efficiently by iterating the extremality condition ∂_θ φ_Bethe = 0, given in appendix B, which becomes equivalent to the expectation-maximization algorithm. The Bethe free entropy is also used to determine the location of a first-order phase transition in case the AMP BP algorithm has a different fixed point when running from the random initialization as opposed to running from the initialization informed by the ground truth values of the hidden variables u, v.

Figure 3: Comparison against GPR-GNN (Chien et al., 2021). Overlap q_U achieved by the algorithms, vs ϕ = (2/π) arctan(λ√α/µ). Left: few nodes revealed, ρ = 0.025; right: more nodes revealed, ρ = 0.6. For GPR-GNN we plot the results of Fig. 2 and tables 5 and 6 from Chien et al. (2021). N = 5·10³, α = 2.5, ϵ = 3.25, d = 5. We run ten experiments per point.
In analogy with the standard SBM (Decelle et al., 2011) and the standard high-dimensional Gaussian mixture (Lesieur et al., 2016; 2017), a first-order transition is expected to appear when there are multiple groups or when one of the two groups is much smaller than the other. We only study the case of two balanced groups, where we observed that these two initializations converge to the same fixed point in all our experiments.

Semi-supervision and noisy labels Semi-supervised AMP BP can be straightforwardly generalized to deal with noisy labels, thanks to the Bayesian framework we consider. It is sufficient to set the semi-supervised prior P_{U,i} to the actual distribution of the labels. For instance, if the observed label of the train node i is the true u_i with probability q and −u_i otherwise, then P_{U,i}(s) = q δ_{s,u_i} + (1 − q) δ_{s,−u_i}.

4 Comparison against graph neural networks

AMP BP gives upper bounds on the performance of any other algorithm for solving the CSBM. It is thus highly interesting to compare it to other algorithms and to see how far from optimality they are.

4.1 Comparison to GPR-GNN from previous literature

The CSBM has been used as a synthetic benchmark many times (Chien et al., 2021; Cong et al., 2021; Fu et al., 2021; Lei et al., 2022) to assess new architectures of GNNs or new algorithms. These works do not compare their results to the optimal performance. We propose to do so. As an illustrative example, we reproduce the experiments from Fig. 2 of the well-known work Chien et al. (2021). The authors of Chien et al. (2021) proposed a GNN based on a generalized PageRank method, called GPR-GNN. The authors test it on the CSBM for node classification and show it has better accuracy than many other models, for both λ > 0 (homophilic graph) and λ < 0 (heterophilic graph). We reproduce their results in Fig. 3 and compare them to the optimal performance given by AMP BP. The authors of Chien et al.
(2021) use a different parameterization of the CSBM: they consider λ² + µ²/α = 1 + ϵ and ϕ = (2/π) arctan(λ√α/µ).

Figure 4: Comparison to GNNs of various architectures and convergence to a high-dimensional limit. Overlap q_U achieved by the GNNs, vs the snr λ. Left: general convolution for different numbers of layers K; middle: different types of convolutions (graph, general and attention convolution, plus logistic regression), at the best K (the detailed results for every K are reported in Fig. 8 of appendix F); right: general convolution at K = 3 for different sizes N. The other parameters are N = 3·10⁴, α = 10, µ² = 4, d = 5, ρ = 0.1. We run five experiments per point.

We see from Fig. 3 that this state-of-the-art GNN can be far from optimality. For the worst parameters in the figure, GPR-GNN reaches an overlap 50% lower than the accuracy of AMP BP. Fig. 3 left shows that the gap is larger when the training labels are scarce, at ρ = 2.5%. When enough data points are given (ρ = 60%, right), GPR-GNN is rather close to optimality. However, this set of parameters seems easy, since at ϕ = 0 simple logistic regression is also close to AMP BP. The authors of Chien et al. (2021); Fu et al. (2021); Lei et al. (2022) take ϵ > 0, thus considering only parameters in the detectable regime. We argue this regime is more suitable for unsupervised learning than for semi-supervised learning, because the labels then carry little additional information. From left to right in Fig. 3 we reveal more than one-half of the labels but the optimal performance increases by at most 4%. To have a substantial difference between unsupervised and semi-supervised learning one should take λ² + µ²/α < 1, as we do in Fig. 2. This regime is then more suitable to assess the learning of empirical risk minimizers (ERMs) such as GNNs. We use this regime in the next section.
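The graph-free logistic-regression baseline that appears in the figures can be sketched as follows (an illustrative gradient-descent implementation with names of our choosing, not the exact setup of the experiments):

```python
import numpy as np

def logistic_baseline(B, u, Xi, lr=0.1, steps=500):
    """Logistic regression on the node covariates only, ignoring the graph.

    B is the P x N feature matrix, u the +/-1 labels, Xi the indices of
    the revealed (training) nodes.  Returns +/-1 predictions for all nodes.
    """
    X = B.T                                    # N x P design matrix
    w = np.zeros(X.shape[1])
    y = (u[Xi] + 1) / 2                        # map labels to {0, 1}
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X[Xi] @ w))   # sigmoid predictions
        w -= lr * X[Xi].T @ (p - y) / len(Xi)  # cross-entropy gradient
    return np.where(X @ w >= 0, 1, -1)
```

Since this baseline never sees A, its accuracy depends only on the covariate snr µ²/α, which is why it becomes competitive precisely in the small-λ region of the figures.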
4.2 Baseline graph neural networks

In this section, we evaluate the performance of a range of baseline GNNs on the CSBM. We show again that the GNNs we consider do not reach optimality and that there is room for improving these architectures. We consider the same task as before: on a single instance of the CSBM, a fraction ρ of the node labels is revealed and the GNN must guess the hidden labels. As to the parameters of the CSBM, we work in the regime where supervision is necessary for good inference; i.e. we take µ²/α < 1. We use the architectures implemented by the GraphGym package (You et al., 2020). It allows one to design the intra-layer and inter-layer architecture of the GNN in a simple and modular manner. The parameters we considered are the number K of message-passing layers, the convolution operation (among graph convolution, general convolution and graph-attention convolution) and the internal dimension h. We fixed h = 64; we tried higher values of h at K = 2, but we observed slight or no differences. One GNN is trained to perform node classification on one instance of the CSBM on the whole graph, given the set Ξ of revealed nodes. It is evaluated on the remaining nodes. More details on the architecture and the training are given in appendix E. Fig. 4 shows that there is a gap between the optimal performance and that of all the architectures we tested. The GNNs reach an overlap at least about ten percent lower than optimality. They are close to optimality only near λ = √d, when the two groups are very well separated. The gap is larger at small λ. At small λ it may be that the GNNs rely too much on the graph while it carries little information: logistic regression, which uses only the node features, performs better. The shown results are close to being asymptotic in the following sense.
Since the CSBM is a synthetic dataset, we can vary N, train different GNNs and check whether their test accuracies are the same. Fig. 4 right shows that the test accuracies converge to a limit at large N and that taking N = 3·10⁴ is enough to work in this large-size limit of the GNNs on the CSBM. These experiments lead to another finding. We observe that there is an optimal number K of message-passing layers that depends on λ. Having K too large mixes the covariates of the two groups and diminishes the performance. This effect seems to be mitigated by the attention mechanism: in Fig. 8 right of appendix F, the performance of the graph-attention GNN increases with K at every λ. It is an interesting question whether the optimal performance can be reached by a GNN. One could argue that AMP BP is a sophisticated algorithm tailored for this problem, while GNNs are more generic. However, Fig. 4 shows that even logistic regression can be close to optimality at λ = 0. We study the effect of the training labels. We consider a setting where µ²/α < 1, so supervision is necessary for λ < 1. We check this is the case by letting the training ratio ρ go to 0 in Fig. 9 of appendix F. We observe that the resulting accuracies q_U of AMP BP and the GNNs drop to 0. The transition seems to be sharp. AMP BP always performs better, as expected.

4.3 Comparison to Baranwal et al. (2023), a locally Bayes-optimal architecture

The authors of Baranwal et al. (2023) propose a GNN whose architecture implements the locally Bayes-optimal classifier for the CSBM. Given a node, they consider only its neighborhood; since the graph is sparse, it is tree-like and the likelihood of the node has a simple analytical expression. They parameterize a part of this classifier by a multi-layer perceptron that is trained, and a connectivity matrix between communities is also learned. The resulting architecture is a GNN with a specific aggregation function. The authors did not name it; we propose the name clip GNN.
They denote by l the size of the neighborhoods it processes and by L the number of layers of the perceptron. At l = 0 there is no aggregation and clip GNN is a multi-layer perceptron classifying a Gaussian mixture. The setting considered in Baranwal et al. (2023) is slightly different from ours: clip GNN was derived in a low-dimensional setting P = O(1). Yet for large P it can still be run, and its local Bayes-optimality should remain valid. For fairness we take P small or not too large; accordingly, we have to scale µ, the SNR of the Gaussian mixture: according to (13) we take µ² of order α = N/P. The comparison is shown in Fig. 5. AMP BP is considerably better for all parameters we tried. The results are similar to Fig. 4 left in the sense that increasing the neighborhood size l improves the performance of their architecture, but a large gap to the performance of AMP BP remains. Their architecture gets closer to AMP BP at large α, i.e. small P.

5 Conclusion

We provide the AMP BP algorithm that solves the balanced two-group CSBM optimally in the asymptotic limit of large dimension, in both the unsupervised and semi-supervised cases. We show a sizable difference between this optimal performance and that of the recently proposed GNNs to which we compare. We hope that future works using the CSBM as an artificial dataset will compare to this optimal AMP BP algorithm, and we expect that this will help in developing more powerful GNN architectures and training methods. An interesting future direction would be to generalize the results of Shi et al. (2023) on the theoretical performance of a one-layer graph-convolution GNN trained on the CSBM. Another promising direction would be unrolling AMP BP to form a new GNN architecture, as Chen et al. (2020) did for BP on the SBM and Borgerding et al. (2017) for AMP in compressed sensing, and to see whether it can close the observed algorithmic gap.
We expect this new GNN architecture to be based on non-backtracking walks and to incorporate skip connections between its layers.

Acknowledgement

We acknowledge funding from the Swiss National Science Foundation grant SMArtNet (grant number 212049).

Figure 5: Comparison against clip GNN (Baranwal et al., 2023). Overlap q_U achieved by the algorithms, vs λ. l is the size of the neighborhood clip GNN processes. Left: µ² = 50 and α = 50, i.e. P = 200 (clip GNN with l = 1 to 5); right: µ² = 500 and α = 500, i.e. P = 20 (clip GNN with l = 1, 3, 5, 7). The other parameters are N = 10^4 (N = 5·10^3 for the two largest l), ρ = 0.05, d = 5, L = 1. For clip GNN we run the code kindly provided by the authors; we run five experiments per point.

References

Emmanuel Abbé. Community detection and stochastic block models: recent developments. The Journal of Machine Learning Research, 18(1):6446-6531, 2017. arXiv:1703.10146.
Emmanuel Abbé, Jianqing Fan, and Kaizheng Wang. An ℓp theory of PCA and spectral clustering. Annals of Statistics, 50(4):2359-2385, 2022. arXiv:2006.14062.
Aseem Baranwal, Kimon Fountoulakis, and Aukosh Jagannath. Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization. In Proceedings of the 38th International Conference on Machine Learning, 2021. arXiv:2102.06966.
Aseem Baranwal, Aukosh Jagannath, and Kimon Fountoulakis. Optimality of message-passing architectures for sparse graphs. 2023. arXiv:2305.10391.
Mark Borgerding, Philip Schniter, and Sundeep Rangan. AMP-inspired deep networks for sparse linear inverse problems. IEEE Transactions on Signal Processing, 65(16):4293-4308, 2017. arXiv:1612.01183.
Guillaume Braun, Hemant Tyagi, and Christophe Biernacki.
An iterative clustering algorithm for the contextual stochastic block model with optimality guarantees. In International Conference on Learning Representations, 2022. arXiv:2112.10467.
Zhengdao Chen, Lisha Li, and Joan Bruna. Supervised community detection with line graph neural networks. In International Conference on Learning Representations, 2020. arXiv:1705.08415.
Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized PageRank graph neural network. In International Conference on Learning Representations, 2021. arXiv:2006.07988.
Weilin Cong, Morteza Ramezani, and Mehrdad Mahdavi. On provable benefits of depth in training graph convolutional networks. In 35th Conference on Neural Information Processing Systems, 2021. arXiv:2110.15174.
Aurélien Decelle, Florent Krzakala, Cristopher Moore, and Lenka Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84, 2011. arXiv:1109.3041.
Yash Deshpande, Subhabrata Sen, Andrea Montanari, and Elchanan Mossel. Contextual stochastic block models. In Advances in Neural Information Processing Systems, volume 31, 2018. arXiv:1807.09596.
Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault Lesieur, Lenka Zdeborová, et al. Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula. Advances in Neural Information Processing Systems, 29, 2016. arXiv:1606.04142.
O. Duranthon and L. Zdeborová. Neural-prior stochastic block model. MLST, 2023. arXiv:2303.09995.
Santo Fortunato. Community detection in graphs. Physics Reports, 486(3-5):75-174, 2010. arXiv:0906.0612.
Guoji Fu, Peilin Zhao, and Yatao Bian. p-Laplacian based graph neural networks. In Proceedings of the 39th International Conference on Machine Learning, 2022. arXiv:2111.07337.
Adrián Javaloy, Pablo Sanchez-Martin, Amit Levi, and Isabel Valera. Learnable graph convolutional attention networks. In International Conference on Learning Representations, 2023. arXiv:2211.11853.
Fountoulakis Kimon, Dake He, Silvio Lattanzi, Bryan Perozzi, Anton Tsitsulin, and Shenghao Yang. On classification thresholds for graph attention with edge features. 2022. arXiv:2210.10014.
Runlin Lei, Zhen Wang, Yaliang Li, Bolin Ding, and Zhewei Wei. EvenNet: Ignoring odd-hop neighbors improves robustness of graph neural networks. In 36th Conference on Neural Information Processing Systems, 2022. arXiv:2205.13892.
Thibault Lesieur, Caterina De Bacco, Jess Banks, Florent Krzakala, Cris Moore, and Lenka Zdeborová. Phase transitions and optimal algorithms in high-dimensional Gaussian mixture clustering. In 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 601-608. IEEE, 2016. arXiv:1610.02918.
Thibault Lesieur, Florent Krzakala, and Lenka Zdeborová. Constrained low-rank matrix estimation: Phase transitions, approximate message passing and applications. Journal of Statistical Mechanics: Theory and Experiment, 2017(7):073403, 2017. arXiv:1701.00858.
Chen Lu and Subhabrata Sen. Contextual stochastic block model: Sharp thresholds and contiguity. 2020. arXiv:2011.09841.
Marc Mézard and Andrea Montanari. Information, physics, and computation. Oxford University Press, 2009.
Cheng Shi, Liming Pan, Hong Hu, and Ivan Dokmanić. Homophily modulates double descent generalization in graph convolution networks. PNAS, 121(8), 2023. arXiv:2212.13069.
Rongzhe Wei, Haoteng Yin, Junteng Jia, Austin R. Benson, and Pan Li. Understanding non-linearity in graph neural networks from the Bayesian-inference perspective. In Conference on Neural Information Processing Systems, 2022. arXiv:2207.11311.
Xinyi Wu, Zhengdao Chen, William Wang, and Ali Jadbabaie. A non-asymptotic analysis of oversmoothing in graph neural networks.
In International Conference on Learning Representations, 2023. arXiv:2212.10701.
Bowei Yan and Purnamrita Sarkar. Covariate regularized community detection in sparse graphs. Journal of the American Statistical Association, 116(534):734-745, 2021. arXiv:1607.02675.
Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 2003.
Jiaxuan You, Rex Ying, and Jure Leskovec. Design space for graph neural networks. In 34th Conference on Neural Information Processing Systems, 2020. arXiv:2011.08843.
Qian Yu and Yury Polyanskiy. Ising model on locally tree-like graphs: Uniqueness of solutions to cavity equations. 2022. arXiv:2211.15242.

A Derivation of the algorithm

We recall the setup. We have N nodes $u_i \in \{-1,+1\}$ and P coordinates $v_\beta \in \mathbb{R}$; we are given the $P \times N$ matrix
$$B_{\beta i} = \sqrt{\tfrac{\mu}{N}}\, v_\beta u_i + Z_{\beta i},$$
where $Z_{\beta i}$ is standard Gaussian, and we are given a graph whose edges $A_{ki} \in \{0,1\}$ are drawn according to
$$P(A_{ki}|u_k,u_i) \propto C_{u_k,u_i}^{A_{ki}} (1 - C_{u_k,u_i}/N)^{1-A_{ki}}.$$
We define $w_{\beta i} = \sqrt{\mu/N}\, v_\beta u_i$ and the output channel $e^{g(B,w)} = e^{-(B-w)^2/2}$. Later we approximate the output channel by its expansion near $w = 0$; we have
$$\frac{\partial g}{\partial w}(w=0) = B_{\beta i} \quad \text{and} \quad \left(\frac{\partial g}{\partial w}(w=0)\right)^2 + \frac{\partial^2 g}{\partial w^2}(w=0) = B_{\beta i}^2 - 1.$$
We write belief propagation for this problem, starting from its factor graph (figure not reproduced here: channel factors connect each pair $(v_\beta, u_i)$ and SBM factors connect each pair $(u_i, u_j)$). There are six different messages that stem from the factor graph; they are:
$$\chi^{i\to j}_{u_i} \propto P_{U,i}(u_i) \prod_\beta \psi^{\beta\to i}_{u_i} \prod_{k\neq i,j} \psi^{k\to i}_{u_i} \quad (19)$$
$$\psi^{i\to j}_{u_j} \propto \sum_{u_i} \chi^{i\to j}_{u_i}\, P(A_{ij}|u_i,u_j) \quad (20)$$
$$\chi^{i\to\beta}_{u_i} \propto P_{U,i}(u_i) \prod_{\gamma\neq\beta} \psi^{\gamma\to i}_{u_i} \prod_{k\neq i} \psi^{k\to i}_{u_i} \quad (21)$$
$$\psi^{i\to\beta}_{v_\beta} \propto \sum_{u_i} \chi^{i\to\beta}_{u_i}\, e^{g(B_{\beta i}, w_{\beta i})} \quad (22)$$
$$\chi^{\beta\to i}_{v_\beta} \propto P_V(v_\beta) \prod_{j\neq i} \psi^{j\to\beta}_{v_\beta} \quad (23)$$
$$\psi^{\beta\to i}_{u_i} \propto \int \mathrm{d}v_\beta\, \chi^{\beta\to i}_{v_\beta}\, e^{g(B_{\beta i}, w_{\beta i})} \quad (24)$$
where the proportionality sign means up to a normalization factor that ensures the message sums to one over its lower index. We simplify these equations following closely Lesieur et al. (2017) and Decelle et al. (2011).
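The generative model just recalled is straightforward to sample. The sketch below is a minimal illustration; `sample_csbm` and its argument names are ours, and we use a symmetric two-group connectivity with intra-/inter-community affinities `c_in`/`c_out`, so that $C_{u_k,u_i}$ equals `c_in` when labels agree and `c_out` otherwise.

```python
import numpy as np

def sample_csbm(N, P, mu, c_in, c_out, seed=0):
    # Labels u in {-1,+1}, latent centroid v, covariates
    # B = sqrt(mu/N) * v u^T + Z, and a sparse SBM graph A whose edge
    # probabilities are C_{u_k,u_i}/N.
    rng = np.random.default_rng(seed)
    u = rng.choice([-1, 1], size=N)
    v = rng.standard_normal(P)
    Z = rng.standard_normal((P, N))
    B = np.sqrt(mu / N) * np.outer(v, u) + Z
    C = np.where(np.equal.outer(u, u), c_in, c_out)  # C_{u_k, u_i}
    A = rng.random((N, N)) < C / N
    A = np.triu(A, 1)
    A = (A | A.T).astype(int)                        # symmetric, no self-edges
    return u, v, B, A

u, v, B, A = sample_csbm(N=200, P=50, mu=4.0, c_in=7.0, c_out=3.0)
print(B.shape, A.shape)  # (50, 200) (200, 200)
```

With these conventions the average degree is d = (c_in + c_out)/2 and the graph signal strength is λ = (c_in - c_out)/(2√d), matching the parameterization used in the experiments.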
A.1 Gaussian mixture part

We parameterize messages (22) and (24) as Gaussians by expanding $g$:
$$\psi^{i\to\beta}_{v_\beta} \propto \sum_{u_i} \chi^{i\to\beta}_{u_i}\, e^{g(B_{\beta i},0)}\big(1 + B_{\beta i} w_{\beta i} + (B_{\beta i}^2 - 1)\, w_{\beta i}^2/2\big) \quad (25)$$
$$\psi^{\beta\to i}_{u_i} \propto \int \mathrm{d}v_\beta\, \chi^{\beta\to i}_{v_\beta}\, e^{g(B_{\beta i},0)}\big(1 + B_{\beta i} w_{\beta i} + (B_{\beta i}^2 - 1)\, w_{\beta i}^2/2\big) \quad (26)$$
and we define the means and variances of the messages:
$$\hat v_{\beta\to i} = \int \mathrm{d}v\, \chi^{\beta\to i}_v\, v\,; \qquad \sigma^{\beta\to i}_V = \int \mathrm{d}v\, \chi^{\beta\to i}_v\, (v^2 - \hat v_{\beta\to i}^2) \quad (27)$$
$$\hat u_{i\to\beta} = \sum_u \chi^{i\to\beta}_u\, u\,; \qquad \sigma^{i\to\beta}_U = \sum_u \chi^{i\to\beta}_u\, (u^2 - \hat u_{i\to\beta}^2) \quad (28)$$
We assemble the products of messages into the target-dependent fields
$$B^{i\to\beta}_V = \sqrt{\tfrac{\mu}{N}} \sum_{\gamma\neq\beta} B_{\gamma i}\,\hat v_{\gamma\to i}\,; \qquad B^{\beta\to i}_U = \sqrt{\tfrac{\mu}{N}} \sum_{j\neq i} B_{\beta j}\,\hat u_{j\to\beta} \quad (29)$$
$$A^{i\to\beta}_V = \tfrac{\mu}{N} \sum_{\gamma\neq\beta} \big[ B_{\gamma i}^2\, \hat v_{\gamma\to i}^2 - (B_{\gamma i}^2 - 1)(\hat v_{\gamma\to i}^2 + \sigma^{\gamma\to i}_V) \big] \quad (30)$$
$$A^{\beta\to i}_U = \tfrac{\mu}{N} \sum_{j\neq i} \big[ B_{\beta j}^2\, \hat u_{j\to\beta}^2 - (B_{\beta j}^2 - 1)(\hat u_{j\to\beta}^2 + \sigma^{j\to\beta}_U) \big] \quad (31)$$
and into the target-independent fields
$$B^{i}_V = \sqrt{\tfrac{\mu}{N}} \sum_{\beta} B_{\beta i}\,\hat v_{\beta\to i}\,; \qquad B^{\beta}_U = \sqrt{\tfrac{\mu}{N}} \sum_{j} B_{\beta j}\,\hat u_{j\to\beta} \quad (32)$$
$$A^{i}_V = \tfrac{\mu}{N} \sum_{\beta} \big[ B_{\beta i}^2\, \hat v_{\beta\to i}^2 - (B_{\beta i}^2 - 1)(\hat v_{\beta\to i}^2 + \sigma^{\beta\to i}_V) \big] \quad (33)$$
$$A^{\beta}_U = \tfrac{\mu}{N} \sum_{j} \big[ B_{\beta j}^2\, \hat u_{j\to\beta}^2 - (B_{\beta j}^2 - 1)(\hat u_{j\to\beta}^2 + \sigma^{j\to\beta}_U) \big] \quad (34)$$
so that we can write the messages of eqs. (19), (21) and (23) in closed form as
$$\chi^{i\to j}_{u_i} \propto P_{U,i}(u_i)\, e^{u_i B^{i}_V - u_i^2 A^{i}_V/2} \prod_{k\neq i,j} \sum_{u_k} \chi^{k\to i}_{u_k}\, P(A_{ki}|u_k,u_i) \quad (35)$$
$$\chi^{i\to\beta}_{u_i} \propto P_{U,i}(u_i)\, e^{u_i B^{i\to\beta}_V - u_i^2 A^{i\to\beta}_V/2} \prod_{k\neq i} \sum_{u_k} \chi^{k\to i}_{u_k}\, P(A_{ki}|u_k,u_i) \quad (36)$$
$$\chi^{\beta\to i}_{v_\beta} \propto P_V(v_\beta)\, e^{v_\beta B^{\beta\to i}_U - v_\beta^2 A^{\beta\to i}_U/2} \quad (37)$$
Since we sum over $u = \pm 1$, the $A_V$ terms can be absorbed into the normalization factor and we can omit them.

A.2 SBM part

We work out the SBM part using standard simplifications.
We define the marginals and the corresponding fields by
$$\chi^{i}_{u_i} \propto P_{U,i}(u_i)\, e^{u_i B^{i}_V} \prod_{k\neq i} \sum_{u_k} \chi^{k\to i}_{u_k}\, P(A_{ki}|u_k,u_i) \quad (38)$$
$$h_{u_i} = \frac{1}{N} \sum_{k} \sum_{u_k} C_{u_k,u_i}\, \chi^{k}_{u_k} \quad (39)$$
Simplifications give
$$\chi^{i\to j}_{u_i} = \chi^{i}_{u_i} \ \text{if } (ij)\notin G\,; \ \text{else} \quad (40)$$
$$\chi^{i\to j}_{u_i} \propto P_{U,i}(u_i)\, e^{u_i B^{i}_V}\, e^{-h_{u_i}} \prod_{k\in\partial i\setminus j} \sum_{u_k} C_{u_k,u_i}\, \chi^{k\to i}_{u_k} \quad (41)$$
$$\chi^{i}_{u_i} \propto P_{U,i}(u_i)\, e^{u_i B^{i}_V}\, e^{-h_{u_i}} \prod_{k\in\partial i} \sum_{u_k} C_{u_k,u_i}\, \chi^{k\to i}_{u_k} \quad (42)$$
$$\chi^{i\to\beta}_{u_i} \propto P_{U,i}(u_i)\, e^{u_i B^{i\to\beta}_V}\, e^{-h_{u_i}} \prod_{k\in\partial i} \sum_{u_k} C_{u_k,u_i}\, \chi^{k\to i}_{u_k} \quad (43)$$

A.3 Update functions

The estimators can be updated via the functions
$$f_V(A,B) = \frac{\int \mathrm{d}v\, v\, P_V(v)\, e^{Bv - Av^2/2}}{\int \mathrm{d}v\, P_V(v)\, e^{Bv - Av^2/2}} = \frac{B}{A+1} \quad (44)$$
(the last equality holds for the standard Gaussian prior $P_V$),
$$f_U(A,B,\chi) = \frac{\sum_u u\, P_{U,i}(u)\, e^{Bu}\, \chi_u}{\sum_u P_{U,i}(u)\, e^{Bu}\, \chi_u} \quad (45)$$
$$\partial_B f_U = 1 - f_U^2 \quad (46)$$
The update is
$$\hat v_{\beta\to i} = f_V(A^{\beta\to i}_U, B^{\beta\to i}_U)\,; \qquad \sigma^{\beta\to i}_V = \partial_B f_V(A^{\beta\to i}_U, B^{\beta\to i}_U) \quad (47)$$
$$\hat u_{i\to\beta} = f_U(A^{i\to\beta}_V, B^{i\to\beta}_V, \hat\chi^i)\,; \qquad \sigma^{i\to\beta}_U = \partial_B f_U(A^{i\to\beta}_V, B^{i\to\beta}_V, \hat\chi^i) \quad (48)$$
where $\hat\chi^i_u = e^{-h_u} \prod_{k\in\partial i} \sum_{u_k} C_{u_k,u}\, \chi^{k\to i}_{u_k}$.

A.4 Time indices

We interleave the AMP part and the BP part as follows: within one iteration we first compute $A_U, B_U^{(t+1)}$, then $\hat v, \sigma_V^{(t+1)}$; from these we compute $A_V, B_V^{(t+1)}$ and, on the BP side, $\hat\chi, h^{(t+1)}$ from $\chi^{(t)}$; finally we obtain $\hat u, \sigma_U^{(t+1)}$ and $\chi^{(t+1)}$. The quantities $\hat u_{i\to\beta}$ and $\chi^i$ are close; we make this statement precise in the next section.

A.5 Additional simplifications preserving asymptotic accuracy

We introduce the target-independent estimators
$$\hat v_\beta = f_V(A^{\beta}_U, B^{\beta}_U)\,; \qquad \sigma^{\beta}_V = \partial_B f_V(A^{\beta}_U, B^{\beta}_U) \quad (49)$$
$$\hat u_i = f_U(A^{i}_V, B^{i}_V, \hat\chi^i) = \sum_u u\, \chi^{i}_u\,; \qquad \sigma^{i}_U = \partial_B f_U(A^{i}_V, B^{i}_V, \hat\chi^i) \quad (50)$$
This makes the message of eq. (43) redundant: we can directly express $\hat u_i$, the estimator on the AMP side, as a simple function of $\chi^i_u$, the estimator on the BP side. We express the target-independent $A$s and $B$s as functions of these.
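The scalar update functions above admit closed forms that are easy to check numerically. Below is a sketch assuming a standard Gaussian prior on $v$ and a possibly biased prior `p_plus` on $u = +1$; the function names are ours. For $u \in \{-1,+1\}$ the $A$-dependence of $f_U$ drops out since $u^2 = 1$, consistent with the remark after eq. (37), so prior and BP message enter only through a log-odds shift of the field $B$.

```python
import numpy as np

def f_V(A, B):
    # Posterior mean of v under a standard Gaussian prior, eq. (44): B/(A+1).
    return B / (A + 1.0)

def dB_f_V(A, B):
    # Its B-derivative, 1/(A+1), gives the posterior variance sigma_V.
    return 1.0 / (A + 1.0)

def f_U(B, chi_plus, p_plus=0.5):
    # Posterior mean of u in {-1,+1}, eq. (45). chi_plus is the BP message
    # chi_{u=+1}; prior and message combine into a log-odds added to B.
    log_odds = 0.5 * np.log((p_plus * chi_plus) / ((1 - p_plus) * (1 - chi_plus)))
    return np.tanh(B + log_odds)

# Check the identity dB f_U = 1 - f_U^2, eq. (46), by finite differences.
B0, chi, eps = 0.3, 0.7, 1e-6
num = (f_U(B0 + eps, chi) - f_U(B0 - eps, chi)) / (2 * eps)
print(abs(num - (1 - f_U(B0, chi) ** 2)) < 1e-5)  # True
```

The finite-difference check confirms eq. (46), which is what allows $\sigma_U$ to be obtained from the same function as $\hat u$ at no extra cost.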
We evaluate the difference between the target-independent and the target-dependent estimators and we obtain
$$A^{i,(t+1)}_V = A^{i\to\beta,(t+1)}_V\,; \qquad A^{\beta,(t+1)}_U = A^{\beta\to i,(t+1)}_U \quad (51)$$
$$B^{i,(t+1)}_V = \sqrt{\tfrac{\mu}{N}} \sum_\beta B_{\beta i}\, \hat v^{(t+1)}_\beta - \tfrac{\mu}{N} \sum_\beta B_{\beta i}^2\, \sigma^{\beta,(t+1)}_V\, \hat u^{(t)}_i \quad (52)$$
$$B^{\beta,(t+1)}_U = \sqrt{\tfrac{\mu}{N}} \sum_i B_{\beta i}\, \hat u^{(t)}_i - \tfrac{\mu}{N} \sum_i B_{\beta i}^2\, \sigma^{i,(t)}_U\, \hat v^{(t)}_\beta \quad (53)$$
We further notice that $B_{\beta i}^2$ concentrates on one; this simplifies the equations to
$$A^{(t+1)}_V = \tfrac{\mu}{N} \sum_\beta \hat v^{(t+1),2}_\beta\,; \qquad A^{(t+1)}_U = \tfrac{\mu}{N} \sum_i \hat u^{(t+1),2}_i \quad (54)$$
$$B^{i,(t+1)}_V = \sqrt{\tfrac{\mu}{N}} \sum_\beta B_{\beta i}\, \hat v^{(t+1)}_\beta - \tfrac{\mu}{N} \sum_\beta \sigma^{\beta,(t+1)}_V\, \hat u^{(t)}_i \quad (55)$$
$$B^{\beta,(t+1)}_U = \sqrt{\tfrac{\mu}{N}} \sum_i B_{\beta i}\, \hat u^{(t)}_i - \tfrac{\mu}{N} \sum_i \sigma^{i,(t)}_U\, \hat v^{(t)}_\beta \quad (56)$$
The $A$s then no longer depend on the node; this simplifies $\sigma_V$:
$$\sigma^{(t+1)}_V = 1/(1 + A^{(t+1)}_U) \quad (57)$$
$$\hat v^{(t+1)}_\beta = \sigma^{(t+1)}_V\, B^{\beta,(t+1)}_U \quad (58)$$
$$B^{i,(t+1)}_V = \sqrt{\tfrac{\mu}{N}} \sum_\beta B_{\beta i}\, \hat v^{(t+1)}_\beta - \tfrac{\mu}{\alpha}\, \sigma^{(t+1)}_V\, \hat u^{(t)}_i \quad (59)$$
Last, we express all the updates in terms of $\chi_{u=+1}$, using $\chi_{u=-1} = 1 - \chi_{u=+1}$. This gives the algorithm stated in the main part.

B Free entropy and estimation of the parameters

To compute the Bethe free entropy we start from the factor graph. Factor nodes connect two variables, so the free entropy is