Flow Matching for Denoised Social Recommendation

Yinxuan Huang 1 *, Ke Liang 1 *, Zhuofan Dong 2, Xiaodong Qu 3, Tianxiang Wang 1, Yue Han 1, Jingao Xu 1, Bin Zhou 1, Ye Wang 1

Abstract: Graph-based social recommendation (SR) models suffer from various noises in social graphs, which hinder their recommendation performance. Both graph-level redundancy and graph-level missingness distort social graph structures and, in turn, the message-propagation procedure of graph neural networks (GNNs). Generative models, especially diffusion models, are commonly used to reconstruct and recover higher-quality data from noisy input. Motivated by this, a few works have applied them to social recommendation. However, they can only handle isotropic Gaussian noise and fail to address anisotropic noise. Moreover, anisotropic relational structures are common in social graphs, and existing models cannot sufficiently exploit these graph structures, which constrains both their noise-removal capacity and their recommendation performance. Compared to the diffusion strategy, the flow matching strategy handles anisotropic noise better, as it preserves data structures more effectively during learning. Inspired by this, we propose RecFlow, the first flow-based SR model. Concretely, RecFlow performs flow matching on the structure representations of social graphs, and a conditional learning procedure is designed for optimization. Extensive experiments demonstrate the promising performance of our RecFlow from six aspects: superiority, effectiveness, robustness, sensitivity, convergence, and visualization. Code is available at .

*Equal contribution. 1School of Computer Science, National University of Defense Technology, Changsha, Hunan, China; 2University of Chicago, Chicago, Illinois, United States; 3Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong, China. Correspondence to: Ye Wang.
Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

Figure 1. Illustration of denoising diffusion probabilistic models (DDPM) and flow matching based models, where blue and red nodes represent normal and noisy data. Compared to DDPM, flow-based models obtain better discriminative capacity, which in turn leads to better denoising performance.

1. Introduction

As online content continues to grow exponentially, the challenges of managing information sensitivity and urgency have become increasingly pressing (Lin et al., 2025; Wang et al., 2024), driving the rise of recommendation systems (Liu et al., 2023b). Despite advancements, recommendation systems still face challenges such as collaborative information sparsity. The rise of social media has shifted their focus from user-item interactions alone to integrating social networks to enhance recommendations. By leveraging social relationships as auxiliary information, social recommendation (SR) systems can mitigate these issues, making SR a key research area (Liang et al., 2023). Early SR models relied on matrix factorization (Salakhutdinov & Mnih, 2007; Yang et al., 2013), drawing upon social theories to exploit the influence of nearby or connected users on individual preferences. Additionally, social relationships naturally form graph-structured data, making graph neural networks (GNNs) an effective tool for graph representation learning (Wang et al., 2023a; Dai et al., 2023). GNNs have demonstrated exceptional performance in aggregating neighborhood information of nodes and have been widely applied in the social recommendation domain, enabling deep exploration and effective utilization of more valuable information (Huang et al., 2021a; Liang et al., 2023).
Despite advancements in social recommendation systems, current approaches often struggle to effectively mitigate two critical graph structural issues: redundancy and incompleteness (Lin et al., 2023). These limitations primarily stem from noisy social connections in real-world data, where low-quality relationships inject interference that significantly degrades recommendation performance (Lin et al., 2024c;a). Graph neural networks (GNNs) are particularly vulnerable to such noise due to their inherent reliance on message propagation across social edges, a characteristic that amplifies error transmission through the network (Wang et al., 2023a; Lin et al., 2024b). Recent studies have extended diffusion models to graph-based recommendation systems. DiffRec (Wang et al., 2023b) applies continuous diffusion by adding Gaussian noise to user/item embeddings and optimizes them through a denoising process. RecDiff (Li et al., 2024b) introduces a multi-step diffusion and denoising framework for modeling complex social connections. These efforts build upon the growing success of diffusion models in various domains (Liu et al., 2024; Lee et al., 2024; Jiang et al., 2024; Deng et al., 2024). Social data often exhibit strong anisotropy, as shown by the directional distribution of vector fields in our preliminary analysis (Figure 3). This conflicts with the isotropic Gaussian noise assumption (noise modeled as ϵ ∼ N(0, σ²I)) in conventional denoising diffusion probabilistic models (DDPM), leading to two key issues: representation degradation, as isotropic noise blurs the distinctiveness of user embeddings, and unstable training, as DDPM's iterative denoising undermines convergence consistency (Kingma et al., 2023).
To address these limitations, we adopt Flow Matching, a generative approach that learns continuous velocity fields to construct direct sampling paths and model an ODE flow, enabling it to handle non-isotropic noise (modeled by ϵ ∼ N(µ, Σ) with µ ≠ 0 or non-diagonal Σ) (Zhao et al., 2024; Lipman et al., 2022). Figure 1 illustrates the differences between these generative approaches: DDPM injects isotropic noise without directional awareness, while Flow Matching guides data towards clean distributions in a more discriminative manner, improving denoising performance on graph-structured data (Lipman et al., 2022). Building on this, we propose RecFlow, a flow-based social recommendation model that leverages Flow Matching to capture the directional dynamics of user interactions, enabling more stable training, better denoising, and improved recommendation accuracy. In summary, we make the following contributions:

- We propose RecFlow, a novel social recommendation model that explicitly captures anisotropy in social networks through velocity fields, addressing the limitations of isotropic noise assumptions in conventional diffusion models.
- By integrating flow matching with social recommendation, RecFlow models directional data dynamics more effectively, leading to improved user preference representations.
- Extensive experiments on benchmark datasets demonstrate the effectiveness of RecFlow, with significant performance gains. A visualization experiment further traces the evolution of velocity fields over time, highlighting the impact of our approach.

2.
Related Work

In this section, we provide a comprehensive review of related studies in the areas of social recommendation and generative models, and clarify how our work aligns with and builds upon existing research.

2.1. Graph-based Social Recommendation

Graph-based social recommendation has gained significant attention for incorporating social relationships. Early works like DiffNet (Wu et al., 2019b) used Graph Convolutional Networks (GCNs) to model social influence, while later models like GraphRec (Fan et al., 2019) and DANSER (Wu et al., 2019c) added attention mechanisms to account for varying influence levels. More recent approaches, such as MHCN (Yu et al., 2021b) and HOSR (Liu et al., 2020), capture higher-order relationships and distant influences. Models like RecoGCN (Xu et al., 2019), DGRec (Song et al., 2019b), TGRec (Bai et al., 2020), and KCGN (Huang et al., 2021b) integrate diverse data sources, including agent, temporal, and knowledge graph information. To address noisy social relations, recent methods like DSL (Wang et al., 2023a) and GDMSR (Quan et al., 2023) focus on denoising by identifying and removing irrelevant or redundant social connections, thus improving recommendation quality and efficiency. Still, these methods struggle with noisy or irrelevant social connections. Flow matching models, by modeling influence spread and denoising, provide a promising solution, refining user representations to enhance the robustness and accuracy of recommendations.

2.2. Diffusion Model-based Recommendation

Generative recommenders have attracted interest in recent studies. Some studies have focused on leveraging diffusion models to enhance data representation and mitigate noise inherent in social connections.
For instance, RecDiff (Li et al., 2024a) employed a latent diffusion paradigm to denoise user representations derived from social networks, demonstrating improved robustness in handling the diverse noisy effects of user social contexts. Similarly, DiffuASR (Liu et al., 2023a) proposed a diffusion-based pseudo-sequence generation framework that fills the gap between generating continuous images and discrete sequences. CGSoRec (He et al., 2024) proposed a condition-guided social recommendation model, leveraging a conditional constraint in the diffusion process to incorporate social connections; this allows the model to refine user preferences based on their social connections. Similarly, DIEXRS (Guo et al., 2023) uses a diffusion framework to model user preferences, and then trains a textual decoder to generate explanations based on the denoised user representation, enhancing the interpretability of diffusion recommenders. In contrast to these established approaches, our proposed RecFlow introduces a novel methodology by leveraging flow matching models (Liu et al., 2022), making it more effective in capturing intricate patterns.

3. Preliminary

In this section, we briefly introduce the preliminaries of flow-matching models. A flow-matching model is a type of generative model that bridges the gap between a source distribution p_x and a target distribution p_z by learning a neural network to parameterize the velocity field of an Ordinary Differential Equation (ODE). The ODE is defined as:

dx_t = v_θ(x_t, t) dt,   (1)

where v_θ denotes the learnable velocity field parameterized by a neural network. The ODE ensures that the intermediate states x_t remain consistent with the learned probability path for all t ∈ [0, 1], and the velocity field v_θ directs the flow from the initial state x to the target state z, effectively transforming the source distribution into the target distribution over time.
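To make the preliminaries concrete, the sketch below (our own toy illustration, not the paper's implementation) fits the simplest possible velocity field, a single constant vector obtained by least squares, and Euler-integrates the ODE dx_t = v dt. On the linear path x_t = t·z + (1−t)·x, the regression target for the velocity is (z − x), and the least-squares optimum of a constant field is just the batch mean of that target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D source ("data") and target ("noise") samples.
x = rng.normal(loc=[2.0, -1.0], scale=0.1, size=(512, 2))   # x ~ p_x
z = rng.normal(size=(512, 2))                               # z ~ p_z

# Conditional flow-matching target: on the linear path
# x_t = t*z + (1 - t)*x, the ideal velocity is (z - x) for every t.
t = rng.uniform(size=(512, 1))
x_t = t * z + (1 - t) * x
target_v = z - x  # regression target for v_theta(x_t, t)

# The simplest "model": a constant field fit by least squares,
# whose optimum is the batch mean of (z - x).
v_theta = target_v.mean(axis=0)

# Sampling: Euler-integrate dx_t = v dt from t = 0 toward t = 1.
def euler_integrate(x0, v, n_steps=50):
    out = np.array(x0, dtype=float)
    for _ in range(n_steps):
        out = out + v / n_steps
    return out

pushed = euler_integrate(x.mean(axis=0), v_theta)
```

With this degenerate constant field, integrating from the source mean lands on the target mean, which is exactly the behavior a real (neural) velocity field generalizes pointwise.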
Forward Process: This process transports samples from x ∼ p_x to align with p_z. The interpolation between x and z is defined through the linear blend x_t = t z + (1 − t) x, satisfying the ODE:

dx_t = (z − x) dt   (2)

Reverse Process: Conversely, this process generates samples starting from z ∼ p_z and reverses the flow dynamics. The reverse ODE, mirroring the forward process, is defined as:

dx_t = (x − z) dt   (3)

The effectiveness of the rectified flow depends on precise estimation of the velocity v. To align v with the direction (z − x), the model solves a least-squares regression problem, optimizing the velocity field v_θ to closely match the ideal flow between x and z. The training of the neural network involves minimizing the loss function L, defined as:

L = ∫₀¹ E_{x,z} ‖(z − x) − v_θ(x_t, t)‖² dt   (4)

This loss quantifies the discrepancy between the ideal and predicted velocities over the time interval [0, 1], enabling the flow to follow the desired trajectory by accurately predicting the velocity at any point t. The parameterized neural network v_θ is thus trained to minimize L, facilitating efficient and accurate modeling of transitions from x to z.

4. Proposed Model

As illustrated in Figure 2, we integrate collaborative and social signals within a unified flow-based generative framework. The overall architecture of RecFlow consists of three key components: Graph-based Collaborative Pattern Encoding, the RecFlow Module, and a Joint Optimization Module.

4.1. Problem Statement

Users and items are defined as the sets U = {u_1, u_2, ..., u_n} and V = {v_1, v_2, ..., v_m}, respectively. Interactions between users and items are represented by the matrix R ∈ R^{|U|×|V|}, where the element r_{u,v} = 1 indicates that user u interacted with item v, and r_{u,v} = 0 otherwise. Social relationships between users are described by the matrix S ∈ R^{|U|×|U|}, where s_{u,u′} = 1 signifies a social interaction between user u and user u′, and s_{u,u′} = 0 otherwise.
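As an illustration, the interaction matrix R and social matrix S above might be materialized as dense binary arrays (a minimal sketch; the sizes, triples, and variable names are ours, and a real system would use sparse storage):

```python
import numpy as np

n_users, n_items = 4, 5
# Hypothetical observed user-item interactions (u, v) and user-user ties (u, u').
interactions = [(0, 1), (0, 3), (2, 4), (3, 0)]
social_ties = [(0, 2), (1, 3)]

# R in {0,1}^{|U| x |V|}: r_{u,v} = 1 iff user u interacted with item v.
R = np.zeros((n_users, n_items), dtype=np.int8)
for u, v in interactions:
    R[u, v] = 1

# S in {0,1}^{|U| x |U|}: s_{u,u'} = 1 for a social tie, kept symmetric here.
S = np.zeros((n_users, n_users), dtype=np.int8)
for u, w in social_ties:
    S[u, w] = S[w, u] = 1

# Edge sets of the collaborative graph G_r and the social graph G_s.
E_r = {(u, v) for u, v in zip(*np.nonzero(R))}
E_s = {(u, w) for u, w in zip(*np.nonzero(S))}
```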
Based on these interaction matrices, the following graph structures are constructed: the Collaborative Graph G_r = (U, V, E_r), where the edge set E_r = {(u, v) | r_{u,v} = 1} represents interactions between users and items; and the Social Graph G_s = (U, E_s), where the edge set E_s = {(u, u′) | s_{u,u′} = 1} represents social relationships between users. Our RecFlow leverages both graphs: specifically, the collaborative graph G_r yields node embeddings denoted as E_r, and the social graph G_s yields node embeddings denoted as E_s. The predicted user-item interaction value r̂_{u,v} is computed as:

r̂_{u,v} = Pred(e_u, e_v),   (5)

where e_u and e_v are the embeddings of user u and item v. These embeddings are derived from both the collaborative graph G_r and the social graph G_s, and are learned jointly during model training.

Algorithm 1 RecFlow Training
Input: users' social interaction embedding E_s
Output: the reconstructed embedding, denoted ê_0
1: Initialize the social interaction embedding E_s
2: while not converged do
3:   t ∼ U(0, 1)  # sample time
4:   x ∼ p_x  # sample data
5:   z ∼ p_z  # sample noise
6:   x_t = Ψ_t(z | x)  # conditional flow
7:   Gradient step on ∇_θ ‖v_θ^t(x_t) − x̂_t‖²
8: end while

Algorithm 2 RecFlow Inference
Input: users' interaction vectors x_u, u = 1, 2, ..., |U|; optimized parameters θ
Output: predicted user embeddings or interaction outcomes
1: for u ∈ U do
2:   x_0 ← x_u  # initialize with user data
3:   Sample t ∼ p_t  # time-step sampling
4:   x_t ← f_θ(x_0, t)  # run learned model
5:   if post-processing is needed then
6:     ŷ ← Process(x_t)  # final prediction
7:   end if
8: end for

4.2. Graph-based Collaborative Pattern Encoding

Drawing inspiration from the effectiveness of simplified Graph Neural Networks (GNNs), we incorporate a lightweight Graph Convolutional Network (LightGCN) as the graph encoder in our RecFlow architecture (Jiang et al., 2023).
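A minimal sketch of this kind of LightGCN-style propagation, E^(l) = (L + I) E^(l−1) over the bipartite adjacency A = [[0, R], [Rᵀ, 0]] with symmetric degree normalization, using a dense toy graph for clarity (all names and sizes are illustrative, not the paper's code):

```python
import numpy as np

# Toy interaction matrix R (3 users x 2 items) and its bipartite adjacency
# A = [[0, R], [R^T, 0]].
R = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])
n_u, n_v = R.shape
A = np.zeros((n_u + n_v, n_u + n_v))
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T

# Symmetric normalization D^{-1/2} A D^{-1/2} (zero-degree nodes stay zero).
d = A.sum(axis=1)
d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5 * (d > 0)
L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# LightGCN-style layers: E^(l) = (L + I) E^(l-1), no weights, no nonlinearity;
# the final representation aggregates the per-layer embeddings.
def propagate(E0, n_layers=2):
    E, acc = E0, E0.copy()
    for _ in range(n_layers):
        E = (L + np.eye(len(L))) @ E
        acc = acc + E
    return acc

E0 = np.random.default_rng(0).normal(size=(n_u + n_v, 4))
E_final = propagate(E0)
```

The same propagation applies unchanged to the social graph G_s by substituting its adjacency S for A.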
LightGCN is widely recognized as a robust graph recommender for modeling implicit interactions in top-k recommendation. We construct a collaborative graph to encode user-item interactions using the Rec Encoder, and a user-user graph to capture social relationships among users using the Social Encoder. On the user-item graph G_r, embeddings are propagated across layers as:

E_r^(l) = (L_r + I) E_r^(l−1),   (6)

where E_r^(l) ∈ R^{(|U|+|V|)×d} is the embedding matrix at the l-th GCN layer, the initial embeddings E_r^(0) are randomly initialized learnable parameters, and I is the identity matrix. L_r is the normalized Laplacian matrix of the bipartite graph G_r, built from the adjacency matrix A_r ∈ R^{(|U|+|V|)×(|U|+|V|)} and its diagonal degree matrix D_r:

A_r = [[0, R], [Rᵀ, 0]]   (7)

For the user social graph G_s, embedding propagation follows a similar process:

E_s^(l) = (L_s + I) E_s^(l−1),   (8)

with S being the adjacency matrix of the social graph G_s and L_s its normalized Laplacian. After propagating through L layers, the final embedding for each user-user pair is obtained by aggregating embeddings across all layers as ê_{u,u′} = Σ_{l=0}^{L} e_{u,u′}^(l), where e_{u,u′}^(l) is the l-th layer embedding between users u and u′. Similarly, for a user-item pair (u, v), the predicted embedding is ê_{u,v} = Σ_{l=0}^{L} e_{u,v}^(l).

4.3. RecFlow Module

In the forward process, the RecFlow module begins by sampling a time step t from a uniform distribution U(0, 1). This time step is then encoded into a time embedding vector E_t, which is concatenated with the user representation E_s obtained from the social encoder.
The perturbed input x_t is defined as a linear interpolation between Gaussian noise z and the socially-informed embedding E_s:

x_t = t z + (1 − t) E_s   (9)

Unlike conventional flow-matching methods that typically treat z as the origin and interpolate toward data samples, our formulation conditions the trajectory on the social embedding E_s, enabling the model to incorporate social context into the forward process. In the reverse process, we train a vector field estimator v_θ(x_t, t) to approximate the velocity field, where θ denotes the set of learnable parameters. The estimator is defined as:

v_θ(x_t, t) = FC₂(E_s ⊕ E_t),  FC(x) = σ(Wx + b)   (10)

where E_t is the time embedding at step t, ⊕ denotes vector concatenation, and FC₂ represents two consecutive fully connected layers. The function σ(·) denotes a non-linear activation (e.g., ReLU or GELU), and W, b are the weight matrix and bias vector of each linear transformation.

Figure 2. Our RecFlow consists of three main components: Graph-based Collaborative Pattern Encoding, responsible for obtaining embeddings of the collaborative graph and the social graph; the RecFlow Module, which learns continuous velocity fields on the social graph to effectively denoise anisotropic structures; and Joint Optimization, which integrates the objectives of collaborative encoding and flow-based social representation learning.

Specifically, starting from x_t, the model integrates the reverse-time flow using an ODE solver to obtain the clean embedding x_0, which approximates the original user preference vector in the latent space. The reverse integration is performed as:

x_0 = x_t + ∫_t^0 v_θ(x_τ, τ) dτ   (11)

During inference, the model fixes the optimized parameters θ and performs no further training; we then sample a latent representation z ∼ N(0, I) and a timestep t, and apply the learned velocity field to reconstruct the user embedding e_u.

4.4.
Joint Optimization

To integrate social relationships with encoded user-item interaction patterns, RecFlow employs a hidden-space reflow mechanism to generate the final user embeddings for prediction. This process is defined as:

r̂_{u,v} = e_uᵀ e_v,  e_u = e′_u + ê_θ(e′_u, t),   (12)

where t denotes a sampled diffusion time step for user u, e′_u and e′_v represent the initial embeddings of the user and item obtained from the respective graph encoders, and ê_θ(·, t) denotes the learned reflow adjustment from the vector field estimator. The model is optimized by minimizing a joint loss function that combines recommendation and flow-matching objectives:

L = Σ_{(u,v⁺,v⁻)} − log σ(r̂_{u,v⁺} − r̂_{u,v⁻}) + λ₁ Σ_t L_cfm   (13)

Here, (u, v⁺, v⁻) denotes a user with a positive and a negative item in a pairwise training setup following the Bayesian Personalized Ranking (BPR) paradigm (Rendle et al., 2012). The conditional flow-matching loss L_cfm, computed over sampled diffusion steps t, guides the learning of the reverse-time vector field. Additionally, L2 regularization (weight decay) is applied to all trainable parameters Θ to prevent overfitting. The form of L_cfm follows the definition in Eq. (4) and the flow estimation process illustrated in Figure 2.

4.5. Discussion and Analysis

In this section, we present a detailed analysis of the time and space complexity of our RecFlow model. Besides, we further provide a theoretical comparison between our RecFlow and original DDPM methods to better illustrate the efficiency of the flow matching strategy.

Time Complexity. Initially, RecFlow performs graph-level information propagation on both the holistic collaborative graph G_r and the social graph G_s; this process requires O((|E_r| + |E_s|) · d) operations for message passing. The overall time complexity of RecFlow during training is dominated by the graph-level propagation and the gradient updates, resulting in:

O((|E_r| + |E_s|) · d)   (14)

per training iteration.
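For concreteness, the joint objective in Eq. (13), a BPR term plus a weighted conditional flow-matching term, can be sketched as follows (a toy sketch on random tensors; batch shapes, names, and the constant λ₁ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def bpr_loss(e_u, e_pos, e_neg):
    # BPR term of Eq. (13): -log sigma(r_hat(u,v+) - r_hat(u,v-)),
    # with inner-product scores r_hat = <e_u, e_v>.
    diff = (e_u * e_pos).sum(-1) - (e_u * e_neg).sum(-1)
    return -np.log(1.0 / (1.0 + np.exp(-diff))).mean()

def cfm_loss(v_pred, x, z):
    # Conditional flow-matching term: || (z - x) - v_theta(x_t, t) ||^2.
    return (((z - x) - v_pred) ** 2).sum(-1).mean()

# Toy batch: 8 triples with 16-dim embeddings.
e_u, e_pos, e_neg = rng.normal(size=(3, 8, 16))
x, z, v_pred = rng.normal(size=(3, 8, 16))

lam1 = 1e-2  # weight on the flow-matching term
loss = bpr_loss(e_u, e_pos, e_neg) + lam1 * cfm_loss(v_pred, x, z)
```

When the positive and negative items tie, the BPR term reduces to −log σ(0) = log 2, a useful sanity check for an implementation.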
The complexity of flow matching itself is O(N), eliminating multi-step iterations and requiring only one global optimization.

Space Complexity. The space complexity is primarily determined by the storage required for the graphs and embeddings. The collaborative graph G_r requires storing the adjacency matrix of user-item interactions, with space complexity O(|U| · |V|); the social graph G_s requires storing the adjacency matrix of user-user interactions, with space complexity O(|U|²), assuming a dense representation. Thus, the overall space complexity of RecFlow is:

O(|U| · |V| + |U|² + (|U| + |V|) · d)   (15)

This accounts for the storage of the graph structures and the user and item embeddings. The space complexity of the flow-matching component is O(D), dependent solely on the data dimensionality and independent of the number of timesteps.

Theoretical Analysis. In the context of generative modeling, Flow Matching (FM) and DDPM both aim to generate data through controlled transformations of noise. FM constructs a linear interpolation between X_0 and X_1, leading to a continuous and deterministic path. Mathematically, this is described as:

X_t = (1 − t) X_0 + t X_1   (16)

where X_t follows a simple ODE-driven trajectory. In contrast, DDPM follows a stochastic diffusion process:

X_t = √(ᾱ_t) X_0 + √(1 − ᾱ_t) ϵ   (17)

where ϵ is sampled from a Gaussian prior. The nonlinear and stochastic nature of DDPM results in higher variance in sampling paths, leading to inefficiencies. Moreover, FM is governed by an Ordinary Differential Equation (ODE), which enables continuous-time evaluation and faster sampling via adaptive solvers. In contrast, DDPM relies on discretized Stochastic Differential Equations (SDEs), requiring a large number of steps for accurate generation. Empirically, FM achieves comparable quality with fewer function evaluations, reducing computational overhead.

5.
Experiment

In this section, we present a series of experiments conducted to evaluate the performance of our RecFlow method, focusing on the following five questions:

Q1: How does RecFlow perform in comparison to other state-of-the-art social recommendation methods?
Q2: What are the key contributions of RecFlow's main modules?
Q3: Is RecFlow robust enough to effectively handle noisy and sparse data in social recommendation (SR)?
Q4: How do different settings impact the performance of RecFlow?
Q5: How does the complexity of our method compare to that of alternative approaches?

Figure 3. Illustration of the anisotropic attributes of two typical datasets, i.e., Ciao and Epinions. Each point in the 2D visualization represents a user-user interaction embedding projected onto two principal components. The uneven and elongated distributions of points along certain directions highlight the anisotropy of the data, indicating that user-user relationships are not uniformly distributed but instead exhibit directional concentration.

Before showing and analyzing the experimental results, we first present the experimental settings below.

5.1. Experiment Settings

Datasets and Evaluation Metrics. We conducted experiments on three publicly available social recommendation datasets: Ciao, Yelp, and Epinions. Detailed statistics for these datasets are provided in Table 1. Our datasets are obtained from RecDiff (Li et al., 2024a).¹ We conducted a preliminary analysis of the Ciao and Epinions datasets. Figure 3 shows the distribution of social data in these two datasets. After reducing the data from high-dimensional space to two dimensions using Principal Component Analysis (PCA), it can be observed that the data points are more spread out along Component 1, while the distribution is more concentrated and less variable along Component 2.
This aligns with the characteristics of social network data, where there are often dominant relational patterns, while others are secondary or sparse. In the social graph, certain user groups with strong internal connections form tightly-knit social clusters, reflecting prominent relational patterns in the data. Additionally, the presence of a dominant direction in the data causes the vector field to display a clearly anisotropic distribution, indicating that user interactions are concentrated along specific directions.

¹Datasets are available at https://github.com/HKUDS/RecDiff.

Table 1. Statistics of experimental datasets.

| Data | Ciao | Yelp | Epinions |
|---|---|---|---|
| # Users | 1,925 | 99,262 | 14,680 |
| # Items | 15,053 | 105,142 | 233,261 |
| # Interactions | 23,223 | 672,513 | 447,312 |
| # Social Ties | 65,084 | 1,298,522 | 632,144 |

Evaluation Protocols. In our experiments, we utilized two commonly used metrics: Hit Ratio (HR@N) and Normalized Discounted Cumulative Gain (NDCG@N), where N represents the number of items recommended to the user; both are widely utilized in top-N recommendation. We applied a 7:1:2 ratio to split each dataset into training, validation, and test sets, adhering to common data partitioning practices in graph-based recommendation systems.

Compared Baselines. We compared RecFlow with 12 baseline models, representing the latest advancements in social recommendation research. These encompass conventional and attention-based methods, graph-based recommendation models using collaborative filtering, as well as other GNN-based social recommendation systems: PMF (Salakhutdinov & Mnih, 2007), TrustMF (Yang et al., 2013), GraphRec (Fan et al., 2019), DiffNet (Wu et al., 2019a), DGRec (Song et al., 2019a), NGCF (Wang et al., 2019), MHCN (Yu et al., 2021a), KCGN (Huang et al., 2021a), SMIN (Long et al., 2021), GDMSR (Quan et al., 2023), DSL (Wang et al., 2023a), and RecDiff (Li et al., 2024a). Implementation Details.
All experiments are conducted on a machine with an RTX A800 GPU for fair comparison. The experimental settings and hyper-parameter details of our RecFlow framework are as follows. The learning rate was tuned within [5e-4, 1e-3, 5e-3] with a 0.96 decay factor per epoch. Batch sizes were selected from [1024, 2048, 4096, 8192], and hidden dimensions from [64, 128, 256, 512]. The parameter γ was set according to the γ_pct percentile of node embedding distances for each dataset. The optimal number of GNN layers was chosen from [1, 2, 3, 4]. The timestep embedding size was selected from [4, 8, 16, 32]. The batch size for Ciao is 2048, while for Yelp and Epinions it is 4096. Regularization weights λ₁ were selected from [1e-3, 1e-2, 1e-1, 1e0, 1e1].

5.2. Performance Comparison (RQ1)

Table 2 summarizes the experimental results across the three datasets, with RecFlow's metrics bolded and the top baselines underlined. RecFlow demonstrates marked improvements over existing approaches. More specifically, on Ciao, RecFlow achieves 0.725 Recall (+2.0%) and 0.438 NDCG (+4.5%) over RecDiff. For Yelp and Epinions, it maintains robust gains: 0.618 vs. 0.597 Recall (+3.5%) and 0.341 vs. 0.308 NDCG (+10.7%) on Yelp; 0.486 vs. 0.460 Recall (+5.6%) on Epinions, demonstrating adaptability to varying data densities.

Table 2. Overall performance analysis.

| Method | Ciao Recall | Ciao NDCG | Yelp Recall | Yelp NDCG | Epinions Recall | Epinions NDCG |
|---|---|---|---|---|---|---|
| TrustMF | 0.539 | 0.343 | 0.371 | 0.193 | 0.265 | 0.195 |
| SAMN | 0.604 | 0.384 | 0.403 | 0.208 | 0.329 | 0.226 |
| DiffNet | 0.528 | 0.328 | 0.557 | 0.292 | 0.384 | 0.273 |
| GraphRec | 0.540 | 0.335 | 0.419 | 0.201 | 0.334 | 0.246 |
| DGRec | 0.517 | 0.319 | 0.410 | 0.209 | 0.326 | 0.236 |
| NGCF | 0.559 | 0.363 | 0.450 | 0.230 | 0.353 | 0.243 |
| MHCN | 0.621 | 0.378 | 0.567 | 0.292 | 0.438 | 0.321 |
| KCGN | 0.602 | 0.350 | 0.460 | 0.234 | 0.2201 | 0.1456 |
| SMIN | 0.588 | 0.354 | 0.485 | 0.251 | 0.333 | 0.228 |
| GDMSR | 0.560 | 0.355 | 0.513 | 0.246 | 0.368 | 0.241 |
| DSL | 0.606 | 0.389 | 0.504 | 0.259 | 0.365 | 0.267 |
| RecDiff | 0.712 | 0.419 | 0.597 | 0.308 | 0.460 | 0.336 |
| RecFlow | 0.725 | 0.438 | 0.618 | 0.341 | 0.486 | 0.341 |
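For reference, the two reported metrics can be computed per user as below (a minimal sketch with binary relevance; the function names are ours, and implementations differ in how they handle multiple held-out items):

```python
import numpy as np

def hit_ratio_at_n(ranked_items, ground_truth, n):
    # HR@N: 1 if any held-out item appears in the top-N list, else 0.
    return float(any(i in ground_truth for i in ranked_items[:n]))

def ndcg_at_n(ranked_items, ground_truth, n):
    # Binary-relevance NDCG@N with the standard log2 position discount:
    # a hit at rank r (0-based) contributes 1 / log2(r + 2).
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:n])
              if item in ground_truth)
    idcg = sum(1.0 / np.log2(rank + 2)
               for rank in range(min(len(ground_truth), n)))
    return dcg / idcg if idcg > 0 else 0.0
```

Dataset-level Recall/NDCG figures such as those in Table 2 are then averages of these per-user values.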
Notably, methods with self-supervised learning (SSL), namely MHCN (local-global contrast), KCGN, SMIN (hierarchical relations), and DSL (predictive consistency), consistently outperform traditional approaches. SSL mitigates noise propagation and interaction sparsity by extracting latent relational patterns, enabling stable representation learning. RecFlow's denoising process, guided by a velocity field, directly optimizes trajectories toward clean data distributions, suppressing noise during refinement. This explains its 5.6%-10.7% improvements over the diffusion-based RecDiff on sparse datasets.

5.3. Ablation Study (RQ2)

To evaluate the impact of different components in the RecFlow framework, we performed an ablation study on three benchmark datasets: Ciao, Yelp, and Epinions. The results are presented in Table 4.

w/o Flow: This configuration excludes the holistic flow-matching module, leaving only the GNN for learning user-item and social relations. As shown in Table 4, the absence of the flow module results in a significant decrease in both Recall and NDCG across all datasets. Specifically, Recall drops by approximately 13% (Ciao), 7% (Yelp), and 12% (Epinions), while NDCG decreases by about 13% (Ciao), 16% (Yelp), and 31% (Epinions). This emphasizes the importance of the denoising mechanism in our model.

w/o CL: In this configuration, we remove the conditional learning (CL) guidance for flow matching.

Table 3. Comparison of different sampling methods.

| Method | Ciao Recall | Ciao NDCG | Yelp Recall | Yelp NDCG | Epinions Recall | Epinions NDCG |
|---|---|---|---|---|---|---|
| ODE-Solver | 0.725 | 0.438 | 0.618 | 0.341 | 0.486 | 0.341 |
| Multistep Heun | 0.710 | 0.411 | 0.594 | 0.322 | 0.460 | 0.325 |
| RK4 | 0.699 | 0.403 | 0.590 | 0.317 | 0.453 | 0.320 |
| Heun | 0.683 | 0.399 | 0.581 | 0.319 | 0.449 | 0.318 |
| Euler | 0.670 | 0.383 | 0.570 | 0.311 | 0.438 | 0.305 |

Table 4.
Ablation analysis: "w/o Flow" denotes removal of the RecFlow module, "w/o CL" denotes removal of conditional learning, and "w/o both" indicates removal of both components simultaneously.

| Method | Ciao Recall | Ciao NDCG | Yelp Recall | Yelp NDCG | Epinions Recall | Epinions NDCG |
|---|---|---|---|---|---|---|
| RecFlow | 0.725 | 0.438 | 0.618 | 0.341 | 0.486 | 0.341 |
| RecFlow w/o Flow | 0.633 | 0.380 | 0.573 | 0.301 | 0.429 | 0.297 |
| RecFlow w/o CL | 0.692 | 0.401 | 0.597 | 0.320 | 0.443 | 0.312 |
| RecFlow w/o both | 0.621 | 0.407 | 0.589 | 0.312 | 0.417 | 0.302 |

The results show a notable performance decline, particularly in NDCG across all datasets, with a decrease of around 8% (Ciao), 11% (Yelp), and 12% (Epinions). This demonstrates the critical role of conditional learning in improving model accuracy.

w/o both: In this case, both the flow module and the CL label guidance are removed. The performance drops significantly, with Recall decreasing by about 14% (Ciao), 5% (Yelp), and 7% (Epinions), and NDCG by approximately 7% (Ciao), 9% (Yelp), and 15% (Epinions). This highlights the essential contributions of both the flow-matching process and label guidance in enhancing the model's ability to learn effective user-item and social relationships.

The choice of sampling method creates a clear trade-off between computational cost and model accuracy (see Table 3): the basic Euler method, with its single first-order step, yields the lowest Recall and NDCG, while Heun's two-stage predictor-corrector boosts both metrics modestly at only twice the cost. Leveraging history, the multistep Heun scheme further raises performance, and the four-stage RK4 delivers similar gains by reducing local error through fourth-order updates. Finally, an adaptive ODE solver such as Dormand-Prince, which dynamically adjusts its step size to satisfy error tolerances, consistently achieves the highest Recall and NDCG on Ciao (and likewise leads on Yelp and Epinions), demonstrating that, when resources allow, more precise integration yields the strongest quality. Figure 4.
Robustness analysis: the blue bars represent the Yelp dataset, while the green bars represent the Epinions dataset.

Figure 5. Convergence comparison of the loss curves; as indicated in the legend, the blue curve represents the convergence of Rec Flow, while the yellow curve corresponds to DDPM.

5.4. Robustness Analysis (RQ3)

This section examines the effect of the noise scale factor (τ) on the noising process. By scaling the minimum and maximum noise levels in the scheduler to τ·s_min and τ·s_max, respectively, we test the model's performance at noise scales of 1, 0.1, 0.01, and 0.001. The results, shown in Figure 4, reveal the following: Decreasing the noise scale from 1 to 0.1 improves model performance, yielding higher Recall@20 and NDCG@20 on both Yelp and Epinions. This demonstrates the effectiveness of the Rec Flow framework's denoising mechanism. However, moving past this threshold degrades performance, especially on Yelp and Epinions: as the noise scale shrinks to 10^-2 and 10^-3, a noticeable decline in NDCG@20 suggests that a poorly chosen noise scale interferes with the model's ability to retain important user-item information.

5.5. Convergence Analysis (RQ4)

In Figure 5, Rec Flow exhibits a faster convergence speed compared to DDPM.

Figure 6. The top row shows the velocity field direction of our Rec Flow, while the bottom row displays that of DDPM. It is evident that applying the flow-matching velocity field results in a clear directional pattern, aligning with the anisotropy observed in the previous dataset visualizations. In contrast, DDPM does not exhibit any noticeable directionality, with directions appearing random throughout.

In the convergence plot, the horizontal axis represents the diffusion model's time steps, ranging from 0 to 1, and the vertical axis indicates the residual error at each time step.
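The accuracy ordering among the samplers in Table 3 mirrors the truncation order of the underlying integrators. As a minimal sketch (not part of Rec Flow), the snippet below integrates a toy velocity field v(x, t) = -x with a known endpoint, standing in for the learned field; the `euler`, `heun`, and `rk4` helpers are illustrative names:

```python
import math

def euler(v, x, n):
    # first-order: one velocity evaluation per step
    h = 1.0 / n
    t = 0.0
    for _ in range(n):
        x = x + h * v(x, t)
        t += h
    return x

def heun(v, x, n):
    # second-order predictor-corrector: two evaluations per step
    h = 1.0 / n
    t = 0.0
    for _ in range(n):
        k1 = v(x, t)
        k2 = v(x + h * k1, t + h)
        x = x + h * (k1 + k2) / 2.0
        t += h
    return x

def rk4(v, x, n):
    # fourth-order Runge-Kutta: four evaluations per step
    h = 1.0 / n
    t = 0.0
    for _ in range(n):
        k1 = v(x, t)
        k2 = v(x + h * k1 / 2, t + h / 2)
        k3 = v(x + h * k2 / 2, t + h / 2)
        k4 = v(x + h * k3, t + h)
        x = x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += h
    return x

# toy velocity field with the known solution x(t) = exp(-t)
v = lambda x, t: -x
exact = math.exp(-1.0)
for name, solver in [("Euler", euler), ("Heun", heun), ("RK4", rk4)]:
    err = abs(solver(v, 1.0, 10) - exact)
    print(f"{name:5s} error with 10 steps: {err:.2e}")
```

With the same 10 steps, the endpoint error shrinks by several orders of magnitude from Euler to RK4, which matches the Recall/NDCG ordering in Table 3; an adaptive solver such as Dormand-Prince goes further by choosing the step size per step to meet an error tolerance.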
It is evident that the residual error of Flow Matching decreases rapidly in the early time steps, demonstrating a significantly faster convergence trend. In contrast, DDPM's error decreases at a slower pace, which highlights the advantage of Flow Matching in modeling vector fields and achieving efficient convergence, making it more suitable for handling complex distributions and data scenarios.

5.6. Visualization of the Flow-Matching Velocity Field Direction over Different Epochs (RQ5)

To better understand how the velocity field evolves during training, we design an experiment that visualizes the direction of the vector field across different epochs. In Figure 6, the data points are unevenly distributed, showing clear directionality and concentration, which highlights the anisotropy of social data. The model captures this anisotropy within the velocity field, represented by arrows that dynamically adjust direction as training proceeds, demonstrating the convergence dynamics of velocity field directions. In the early stages (Epochs 0-38), directions are highly dispersed (ranging from -0.50 to +0.50), reflecting unstable parameter adjustments. As training progresses (Epochs 59-99), directions progressively cluster near the origin (range: -0.25 to +0.25), indicating stabilized optimization trajectories. Notably, after Epoch 78, data density intensifies with directional similarity, signifying unified parameter updates. In comparison, as shown in the bottom row of Figure 6, the velocity field direction of DDPM lacks any noticeable directionality across epochs, with directions appearing random throughout. This contrast highlights that while Rec Flow effectively captures and aligns with the anisotropy of the data, DDPM fails to exhibit clear directional convergence, further emphasizing the advantage of Rec Flow in learning meaningful patterns from the social graph structure. 6.
Conclusion

In this paper, we proposed Rec Flow, a novel generative social recommendation framework based on flow matching. Diffusion-based methods primarily target the removal of isotropic noise and can therefore damage the structural representation; Rec Flow aims to fill this gap. By learning continuous velocity fields on social graphs, Rec Flow constructs a direct sampling path from the noisy input to the target distribution, thereby enabling faithful representation learning. To validate its effectiveness, we conducted extensive experiments against several strong baselines, and the results consistently demonstrate the superior performance and robustness of our proposed Rec Flow. In the future, we will explore more scalable extensions of Rec Flow to further assess its practicality and effectiveness on real-world recommendation datasets.

Impact Statement

This paper introduces Rec Flow, a flow-matching social recommendation model that captures anisotropy in user interactions. By leveraging flow matching, Rec Flow enhances representation learning and denoising efficiency, emphasizing its practical benefits for personalized recommendation. We also acknowledge potential societal risks, such as bias amplification, and highlight the need for fairness and robustness in future recommendation systems.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 62302507) and the Hunan Provincial Natural Science Foundation (No. 2023JJ40684).

References

Bai, T., Zhang, Y., Wu, B., and Nie, J.-Y. Temporal graph neural networks for social recommendation. In 2020 IEEE International Conference on Big Data (Big Data), pp. 898-903. IEEE, 2020.

Dai, E., Lin, M., Zhang, X., and Wang, S. Unnoticeable backdoor attacks on graph neural networks. In Proceedings of the ACM Web Conference 2023, pp. 2263-2273, 2023.

Deng, Y., He, X., Mei, C., Wang, P., and Tang, F.
FireFlow: Fast inversion of rectified flow for image semantic editing, 2024. URL https://arxiv.org/abs/2412.07517.

Fan, W., Ma, Y., Li, Q., He, Y., Zhao, E., Tang, J., and Yin, D. Graph neural networks for social recommendation. In The World Wide Web Conference, pp. 417-426, 2019.

Guo, Y., Cai, F., Chen, H., Chen, C., Zhang, X., and Zhang, M. An explainable recommendation method based on diffusion model. In 2023 9th International Conference on Big Data and Information Analytics (BigDIA), pp. 802-806. IEEE, 2023.

He, X., Fan, W., Wang, R., Wang, Y., Wang, Y., Pan, S., and Wang, X. Balancing user preferences by social networks: A condition-guided social recommendation model for mitigating popularity bias. arXiv preprint arXiv:2405.16772, 2024.

Huang, C., Xu, H., Xu, Y., Dai, P., Xia, L., Lu, M., Bo, L., Xing, H., Lai, X., and Ye, Y. Knowledge-aware coupled graph neural network for social recommendation. In AAAI, pp. 4115-4122. AAAI Press, 2021a. URL https://ojs.aaai.org/index.php/AAAI/article/view/16533.

Huang, C., Xu, H., Xu, Y., Dai, P., Xia, L., Lu, M., Bo, L., Xing, H., Lai, X., and Ye, Y. Knowledge-aware coupled graph neural network for social recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 4115-4122, 2021b.

Jiang, Y., Yang, Y., Xia, L., and Huang, C. DiffKG: Knowledge graph diffusion model for recommendation, 2023. URL https://arxiv.org/abs/2312.16890.

Jiang, Y., Xia, L., Wei, W., Luo, D., Lin, K., and Huang, C. DiffMM: Multi-modal diffusion model for recommendation, 2024. URL https://arxiv.org/abs/2406.11781.

Kingma, D. P., Salimans, T., Poole, B., and Ho, J. Variational diffusion models, 2023. URL https://arxiv.org/abs/2107.00630.

Lee, S., Lin, Z., and Fanti, G. Improving the training of rectified flows, 2024. URL https://arxiv.org/abs/2405.20320.

Li, Z., Xia, L., and Huang, C. RecDiff: Diffusion model for social recommendation.
In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 1346-1355, 2024a.

Li, Z., Xia, L., and Huang, C. RecDiff: Diffusion model for social recommendation, 2024b. URL https://arxiv.org/abs/2406.01629.

Liang, K., Meng, L., Liu, M., Liu, Y., Tu, W., Wang, S., Zhou, S., Liu, X., and Sun, F. A survey of knowledge graph reasoning on graph types: Static, dynamic, and multimodal, 2023. URL https://arxiv.org/abs/2212.05767.

Lin, M., Xiao, T., Dai, E., Zhang, X., and Wang, S. Certifiably robust graph contrastive learning. Advances in Neural Information Processing Systems, 36:17008-17037, 2023.

Lin, M., Chen, Z., Liu, Y., Zhao, X., Wu, Z., Wang, J., Zhang, X., Wang, S., and Chen, H. Decoding time series with LLMs: A multi-agent framework for cross-domain annotation. arXiv preprint arXiv:2410.17462, 2024a.

Lin, M., Dai, E., Xu, J., Jia, J., Zhang, X., and Wang, S. Stealing training graphs from graph neural networks. arXiv preprint arXiv:2411.11197, 2024b.

Lin, M., Zhang, Z., Dai, E., Wu, Z., Wang, Y., Zhang, X., and Wang, S. Trojan prompt attacks on graph neural networks. arXiv preprint arXiv:2410.13974, 2024c.

Lin, M., Liu, H., Tang, X., Zeng, J., Dai, Z., Luo, C., Li, Z., Zhang, X., He, Q., and Wang, S. How far are LLMs from real search? A comprehensive study on efficiency, completeness, and inherent capabilities. arXiv preprint arXiv:2502.18387, 2025.

Lipman, Y., Chen, R. T., Ben-Hamu, H., Nickel, M., and Le, M. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022.

Liu, Q., Yan, F., Zhao, X., Du, Z., Guo, H., Tang, R., and Tian, F. Diffusion augmentation for sequential recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pp. 1576-1586, 2023a.

Liu, S., Zhang, A., Hu, G., Qian, H., and Chua, T.-S. Preference diffusion for recommendation, 2024. URL https://arxiv.org/abs/2410.13117.
Liu, X., Gong, C., and Liu, Q. Flow straight and fast: Learning to generate and transfer data with rectified flow, 2022. URL https://arxiv.org/abs/2209.03003.

Liu, Y., Chen, L., He, X., Peng, J., Zheng, Z., and Tang, J. Modelling high-order social relations for item recommendation. IEEE Transactions on Knowledge and Data Engineering, 34(9):4385-4397, 2020.

Liu, Z., Mei, S., Xiong, C., Li, X., Yu, S., Liu, Z., Gu, Y., and Yu, G. Text matching improves sequential recommendation by reducing popularity biases, 2023b. URL https://arxiv.org/abs/2308.14029.

Long, X., Huang, C., Xu, Y., Xu, H., Dai, P., Xia, L., and Bo, L. Social recommendation with self-supervised metagraph informax network. In Demartini, G., Zuccon, G., Culpepper, J. S., Huang, Z., and Tong, H. (eds.), CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1-5, 2021, pp. 1160-1169. ACM, 2021. doi: 10.1145/3459637.3482480. URL https://doi.org/10.1145/3459637.3482480.

Quan, Y., Ding, J., Gao, C., Yi, L., Jin, D., and Li, Y. Robust preference-guided denoising for graph based social recommendation. In Proceedings of the ACM Web Conference 2023, WWW '23, pp. 1097-1108. ACM, April 2023. doi: 10.1145/3543507.3583374. URL http://dx.doi.org/10.1145/3543507.3583374.

Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. CoRR, abs/1205.2618, pp. 452-461, 2012.

Salakhutdinov, R. and Mnih, A. Probabilistic matrix factorization. In Platt, J. C., Koller, D., Singer, Y., and Roweis, S. T. (eds.), NeurIPS, pp. 1257-1264. Curran Associates, Inc., 2007. URL https://proceedings.neurips.cc/paper/2007/hash/d7322ed717dedf1eb4e6e52a37ea7bcd-Abstract.html.

Song, W., Xiao, Z., Wang, Y., Charlin, L., Zhang, M., and Tang, J. Session-based social recommendation via dynamic graph attention networks. In Culpepper, J. S., Moffat, A., Bennett, P. N., and Lerman, K. (eds.), WSDM 2019, pp.
555-563. ACM, 2019a. doi: 10.1145/3289600.3290989. URL https://doi.org/10.1145/3289600.3290989.

Song, W., Xiao, Z., Wang, Y., Charlin, L., Zhang, M., and Tang, J. Session-based social recommendation via dynamic graph attention networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 555-563, 2019b.

Wang, B., Lin, M., Zhou, T., Zhou, P., Li, A., Pang, M., Li, H., and Chen, Y. Efficient, direct, and restricted black-box graph evasion attacks to any-layer graph neural networks via influence function. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pp. 693-701, 2024.

Wang, T., Xia, L., and Huang, C. Denoised self-augmented learning for social recommendation. In IJCAI 2023, pp. 2324-2331. ijcai.org, 2023a. doi: 10.24963/IJCAI.2023/258. URL https://doi.org/10.24963/ijcai.2023/258.

Wang, W., Xu, Y., Feng, F., Lin, X., He, X., and Chua, T.-S. Diffusion recommender model, 2023b. URL https://arxiv.org/abs/2304.04971.

Wang, X., He, X., Wang, M., Feng, F., and Chua, T. Neural graph collaborative filtering. In Piwowarski, B., Chevalier, M., Gaussier, E., Maarek, Y., Nie, J., and Scholer, F. (eds.), Proc. of SIGIR, pp. 165-174. ACM, 2019. doi: 10.1145/3331184.3331267. URL https://doi.org/10.1145/3331184.3331267.

Wu, L., Sun, P., Fu, Y., Hong, R., Wang, X., and Wang, M. A neural influence diffusion model for social recommendation. In Piwowarski, B., Chevalier, M., Gaussier, E., Maarek, Y., Nie, J., and Scholer, F. (eds.), Proc. of SIGIR, pp. 235-244. ACM, 2019a. doi: 10.1145/3331184.3331214. URL https://doi.org/10.1145/3331184.3331214.

Wu, L., Sun, P., Fu, Y., Hong, R., Wang, X., and Wang, M. A neural influence diffusion model for social recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 235-244, 2019b.

Wu, Q., Zhang, H., Gao, X., He, P., Weng, P., Gao, H., and Chen, G.
Dual graph attention networks for deep latent representation of multifaceted social effects in recommender systems. In The World Wide Web Conference, pp. 2091-2102, 2019c.

Xu, F., Lian, J., Han, Z., Li, Y., Xu, Y., and Xie, X. Relation-aware graph convolutional networks for agent-initiated social e-commerce recommendation. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 529-538, 2019.

Yang, B., Lei, Y., Liu, D., and Liu, J. Social collaborative filtering by trust. In Rossi, F. (ed.), IJCAI 2013, pp. 2747-2753. IJCAI/AAAI, 2013. URL http://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6750.

Yu, J., Yin, H., Li, J., Wang, Q., Hung, N. Q. V., and Zhang, X. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Leskovec, J., Grobelnik, M., Najork, M., Tang, J., and Zia, L. (eds.), Proc. of WWW, pp. 413-424. ACM / IW3C2, 2021a. doi: 10.1145/3442381.3449844. URL https://doi.org/10.1145/3442381.3449844.

Yu, J., Yin, H., Li, J., Wang, Q., Hung, N. Q. V., and Zhang, X. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proceedings of the Web Conference 2021, pp. 413-424, 2021b.

Zhao, W., Shi, M., Yu, X., Zhou, J., and Lu, J. FlowTurbo: Towards real-time flow-based image generation with velocity refiner, 2024. URL https://arxiv.org/abs/2409.18128.