Published as a conference paper at ICLR 2025

NEURAL FLUID SIMULATION ON GEOMETRIC SURFACES

Haoxiang Wang¹, Tao Yu², Hui Qiao¹, Qionghai Dai¹
¹Department of Automation, Tsinghua University; ²BNRist, Tsinghua University
whx22@mails.tsinghua.edu.cn, ytrock@126.com, {qiaohui, daiqionghai}@tsinghua.edu.cn

ABSTRACT

Incompressible fluid flow on surfaces is an interesting research area in fluid simulation and a fundamental building block in visual effects, the design of liquid crystal films, and scientific analyses of atmospheric and oceanic phenomena. The task brings two key challenges: extending the physical laws to 3D surfaces and preserving energy and volume. Traditional methods rely on grids or meshes for spatial discretization, which leads to high memory consumption and a lack of robustness and adaptivity across mesh qualities and representations. Simulators based on implicit representations, such as INSR, have been proposed for their storage efficiency and continuity, but they face challenges in surface simulation and suffer from energy dissipation. We propose a neural physical simulation framework on surfaces built on the implicit neural representation. Our method constructs a parameterized vector field using exterior calculus and the Closest Point Method on surfaces, which guarantees the divergence-free property and enables simulation on different surface representations (e.g., implicitly neural-represented surfaces). We further adopt a corresponding covariant-derivative-based advection process for surface flow dynamics and energy preservation. Our method shows higher accuracy, flexibility, and memory efficiency in simulations on various surfaces with low energy dissipation. Numerical studies also highlight the potential of our framework across practical applications such as vorticity shape generation and vector field Helmholtz decomposition.
1 INTRODUCTION

Fluids are fascinating but complex to simulate, with applications ranging from aerodynamics and hydrodynamics to special effects in computer animation. Flow on surfaces is a challenging problem with essential practical uses in visual effects with foam or bubbles (Da et al., 2015; Deng et al., 2022), studies of liquid crystal films (Crowdy & Marshall, 2005; Turner et al., 2010), atmosphere/ocean evolution (Miller et al., 1992; Niiler, 2001), and fluid-solid interaction in robotics (Ruan et al., 2021). The incompressible Euler flow model serves as a valuable simplification of real-world fluid dynamics. This model, characterized by a vector field $v(x, t)$ representing velocity, along with pressure $p(x, t)$ and density $\rho_f$, adheres to the following equations:
$$\rho_f \left( \frac{\partial v}{\partial t} + v \cdot \nabla v \right) = -\nabla p, \quad (1a)$$
$$\nabla \cdot v = 0, \quad (1b)$$
where $x$ lies on a surface $S$. Two main challenges in solving Eqs. 1 puzzle researchers: one is enforcing the governing equations on the surface, and the other is developing efficient approaches for the time integration (advection) and for the divergence-free constraint in Eq. 1b, which is critical to ensure the conservation of fluid volume and energy. Classical approaches often utilize grids or meshes on surfaces and reduce the problem to 2D scenarios. However, they encounter significant challenges in the geometry and differential operator computation. Accurate calculation on surfaces relies on high mesh/grid quality, leading to limited robustness and flexibility across geometry representations. Moreover, the spatial discretization introduced by meshes or grids also hinders simulation in a continuous spatio-temporal domain owing to limited memory. Finally, traditional methods need to conduct advection and pressure projection for the divergence-free field, causing the energy dissipation problem.
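The incompressibility constraint of Eq. 1b is easy to check numerically in the planar case: deriving a velocity field from a scalar stream function makes it divergence-free by construction. The sketch below (our own illustrative choice of stream function, not from the paper) verifies this with finite differences:

```python
import numpy as np

def velocity_from_stream(psi, x, y, h=1e-5):
    """Planar velocity from a stream function: v = (dpsi/dy, -dpsi/dx).
    Any field of this form satisfies div(v) = 0 identically."""
    vx = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    vy = -(psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return vx, vy

def divergence(psi, x, y, h=1e-4):
    """Central-difference divergence of the constructed velocity field."""
    dvx_dx = (velocity_from_stream(psi, x + h, y)[0]
              - velocity_from_stream(psi, x - h, y)[0]) / (2 * h)
    dvy_dy = (velocity_from_stream(psi, x, y + h)[1]
              - velocity_from_stream(psi, x, y - h)[1]) / (2 * h)
    return dvx_dx + dvy_dy

# A Taylor-Green-style stream function (an illustrative choice).
psi = lambda x, y: np.sin(x) * np.sin(y)

pts = np.random.default_rng(0).uniform(0.0, 2 * np.pi, size=(100, 2))
div = np.array([divergence(psi, px, py) for px, py in pts])
print(np.abs(div).max())  # vanishes up to finite-difference error
```

The surface construction described in Sec. 3 generalizes exactly this idea: a scalar potential whose derived field is divergence-free by construction, so no projection step is needed.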
While many alternatives have been proposed to solve these problems (Qu et al., 2019; Elcott et al., 2007b; Nabizadeh et al., 2022; Yin et al., 2023), they often come with implementation complexity and lack adaptability to different geometries. As a promising alternative, simulations based on neural implicit representations have been proposed in recent years (Richter-Powell et al., 2022; Chen et al., 2023). Unlike other data-driven simulation methods (Morimoto et al., 2021; Pfaff et al., 2020) with limited generalization ability, these methods leverage neural networks to parameterize spatial functions and support simulation on a continuous domain with limited storage. However, existing methods (Raissi et al., 2019; Chen et al., 2023) cannot guarantee the divergence-free property and suffer from advection error, which leads to the energy dissipation problem. Furthermore, extending these methods to surfaces presents additional challenges for practitioners. To tackle these challenges, we propose a neural flow-on-surfaces method based on the implicit neural representation, which keeps high memory efficiency and supports robust and accurate differential operator computation for continuous simulation across various geometry representations. Our method leverages a construction on the surface with the Closest Point Method (Ruuth & Merriman, 2008) and differential forms, and automatically satisfies the divergence-free constraint, helping us enforce the constitutive laws on the surface. To mitigate the energy dissipation encountered in both classical and advanced methods, we adopt a covariant-derivative-based advection to enforce the dynamics of the incompressible fluid. By integrating this process with our divergence-free field construction, our framework eliminates the need for velocity advection and pressure projection, thus minimizing energy dissipation.
Furthermore, our framework is versatile and applicable to various tasks such as generation and field decomposition, offering an end-to-end solution that capitalizes on the advantages of neural representation. In summary, we make the following contributions:

- We present a novel neural physical simulator for surface flow, named NFFS (Neural Functional Flow on Surface), leveraging the Closest Point Method and exterior calculus in the implicit neural representation. Our approach ensures the divergence-free property and adapts across various geometric surface representations. Notably, this is the first study to present simulation results of incompressible fluid flow on implicitly neural-represented surfaces (Sitzmann et al., 2020) with a guarantee of divergence-free behavior.
- We design a complementary advection process based on covariant derivatives for fluid dynamics with low energy dissipation.
- We conduct comprehensive numerical studies to verify the correctness, energy preservation, memory efficiency, and geometry adaptivity of our proposed framework. Benefiting from the advantages of compact representation, as also highlighted in Chen et al. (2023), our results show that our method achieves approximately 15 times higher accuracy than other methods at the same storage cost and provides 5 times memory savings compared to the classic method while accurately describing fluid dynamics phenomena on the surface. Additionally, we demonstrate the conditioning ability of our simulator through an end-to-end generation task and apply it to a real-world velocity field decomposition task.

2 RELATED WORKS

Flow on two-dimensional surfaces. The mainstream of fluid simulation consists of Lagrangian methods like Smoothed Particle Hydrodynamics (SPH) (Gingold & Monaghan, 1977; Monaghan, 1992) and Eulerian methods such as stable fluids with the Marker-and-Cell grid (Stam, 1999). On surfaces, particle-based methods have been studied extensively in the past decade.
The primary focus is the differential operator in SPH style. Many approximators (Petronetto et al., 2013; Belkin & Niyogi, 2008; Cheung et al., 2015; Nealen, 2004) have been proposed and adopted in fluid dynamics (Auer & Westermann, 2013; Leung et al., 2011; Wang et al., 2020; Tao et al., 2022; Suchde, 2021). However, particle-based methods suffer from high computation cost to reach comparable accuracy.

Figure 1: The paradigm of our paper. Left: the pipeline of our proposed method. Our method utilizes the implicit neural representation to construct a divergence-free field (shown in Sec. 3.1). We employ exterior calculus and the Closest Point Method to construct the surface velocity field v and vorticity field ω (shown in Sec. 3.2). Subsequently, we adopt covariant-derivative-based advection to calculate the flow iteratively at each discretized time (shown in Sec. 4.1). Right: potential applications of our proposed method: (a) simulation on different surface representations, such as analytic surfaces (shown in Sec. 5.1), explicitly represented mesh surfaces (shown in Sec. 5.2), and implicitly represented surfaces (shown in Sec. 5.2); (b) conditioning and generation, benefiting from our network architecture (shown in Sec. 4.2 and 5.3); (c) Helmholtz decomposition of velocity fields, with an analysis of its potential for scientific research (shown in Sec. 5.4).
Dealing with differential forms on surfaces for divergence-free projection also remains challenging, particularly without correspondence between particles. We next focus our discussion on Eulerian methods, which can be categorized into velocity-based and vorticity-based methods. Velocity-based methods, following Stam (1999), utilize global or local surface parameterizations (Lui et al., 2005; Hegeman et al., 2009; Hill & Henderson, 2016; Yang et al., 2019), which may introduce undesired distortion. Methods like Stam (2003) restrict the problem to subdivision surfaces to mitigate this issue. Shi & Yu (2004) tackle the problem by directly simulating on general triangle meshes but entail explicit, complex computation of flow lines. More recently, Bhattacharya et al. (2019) extend these to unstructured quadrilateral surface meshes. However, these methods require advection and divergence-free projection, leading to energy dissipation (Nabizadeh et al., 2022). While there are approaches addressing the energy problem, such as Mullen et al. (2009); Pavlov et al. (2011) and, more recently, Qu et al. (2019); Deng et al. (2023), they do not primarily focus on surface dynamics and often entail additional complex computation (such as accurate characteristic mapping (Wiggert & Wylie, 1976)). Vorticity-based approaches are grounded in differential forms and exterior calculus on surfaces, circumventing surface parameterization and naturally enforcing the divergence-free property. One of the pioneering methods in this domain was proposed by Elcott et al. (2007b); while it preserves circulation (vorticity), it suffers from numerical instability. Another notable vorticity-based approach is presented by De Witt et al. (2012), which leverages eigenfunctions of the Laplacian operator to reduce the computational cost of the Poisson solver when computing velocity from vorticity.
This approach has been further extended in works such as Cui et al. (2018; 2021) for spectral-based simulations. However, such methods can be computationally expensive and require numerous eigenvectors for flows with high spatial frequencies. A method closely related to ours is Functional Fluids on Surfaces (Azencot et al., 2014), which employs Discrete Exterior Calculus on the surface to derive the vorticity and advects it with covariant derivatives. This approach achieves convenient computation on surfaces with energy preservation. Our method builds upon it by employing continuous exterior calculus to generate divergence-free fields and vorticity functions on surfaces. The theoretical accuracy of our method is higher given a sufficient number of surface samples. Moreover, we incorporate the neural implicit representation to alleviate the high memory burden associated with high smoothness and accuracy requirements.

Physical Simulation Based on Neural Networks. Neural physical simulation can be divided into two main streams. The first is data-driven simulation. These methods aim to solve simulation problems relying more on data and less on the governing equations. They typically use training data from classical solvers or real-world observations to make the neural network learn the physical rules and generalize to other scenarios. Convolutional network approaches (Morimoto et al., 2021) and tailored U-Net approaches (Lu et al., 2019) show higher efficiency than classical solvers. The neural operator approach (Li et al., 2020) makes full use of Fourier layers and has become an important milestone for this line of methods. Recently, Lagrangian Flow Networks (Torres et al., 2023) embed the idea of characteristic mapping into a data-driven PDE solver.
However, these methods may struggle to generalize to different initial/boundary conditions, material parameters, or geometries. Training data acquisition and time-consuming training also block their wide application. The other stream embeds the governing equations into the network. One representative direction here is the Physics-Informed Neural Network (PINN) (Raissi et al., 2019), which designs a physical loss term according to the governing equations; the neural network is trained to extract spatiotemporal correlations and fit the target field via the physical loss. However, directly forcing the neural network to fit all the physical rules makes training difficult: it takes a very long time and often cannot achieve the required accuracy. To address this issue, implicit neural representations (Chen et al., 2023) have been introduced to better describe the spatiotemporal dependencies and reduce the burden of network training. Nevertheless, this method enforces physical laws in an operator-splitting manner, leading to the energy dissipation problem. Kim et al. (2019); Rao et al. (2020); Richter-Powell et al. (2022) propose divergence-free neural field constructions and optimize the advection process. These methods, while effective in guaranteeing the divergence-free property, face challenges in direct application to surface flows and encounter difficulties in the optimization process. Hence, in this work, we propose a novel framework that constructs a neural-represented divergence-free vector field on surfaces to combine an efficient spatial representation with the physical prior. We also design a corresponding covariant-derivative-based advection process for the fluid dynamics computation.
Our method achieves memory-efficient, accurate, and energy-preserving simulation on different surface representations, even on implicitly neural-represented surfaces. We show the paradigm of our paper in Fig. 1.

3 NEURAL FLOW ON SURFACES

In this section, we present a framework designed to enforce the divergence-free property for vector fields on surfaces. We first present the general philosophy for constructing the divergence-free field, then explain how we apply it to different surface representations, especially implicitly neural-represented surfaces.

3.1 CONSTRUCTION OF THE DIVERGENCE-FREE VECTOR FIELD

We adopt the language of differential forms to derive the divergence-free vector field. More preliminaries are given in Do Carmo (1998) and Appendix A of Richter-Powell et al. (2022). We first discuss the divergence-free vector field on a Riemannian manifold $M$ (e.g., $\mathbb{R}^3$). Let $\Lambda^k(M)$ be the space of $k$-forms. The operator $d: \Lambda^k(M) \to \Lambda^{k+1}(M)$ is the exterior derivative, while $\star: \Lambda^k(M) \to \Lambda^{n-k}(M)$, where $n$ is the dimension of $M$, denotes the Hodge star operator mapping each $k$-form to an $(n-k)$-form. Our vector field can be represented as a 1-form, $v = \sum_{i=1}^{n} v_i \, dx^i$. With the notation of differential forms and the computation tools of differential manifolds, we can define the divergence as $\mathrm{div}(v) = \star d \star v$. Although $\mathrm{div}(v)$ is expressed with differential $k$-forms, after computation it reduces to a 0-form, i.e., a scalar function. A fundamental property of exterior calculus is that for an arbitrary $(n-2)$-form $\mu \in \Lambda^{n-2}(M)$,
$$d^2\mu = d(d\mu) = 0, \quad (2)$$
so it follows that
$$v = \star d\mu \quad (3)$$
is divergence-free, since $\mathrm{div}(\star d\mu) = \star d \star\star d\mu = \pm \star d(d\mu) = 0$ ($\star\star$ yields only a sign). Consequently, our objective is to construct a parametric $(n-2)$-form $\mu$ that enforces a divergence-free $v$. We can construct a network parameterization of $\mu$ and derive the required velocity field $v$, divergence-free for incompressibility, via Eq. 3.
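The construction of Eq. 3 can be sanity-checked numerically. For $n = 3$ and $\mu$ a 1-form (identified with a vector potential), $v = \star d\mu$ is the classical curl, and $\mathrm{div}(v) = 0$ holds identically by $d^2 = 0$. The following sketch, with an arbitrary smooth $\mu$ standing in for a network parameterization (all names are ours, for illustration), verifies this with finite differences:

```python
import numpy as np

def curl_fd(mu, x, h=1e-4):
    """Finite-difference curl of a vector potential mu: R^3 -> R^3.
    For n = 3 and mu a 1-form, v = *d(mu) is the classical curl, so
    div(v) = 0 follows from d(d(mu)) = 0."""
    J = np.zeros((3, 3))                     # Jacobian of mu
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (mu(x + e) - mu(x - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

def div_fd(field, x, h=1e-3):
    """Central-difference divergence of a vector field."""
    total = 0.0
    for j in range(3):
        e = np.zeros(3); e[j] = h
        total += (field(x + e)[j] - field(x - e)[j]) / (2 * h)
    return total

# An arbitrary smooth potential standing in for the network-parameterized mu.
mu = lambda x: np.array([np.sin(x[1]), x[2] ** 2, np.cos(x[0] * x[1])])
v = lambda x: curl_fd(mu, x)

x0 = np.array([0.3, -0.7, 1.2])
print(div_fd(v, x0))  # ~0: the constructed field is divergence-free by design
```

In the actual framework, $\mu$ would be a neural network and the derivatives would come from automatic differentiation rather than finite differences; the divergence-free guarantee is structural either way.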
Therefore, the main issue becomes the computation of the operators $\star$ and $d$ on the given surface $S$. We compute them via the Closest Point Method in the following subsection.

3.2 CONSTRUCTION OF THE NEURAL FLOW ON SURFACES

On a surface $S$ embedded in $\mathbb{R}^3$, the analysis differs. To analyze differential forms on the surface more conveniently, without a specific surface parameterization, we study $S$ in $\mathbb{R}^3$ instead of $\mathbb{R}^2$. The Closest Point Method (CPM) (Ruuth & Merriman, 2008; Li et al., 2023a) serves as a tool to transform differential forms on surfaces into ones defined in $\mathbb{R}^3$, using the closest point on the surface. We define the inclusion map $j: S \to N \subset \mathbb{R}^3$, where $N \subset \mathbb{R}^3$ is a neighborhood of $j(S)$. The closest point function $cp: N \to S$ takes a point in the neighborhood and returns the closest point on the surface. We can then define the pullback operators $j^*: \Lambda^k(N) \to \Lambda^k(S)$ and $cp^*: \Lambda^k(S) \to \Lambda^k(N)$ to map tangential forms between the neighborhood and the surface. The endomorphism $(j \circ cp)^* = cp^* \circ j^*: \Lambda^k(N) \to \Lambda^k(N)$ replaces a $k$-form in the neighborhood by its extension from its values on the surface $j(S)$. With the Closest Point Method, we can construct the divergence-free field on the surface $S$ as stated in the following theorem.¹

Theorem 3.1. Given a parameterized scalar function (stream function) $\sigma: S \to \mathbb{R}$, one can construct a divergence-free $v: S \to \mathbb{R}^3$; for $x \in S$:
$$v(x) = j^*\big( (\nabla (cp^*\sigma) \circ j(x)) \times n(x) \big), \quad (4)$$
where $n: S \to \mathbb{R}^3$ represents the surface normal, and the corresponding vorticity function $\omega: S \to \mathbb{R}$ can be defined as
$$\omega = (\nabla \times v(x)) \cdot n(x). \quad (5)$$
Equation 4 takes a stream function and employs the gradient operator to transform it into a vector field. The operator $cp^*$ pulls the field back into the ambient space, while $n$ and $j^*$ ensure that the field lies in the tangent plane and is restricted to the surface.
The vorticity can be interpreted as having a rotation axis aligned with the surface normal; since it is evaluated directly on the surface, it is a scalar field. To provide a clearer understanding of the computation described in Theorem 3.1, we include an illustration in Fig. 2.

Figure 2: Illustration for the divergence-free field.

Proof sketch: Our goal is to construct a divergence-free field. We adopt the form $\star d\mu$ from Sec. 3.1 as the basis, and evaluate this differential form for a surface function. We utilize the Closest Point Method to pull the surface field $v$ back into the ambient space. We extend the differential form with surface normals to $\mathbb{R}^3$ via the pullback and derive a parameterization on $\mathbb{R}^3$ that preserves the divergence-free property. Moreover, the pullback utilizes the closest point, which has the property that the closest point of a surface point is the point itself. Therefore, we can simply restrict this parameterization of $v$ to the surface and derive the required surface field that satisfies the divergence-free property on the surface. The vorticity function can be derived as $\star dv$ through a similar process.

Remark: The construction of the divergence-free field in Eq. 4 is related to the surface-curl term in the Helmholtz decomposition used in electromagnetics (Scharstein, 1991). However, our theorem provides a more formal formulation, indicating why the field is divergence-free from the perspective of the Closest Point Method, and adopts it as a neural parametric function for surface vector field dynamics. Building on our analysis, we can further derive a scalar vorticity function on the surface to support fluid dynamics simulation. Following the conclusion of Theorem 3.1, we can construct our neural flow (a parametric vector field based on $\sigma$ and $n$) for different surface representations.
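Theorem 3.1 can be illustrated concretely on the unit sphere, where $cp(x) = x/\|x\|$ and $n(x) = x$. The sketch below (an arbitrary smooth $\sigma$ stands in for the network parameterization; all names are our own) checks numerically that the constructed field is tangent to the surface and divergence-free:

```python
import numpy as np

def grad_fd(f, x, h=1e-5):
    """Central-difference gradient of a scalar function on R^3."""
    g = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        g[j] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Unit sphere: closest-point map and outward unit normal coincide.
cp = lambda x: x / np.linalg.norm(x)
normal = cp

# An arbitrary smooth stream function standing in for the network sigma.
sigma = lambda p: p[0] * p[1] + np.sin(p[2])

def v(x):
    """Eq. 4 specialized to the unit sphere: v = grad(sigma o cp) x n."""
    return np.cross(grad_fd(lambda y: sigma(cp(y)), x), normal(x))

def div_fd(field, x, h=1e-3):
    """Central-difference divergence of a vector field."""
    return sum((field(x + h * np.eye(3)[j])[j]
                - field(x - h * np.eye(3)[j])[j]) / (2 * h) for j in range(3))

x0 = cp(np.array([0.4, -0.8, 0.5]))    # a point on the sphere
print(abs(np.dot(v(x0), normal(x0))))  # ~0: v lies in the tangent plane
print(abs(div_fd(v, x0)))              # ~0: the field is divergence-free
```

Tangency holds exactly because the cross product with $n$ is perpendicular to $n$; the vanishing divergence reflects the identity $\mathrm{div}(\nabla f \times \hat{x}) = \hat{x} \cdot (\nabla \times \nabla f) - \nabla f \cdot (\nabla \times \hat{x}) = 0$.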
For explicit representations such as spheres and planes (analytic) and meshes (discretized), our neural flow can be constructed by sampling on the surface and computing/querying the normals. Moreover, the most significant aspect of our construction lies in its applicability to implicit representations, particularly implicit neural representations (INRs) such as DeepSDF (Park et al., 2019) and SIREN (Sitzmann et al., 2020). These methods take $x \in \mathbb{R}^3$ as input and return the signed distance function $s(x)$. For this type of surface, points $x \in S$ can be sampled via the projection
$$x \leftarrow x - s(x)\,\frac{\nabla s(x)}{\|\nabla s(x)\|_2^2},$$
leveraging the characteristics of the signed distance function, while the surface normal can be computed using $\nabla s(x)$. Then, for the sampled points on the surface, we can create a parametric form $\sigma(\theta, x)$ and finalize the divergence-free construction with the normals following Eq. 4. The vorticity function $\omega$ can also be derived via Eq. 5. Our construction of $v$ directly enables tasks like Hodge-Helmholtz decomposition (extracting the divergence-free component of a given vector field), which can be achieved simply by fitting our construction with a mean-square-error loss on $v$. In addition, the construction of $\omega$ can be utilized to conduct advection, serving the vortex dynamics in fluid simulation, as we introduce subsequently.

¹Here we follow a discussion similar to Richter-Powell et al. (2022) and omit the case of non-zero homology for clarity and conciseness in the theoretical analysis; we address this issue in the following sections.

4 APPLICATIONS WITH NEURAL FLOW ON SURFACES

In this section, we put our construction of neural flow on surfaces into practice. We first discuss advection for simulating real fluid dynamics; we then delve into applications involving conditioning.
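As a concrete illustration of the surface sampling described in Sec. 3.2, the projection and normal computation for implicit surfaces can be sketched as follows; an analytic sphere SDF is used as a stand-in for a neural SDF such as DeepSDF or SIREN, and the helper names are our own:

```python
import numpy as np

def sdf_sphere(x, r=1.0):
    """Signed distance to a sphere of radius r (stand-in for a neural SDF)."""
    return np.linalg.norm(x) - r

def grad_fd(s, x, h=1e-5):
    """Central-difference gradient of the SDF (autodiff in practice)."""
    g = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        g[j] = (s(x + e) - s(x - e)) / (2 * h)
    return g

def project_to_surface(s, x, n_steps=5):
    """Newton-style projection x <- x - s(x) grad s(x) / ||grad s(x)||^2,
    used to sample points on an implicitly represented surface."""
    for _ in range(n_steps):
        g = grad_fd(s, x)
        x = x - s(x) * g / np.dot(g, g)
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=3)                 # arbitrary point near the surface
p = project_to_surface(sdf_sphere, x)  # landed on the zero level set
n = grad_fd(sdf_sphere, p)
n /= np.linalg.norm(n)                 # unit surface normal from grad s
print(abs(sdf_sphere(p)))              # ~0: p lies on the surface
```

For an exact SDF the projection converges in one step; a few iterations make it robust when the network only approximates the distance property.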
4.1 ADVECTION OF NEURAL FLOW ON SURFACES

Our vector field inherently satisfies the incompressibility condition by the divergence-free construction, eliminating the need for pressure projection to enforce the constraint, along with its associated errors. However, our proposed field, while possessing this advantageous property, introduces a new challenge: how can we advect this neural field over time so that it adheres to the physical laws?

Figure 3: Illustration for the covariant derivative.

The incompressible Euler equation can be written in vorticity form (Davidson, 2015):
$$\frac{\partial \omega_t}{\partial t} = -(v_t \cdot \nabla)\,\omega_t, \quad (6)$$
where $v_t$ is the divergence-free velocity field and $\omega_t$ is the vorticity field.

Remark: A direct consequence of Eq. 6 is that $\omega_t$ is transported in the same manner as fluid particles. This implies that we can advect/diffuse $\omega_t$ like a scalar property carried by the particles, such as temperature.

For the advection process, instead of the semi-Lagrangian scheme (Staniforth & Côté, 1991), which requires discretization, we opt for a functional approach to fully exploit the continuity of our neural field construction (Azencot et al., 2014). As indicated in the remark above, we define the flow $\phi_t(p)$ denoting the position at time $t$ of the particle that starts at $p$ at time 0. That is,
$$\frac{d\phi_t(p)}{dt} = v_t(\phi_t(p)), \quad \phi_0(p) = p, \quad (7)$$
for $p \in S$. We can then observe that $\phi_t$ is an invertible self-map on $S$ and can be adopted to transport $\omega_t$. Define the action of $\phi_t$ on a smooth function $f: S \to \mathbb{R}$ through the push-forward:
$$\Phi_t(f) = f \circ \phi_t^{-1}. \quad (8)$$
Our vorticity mapped by $\Phi$ satisfies:
$$\omega_t = \Phi_t(\omega_0). \quad (9)$$
Next we implement the advection of $\omega$. We discretize time with step $h$ and consider $\omega_i$ at time $i$. We can then impose the following requirement, assuming $v_i$ advects for time $h - t$ and $v_{i+1}$ advects for time $t$:
$$\omega_{i+1} = \Phi_{i \to i+1}(\omega_i) = \Phi^{v_{i+1}}_{t} \circ \Phi^{v_i}_{h-t}(\omega_i), \quad (10)$$
for $t \in [0, h]$, which is similar to the implicit Euler scheme (Butcher, 2016).
This enables us to derive the equivalence between the forward-advected $\omega_i$ (along $v_i$) and the backward-advected $\omega_{i+1}$ (along $v_{i+1}$) by taking $t = h/2$:
$$\Phi^{v_{i+1}}_{-h/2}(\omega_{i+1}) = \Phi^{v_i}_{h/2}(\omega_i). \quad (11)$$
Our goal then turns to computing $\Phi^v_t$. We first define the covariant derivative $D_v(f)$ as a function $g$ that measures the change in $f$ w.r.t. the flow under $v$:
$$g(p) = D_v(f)(p) = \lim_{t \to 0} \frac{f(\phi_t(p)) - f(p)}{t}. \quad (12)$$
A classic result in Riemannian geometry is that the covariant derivative can be computed as (Morita, 2001) (as shown in Fig. 3):
$$D_v(f)(p) = g(p) = \langle (\nabla f)(p), v(p) \rangle_p. \quad (13)$$
With the conclusion of Azencot et al. (2013) (Lemma 2.5), we can derive that
$$\Phi^v_t(\omega) = \exp(t\, D_v)\,\omega = \sum_{k=0}^{\infty} \frac{t^k}{k!} D_v^k\, \omega \quad (14)$$
with respect to $v$. Using Eq. 13 and Eq. 14 above, along with the first-order approximation estimated by the inner product, we can derive the following expression for each time step:
$$L_i = \omega_i - \omega_{i+1} + \frac{h}{2}\langle \nabla\omega_i, v_i\rangle + \frac{h}{2}\langle \nabla\omega_{i+1}, v_{i+1}\rangle = 0. \quad (15)$$
Our advection scheme then iteratively minimizes the loss function $L_i$ with respect to the parametric $\omega_{i+1}$ and $v_{i+1}$ obtained through Theorem 3.1, preparing them for the advection at time $i + 2$. More specifically, for the parameters $\theta_i$ of the parametric functions $\omega_i$ and $v_i$, we seek to optimize
$$\theta_{i+1} = \arg\min_{\theta_{i+1}} \sum_{x \in M \subset S} L_i\big(h, \omega(\theta_i), v(\theta_i), \omega(\theta_{i+1}), v(\theta_{i+1})\big)^2, \quad (16)$$
where $M$ is the sample set from the surface and $\{\theta_i\}_{i=0}^{T}$ represents the vector field over $T$ time steps. For the initial time $\theta_0$, we utilize the given velocity field $v_0$ or vorticity field $\omega_0$ and fit it for initialization. When both $v_0$ and $\omega_0$ are provided, there exists a harmonic component that does not contribute to the vorticity but influences the velocity field. This term is often associated with the topological structure and is modeled as time-invariant (Azencot et al., 2014). Therefore, we need to construct another time-invariant field parameterization (an MLP) to fit the harmonic term for the non-zero homology. We can follow Richter-Powell et al. (2022); Azencot et al.
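The equivalence of Eq. 12 and Eq. 13 underlying this scheme is easy to verify numerically: integrating the flow of Eq. 7 for a short time and differencing reproduces the inner product $\langle \nabla f, v \rangle$. A minimal planar sketch, with our own illustrative choices of $f$ and $v$:

```python
import numpy as np

def flow(v, p, t, n=1000):
    """Integrate dphi/dt = v(phi) with explicit Euler (Eq. 7)."""
    dt = t / n
    for _ in range(n):
        p = p + dt * v(p)
    return p

f = lambda p: np.sin(p[0]) * p[1]      # a transported scalar quantity
v = lambda p: np.array([-p[1], p[0]])  # a rotational velocity field

p0 = np.array([0.6, 0.3])
t = 1e-4
# Limit definition of the covariant derivative (Eq. 12):
lhs = (f(flow(v, p0, t)) - f(p0)) / t
# Inner-product form (Eq. 13): D_v f = <grad f, v>
grad_f = np.array([np.cos(p0[0]) * p0[1], np.sin(p0[0])])
rhs = float(np.dot(grad_f, v(p0)))
print(lhs, rhs)  # the two agree to first order in t
```

In the full method this inner product is evaluated on surface samples with autodiff gradients of the neural $\omega$, which is what makes the residual $L_i$ of Eq. 15 cheap to assemble at arbitrary points.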
(2014) and simply add another parameterized vector field η to Eq. 4. Specifically, at the initial time, we employ another MLP η to learn the residual in the initial velocity after fitting the vorticity, treating this residual as the harmonic component. Similar to Azencot et al. (2014), η remains time-invariant in subsequent velocity computations and is not trained in the following iterative computation. More discussion of the topology is included in Appendix F.1. For all examples presented in this work, we solve this time-integration optimization problem via Adam (Kingma & Ba, 2014), a first-order stochastic gradient descent method. The computational process is shown in pseudocode in Algorithm 1 in Appendix D.

4.2 CONDITIONING PROPERTY IN NEURAL FLOW ON SURFACES

As mentioned in Theorem 3.1, we construct a parametric σ(θ, x) for x ∈ S with parameters θ. The formulation also allows us to conduct conditioning, i.e., a function σ(θ, x, z), where z is a feature conditioning the divergence-free field. This property can be leveraged in different tasks. For instance, in Sec. 4.1, the feature z can represent the time t. Moreover, for tasks involving vector field generation, z can encode semantic features extracted from natural scenes or images, thereby enabling control over the shape, scale, or other semantic aspects of the field. We further present an example utilizing a variational auto-encoder for vorticity field generation (settings in Appendix C and results in Sec. 5.3), showing the capability of our framework to incorporate semantic priors.

5 NUMERICAL STUDIES

In this section, we conduct numerical studies of our proposed framework.
Our primary emphasis lies in verifying the efficacy of our framework in fluid dynamics across various surface representations, exploring conditioning characteristics, and demonstrating practical applications such as the Helmholtz decomposition using real-world data. It is worth noting that our method does not rely on any training data, distinguishing it from certain neural network-based simulation methods (Morimoto et al., 2021; Lu et al., 2019; Pfaff et al., 2020; Torres et al., 2023).

Figure 4: Comparison results for sphere jet flow (times 0 s, 4 s, and 8 s).

| Methods    | Error  | Time   | Storage   |
|------------|--------|--------|-----------|
| PINN       | 1.73e5 | 12.1 h | 568.1 KB  |
| INSR       | 8.63e4 | 20.2 h | 516.3 KB  |
| Small-F.S. | 5.34e3 | 0.8 h  | 583.8 KB  |
| Ours       | 2.89e2 | 16.5 h | 532.8 KB  |
| GT         | N/A    | 8.3 h  | 2643.0 KB |

Table 1: Quantitative results for the sphere jet flow. Error: mean square error (MSE) averaged over 100 time steps on 81924 mesh vertices.

5.1 FLOW ON THE ANALYTICAL SURFACES

In this subsection, we present benchmark studies for our proposed framework on an analytical sphere and an inclined plane, which allow us to easily derive samples and normals. We design a sphere jet flow and an inclined-plane case to assess the correctness of our methods, compared against functional fluid dynamics and other baseline methods. Additionally, we utilize a sphere rotation case to further validate the energy-preservation characteristics of our method in Appendix E.1.

Figure 5: Vorticity comparison for inclined planar Taylor vortices at the 40th time step (GT, Ours, Small-F.S., INSR, PINN, HOLA-7, Pseudospectral, Elcott et al. 2007). The HOLA, Pseudospectral, and Elcott et al. (2007) results are quoted from Mc Kenzie (2007).

Sphere Jet. On the sphere, we simulate a jet flow as illustrated in Fig. 4. We initialize two opposite vortices on the sphere to generate the jet.
Our vorticity results are compared with those from PINN (Raissi et al., 2019), Implicit Neural Spatial Representations (INSR) (Chen et al., 2023), and Functional Fluids on Surfaces at the same storage cost. We use a higher-resolution Functional Fluids on Surfaces run as the reference ground truth, whose vector field storage is 5 times ours. The results depicted in Fig. 4 indicate that our method characterizes the jet phenomenon of the flow vortices better than the other methods. Our method allows optimization on the subspace of divergence-free functionals via the physical constraint in the network design, while the others need projection onto the divergence-free functional subspace via an extra loss function, which introduces errors at each discretized time step, leading to significant inaccuracy through cascading effects in Fig. 4. Additionally, our method produces smoother results, since we sample across the entire sphere rather than solving only at the mesh vertices as in high/low-resolution Functional Fluids on Surfaces. In other words, for higher-resolution and smoother results, the Functional Fluids on Surfaces method requires more storage, whereas our memory cost remains constant. We provide a quantitative comparison of the vorticity values on the reference ground truth method's mesh vertices. The mean square error over 100 time steps is shown in Tab. 1. The results demonstrate that at the same memory cost, ours achieves the highest accuracy among the compared methods. More empirical results and comparisons with both classical and advanced methods are included in Appendix E.3 to further verify the effectiveness of the proposed framework.

Figure 6: Results for flow on explicit meshes (hand vortex pair and Spot jet flow, times 0-2).

Taylor vortices on inclined plane. We also simulate Taylor vortices on an inclined plane with a known normal.
The phenomena observed should be similar to those on a 2D plane. The initialization of the vortices follows McKenzie (2007). The results are displayed in Fig. 5. Our method captures the details of the vortex phenomenon with high smoothness and low aliasing. The classical methods with comparable memory cost lose some details or leave aliasing artifacts. The advanced method INSR shows a similar result but still fails to capture the details and preserve the vortex energy due to energy dissipation. PINN suffers from the largest energy dissipation, and its result is the least accurate due to the enforcement of the divergence-free constraint. We also include quantitative results and more comparisons with classic solvers in Appendix E.2 to further show the memory efficiency and accuracy of our proposed method.

5.2 FLOW ON THE EXPLICIT AND IMPLICIT MESHES

Figure 7: Results for flow on implicit neural representations at times 1–4.

In this subsection, to further demonstrate the generality and robustness of our method, we present the flow results for both explicit and implicit geometry. For these geometries, the samples and normals cannot be analytically derived, resulting in inherent errors. Nevertheless, we demonstrate that our method still performs effectively and captures the flow phenomena accurately².

Flow on explicit meshes. We initialize the Taylor vortex pair and jet flow on the explicit hand (Jacobson et al., 2018) and spot (Crane et al., 2013a) models. The results are depicted in Fig. 6. They show that, albeit with less smooth results due to the relatively low mesh resolution, our flows exhibit similar behavior to the reference GT and effectively capture the vortex and jet phenomena.

Flow on implicit neural representations. We also simulate a jet flow on the implicit-neural-represented Armadillo (Krishnamurthy & Levoy, 1996) and Lucy (Turk & Levoy, 1994). The results in Fig.
7 accurately capture the smooth jet flow phenomenon on different implicit surfaces. Surprisingly, the classic functional flow on surfaces method fails to converge both for the mesh obtained from marching cubes on our implicit neural representation and even for the original mesh (using a Newton solver). To substantiate our claim regarding non-convergence, we include further studies in the supplementary material (Appendix E.4) listing the simulation crash time steps across different marching cube resolutions for the traditional methods. The outcome indicates that our method maintains higher robustness than the traditional method, which depends on mesh quality and exhibits instability and high memory consumption when applied to implicit neural representations. Instead, our method

² Note that we do not compare with INSR and PINN in these cases, since the advection and projection in the two methods cannot be simply adapted to flow on various surfaces without a surface parameterization to R² and a continuous pullback to R³, which are beyond the scope of this work.

only needs samples in 3D rather than meshes, thereby avoiding the subtleties of the geometry and topology inherent in complex shapes, and therefore derives smooth jet flow results. Moreover, the learnable neural divergence-free field is capable of tolerating errors and supports more robust simulation in the long term. Generally speaking, our method supports wider applications on different geometry representations with high memory efficiency in practice.

Figure 8: (a) Vorticity field generation. (b) Helmholtz decomposition for 100-metre wind data (divergence-free and total magnitudes for two dates).

5.3 FLOW WITH CONDITIONING

To verify the conditioning property of our proposed neural-based framework, we design the generation task as stated in Sec.
4.2 and Appendix C. We take the EMNIST dataset (Cohen et al., 2017) as the image input and generate divergence-free velocity fields whose vorticity imitates the silhouettes of letters. We construct a variational auto-encoder for the vorticity fields representing different letters in Fig. 8 (a). The numerical studies demonstrate the feasibility of our proposed framework, benefiting from the neural network, to effectively utilize conditioning encodings from other modalities. Ours operates in a more end-to-end manner compared to approaches that first generate and then fit with classic simulators, thereby avoiding the repeated fitting process for each generation.

5.4 FLOW FOR HELMHOLTZ DECOMPOSITION

Finally, we apply our method to a real-world atmosphere dataset (Raoult et al., 2017). The Helmholtz decomposition is performed on the 100-metre wind velocity data with our proposed framework. In this study, we make a sphere assumption for the latitude and longitude coordinates in the dataset and derive the normals analytically. The results, shown in Fig. 8 (b), reveal identifiable vortices after the decomposition. Further inferences can be made regarding atmospheric information from the results (Cao et al., 2014; Hammond & Lewis, 2021).

6 DISCUSSION AND CONCLUSION

This work adopts the neural representation to construct a parametric divergence-free vector field based on the Closest Point Method that supports exterior calculus on surfaces. Our framework facilitates field construction along with covariant-derivative-based advection directly on various surface representations, especially on implicit neural representations, bypassing the need for marching cubes or meshing functions, which is not well supported by classic simulators. From this point of view, our framework aligns well with current trends in neural implicit representation methods like DeepSDF (Park et al., 2019) and NeuS (Wang et al., 2021).
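For intuition, the Helmholtz decomposition of Sec. 5.4 can be sketched in its simplest flat, periodic setting via a spectral projection (a classic FFT-based method on a square grid, not our surface-based framework; the function and variable names are our own illustrative choices):

```python
import numpy as np

# Split a periodic 2D field (u, v) on a square n-by-n grid into its
# divergence-free and curl-free parts by projecting onto the gradient
# subspace in Fourier space: grad-part = k (k . u_hat) / |k|^2.
def helmholtz_decompose(u, v):
    n = u.shape[0]  # assumes a square periodic grid
    k = np.fft.fftfreq(n) * n
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0  # avoid division by zero; the mean mode carries no divergence
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = KX * u_hat + KY * v_hat  # factor i omitted: it cancels below
    u_curlfree = np.real(np.fft.ifft2(KX * div_hat / k2))
    v_curlfree = np.real(np.fft.ifft2(KY * div_hat / k2))
    return (u - u_curlfree, v - v_curlfree), (u_curlfree, v_curlfree)
```

For example, feeding in the sum of a rotational field (cos x sin y, -sin x cos y) and a gradient field (sin x, sin y) on a [0, 2π)² grid recovers the two components separately. On the sphere the analogous decomposition is what our framework performs without a flat parameterization.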
The experimental results validate the correctness, energy preservation and memory efficiency of our method. Furthermore, our framework shows high robustness and flexibility and also supports various conditioning tasks for further applications. While offering important benefits, our method also suffers from limitations. Our main limitations stem from topology, geometry and time efficiency, and we discuss the details in Appendix F. As future extensions, a theoretical analysis of the convergence and stability of our method would be valuable. Also, more efforts in advection can be made, including high-order approximation (Suzuki, 1985), combination with other advanced methods like reference mapping (Li et al., 2023c) and data-driven networks (Liu et al., 2021). Expanding support for boundary conditions and flow viscosity³ on surfaces is also vital for more realistic and practical applications. It is also a promising direction to integrate our proposed framework into existing neural reconstruction pipelines like NeRF (Mildenhall et al., 2021; Wang et al., 2021), enabling improved performance in dynamic reconstruction and inverse physics with vision input.

³ The viscosity term can also be supported by applying the Laplacian to the vorticity function ω. We can derive the calculation on the surface using Theorem 3.1 by substituting the stream function with ω.

ACKNOWLEDGEMENTS

This work was supported by the National Key R&D Program of China (No. 2023YFB3209700 and No. 2024YFB2809101) and the National Natural Science Foundation of China (No. 62322110 and No. 62171255).

REFERENCES

Eddie Aamari, Jisu Kim, Frédéric Chazal, Bertrand Michel, Alessandro Rinaldo, and Larry Wasserman. Estimating the reach of a manifold. 2019.

Ryoichi Ando, Nils Thuerey, and Chris Wojtan. A stream function solver for liquid simulations. ACM Transactions on Graphics (TOG), 34(4):1–9, 2015.

Stefan Auer and Rüdiger Westermann.
A semi-Lagrangian closest point method for deforming surfaces. In Computer Graphics Forum, volume 32, pp. 207–214. Wiley Online Library, 2013.

Omri Azencot, Mirela Ben-Chen, Frédéric Chazal, and Maks Ovsjanikov. An operator approach to tangent vector field processing. In Computer Graphics Forum, volume 32, pp. 73–82. Wiley Online Library, 2013.

Omri Azencot, Steffen Weißmann, Maks Ovsjanikov, Max Wardetzky, and Mirela Ben-Chen. Functional fluids on surfaces. In Computer Graphics Forum, volume 33, pp. 237–246. Wiley Online Library, 2014.

Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008.

Haimasree Bhattacharya, Joshua A Levine, and Adam W Bargteil. Fluid simulation on unstructured quadrilateral surface meshes. UMBC Computer Science and Electrical Engineering Department Collection, 2019.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

John Charles Butcher. Numerical methods for ordinary differential equations. John Wiley & Sons, 2016.

Jie Cao, Lingkun Ran, and Na Li. An application of the Helmholtz theorem in extracting the externally induced deformation field from the total wind field in a limited domain. Monthly Weather Review, 142(5):2060–2066, 2014.

Jumyung Chang, Ruben Partono, Vinicius C Azevedo, and Christopher Batty. Curl-flow: boundary-respecting pointwise incompressible velocity interpolation for grid-based fluids. ACM Transactions on Graphics (TOG), 41(6):1–21, 2022.

Honglin Chen, Rundi Wu, Eitan Grinspun, Changxi Zheng, and Peter Yichen Chen. Implicit neural spatial representations for time-dependent PDEs. In International Conference on Machine Learning, pp. 5162–5177. PMLR, 2023.
Ka Chun Cheung, Leevan Ling, and Steven J Ruuth. A localized meshless method for diffusion on folded surfaces. Journal of Computational Physics, 297:194–206, 2015.

Julian Chibane, Gerard Pons-Moll, et al. Neural unsigned distance fields for implicit function learning. Advances in Neural Information Processing Systems, 33:21638–21652, 2020.

Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2921–2926. IEEE, 2017.

Keenan Crane, Ulrich Pinkall, and Peter Schröder. Robust fairing via conformal curvature flow. ACM Transactions on Graphics (TOG), 32(4):1–10, 2013a.

Keenan Crane, Clarisse Weischedel, and Max Wardetzky. Geodesics in heat: A new approach to computing distance based on heat flow. ACM Transactions on Graphics (TOG), 32(5):1–11, 2013b.

Darren Crowdy and Jonathan Marshall. Analytical solutions for rotating vortex arrays involving multiple vortex patches. Journal of Fluid Mechanics, 523:307–337, 2005.

Qiaodong Cui, Pradeep Sen, and Theodore Kim. Scalable Laplacian eigenfluids. ACM Transactions on Graphics (TOG), 37(4):1–12, 2018.

Qiaodong Cui, Timothy R Langlois, Pradeep Sen, and Theodore Kim. Spiral-spectral fluid simulation. ACM Trans. Graph., 40(6):202–1, 2021.

Fang Da, Christopher Batty, Chris Wojtan, and Eitan Grinspun. Double bubbles sans toil and trouble: Discrete circulation-preserving vortex sheets for soap films and foams. ACM Transactions on Graphics (TOG), 34(4):1–9, 2015.

Peter Alan Davidson. Turbulence: an introduction for scientists and engineers. Oxford University Press, 2015.

Tyler De Witt, Christian Lessig, and Eugene Fiume. Fluid simulation using Laplacian eigenfunctions. ACM Transactions on Graphics (TOG), 31(1):1–11, 2012.

Yitong Deng, Mengdi Wang, Xiangxin Kong, Shiying Xiong, Zangyueyang Xian, and Bo Zhu.
A moving Eulerian-Lagrangian particle method for thin film and foam simulation. ACM Transactions on Graphics (TOG), 41(4):1–17, 2022.

Yitong Deng, Hong-Xing Yu, Diyang Zhang, Jiajun Wu, and Bo Zhu. Fluid simulation on neural flow maps. ACM Transactions on Graphics (TOG), 42(6):1–21, 2023.

Manfredo P Do Carmo. Differential forms and applications. Springer Science & Business Media, 1998.

Sharif Elcott, Yiying Tong, Eva Kanso, Peter Schröder, and Mathieu Desbrun. Stable, circulation-preserving, simplicial fluids. ACM Transactions on Graphics (TOG), 26(1):4–es, 2007a.

Sharif Elcott, Yiying Tong, Eva Kanso, Peter Schröder, and Mathieu Desbrun. Stable, circulation-preserving, simplicial fluids. ACM Trans. Graph., 26(1):4–es, January 2007b. ISSN 0730-0301. doi: 10.1145/1189762.1189766. URL https://doi.org/10.1145/1189762.1189766.

Robert A Gingold and Joseph J Monaghan. Smoothed particle hydrodynamics: theory and application to non-spherical stars. Monthly Notices of the Royal Astronomical Society, 181(3):375–389, 1977.

Mark Hammond and Neil T Lewis. The rotational and divergent components of atmospheric circulation on tidally locked planets. Proceedings of the National Academy of Sciences, 118(13):e2022705118, 2021.

Kyle Hegeman, Michael Ashikhmin, Hongyu Wang, Hong Qin, and Xianfeng Gu. GPU-based conformal flow on surfaces. Communications in Information & Systems, 9(2):197–212, 2009.

David J Hill and Ronald D Henderson. Efficient fluid simulation on the surface of a sphere. ACM Transactions on Graphics (TOG), 35(2):1–9, 2016.

Jiahui Huang, Zan Gojcic, Matan Atzmon, Or Litany, Sanja Fidler, and Francis Williams. Neural kernel surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4369–4379, 2023.

Alec Jacobson, Daniele Panozzo, et al. libigl: A simple C++ geometry processing library, 2018. https://libigl.github.io/.

Jürgen Jost.
Riemannian geometry and geometric analysis, volume 42005. Springer, 2008.

Byungsoo Kim, Vinicius C Azevedo, Nils Thuerey, Theodore Kim, Markus Gross, and Barbara Solenthaler. Deep fluids: A generative network for parameterized fluid simulations. In Computer Graphics Forum, volume 38, pp. 59–70. Wiley Online Library, 2019.

Nathan King, Haozhe Su, Mridul Aanjaneya, Steven Ruuth, and Christopher Batty. A closest point method for surface PDEs with interior boundary conditions for geometry processing. arXiv preprint arXiv:2305.04711, 2023.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

Lukas Koestler, Daniel Grittner, Michael Moeller, Daniel Cremers, and Zorah Lähner. Intrinsic neural fields: Learning functions on manifolds. In European Conference on Computer Vision, pp. 622–639. Springer, 2022.

Venkat Krishnamurthy and Marc Levoy. Fitting smooth surfaces to dense polygon meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 313–324, 1996.

Shingyu Leung, John Lowengrub, and Hongkai Zhao. A grid based particle method for solving partial differential equations on evolving surfaces and modeling high order geometrical motion. Journal of Computational Physics, 230(7):2540–2561, 2011.

Mica Li, Michael Owens, Juheng Wu, Grace Yang, and Albert Chern. Closest point exterior calculus. In SIGGRAPH Asia 2023 Posters, SA '23, New York, NY, USA, 2023a. Association for Computing Machinery. ISBN 9798400703133. doi: 10.1145/3610542.3626143. URL https://doi.org/10.1145/3610542.3626143.

Wei Li, Tongtong Wang, Zherong Pan, Xifeng Gao, Kui Wu, and Mathieu Desbrun. High-order moment-encoded kinetic simulation of turbulent flows. ACM Transactions on Graphics (TOG), 42(6):1–13, 2023b.

Xingqiao Li, Xingyu Ni, Bo Zhu, Bin Wang, and Baoquan Chen.
GARM-LS: A gradient-augmented reference-map method for level-set fluid simulation. ACM Transactions on Graphics (TOG), 42(6):1–20, 2023c.

Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.

Peirong Liu, Lin Tian, Yubo Zhang, Stephen Aylward, Yueh Lee, and Marc Niethammer. Discovering hidden physics behind transport dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10082–10092, 2021.

Xiaoxiao Long, Cheng Lin, Lingjie Liu, Yuan Liu, Peng Wang, Christian Theobalt, Taku Komura, and Wenping Wang. NeuralUDF: Learning unsigned distance fields for multi-view reconstruction of surfaces with arbitrary topologies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20834–20843, 2023.

Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.

Lok Ming Lui, Yalin Wang, and Tony F Chan. Solving PDEs on manifolds with global conformal parametrization. In International Workshop on Variational, Geometric, and Level Set Methods in Computer Vision, pp. 307–319. Springer, 2005.

Thomas März and Colin B Macdonald. Calculus on surfaces with general closest point functions. SIAM Journal on Numerical Analysis, 50(6):3303–3328, 2012.

Alexander George McKenzie. HOLA: a high-order Lie advection of discrete differential forms with applications in fluid dynamics. PhD thesis, California Institute of Technology, 2007.

Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis.
Communications of the ACM, 65(1):99–106, 2021.

Jonathan Miller, Peter B Weichman, and M. C. Cross. Statistical mechanics, Euler's equation, and Jupiter's red spot. Physical Review A, 45(4):2328, 1992.

Joe J Monaghan. Smoothed particle hydrodynamics. Annual Review of Astronomy and Astrophysics, 30(1):543–574, 1992.

Masaki Morimoto, Kai Fukami, Kai Zhang, Aditya G Nair, and Koji Fukagata. Convolutional neural networks for fluid flow analysis: toward effective metamodeling and low dimensionalization. Theoretical and Computational Fluid Dynamics, 35(5):633–658, 2021.

Shigeyuki Morita. Geometry of differential forms. American Mathematical Soc., 2001.

Patrick Mullen, Keenan Crane, Dmitry Pavlov, Yiying Tong, and Mathieu Desbrun. Energy-preserving integrators for fluid animation. ACM Transactions on Graphics (TOG), 28(3):1–8, 2009.

Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (TOG), 41(4):1–15, 2022.

Mohammad Sina Nabizadeh, Stephanie Wang, Ravi Ramamoorthi, and Albert Chern. Covector fluids. ACM Transactions on Graphics (TOG), 41(4):1–16, 2022.

Andrew Nealen. An as-short-as-possible introduction to the least squares, weighted least squares and moving least squares methods for scattered data approximation and interpolation. URL: http://www.nealen.com/projects, 130(150):25, 2004.

Peter Niiler. The world ocean surface circulation. In International Geophysics, volume 77, pp. 193–204. Elsevier, 2001.

Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 165–174, 2019.

Dmitry Pavlov, Patrick Mullen, Yiying Tong, Eva Kanso, Jerrold E Marsden, and Mathieu Desbrun. Structure-preserving discretization of incompressible fluids.
Physica D: Nonlinear Phenomena, 240(6):443–458, 2011.

Fabiano Petronetto, Afonso Paiva, Elias S Helou, David E Stewart, and Luis Gustavo Nonato. Mesh-free discrete Laplace-Beltrami operator. In Computer Graphics Forum, volume 32, pp. 214–226. Wiley Online Library, 2013.

Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W Battaglia. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409, 2020.

Ziyin Qu, Xinxin Zhang, Ming Gao, Chenfanfu Jiang, and Baoquan Chen. Efficient and conservative fluids using bidirectional mapping. ACM Transactions on Graphics (TOG), 38(4):1–12, 2019.

Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.

Chengping Rao, Hao Sun, and Yang Liu. Physics-informed deep learning for incompressible laminar flows. Theoretical and Applied Mechanics Letters, 10(3):207–212, 2020.

Baudouin Raoult, Cedric Bergeron, A. López Alós, Jean-Noël Thépaut, and Dick Dee. Climate service develops user-friendly data store. ECMWF Newsletter, 151:22–27, 2017.

Jack Richter-Powell, Yaron Lipman, and Ricky T. Q. Chen. Neural conservation laws: A divergence-free perspective. Advances in Neural Information Processing Systems, 35:38075–38088, 2022.

Liangwang Ruan, Jinyuan Liu, Bo Zhu, Shinjiro Sueda, Bin Wang, and Baoquan Chen. Solid-fluid interaction with surface-tension-dominant contact. ACM Transactions on Graphics (TOG), 40(4):1–12, 2021.

Steven J Ruuth and Barry Merriman. A simple embedding method for solving partial differential equations on surfaces. Journal of Computational Physics, 227(3):1943–1961, 2008.

R. W. Scharstein. Helmholtz decomposition of surface electric current in electromagnetic scattering problems.
In [1991 Proceedings] The Twenty-Third Southeastern Symposium on System Theory, pp. 424–426, 1991. doi: 10.1109/SSST.1991.138595.

Nicholas Sharp and Alec Jacobson. Spelunking the deep: Guaranteed queries on general neural implicit surfaces via range analysis. ACM Transactions on Graphics (TOG), 41(4):1–16, 2022.

Lin Shi and Yizhou Yu. Inviscid and incompressible fluid simulation on triangle meshes. Computer Animation and Virtual Worlds, 15(3-4):173–181, 2004.

Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020.

Jos Stam. Stable fluids. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, pp. 121–128, USA, 1999. ACM Press/Addison-Wesley Publishing Co. ISBN 0201485605. doi: 10.1145/311535.311548. URL https://doi.org/10.1145/311535.311548.

Jos Stam. Flows on surfaces of arbitrary topology. ACM Transactions on Graphics (TOG), 22(3):724–731, 2003.

Andrew Staniforth and Jean Côté. Semi-Lagrangian integration schemes for atmospheric models – a review. Monthly Weather Review, 119(9):2206–2223, 1991.

Pratik Suchde. A meshfree Lagrangian method for flow on manifolds. International Journal for Numerical Methods in Fluids, 93(6):1871–1894, 2021.

Masuo Suzuki. Decomposition formulas of exponential operators and Lie exponentials with some applications to quantum mechanics and statistical physics. Journal of Mathematical Physics, 26(4):601–612, 1985.

Rui Tao, Hongxiang Ren, Jun Liu, and Fangbing Xiao. A Lagrangian vortex method for smoke simulation with two-way fluid-solid coupling. Computers & Graphics, 107:289–302, 2022.

F Arend Torres, Marcello Massimo Negri, Marco Inversi, Jonathan Aellen, and Volker Roth. Lagrangian flow networks for conservation laws. arXiv preprint arXiv:2305.16846, 2023.

Greg Turk and Marc Levoy.
Zippered polygon meshes from range images. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, pp. 311–318, 1994.

Ari M Turner, Vincenzo Vitelli, and David R Nelson. Vortices on curved surfaces. Reviews of Modern Physics, 82(2):1301, 2010.

Hui Wang, Yongxu Jin, Anqi Luo, Xubo Yang, and Bo Zhu. Codimensional surface tension flow using moving-least-squares particles. ACM Transactions on Graphics (TOG), 39(4):42–1, 2020.

Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689, 2021.

DC Wiggert and EB Wylie. Numerical predictions of two-dimensional transient groundwater flow by the method of characteristics. Water Resources Research, 12(5):971–977, 1976.

Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos, Yuanlu Xu, Gerard Pons-Moll, and Tony Tung. NSF: Neural surface fields for human modeling from monocular depth. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15049–15060, 2023.

Bowen Yang, William Corse, Jiecong Lu, Joshuah Wolper, and Chenfanfu Jiang. Real-time fluid simulation on the surface of a sphere. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2(1):1–17, 2019.

Guandao Yang, Serge Belongie, Bharath Hariharan, and Vladlen Koltun. Geometry processing with neural fields. Advances in Neural Information Processing Systems, 34:22483–22497, 2021.

Xianghui Yang, Guosheng Lin, Zhenghao Chen, and Luping Zhou. Neural vector fields: Implicit representation by explicit learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16727–16738, 2023.

Wang Yifan, Shihao Wu, Cengiz Öztireli, and Olga Sorkine-Hornung. Iso-points: Optimizing neural implicit surfaces with hybrid representations.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 374–383, 2021.

Hang Yin, Mohammad Sina Nabizadeh, Baichuan Wu, Stephanie Wang, and Albert Chern. Fluid cohomology. ACM Trans. Graph., 42(4), July 2023. ISSN 0730-0301. doi: 10.1145/3592402. URL https://doi.org/10.1145/3592402.

A PRELIMINARIES FOR THE MATHEMATICAL TOOLS

A.1 THE DIFFERENTIAL GEOMETRY IN $\mathbb{R}^n$

We provide an in-depth discussion of our field construction, which follows the introduction in Appendix A of Richter-Powell et al. (2022). Please refer to that work for more details. Readers with a background in differential geometry can skip this section. We first discuss the basic concepts in differential geometry needed to introduce differential forms, which support us in deriving and proving our theorem in the paper. We take a local coordinate chart for $x \in \mathbb{R}^n$ as $x = (x_1, \ldots, x_n)$, and $dx_1, \ldots, dx_n$ denote the coordinate differentials, i.e. $dx_i(x) = x_i$ for $i \in [n] = \{1, \ldots, n\}$; these are also the co-vector fields of the local coordinates. Note that we discuss $\mathbb{R}^n$ as an example; all the definitions can be extended to smooth manifolds and remain well-defined, but this requires the construction of local charts or other mathematical manipulations. For a more extensive introduction see Do Carmo (1998); Morita (2001).

Define the linear vector space $\Lambda^k(\mathbb{R}^n)$ for $\mathbb{R}^n$ as the space of $k$-linear alternating maps

$$\underbrace{\mathbb{R}^n \times \cdots \times \mathbb{R}^n}_{k \text{ times}} \to \mathbb{R}. \tag{17}$$

A $k$-linear alternating map $\phi$ is linear in each coordinate and satisfies the alternating property:

$$\phi(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n) = -\phi(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n). \tag{18}$$

The basis of the space $\Lambda^k(\mathbb{R}^n)$ can be denoted with the differentials by $dx_{i_1} \wedge \cdots \wedge dx_{i_k}$. The basis acts on $k$ vectors $v_1, \ldots, v_k \in \mathbb{R}^n$ as:

$$dx_{i_1} \wedge \cdots \wedge dx_{i_k}(v_1, \ldots, v_k) = \sum_{\Upsilon \in S_k} \operatorname{sgn}(\Upsilon)\, dx_{i_1}(v_{\Upsilon(1)}) \cdots dx_{i_k}(v_{\Upsilon(k)}) = \det\left[dx_{i_r}(v_s)\right]_{r,s \in [k]}, \tag{19}$$

where $S_k$ denotes the set of permutations of $\{1, \ldots, k\}$.
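The two expressions in Eq. (19), the signed sum over permutations and the determinant, can be checked against each other numerically. A small sketch (our own illustrative code, not from the paper; `wedge_eval_det` and `wedge_eval_sum` are hypothetical names):

```python
import numpy as np
from itertools import permutations

# Evaluate the basis k-form dx_{i1} ^ ... ^ dx_{ik} on vectors v1, ..., vk.
def wedge_eval_det(idx, vectors):
    # dx_i(v) picks coordinate i of v, so entry (r, s) is dx_{i_r}(v_s).
    M = np.array([[v[i] for v in vectors] for i in idx])
    return np.linalg.det(M)

def perm_sign(p):
    # Sign of a permutation of {0, ..., k-1} via in-place cycle sorting.
    sign, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

def wedge_eval_sum(idx, vectors):
    # Signed sum over all permutations, matching the middle term of Eq. (19).
    k = len(idx)
    return sum(
        perm_sign(p) * np.prod([vectors[p[r]][idx[r]] for r in range(k)])
        for p in permutations(range(k))
    )
```

Swapping two input vectors flips the sign of both evaluations, which is exactly the alternating property of Eq. (18).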
More specifically, any $\chi \in \Lambda^k(\mathbb{R}^n)$ can be represented in the basis as: $i_1$