# Equivariant Neural Diffusion for Molecule Generation

François Cornet, Technical University of Denmark, frjc@dtu.dk
Grigory Bartosh, University of Amsterdam, g.bartosh@uva.nl
Mikkel N. Schmidt, Technical University of Denmark, mnsc@dtu.dk
Christian A. Naesseth, University of Amsterdam, c.a.naesseth@uva.nl

38th Conference on Neural Information Processing Systems (NeurIPS 2024).

We introduce Equivariant Neural Diffusion (END), a novel diffusion model for molecule generation in 3D that is equivariant to Euclidean transformations. Compared to current state-of-the-art equivariant diffusion models, the key innovation in END lies in its learnable forward process for enhanced generative modelling. Rather than being pre-specified, the forward process is parameterized through a time- and data-dependent transformation that is equivariant to rigid transformations. Through a series of experiments on standard molecule generation benchmarks, we demonstrate the competitive performance of END compared to several strong baselines for both unconditional and conditional generation.

## 1 Introduction

The discovery of novel chemical compounds with relevant properties is critical to a number of scientific fields, such as drug discovery and materials design (Merchant et al., 2023). However, due to the large size and complex structure of the chemical space (Ruddigkeit et al., 2012), which combines continuous and discrete features, it is notoriously difficult to search. Additionally, ab-initio Quantum Mechanics (QM) methods for computing target properties are often computationally expensive, preventing brute-force enumeration. While some of these heavy computations can be amortized through learned surrogates, the need for innovative search methods remains, and generative models have recently emerged as a promising avenue (Anstine and Isayev, 2023). Such models can learn complex data distributions that, in turn, can be sampled from to obtain novel samples similar to the original data.

Compared to other data modalities such as images or text, molecules present additional challenges, as they have to adhere to strict chemical rules and obey the symmetries of 3D space. Currently, the most promising directions for molecule generation in 3D are either auto-regressive models (Gebauer et al., 2019; 2022; Luo and Ji, 2022; Daigavane et al., 2024), which build molecules one atom at a time, or Diffusion Models (DMs) (Hoogeboom et al., 2022; Vignac et al., 2023; Le et al., 2024), which learn to revert a corruption mechanism that transforms the data distribution into noise. As both approaches directly operate in 3D space, they can leverage architectures designed for machine-learned force fields (Unke et al., 2021), which were carefully developed to encode the symmetries inherent to the data (Schütt et al., 2017; 2021; Batzner et al., 2022; Batatia et al., 2022).

The success of DMs has not been limited to molecule generation, and promising results have been demonstrated on a variety of other data modalities (Yang et al., 2023). Nevertheless, most existing DMs pre-specify the forward process, forcing the reverse process to comply with it. A recent line of work has sought to overcome that limitation and improve generation by replacing the fixed forward process with a learnable one (Bartosh et al., 2023; Nielsen et al., 2024; Bartosh et al., 2024).
**Contributions** In this paper, we present Equivariant Neural Diffusion (END), a novel diffusion model for molecule generation in 3D that (1) is equivariant to Euclidean transformations, and (2) features a learnable forward process. We demonstrate competitive unconditional molecule generation performance on the QM9 and GEOM-DRUGS benchmarks. For conditional generation driven by composition and substructure constraints, our approach exhibits a substantial performance gain compared to existing equivariant diffusion models. Our set of experiments underscores the benefit of a learnable forward process for improved unconditional and conditional molecule generation.

## 2 Background

We begin by establishing the necessary background for generative modeling of geometric graphs. We first introduce the data representation and its inherent symmetries. We then discuss Diffusion Models (DMs), and more specifically the Equivariant Diffusion Model (EDM) (Hoogeboom et al., 2022). Finally, we present the Neural Flow Diffusion Models (NFDM) framework (Bartosh et al., 2024).

### 2.1 Equivariance

**Molecules as geometric graphs in E(3)** We consider geometric graphs embedded in 3-dimensional Euclidean space that represent molecules. Formally, each atomistic system can be described by a tuple $x = (r, h)$, where $r = (r_1, \dots, r_M) \in \mathbb{R}^{M \times 3}$ is a collection of vectors in 3D representing the coordinates of the atoms, and $h = (h_1, \dots, h_M) \in \mathbb{R}^{M \times D}$ are the associated scalar features (e.g. atomic types or charges). When dealing with molecules, we are particularly interested in E(3), the Euclidean group in 3 dimensions, generated by translations, rotations and reflections. Each group element of E(3) can be represented as a combination of a translation vector $t \in \mathbb{R}^3$ and an orthogonal matrix $R \in O(3)$ encoding a rotation or reflection. While scalar features $h$ remain invariant, coordinates $r$ transform under translation, rotation and reflection as $Rr + t = (Rr_1 + t, \dots, Rr_M + t)$.

**Equivariant functions** A function $f : \mathcal{X} \to \mathcal{Y}$ is said to be equivariant to the action of a group $G$, or $G$-equivariant, if $g \cdot f(x) = f(g \cdot x), \forall g \in G$. It is said to be $G$-invariant if $f(x) = f(g \cdot x), \forall g \in G$. In the case of a function $f : (\mathbb{R}^{M \times 3} \times \mathbb{R}^{M \times D}) \to (\mathbb{R}^{M \times 3} \times \mathbb{R}^{M \times D})$ operating on geometric graphs, the function is said to be E(3)-equivariant if
$$\big(R\, y^{(r)} + t,\; y^{(h)}\big) = f\big(Rr + t,\, h\big), \quad \forall R \in O(3),\ t \in \mathbb{R}^3,$$
where $y^{(r)}$ and $y^{(h)}$ denote the outputs related to $r$ and $h$ respectively. There exists a large variety of graph neural network architectures designed to be equivariant to the Euclidean group (Schütt et al., 2017; 2021; Batzner et al., 2022; Batatia et al., 2022).

**Equivariant distributions** A conditional distribution $p(y|x)$ is equivariant to rotations and reflections when $p(y|x) = p(Ry|Rx), \forall R \in O(3)$, while a distribution is said to be invariant when $p(x) = p(Rx), \forall R \in O(3)$. Regarding translation, it is not possible to have a translation-invariant non-zero distribution, as it would require that $p(x) = p(x + t), \forall t \in \mathbb{R}^3, \forall x \in \mathbb{R}^{M \times 3}$, which would mean that $p(x)$ cannot integrate to 1 (Garcia Satorras et al., 2021). However, a translation-invariant distribution can be constructed in the linear subspace $\mathcal{R}$ where the centre of gravity is fixed to 0 (i.e. the zero-CoM subspace): $\mathcal{R} = \{r \in \mathbb{R}^{M \times 3} : \frac{1}{M}\sum_{i=1}^{M} r_i = 0\}$ (Xu et al., 2022). As $\mathcal{R}$ can be shown to be intrinsically equivalent to $\mathbb{R}^{(M-1) \times 3}$ (Bao et al., 2023), we will consider in what follows that $r$ is defined in $\mathbb{R}^{(M-1) \times 3}$ for ease of notation.
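To make the equivariance requirement concrete, the short sketch below numerically checks the defining property on a toy coordinate update (atoms are pulled towards their centroid with invariant weights). The toy function is an assumption made purely for illustration; it is not the network used in the paper.

```python
# Minimal numerical check of E(3)-equivariance for a toy coordinate update.
import numpy as np

def toy_update(r, h):
    """r: (M, 3) coordinates, h: (M, D) invariant features."""
    weights = h.sum(axis=1, keepdims=True)       # invariant per-atom scalars
    centroid = r.mean(axis=0, keepdims=True)
    r_out = r + 0.1 * weights * (centroid - r)   # equivariant coordinate update
    h_out = np.tanh(h)                           # invariant feature update
    return r_out, h_out

rng = np.random.default_rng(0)
M, D = 5, 4
r, h = rng.normal(size=(M, 3)), rng.normal(size=(M, D))

R, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random rotation/reflection
t = rng.normal(size=(1, 3))                      # random translation

r1, h1 = toy_update(r @ R.T + t, h)              # transform first, then apply f
r2, h2 = toy_update(r, h)
r2 = r2 @ R.T + t                                # apply f first, then transform

assert np.allclose(r1, r2) and np.allclose(h1, h2)   # f(Rr + t, h) = (R f(r, h) + t, f(h))
```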
### 2.2 Equivariant Diffusion Models

Diffusion Models (DMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) are generative models that learn distributions through a hierarchy of latent variables, corresponding to perturbed versions of the data at increasing noise scales. DMs consist of a forward and a reverse (or generative) process. The Equivariant Diffusion Model (EDM) (Hoogeboom et al., 2022) is a particular instance of a DM, where the learned marginal $p_\theta(x)$ is made invariant to the action of translations, rotations and reflections by construction. Intuitively, this means that the likelihood of a given molecule under the model does not depend on its orientation.

**Forward process** The forward process perturbs samples from the data distribution, $x \sim q(x)$, over time through noise injection, resulting in a trajectory of latent variables $(z_t)_{t \in [0,1]}$, conditional on $x$. The conditional distribution of $(z_t)_{t \in [0,1]}$ given $x$ can be described by an initial distribution $q(z_0|x)$ and a Stochastic Differential Equation (SDE),
$$\mathrm{d}\{z_t^{(r)}, z_t^{(h)}\} = f(t)\,\{z_t^{(r)}, z_t^{(h)}\}\,\mathrm{d}t + g(t)\,\mathrm{d}\{w^{(r)}, w^{(h)}\},$$
where the drift $f(t)$ and volatility $g(t)$ are scalar functions of time, and $w^{(r)}$ and $w^{(h)}$ are two independent standard Wiener processes defined in $\mathbb{R}^{(M-1)\times 3}$ and $\mathbb{R}^{M \times D}$ respectively. Specifically, EDM implements the Variance-Preserving SDE (VP-SDE) scheme (Song et al., 2020), with $f(t) = -\tfrac{1}{2}\beta(t)$ and $g(t) = \sqrt{\beta(t)}$ for a fixed schedule $\beta(t)$. Due to the linearity of the drift term, the conditional marginal distribution is known in closed form (Särkkä and Solin, 2019),
$$q\big([z_t^{(r)}, z_t^{(h)}] \,\big|\, [r, h]\big) = q(z_t^{(r)}|r)\, q(z_t^{(h)}|h) = \mathcal{N}\big(z_t^{(r)} \,\big|\, \alpha_t r,\, \sigma_t^2 I\big)\, \mathcal{N}\big(z_t^{(h)} \,\big|\, \alpha_t h,\, \sigma_t^2 I\big),$$
where $\alpha_t = \exp\big(-\tfrac{1}{2}\int_0^t \beta(s)\,\mathrm{d}s\big)$ and $\sigma_t = \sqrt{1 - \alpha_t^2}$. It evolves from a low-variance Gaussian centered around the data, $q(z_0|x) \approx \mathcal{N}(z_0|x, \delta^2 I)$, to an uninformative prior distribution (that contains no information about the data distribution), i.e. a unit Gaussian $q(z_1|x) \approx \mathcal{N}(z_1|0, I)$.

**Reverse (generative) process** Starting from the prior $[z_1^{(r)}, z_1^{(h)}] \sim \mathcal{N}(z_1^{(r)}|0, I)\,\mathcal{N}(z_1^{(h)}|0, I)$, samples from $q(x)$ can be generated by reversing the forward process. This can be done by following the reverse-time SDE (Anderson, 1982),
$$\mathrm{d}z_t = \Big[f(t)\,\{z_t^{(r)}, z_t^{(h)}\} - g^2(t)\,\{\nabla_{z_t^{(r)}} \log q(z_t),\, \nabla_{z_t^{(h)}} \log q(z_t)\}\Big]\,\mathrm{d}t + g(t)\,\mathrm{d}\{\bar{w}^{(r)}, \bar{w}^{(h)}\},$$
where $\bar{w}^{(r)}$ and $\bar{w}^{(h)}$ are independent standard Wiener processes defined in $\mathbb{R}^{(M-1)\times 3}$ and $\mathbb{R}^{M \times D}$, respectively, with time flowing backwards. DMs approximate the reverse process by learning an approximation of the score function $\nabla_{z_t} \log q(z_t)$ parameterized by a neural network $s_\theta(z_t, t)$. With the learned score function $s_\theta(z_t, t)$, a sample $z_0 \sim p_\theta(z_0) \approx q(z_0) \approx q(x)$ can be obtained by first sampling from the prior $[z_1^{(r)}, z_1^{(h)}] \sim \mathcal{N}(z_1^{(r)}|0, I)\,\mathcal{N}(z_1^{(h)}|0, I)$, and then simulating the reverse SDE,
$$\mathrm{d}z_t = \Big[f(t)\,\{z_t^{(r)}, z_t^{(h)}\} - g^2(t)\,\{s_\theta^{(r)}(z_t, t),\, s_\theta^{(h)}(z_t, t)\}\Big]\,\mathrm{d}t + g(t)\,\mathrm{d}\{\bar{w}^{(r)}, \bar{w}^{(h)}\},$$
where the true score function has been replaced by its approximation $s_\theta(z_t, t)$. In EDM, the approximate score is parameterized through an equivariant function $s_\theta(z_t, t) = \big[s_\theta^{(r)}(z_t, t),\, s_\theta^{(h)}(z_t, t)\big]$ such that $s_\theta([Rz_t^{(r)}, z_t^{(h)}], t) = \big[R\, s_\theta^{(r)}(z_t, t),\, s_\theta^{(h)}(z_t, t)\big], \forall R \in O(3)$. Practically, this is realized through the specific parameterization $s_\theta(z_t, t) = \frac{\alpha_t\, \hat{x}_\theta(z_t, t) - z_t}{\sigma_t^2}$, where the data point predictor $\hat{x}_\theta$ is implemented by an equivariant neural network.
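As an illustration of the closed-form marginal above, the following sketch computes $\alpha_t$ and $\sigma_t$ and draws $z_t \sim q(z_t|x)$, assuming a simple linear schedule $\beta(t)$; the concrete schedule used by EDM/END may differ, so this is only meant to show how the formulas fit together.

```python
# Sketch of the VP-SDE conditional marginal q(z_t | x), assuming a linear beta schedule.
import numpy as np

BETA_MIN, BETA_MAX = 0.1, 20.0   # assumed schedule endpoints (illustrative only)

def alpha_sigma(t):
    # integral of beta(s) ds from 0 to t for beta(s) = BETA_MIN + s * (BETA_MAX - BETA_MIN)
    integral = BETA_MIN * t + 0.5 * (BETA_MAX - BETA_MIN) * t ** 2
    alpha_t = np.exp(-0.5 * integral)        # alpha_t = exp(-1/2 * int_0^t beta)
    sigma_t = np.sqrt(1.0 - alpha_t ** 2)    # variance-preserving: sigma_t^2 = 1 - alpha_t^2
    return alpha_t, sigma_t

def sample_zt(x, t, rng):
    """Draw z_t ~ N(alpha_t * x, sigma_t^2 I) for a data point x."""
    alpha_t, sigma_t = alpha_sigma(t)
    return alpha_t * x + sigma_t * rng.normal(size=x.shape)

rng = np.random.default_rng(0)
x = rng.normal(size=(9, 3))                  # e.g. coordinates of a 9-atom molecule
print(alpha_sigma(0.0), alpha_sigma(1.0))    # ~ (1, 0) at t = 0 and ~ (0, 1) at t = 1
zt = sample_zt(x, 0.5, rng)
```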
**Optimization** The data point predictor $\hat{x}_\theta$, or $s_\theta$, is trained by optimizing the denoising score matching loss (Vincent, 2011),
$$\mathcal{L}_{\mathrm{DSM}} = \mathbb{E}_{u(t),\, q(x, z_t)}\Big[\lambda(t)\, \big\| s_\theta(z_t, t) - \nabla_{z_t} \log q(z_t|x) \big\|_2^2\Big],$$
where $\lambda(t)$ is a positive weighting function, and $u(t)$ is a uniform distribution over the interval $[0, 1]$.

### 2.3 Neural Flow Diffusion Models

Neural Flow Diffusion Models (NFDM) (Bartosh et al., 2024) are based on the observation that latent variables in DMs, i.e. $z_t$, are conventionally inferred through a pre-specified transformation, as implied by the chosen type of SDE and the noise schedule. This potentially limits the flexibility of the latent space, and makes the learning of the reverse (generative) process more challenging.

**Forward process** In contrast to conventional DMs, NFDM defines the forward process implicitly through a learnable transformation $F_\phi(\varepsilon, t, x)$ of injected noise $\varepsilon$, time $t$, and data point $x$. The latent variables $z_t$ are obtained by transforming noise samples $\varepsilon$, conditional on the data point $x$ and time step $t$: $z_t = F_\phi(\varepsilon, t, x)$. If $F_\phi$ is differentiable with respect to $\varepsilon$ and $t$, and invertible with respect to $\varepsilon$, then, for fixed $x$ and $\varepsilon$, samples from $q_\phi(z_t|x)$ can be obtained by solving the following conditional Ordinary Differential Equation (ODE) until time $t$,
$$\mathrm{d}z_t = f_\phi(z_t, t, x)\,\mathrm{d}t \quad \text{with} \quad f_\phi(z_t, t, x) = \frac{\partial F_\phi(\varepsilon, t, x)}{\partial t}\bigg|_{\varepsilon = F_\phi^{-1}(z_t, t, x)}, \tag{1}$$
with $z_0 \sim q(z_0|x)$. While $F_\phi$ and $q(\varepsilon)$ define the conditional marginal distribution $q_\phi(z_t|x)$, we need a distribution over the trajectories $(z_t)_{t \in [0,1]}$. NFDM obtains this through the introduction of a conditional SDE starting from $z_0$ and running forward in time. Given access to the ODE in Eq. (1) and the score function $\nabla_{z_t} \log q_\phi(z_t|x)$, the conditional SDE sharing the same conditional marginal distribution $q_\phi(z_t|x)$ is given by
$$\mathrm{d}z_t = f^F_\phi(z_t, t, x)\,\mathrm{d}t + g_\phi(t)\,\mathrm{d}w \quad \text{with} \quad f^F_\phi(z_t, t, x) = f_\phi(z_t, t, x) + \frac{g_\phi^2(t)}{2}\,\nabla_{z_t} \log q_\phi(z_t|x), \tag{2}$$
where the score function of $q_\phi(z_t|x)$ is $\nabla_{z_t} \log q_\phi(z_t|x) = \nabla_{z_t}\big[\log q(\varepsilon) + \log |J_F^{-1}|\big]$, with $\varepsilon = F_\phi^{-1}(z_t, t, x)$ and $J_F^{-1} = \frac{\partial F_\phi^{-1}(z_t, t, x)}{\partial z_t}$.

**Reverse (generative) process** A conditional reverse SDE that starts from $z_1 \sim q(z_1)$, runs backward in time, and reverses the conditional forward SDE from Eq. (2) can be defined as
$$\mathrm{d}z_t = f^B_\phi(z_t, t, x)\,\mathrm{d}t + g_\phi(t)\,\mathrm{d}\bar{w} \quad \text{with} \quad f^B_\phi(z_t, t, x) = f_\phi(z_t, t, x) - \frac{g_\phi^2(t)}{2}\,\nabla_{z_t} \log q_\phi(z_t|x). \tag{3}$$
As $x$ is unknown when generating samples, we can rewrite Eq. (3) with a prediction of $x$ instead,
$$\mathrm{d}z_t = \hat{f}_{\theta,\phi}(z_t, t)\,\mathrm{d}t + g_\phi(t)\,\mathrm{d}\bar{w}, \quad \text{where} \quad \hat{f}_{\theta,\phi}(z_t, t) = f^B_\phi\big(z_t, t, \hat{x}_\theta(z_t, t)\big), \tag{4}$$
where $\hat{x}_\theta$ is a function that predicts the data point $x$. Provided that the reconstruction distribution $q(z_0|x)$ and prior distribution $q(z_1)$ are defined, this fully specifies the reverse (generative) process.

**Optimization** The forward and reverse processes can be optimized jointly by matching the drift terms of the true and approximate conditional reverse SDEs through the following objective,
$$\mathcal{L}_{\mathrm{NFDM}} = \mathbb{E}_{u(t),\, q_\phi(x, z_t)}\Big[\frac{1}{2 g_\phi^2(t)}\, \big\| f^B_\phi(z_t, t, x) - \hat{f}_{\theta,\phi}(z_t, t) \big\|_2^2\Big],$$
which can be shown to be equivalent to minimizing the Kullback-Leibler divergence between the posterior distributions resulting from the discretization of Eqs. (3) and (4) (Bartosh et al., 2024).
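The conditional drift in Eq. (1) is simply the time-derivative of $F_\phi$ evaluated at the noise that maps to $z_t$. The sketch below illustrates this for a linear toy transformation, computing the time-derivative with forward-mode autodiff; the toy `mu`/`u` functions are placeholders standing in for the learned networks and are not the parameterization used in the paper.

```python
# Illustrative sketch of the NFDM conditional ODE drift in Eq. (1) for a linear F_phi.
import torch

def mu(x, t):                      # toy mean: interpolates data -> 0
    return (1.0 - t) * x

def u(x, t):                       # toy (scalar) std: interpolates small value -> 1
    return 1e-2 * (1.0 - t) + t

def F(eps, t, x):
    return mu(x, t) + u(x, t) * eps

def f_drift(z_t, t, x):
    """f_phi(z_t, t, x) = dF/dt evaluated at eps = F^{-1}(z_t, t, x)."""
    eps = (z_t - mu(x, t)) / u(x, t)                 # invert the linear map in eps
    _, dF_dt = torch.autograd.functional.jvp(        # elementwise dF/dt for fixed eps, x
        lambda s: F(eps, s, x), (t,), (torch.ones_like(t),)
    )
    return dF_dt

x = torch.randn(9, 3)
t = torch.tensor(0.3)
z_t = F(torch.randn_like(x), t, x)
print(f_drift(z_t, t, x).shape)    # same shape as z_t
```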
## 3 Equivariant Neural Diffusion

We now introduce Equivariant Neural Diffusion (END), which generalizes the Equivariant Diffusion Model (EDM) (Hoogeboom et al., 2022) by defining the forward process through a learnable transformation. Our approach is a synthesis of the NFDM framework introduced in Section 2.3 with the ideas of EDM outlined in Section 2.2, so as to maintain the desired invariance of the learned marginal distribution $p_{\theta,\phi}(z_0)$. By providing an equivariant learnable transformation $F_\phi$ and an equivariant data point predictor $\hat{x}_\theta$, we show that it is possible to obtain a generative model with the desired properties. Finally, we propose a simple yet flexible parameterization meeting these requirements.

### 3.1 Formulation

The key innovation in END lies in its forward process, which is also leveraged in the reverse (generative) process. The forward process is defined through a learnable time- and data-dependent transformation $F_\phi(\varepsilon, t, x)$, such that the latent $z_t$ transforms covariantly with the injected noise $\varepsilon$ (i.e. a collection of random vectors) and the data point $x$,
$$F_\phi(R\varepsilon, t, Rx) = R\, F_\phi(\varepsilon, t, x) = R z_t, \quad \forall R \in O(3).$$
We then define $\hat{x}_\theta$ as another learnable equivariant function, such that the predicted data point transforms covariantly with the latent variable $z_t$, i.e. $\hat{x}_\theta(R z_t, t) = R\, \hat{x}_\theta(z_t, t)$. Finally, we choose the noise and prior distributions, i.e. $p(\varepsilon)$ and $p(z_1)$, to be invariant to the considered symmetry group.

**Invariance of the learned distribution** With the following choices: (1) $p(z_1)$ an invariant distribution, (2) $F_\phi$ an equivariant function that satisfies $F_\phi(R\varepsilon, t, Rx) = R F_\phi(\varepsilon, t, x)$, and (3) $\hat{x}_\theta$ an equivariant function, the learned marginal $p_{\theta,\phi}(z_0)$ is invariant, as desired. This can be shown by demonstrating that the reverse SDE is equivariant. We start by noting that the reverse SDE in END is given by $\mathrm{d}z_t = \hat{f}_{\theta,\phi}(z_t, t)\,\mathrm{d}t + g_\phi(t)\,\mathrm{d}\bar{w}$. As the Wiener process is isotropic, this boils down to verifying that the drift term $\hat{f}_{\theta,\phi}(z_t, t)$ is equivariant, i.e. $\hat{f}_{\theta,\phi}(Rz_t, t) = R\, \hat{f}_{\theta,\phi}(z_t, t)$. As the drift is expressed as a sum of two terms, we inspect each of them separately. The first term is
$$f_\phi\big(z_t, t, \hat{x}_\theta(z_t, t)\big) = \frac{\partial F_\phi\big(\varepsilon, t, \hat{x}_\theta(z_t, t)\big)}{\partial t}\bigg|_{\varepsilon = F_\phi^{-1}(z_t, t, \hat{x}_\theta(z_t, t))}.$$
If $F_\phi$ is equivariant, then so is its time-derivative (see Appendix A.3.1). The same holds for its inverse with respect to $\varepsilon$ (see Appendix A.3.2), such that we have $F_\phi^{-1}(Rz_t, t, Rx) = R\, F_\phi^{-1}(z_t, t, x) = R\varepsilon$. We additionally have that $\hat{x}_\theta$ is equivariant, by definition. As the equivariance of $F_\phi$ implies the equivariance of $q_\phi$, for the second term of the drift we observe that, for $y_t = Rz_t$,
$$\nabla_{y_t} \log q_\phi\big(y_t \,\big|\, \hat{x}_\theta(y_t, t)\big) = R\, \nabla_{z_t} \log q_\phi\big(z_t \,\big|\, \hat{x}_\theta(z_t, t)\big), \quad \forall R \in O(3).$$
In summary, in addition to an invariant prior, an equivariant $F_\phi$ and an equivariant $\hat{x}_\theta$ ensure the equivariance of the reverse process, and hence the invariance of the learned distribution. In Appendix A.3.3, we additionally show that the objective function is invariant, i.e. $\mathcal{L}_{\mathrm{END}}(Rx) = \mathcal{L}_{\mathrm{END}}(x), \forall R \in O(3)$. We note that alternative formulations for the drift of the reverse process, $\hat{f}_{\theta,\phi}$, exist. Most notably, it can be learned directly through an equivariant function, without explicit dependence on $F_\phi$, while maintaining the desired invariance.

### 3.2 Parameterization

We now introduce a simple parameterization of $F_\phi$ that meets the requirements outlined above,
$$F_\phi(\varepsilon, t, x) = \mu_\phi(x, t) + U_\phi(x, t)\,\varepsilon, \tag{6}$$
where, due to the geometric nature of $x$, $U_\phi(x, t) \in \mathbb{R}^{(M-1) \times 3 \times 3}$ is structured as a block-diagonal matrix in which each block is a $3 \times 3$ matrix, ensuring a cheap calculation of the inverse transformation and its Jacobian. This is equivalent to a diagonal parameterization in the case of scalar features.
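The block structure of Eq. (6) is what keeps the transformation cheap to invert: each atom has its own $3 \times 3$ block, so applying $U_\phi$, inverting it, and computing the log-determinant all reduce to per-block $3 \times 3$ operations. The sketch below illustrates this with random placeholder blocks standing in for the learned, equivariant network outputs.

```python
# Sketch of the block-structured transformation in Eq. (6) with placeholder mu and U.
import torch

M = 8                                            # number of blocks (atoms)
mu = torch.randn(M, 3)                           # placeholder for mu_phi(x, t)
U = torch.eye(3) + 0.1 * torch.randn(M, 3, 3)    # placeholder for U_phi(x, t): one 3x3 block per atom

eps = torch.randn(M, 3)
z_t = mu + torch.einsum('mij,mj->mi', U, eps)    # F_phi(eps, t, x) = mu + U eps, block by block

# Inverse transformation and log|det|, block by block.
eps_rec = torch.einsum('mij,mj->mi', torch.linalg.inv(U), z_t - mu)
_, logabsdet = torch.linalg.slogdet(U)
log_det = logabsdet.sum()                        # log|det| of the full block-diagonal matrix

assert torch.allclose(eps, eps_rec, atol=1e-5)
print(float(log_det))
```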
Similarly to EDM, our parametrization of $F_\phi$ leads to a conditional marginal $q_\phi(z_t|x)$ that is a conditional Gaussian with (block-)diagonal covariance, with the notable difference that the mean and covariance are now data- and time-dependent, and learnable through $F_\phi$,
$$q_\phi(z_t|x) = \mathcal{N}\big(z_t \,\big|\, \mu_\phi(x, t),\, \Sigma_\phi(x, t)\big), \tag{7}$$
where $\Sigma_\phi(x, t) = U_\phi(x, t)\, U_\phi^\top(x, t)$, such that $\Sigma_\phi(x, t)$ is also block-diagonal. As $F_\phi$ is linear in $\varepsilon$, both $\mu_\phi$ and $U_\phi$ must be equivariant functions whose outputs transform covariantly with $x$, in order to ensure the desired equivariance of $F_\phi$,
$$F_\phi(R\varepsilon, t, Rx) = \mu_\phi(Rx, t) + U_\phi(Rx, t)\, R\varepsilon = R\,\mu_\phi(x, t) + R\, U_\phi(x, t)\,\varepsilon = R\, F_\phi(\varepsilon, t, x).$$
We can then readily check that the resulting $q_\phi$ is equivariant, as for all $R \in O(3)$ we have
$$q_\phi(z_t|x) = \mathcal{N}\big(z_t \,\big|\, \mu_\phi(x, t), \Sigma_\phi(x, t)\big) = \mathcal{N}\big(Rz_t \,\big|\, R\mu_\phi(x, t), R\Sigma_\phi(x, t)R^\top\big) = q_\phi(Rz_t|Rx).$$
We note that other, and more advanced, parametrizations are possible, e.g. based on normalizing flows with a flow architecture similar to that of Klein et al. (2024).

**Prior and Reconstruction** While not strictly required, it can be advantageous to parameterize the transformation $F_\phi$ such that the prior and reconstruction losses need not be computed. To do so, we design $F_\phi(\varepsilon, t, x)$ such that the conditional distribution evolves from a low-variance Gaussian centered around the data, i.e. $q(z_0|x) \approx \mathcal{N}(z_0|x, \delta^2 I)$, to an uninformative prior distribution (that contains no information about the data distribution), i.e. a unit Gaussian $q(z_1|x) \approx \mathcal{N}(z_1|0, I)$. Specifically, we parameterize $F_\phi$ through the following functions,
$$\mu_\phi(x, t) = (1 - t)\, x + t(1 - t)\, \tilde{\mu}_\phi(x, t), \tag{8}$$
$$U_\phi(x, t) = \delta^{1-t}\, \tilde{\sigma}_\phi(x, t)^{t(1-t)}\, I + t(1 - t)\, \tilde{U}_\phi(x, t), \tag{9}$$
which ensure that (i) at $t = 0$, $\mu_\phi(x, 0) = x$ and $\Sigma_\phi(x, 0) = \delta^2 I$; and (ii) at $t = 1$, $\mu_\phi(x, 1) = 0$ and $\Sigma_\phi(x, 1) = I$; while being unconstrained for $t \in (0, 1)$. We note that this is only one possible parametrization for $F_\phi$, and that, by adapting $\mu_\phi$ and $U_\phi$, richer priors can easily be leveraged, e.g. a harmonic prior given a molecular graph or a scale-dependent prior (Jing et al., 2023; Irwin et al., 2024).

**Implementation** In practice, $F_\phi$ is implemented as a neural network with an architecture similar to that of the data point predictor $\hat{x}_\theta(z_t, t)$, but with a specific readout layer that produces $[\tilde{\mu}_\phi(x, t), \tilde{\sigma}_\phi(x, t), \tilde{U}_\phi(x, t)]$. The mean output $\tilde{\mu}_\phi(x, t)$ is similar to that of $\hat{x}_\theta(z_t, t)$. For $U_\phi(x, t)$, as $\Sigma_\phi(x, t) = U_\phi(x, t) U_\phi^\top(x, t)$ should rotate properly, $\tilde{\sigma}_\phi(x, t)$ is a positive invariant scalar, while $\tilde{U}_\phi(x, t)$ is constructed as a matrix whose columns are vectors that transform covariantly with $x$. To ease notation, we introduced all quantities in the linear subspace $\mathcal{R}$; in practice, however, we work in the ambient space, i.e. $r \in \mathbb{R}^{M \times 3}$. We detail in Appendix A.5.1 how working in ambient space is possible. The training and sampling procedures are detailed in Algorithms 1 and 2 in the appendix.
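A quick way to see why Eqs. (8) and (9) remove the need for prior and reconstruction losses is to check the endpoints numerically: whatever the unconstrained readouts are, the marginal is forced to $\mathcal{N}(x, \delta^2 I)$ at $t = 0$ and $\mathcal{N}(0, I)$ at $t = 1$. The placeholder readouts below are assumptions made for the check only.

```python
# Numerical check of the boundary behaviour of Eqs. (8)-(9) with placeholder readouts.
import torch

delta = 1e-2
M = 8
x = torch.randn(M, 3)

def mu_tilde(x, t):  return torch.randn_like(x)             # placeholder network output
def sig_tilde(x, t): return torch.rand(M, 1, 1) + 0.5       # positive invariant scalar
def U_tilde(x, t):   return torch.randn(M, 3, 3)            # covariant 3x3 blocks

def mu(x, t):
    return (1 - t) * x + t * (1 - t) * mu_tilde(x, t)        # Eq. (8)

def U(x, t):
    eye = torch.eye(3).expand(M, 3, 3)
    return delta ** (1 - t) * sig_tilde(x, t) ** (t * (1 - t)) * eye \
           + t * (1 - t) * U_tilde(x, t)                      # Eq. (9)

for t in (0.0, 1.0):
    eye = torch.eye(3).expand(M, 3, 3)
    print(t,
          torch.allclose(mu(x, t), (1 - t) * x),              # mu = x at t=0, 0 at t=1
          torch.allclose(U(x, t), (delta if t == 0.0 else 1.0) * eye))
```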
### 3.3 Conditional Model

While unconditional generation is a required stepping stone, many practical applications require some form of controllability. Like other generative models, DMs can model conditional distributions $p(x|c)$, where $c$ is a given condition. Different methods exist for sampling from the conditional distribution, e.g. via guidance (Bao et al., 2023; Jung et al., 2024) or twisting (Wu et al., 2024), but the simplest approach consists in training a conditional model on pairs $(x, c)$. In such a setting, $F_\phi$ and $\hat{x}_\theta$ simply receive an extra input $c$ representing the conditional information, such that they respectively become $F_\phi(\varepsilon, t, x, c)$ and $\hat{x}_\theta(z_t, t, c)$. It is important to note that, compared to conventional DMs, the forward process of END is now also condition-dependent.

## 4 Experiments

In this section, we demonstrate the benefits of END with a comprehensive set of experiments. In Section 4.1, we first display the advantages of END for unconditional generation on two standard benchmarks, namely QM9 (Ramakrishnan et al., 2014) and GEOM-DRUGS (Axelrod and Gomez-Bombarelli, 2022). Then, in Section 4.2, we perform conditional generation in two distinct settings on QM9. Additional experimental details are provided in Appendix A.6.

**Datasets** The QM9 dataset (Ramakrishnan et al., 2014) contains 134 thousand small- and medium-sized organic molecules with up to 9 heavy atoms, and up to 29 atoms when counting hydrogens. GEOM-DRUGS (Axelrod and Gomez-Bombarelli, 2022) contains 430 thousand medium- and large-sized drug-like molecules with 44 atoms on average, and up to a maximum of 181 atoms. We use the same data setup as in previous work (Hoogeboom et al., 2022; Xu et al., 2022).

### 4.1 Unconditional Generation

**Task** We sample 10 000 molecules using the stochastic sampling procedure detailed in Algorithm 2. As END is trained in continuous time, we vary the number of integration steps from 50 to 1000. We repeat each sampling run for 3 seeds, and report averages along with standard deviations for each metric.

**Evaluation metrics** We follow previous work (Hoogeboom et al., 2022; Xu et al., 2023), and first evaluate the chemical quality of the generated samples in terms of stability, validity, and uniqueness (in Tables 1, 6 and 9). On QM9, we additionally evaluate how well the model learns the atom and bond type distributions by measuring the total variation between the dataset's and the generated distributions, as well as the overall quality of the generated structures via their strain energy, expressed as the energy difference between the generated structure and a relaxed version thereof (in Tables 2 and 6). On GEOM-DRUGS, we additionally compute connectivity, total variation for atom types, and strain energy (in Tables 2 and 9). Connectivity accounts for the fact that validity can easily be increased by generating several disconnected fragments (where only the largest counts towards validity), the total variation ensures that the model properly samples all atom types, whereas the strain energy evaluates the generated geometries. On both datasets, we also report additional drug-related metrics as per Qiang et al. (2023) (in Tables 7 and 10). More details about evaluation metrics are provided in Appendix A.6.1.
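As an illustration of the total variation metric used above, the sketch below compares the empirical atom-type distributions of a reference set and of generated samples; the element counts are made up for the example, and the exact evaluation protocol is the one described in Appendix A.6.1.

```python
# Illustrative computation of the total variation (TV) between atom-type distributions.
from collections import Counter

def atom_type_tv(reference_atoms, generated_atoms):
    """Total variation distance between two empirical categorical distributions."""
    p, q = Counter(reference_atoms), Counter(generated_atoms)
    n_p, n_q = sum(p.values()), sum(q.values())
    support = set(p) | set(q)
    return 0.5 * sum(abs(p[a] / n_p - q[a] / n_q) for a in support)

reference = ["C"] * 70 + ["H"] * 20 + ["O"] * 7 + ["N"] * 3   # toy reference counts
generated = ["C"] * 65 + ["H"] * 25 + ["O"] * 6 + ["N"] * 4   # toy generated counts
print(atom_type_tv(reference, generated))                      # 0.06
```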
Table 1: Stability and validity results on QM9 and GEOM-DRUGS obtained over 10 000 samples, with mean/standard deviation across 3 sampling runs. END compares favorably to the baseline across all metrics on both datasets, while offering competitive performance for a reduced number of sampling steps. It reaches a performance level similar to that of current SOTA methods. * denotes results obtained by our own experiments. A: atom stability, M: molecule stability, V: validity, V×U: valid and unique, V×C: valid and connected.

| Model | Steps | QM9 A [%] | QM9 M [%] | QM9 V [%] | QM9 V×U [%] | DRUGS A [%] | DRUGS V [%] | DRUGS V×C [%] |
|---|---|---|---|---|---|---|---|---|
| Data | | 99.0 | 95.2 | 97.7 | 97.7 | 86.5 | 99.0 | 99.0 |
| EDM (Hoogeboom et al., 2022) | 1000 | 98.7 | 82.0 | 91.9 | 90.7 | 81.3 | 92.6 | |
| EDM-BRIDGE (Wu et al., 2022) | 1000 | 98.8 | 84.6 | 92.0 | 90.7 | 82.4 | 92.8 | |
| GEOLDM (Xu et al., 2023) | 1000 | 98.9±.1 | 89.4±.5 | 93.8±.4 | 92.7±.5 | 84.4 | 99.3 | 45.8 |
| GEOBFN (Song et al., 2024) | 50 | 98.3±.1 | 85.1±.5 | 92.3±.4 | 90.7±.3 | 75.1 | 91.7 | |
| GEOBFN (Song et al., 2024) | 100 | 98.6±.1 | 87.2±.3 | 93.0±.3 | 91.5±.3 | 78.9 | 93.1 | |
| GEOBFN (Song et al., 2024) | 500 | 98.8±.8 | 88.4±.2 | 93.4±.2 | 91.8±.2 | 81.4 | 93.5 | |
| GEOBFN (Song et al., 2024) | 1000 | 99.1±.1 | 90.9±.2 | 95.3±.1 | 93.0±.1 | 85.6 | 92.1 | |
| EDM* | 50 | 97.6±.0 | 77.6±.5 | 90.2±.2 | 89.2±.2 | 84.7±.0 | 93.6±.2 | 46.6±.3 |
| EDM* | 100 | 98.1±.0 | 81.9±.4 | 92.1±.2 | 90.9±.2 | 85.2±.1 | 93.8±.3 | 56.2±.4 |
| EDM* | 250 | 98.3±.0 | 84.3±.1 | 93.2±.4 | 91.7±.3 | 85.4±.0 | 94.2±.1 | 61.4±.6 |
| EDM* | 500 | 98.4±.0 | 85.2±.5 | 93.5±.2 | 92.2±.3 | 85.4±.0 | 94.3±.2 | 63.4±.1 |
| EDM* | 1000 | 98.4±.0 | 85.3±.3 | 93.5±.1 | 91.9±.1 | 85.3±.1 | 94.4±.1 | 64.2±.6 |
| END | 50 | 98.6±.0 | 84.6±.1 | 92.7±.1 | 91.4±.1 | 87.1±.1 | 84.6±.5 | 66.0±.4 |
| END | 100 | 98.8±.0 | 87.4±.2 | 94.1±.0 | 92.3±.2 | 87.2±.1 | 87.0±.2 | 73.7±.4 |
| END | 250 | 98.9±.1 | 88.8±.5 | 94.7±.2 | 92.6±.1 | 87.1±.1 | 88.5±.2 | 77.4±.4 |
| END | 500 | 98.9±.0 | 88.8±.4 | 94.8±.2 | 92.8±.2 | 87.0±.0 | 88.8±.3 | 78.6±.3 |
| END | 1000 | 98.9±.0 | 89.1±.1 | 94.8±.1 | 92.6±.2 | 87.0±.0 | 89.2±.3 | 79.4±.4 |

**Baselines** We compare END to several relevant baselines from the literature: the original EDM (Hoogeboom et al., 2022); EDM-BRIDGE (Wu et al., 2022), an improved version of EDM that adds physics-inspired force guidance in the reverse process; GEOLDM (Xu et al., 2023), an equivariant latent DM; and GEOBFN (Song et al., 2024), a geometric version of Bayesian Flow Networks (Graves et al., 2024). A detailed discussion of related work can be found in Section 5.

**Ablations of END** In addition to baselines from the literature, we compare different ablated versions of END. As the key component of our method is the learnable forward process, the logical ablation is whether to include a learnable forward (=END) or not (=EDM). To ensure a fair comparison, and that any difference in performance does not stem from an increase in learnable parameters, an architectural change or the training paradigm, we implement our own EDM (Hoogeboom et al., 2022), denoted EDM*. It features the exact same architecture as END, the same number of learnable parameters (i.e. through a deeper $\hat{x}_\theta$), and is similarly trained in continuous time. Additionally, we provide EDM* + γ, similar to EDM* but with a learned SNR (Kingma et al., 2021) for each data modality (i.e. atomic types and coordinates), and END (µφ only), an ablated version of END where only the mean is learned whereas the standard deviation of the conditional marginal is pre-specified and derived from the noise schedule of EDM. Table 5 provides an overview of the compared models.

**Results on QM9** Our main results on the QM9 dataset are summarized in the left part of Tables 1 and 2, where END is shown to significantly outperform the baseline and reach a level of performance similar to that of current SOTA methods, GEOLDM (Xu et al., 2023) and GEOBFN (Song et al., 2024), across stability and validity metrics. The ablation study in Appendix A.2.1 clearly reveals the benefits of a learnable forward process.
On the one hand, the two variants of END are shown to outperform (or be on par with) all baselines across all metrics in Table 6 (in particular, in terms of the more challenging molecule stability), and to be in better agreement with the data distribution in Table 7 (except for QED, which is captured perfectly by all methods). On the other hand, we observe that, with as few as 100 integration steps, END is capable of generating samples that are qualitatively better than those generated by the simpler baselines in 1000 steps. A few illustrative QM9-like molecules generated by END are displayed in Fig. 1.

Table 2: Additional results on QM9 and GEOM-DRUGS obtained over 10 000 samples, with mean/standard deviation across 3 sampling runs. END matches the training distributions better, and generates less strained structures than baselines. * denotes results obtained by our own experiments. A/B: total variation on atom/bond types; E: strain energy.

| Model | Steps | QM9 TV A [10^-2] | QM9 TV B [10^-3] | QM9 E [kcal/mol] | DRUGS TV A [10^-2] | DRUGS E [kcal/mol] |
|---|---|---|---|---|---|---|
| Data | | | | 7.7 | | 15.8 |
| GEOLDM (Xu et al., 2023) | 1000 | 1.6 | 1.3 | 10.4 | 10.6 | 133.5 |
| EDM* | 50 | 4.6±.1 | 1.7±.5 | 16.4±.2 | 10.5±.1 | 134.2±1.5 |
| EDM* | 100 | 3.5±.1 | 1.4±.3 | 13.5±.1 | 8.0±.1 | 110.9±0.3 |
| EDM* | 250 | 2.8±.2 | 1.3±.4 | 12.3±.4 | 6.7±.1 | 98.9±0.3 |
| EDM* | 500 | 2.6±.2 | 1.3±.4 | 11.7±.1 | 6.4±.1 | 95.3±0.1 |
| EDM* | 1000 | 2.5±.1 | 1.4±.4 | 11.3±.1 | 6.2±.0 | 92.9±1.1 |
| END | 50 | 1.4±.1 | 1.9±.4 | 13.1±.1 | 5.9±.1 | 86.3±.6 |
| END | 100 | 1.1±.0 | 1.5±.2 | 11.1±.1 | 4.5±.1 | 67.9±.9 |
| END | 250 | 1.0±.0 | 0.5±.1 | 10.3±.1 | 3.5±.0 | 58.9±.1 |
| END | 500 | 0.9±.0 | 0.7±.1 | 10.0±.1 | 3.3±.0 | 56.4±.8 |
| END | 1000 | 0.9±.2 | 1.0±.1 | 9.7±.2 | 3.0±.0 | 55.0±.5 |

**Results on GEOM-DRUGS** Our main results on the more realistic GEOM-DRUGS dataset are presented in the right part of Tables 1 and 2, where we observe that END is competitive against the compared baselines in terms of atom stability, while being slightly subpar in terms of validity. However, when accounting for connectivity (via the V×C metric), we observe that END does outperform the baseline, with an increased success rate of around 20% on average, and it also surpasses the SOTA method GEOLDM (Xu et al., 2023) by a large margin. The generated molecules also better follow the atom-type distribution of the dataset, as per the lower total variation, and feature better geometries than those generated by competitors, as implied by the lower strain energy. As for QM9, the ablation study provided in Appendix A.2.3 illustrates the clear improvement provided by a learnable forward process compared to a fixed one. In this more challenging setting, only learning the mean function yields a slight decrease in performance across the metrics reported in Table 9, and a similar agreement with the dataset in terms of the drug-related metrics in Table 10, compared to the full model. Furthermore, while each sampling step is currently slower than for EDM* (Table 11), the improved sampling efficiency afforded by END (in terms of integration steps) allows practical time gains on this more complex dataset. As a concrete example, samples obtained with END with only 100 steps are qualitatively better than those generated by EDM* in 1000 integration steps, amounting to a 3x time cut. Examples of GEOM-DRUGS-like molecules generated by END are provided in Fig. 1.

Figure 1: Representative samples generated by END on QM9 (top), and GEOM-DRUGS (bottom).

### 4.2 Conditional Generation

Table 3: On composition-conditioned generation, CEND offers nearly perfect composition controllability. Matching refers to the % of samples featuring the prompted composition.
| Model | Steps | Matching [%] |
|---|---|---|
| Baseline (fixed forward) | 50 | 69.6±0.6 |
| Baseline (fixed forward) | 100 | 73.0±0.6 |
| Baseline (fixed forward) | 250 | 74.1±1.4 |
| Baseline (fixed forward) | 500 | 76.2±0.6 |
| Baseline (fixed forward) | 1000 | 75.5±0.5 |
| CEND | 50 | 89.2±0.8 |
| CEND | 100 | 90.1±1.0 |
| CEND | 250 | 91.2±0.8 |
| CEND | 500 | 91.5±0.8 |
| CEND | 1000 | 91.0±0.9 |

Table 4: On substructure-conditioned generation, CEND shows competitive performance, surpassing EEGSDE, which uses an additional property predictor. CEDM and EEGSDE results are borrowed from Bao et al. (2023).

| Model | Steps | Tanimoto Sim. |
|---|---|---|
| CEDM | 1000 | .671±.004 |
| EEGSDE | 1000 | .750±.003 |
| Baseline (fixed forward) | 50 | .601±.000 |
| Baseline (fixed forward) | 100 | .640±.002 |
| Baseline (fixed forward) | 250 | .663±.002 |
| Baseline (fixed forward) | 500 | .669±.001 |
| Baseline (fixed forward) | 1000 | .673±.002 |
| CEND | 50 | .783±.001 |
| CEND | 100 | .807±.001 |
| CEND | 250 | .819±.001 |
| CEND | 500 | .825±.001 |
| CEND | 1000 | .828±.001 |

Figure 2: Excerpt of substructure-conditioned samples, where CEND matches the provided substructure better (in terms of compositions and local patterns).

**Dataset and Setup** We perform our experiments on the QM9 dataset, on two different tasks: composition-conditioned and substructure-conditioned generation. Both tasks allow for direct validation with ground-truth properties, without requiring expensive QM calculations or approximations with surrogate models. In each case, we train a conditional diffusion model as described in Section 3.3, i.e. where $F_\phi$ and $\hat{x}_\theta$ are provided with an extra input corresponding to the condition. Additional details are provided in Appendix A.6.4.

**Task 1: composition-conditioned generation** The model is tasked to generate a compound with a predefined composition, i.e. structural isomers of a given formula. The condition is specified as a vector $c = (c_1, \dots, c_D) \in \mathbb{Z}^D$, where $c_d$ denotes the number of atoms of type $d$ that the sample should contain. To evaluate the model, we generate 10 samples per target formula, and compute the proportion of samples that match the provided composition. Our results are provided in Table 3, where we observe that CEND significantly outperforms the baseline, and offers nearly fully controllable composition generation. Additionally, reducing the number of sampling steps has a very limited impact on the controllability. Finally, we perform an ablation whose results are presented in Table 8, where fixing the standard deviation is shown to lead to a small decrease in performance with respect to the full model, while remaining significantly better than the baseline with fixed forward.

**Task 2: substructure-conditioned generation** We adopt a setup similar to that of Bao et al. (2023), and train a conditional END, where the condition is a molecular fingerprint encoding structural information about the molecule. A fingerprint is a binary vector $c = (c_1, \dots, c_F) \in \{0, 1\}^F$, where $c_f$ is set to 1 if substructure $f$ is present in the molecule, and to 0 otherwise. Fingerprints are obtained using OPENBABEL (O'Boyle et al., 2011). We evaluate the ability of the compared models to leverage the provided structural information by sampling conditionally on fingerprints obtained from the test set. We then compute the Tanimoto similarity between the fingerprints yielded by generated molecules and the fingerprints provided as conditional inputs. We compare CEND to EEGSDE (Bao et al., 2023), an improved version of EDM (Hoogeboom et al., 2022) that performs conditional generation by combining a conditional diffusion model and regressor guidance. Our results are presented in Table 4, along with a handful of samples in Fig. 2, where CEND is shown to offer better controllability than the compared baselines, as highlighted by the higher similarity.
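For reference, the two conditional-generation metrics above can be computed as in the sketch below: composition matching (Table 3) compares atom counts against the prompted formula, and Tanimoto similarity (Table 4) compares binary fingerprints. The compositions and fingerprints shown are toy values; in the paper, fingerprints come from OpenBabel.

```python
# Illustrative computation of composition matching and Tanimoto similarity.
from collections import Counter

def composition_matches(generated_atoms, target_composition):
    """True if the generated molecule has exactly the prompted number of atoms per type."""
    return Counter(generated_atoms) == Counter(target_composition)

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity |a AND b| / |a OR b| between two binary fingerprints."""
    a = {i for i, v in enumerate(fp_a) if v}
    b = {i for i, v in enumerate(fp_b) if v}
    return len(a & b) / len(a | b) if (a | b) else 1.0

print(composition_matches(["C", "C", "O", "H", "H"], {"C": 2, "O": 1, "H": 2}))  # True
print(tanimoto([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))                                # 0.5
```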
## 5 Related Work

The main approaches to molecule generation in 3D are autoregressive models (Gebauer et al., 2019; Simm et al., 2020; Gebauer et al., 2022; Luo and Ji, 2022; Daigavane et al., 2024), flow-based models (Garcia Satorras et al., 2021), and diffusion models (Hoogeboom et al., 2022; Igashov et al., 2024). A notable exception to the geometric-graph representation of 3D molecules are voxels (Skalic et al., 2019; Ragoza et al., 2022; O. Pinheiro et al., 2024), from which the 3D graph is extracted via some post-processing procedure. Recently, several works have shown that leveraging 2D connectivity information can lead to improved results (Peng et al., 2023; Vignac et al., 2023; Le et al., 2024). While not incompatible with END, we perform our experiments without modeling that auxiliary information, and therefore do not compare to these approaches directly. Other frameworks have also been tailored to molecule generation, such as Flow Matching (Lipman et al., 2022; Song et al., 2023; Irwin et al., 2024) or Bayesian Flow Networks (Graves et al., 2024; Song et al., 2024), which also show promise for accelerated sampling.

In the realm of diffusion models for molecules, EDM-BRIDGE (Wu et al., 2022) and EEGSDE (Bao et al., 2023) extend upon a continuous-time formulation of EDM (Hoogeboom et al., 2022), as END also does. Based on the observation that there exist infinitely many processes mapping from prior to target distributions, EDM-BRIDGE constructs one such process that incorporates some prior knowledge, i.e. part of the drift term is a physically-inspired force term. END can be seen as a generalization of EDM-BRIDGE, where the forward drift term is now learned instead of pre-specified. Through experiments, we show that a learnable forward process performs better than a fixed one, even when the latter is physics-inspired. EEGSDE specifically targets conditional generation by combining (1) a conditional score model with (2) a method similar to classifier guidance (which requires training an auxiliary model). In CEND, we instead only learn a conditional model. Finally, GEOLDM (Xu et al., 2023) is a latent diffusion model that performs diffusion in the latent space of an equivariant Variational Auto-Encoder (VAE); it can be seen as a particular case of END, where $F_\phi(\varepsilon, t, x) = \alpha_t E_\phi(x) + \sigma_t \varepsilon$, with $E_\phi(x)$ denoting the (time-independent) encoder of the VAE.

## 6 Conclusion

In this work, we have presented Equivariant Neural Diffusion (END), a novel diffusion model that is equivariant to Euclidean transformations. The key innovation in END lies in the forward process, which is specified by a learnable data- and time-dependent transformation. Experimental results demonstrate the benefits of our method. In the unconditional setting, we show that END yields competitive generative performance across two different benchmarks. In the conditional setting, END offers improved controllability when conditioning on composition and substructure. Finally, as a by-product of the introduced learnable forward process, we also find the sampling efficiency (in terms of integration steps) to be improved, even though that property is not actively optimized for in the design of the parameterization nor in the training procedure.

Avenues for future work are numerous. In particular, further leveraging the flexible framework of NFDM (Bartosh et al., 2024) to constrain the generative trajectories, e.g.
to be straight and enable even faster sampling, modelling bond information, or extending the conditional setting to other types of conditioning information, e.g. another point cloud or a target property, are all promising directions.

**Limitations** From a computational perspective, END is currently slower to train and sample from than a baseline with a fixed forward process and an identical number of learnable parameters. Even if convergence is similarly fast, each training (resp. sampling) step incurs a relative 2.5x (resp. 3x) overhead. However, END usually requires far fewer function evaluations to achieve comparable (or better) accuracy, and alternative (and more efficient) parameterizations of the reverse process exist. In particular, the drift $\hat{f}_{\theta,\phi}$ could be learned without direct dependence on $f_\phi$, thereby leading to improved training time and, more importantly, a very limited sampling overhead with respect to vanilla DMs. In terms of scalability, END suffers from limitations similar to concurrent approaches. It operates on fully-connected graphs, preventing its scaling to very large graphs, and it models categorical features through an arbitrary continuous relaxation, which is potentially suboptimal and scales linearly with the number of chemical elements in the modeled dataset. Encodings that scale more gracefully, such as those of Analog Bits (Chen et al., 2023) (logarithmic) or GEOLDM (Xu et al., 2023) (learned low-dimensional), are good candidates to better deal with discrete features within END. In terms of data, the presented findings are limited to organic molecules, and the metrics, while widely used in the community, also have some evident limitations. To fully assess the practical interest of the generated molecules, thorough validation with QM simulations would be required.

**Broader Impact** Generative models for molecules have the potential to accelerate in-silico discovery and design of drugs or materials. This work proposes an instance of such a model. As any generative model, it also comes with potential dangers, as it could be misused for designing, e.g., chemicals with adversarial properties.

**Acknowledgments and Disclosure of Funding** FC acknowledges financial support from the Independent Research Fund Denmark with project DELIGHT (Grant No. 0217-00326B).

## References

Amil Merchant, Simon Batzner, Samuel S Schoenholz, Muratahan Aykol, Gowoon Cheon, and Ekin Dogus Cubuk. Scaling deep learning for materials discovery. Nature, 624(7990):80-85, 2023.

Lars Ruddigkeit, Ruud Van Deursen, Lorenz C Blum, and Jean-Louis Reymond. Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17. Journal of Chemical Information and Modeling, 52(11):2864-2875, 2012.

Dylan M Anstine and Olexandr Isayev. Generative models as an emerging paradigm in the chemical sciences. Journal of the American Chemical Society, 145(16):8736-8750, 2023.

Niklas Gebauer, Michael Gastegger, and Kristof Schütt. Symmetry-adapted generation of 3d point sets for the targeted discovery of molecules. Advances in Neural Information Processing Systems, 32, 2019.

Niklas WA Gebauer, Michael Gastegger, Stefaan SP Hessmann, Klaus-Robert Müller, and Kristof T Schütt. Inverse design of 3d molecular structures with conditional generative neural networks. Nature Communications, 13(1):973, 2022.

Youzhi Luo and Shuiwang Ji. An autoregressive flow model for 3d molecular geometry generation from scratch. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=C03Ajc-NS5W.
Ameya Daigavane, Song Eun Kim, Mario Geiger, and Tess Smidt. Symphony: Symmetry-equivariant point-centered spherical harmonics for 3d molecule generation. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id= MIEn Ytl Gyv. Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In International conference on machine learning, pages 8867 8887. PMLR, 2022. Clement Vignac, Nagham Osman, Laura Toni, and Pascal Frossard. Midi: Mixed graph and 3d denoising diffusion for molecule generation. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 560 576. Springer, 2023. Tuan Le, Julian Cremer, Frank Noe, Djork-Arné Clevert, and Kristof T Schütt. Navigating the design space of equivariant diffusion-based generative models for de novo 3d molecule generation. In The Twelfth International Conference on Learning Representations, 2024. URL https:// openreview.net/forum?id=kz Gui RXZr Q. Oliver T Unke, Stefan Chmiela, Huziel E Sauceda, Michael Gastegger, Igor Poltavsky, Kristof T Schuett, Alexandre Tkatchenko, and Klaus-Robert Mueller. Machine learning force fields. Chemical Reviews, 121(16):10142 10186, 2021. Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in neural information processing systems, 30, 2017. Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In International Conference on Machine Learning, pages 9377 9388. PMLR, 2021. Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature communications, 13(1):2453, 2022. Ilyes Batatia, David P Kovacs, Gregor Simm, Christoph Ortner, and Gábor Csányi. Mace: Higher order equivariant message passing neural networks for fast and accurate force fields. Advances in Neural Information Processing Systems, 35:11423 11436, 2022. Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 56(4):1 39, 2023. Grigory Bartosh, Dmitry Vetrov, and Christian A Naesseth. Neural diffusion models. ar Xiv preprint ar Xiv:2310.08337, 2023. Beatrix Miranda Ginn Nielsen, Anders Christensen, Andrea Dittadi, and Ole Winther. Diffenc: Variational diffusion with a learned encoder. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=8nxy1b QWTG. Grigory Bartosh, Dmitry Vetrov, and Christian A Naesseth. Neural flow diffusion models: Learnable forward process for improved diffusion modelling. ar Xiv preprint ar Xiv:2404.12940, 2024. Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, and Max Welling. E (n) equivariant normalizing flows. Advances in Neural Information Processing Systems, 34:4181 4192, 2021. Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. Geodiff: A geometric diffusion model for molecular conformation generation. In International Conference on Learning Representations, 2022. 
URL https://openreview.net/forum?id=Pzcvx EMzv QC. Fan Bao, Min Zhao, Zhongkai Hao, Peiyao Li, Chongxuan Li, and Jun Zhu. Equivariant energyguided SDE for inverse molecular design. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=r0ot Lt Ow YW. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2256 2265, Lille, France, 07 09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/sohl-dickstein15.html. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840 6851, 2020. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020. Simo Särkkä and Arno Solin. Applied stochastic differential equations, volume 10. Cambridge University Press, 2019. Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313 326, 1982. Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661 1674, 2011. Leon Klein, Andrew Foong, Tor Fjelde, Bruno Mlodozeniec, Marc Brockschmidt, Sebastian Nowozin, Frank Noé, and Ryota Tomioka. Timewarp: Transferable acceleration of molecular dynamics by learning time-coarsened dynamics. Advances in Neural Information Processing Systems, 36, 2024. Bowen Jing, Ezra Erives, Peter Pao-Huang, Gabriele Corso, Bonnie Berger, and Tommi S Jaakkola. Eigenfold: Generative protein structure prediction with diffusion models. In ICLR 2023-Machine Learning for Drug Discovery workshop, 2023. Ross Irwin, Alessandro Tibo, Jon Paul Janet, and Simon Olsson. Efficient 3d molecular generation with flow matching and scale optimal transport. In ICML 2024 AI for Science Workshop, 2024. URL https://openreview.net/forum?id=Cx Aj Gjdkqu. Hojung Jung, Youngrok Park, Laura Schmid, Jaehyeong Jo, Dongkyu Lee, Bongsang Kim, Se-Young Yun, and Jinwoo Shin. Conditional synthesis of 3d molecules with time correction sampler. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Luhuan Wu, Brian Trippe, Christian Naesseth, David Blei, and John P Cunningham. Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36, 2024. Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1 7, 2014. Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. Scientific Data, 9(1):185, 2022. Minkai Xu, Alexander S Powers, Ron O Dror, Stefano Ermon, and Jure Leskovec. Geometric latent diffusion models for 3d molecule generation. In International Conference on Machine Learning, pages 38592 38610. PMLR, 2023. Bo Qiang, Yuxuan Song, Minkai Xu, Jingjing Gong, Bowen Gao, Hao Zhou, Wei-Ying Ma, and Yanyan Lan. Coarse-to-fine: a hierarchical diffusion model for molecule generation in 3d. In International Conference on Machine Learning, pages 28277 28299. 
PMLR, 2023. Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, and qiang liu. Diffusion-based molecule generation with informative prior bridges. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=TJUNti Zi TKE. Yuxuan Song, Jingjing Gong, Hao Zhou, Mingyue Zheng, Jingjing Liu, and Wei-Ying Ma. Unified generative modeling of 3d molecules with bayesian flow networks. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id= NSVtmmze RB. Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, and Faustino Gomez. Bayesian flow networks, 2024. Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. Advances in neural information processing systems, 34:21696 21707, 2021. Noel M O Boyle, Michael Banck, Craig A James, Chris Morley, Tim Vandermeersch, and Geoffrey R Hutchison. Open babel: An open chemical toolbox. Journal of cheminformatics, 3:1 14, 2011. Gregor Simm, Robert Pinsler, and José Miguel Hernández-Lobato. Reinforcement learning for molecular design guided by quantum mechanics. In International Conference on Machine Learning, pages 8959 8969. PMLR, 2020. Ilia Igashov, Hannes Stärk, Clément Vignac, Arne Schneuing, Victor Garcia Satorras, Pascal Frossard, Max Welling, Michael Bronstein, and Bruno Correia. Equivariant 3d-conditional diffusion model for molecular linker design. Nature Machine Intelligence, pages 1 11, 2024. Miha Skalic, José Jiménez, Davide Sabbadin, and Gianni De Fabritiis. Shape-based generative modeling for de novo drug design. Journal of chemical information and modeling, 59(3):1205 1214, 2019. Matthew Ragoza, Tomohide Masuda, and David Ryan Koes. Generating 3d molecules conditional on receptor binding sites with deep generative models. Chemical science, 13(9):2701 2713, 2022. Pedro O O Pinheiro, Joshua Rackers, Joseph Kleinhenz, Michael Maser, Omar Mahmood, Andrew Watkins, Stephen Ra, Vishnu Sresht, and Saeed Saremi. 3d molecule generation by denoising voxel grids. Advances in Neural Information Processing Systems, 36, 2024. Xingang Peng, Jiaqi Guan, Qiang Liu, and Jianzhu Ma. Moldiff: Addressing the atom-bond inconsistency problem in 3d molecule diffusion generation. In International Conference on Machine Learning, pages 27611 27629. PMLR, 2023. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2022. Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, and Wei-Ying Ma. Equivariant flow matching with hybrid probability transport for 3d molecule generation. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 549 568. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/ file/01d64478381c33e29ed611f1719f5a37-Paper-Conference.pdf. Ting Chen, Ruixiang ZHANG, and Geoffrey Hinton. Analog bits: Generating discrete data using diffusion models with self-conditioning. In The Eleventh International Conference on Learning Representations, 2023. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723 773, 2012. Greg Landrum et al. 
Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum, 8:31, 2013. David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1):31 36, 1988. Peter Ertl and Ansgar Schuffenhauer. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of cheminformatics, 1:1 11, 2009. Tuan Le, Frank Noe, and Djork-Arné Clevert. Representation learning on biomolecular structures using equivariant graph attention. In The First Learning on Graphs Conference, 2022. URL https://openreview.net/forum?id=kv4x Uo5Pu6. A Appendices A.1 Compared models In Table 5, we detail all the compared diffusion models in terms of their transformation Fφ. Table 5: Compared models. Fφ(ε, t, x) Comment EDM (Hoogeboom et al., 2022) EDM* αtx + σtε αt = exp 1 2 R t 0 β(s) ds σt = 1 exp 1 2 R t 0 β(s) ds GEOLDM (Xu et al., 2023) αt Eφ(x) + σtε αt and σt similar to EDM p(x|z0) = N x|Dφ(z0), δ2I EDM* + γφ αφ(t)x + σφ(t)ε learned γφ with 2 outputs (r and h) END (µφ only) µφ(x, t) + σtε σt similar to EDM END µφ(x, t) + Uφ(x, t)ε as introduced in Eq. (6) A.2 Additional results A.2.1 Ablation on unconditional QM9 Table 6: Main results of the ablation study on QM9. Metrics are obtained over 10000 samples, with mean/standard deviation across 3 sampling runs. The two variants of END compare favorably to baselines across metrics, while offering competitive performance for reduced number of sampling steps. Stability ( ) Validity / Uniqueness ( ) TV ( ) Strain En. ( ) A [%] M [%] V [%] V U [%] A [10 2] B [10 3] E [kcal/mol] Model Steps Data 99.0 95.2 97.7 97.7 7.7 50 97.6 .0 77.6 .5 90.2 .2 89.2 .2 4.6 .1 1.7 .5 16.4 .2 100 98.1 .0 81.9 .4 92.1 .2 90.9 .2 3.5 .1 1.4 .3 13.5 .1 250 98.3 .0 84.3 .1 93.2 .4 91.7 .3 2.8 .2 1.3 .4 12.3 .4 500 98.4 .0 85.2 .5 93.5 .2 92.2 .3 2.6 .2 1.3 .4 11.7 .1 1000 98.4 .0 85.3 .3 93.5 .1 91.9 .1 2.5 .1 1.4 .4 11.3 .1 50 97.7 .0 77.4 .3 91.1 .4 90.2 .4 4.3 .1 1.5 .2 15.5 .2 100 98.2 .0 82.6 .2 92.9 .2 91.6 .2 3.2 .1 1.2 .2 12.8 .1 250 98.5 .0 85.3 .3 93.9 .1 92.4 .1 2.5 .1 1.0 .1 11.3 .0 500 98.5 .1 86.1 .4 94.1 .2 92.5 .2 2.2 .1 1.0 .3 11.1 .1 1000 98.5 .0 86.1 .3 94.1 .2 92.4 .2 2.1 .1 1.1 .1 10.8 .1 END (µφ only) 50 98.5 .0 83.9 .2 95.2 .2 93.8 .3 1.4 .1 1.9 .4 13.1 .1 100 98.7 .0 87.0 .3 95.5 .2 93.6 .2 1.1 .0 1.5 .2 11.1 .1 250 98.9 .0 89.0 .2 95.8 .2 93.8 .2 1.0 .0 0.5 .1 10.3 .1 500 98.9 .0 88.6 .2 95.6 .1 93.5 .1 0.9 .0 0.7 .1 10.0 .1 1000 98.9 .0 89.2 .3 95.6 .1 93.5 .1 0.9 .2 1.0 .1 9.7 .2 50 98.6 .0 84.6 .1 92.7 .1 91.4 .1 1.5 .1 1.9 .4 12.1 .3 100 98.8 .0 87.4 .2 94.1 .0 92.3 .2 1.3 .0 1.8 .3 10.6 .2 250 98.9 .1 88.8 .5 94.7 .2 92.6 .1 1.2 .1 0.8 .2 9.6 .2 500 98.9 .0 88.8 .4 94.8 .2 92.8 .2 1.2 .1 0.8 .5 9.5 .1 1000 98.9 .0 89.1 .1 94.8 .1 92.6 .2 1.2 .1 0.8 .5 9.3 .1 Table 7: Additional results of the ablation study with metrics from HIERDIFF (Qiang et al., 2023) and MMD (Gretton et al., 2012; Daigavane et al., 2024) on QM9. The two variants of END shows better, or on par, agreement with the true data distribution compared to the baselines. 
SA ( ) QED ( ) log P ( ) MW MMD ( ) [10 1] Model Steps Data 0.626 0.462 0.339 122.7 7.7 50 0.609 .001 0.456 .000 0.049 .008 125.7 .0 1.91 .03 100 0.613 .001 0.458 .000 0.049 .003 124.9 .1 1.67 .02 250 0.617 .001 0.461 .001 0.107 .013 124.4 .0 1.52 .02 500 0.618 .001 0.462 .000 0.124 .006 124.2 .1 1.50 .04 1000 0.619 .000 0.462 .001 0.135 .007 124.2 .0 1.51 .02 50 0.612 .000 0.454 .001 0.053 .003 125.3 .0 2.04 .02 100 0.616 .001 0.459 .000 0.049 .010 124.6 .0 1.66 .01 250 0.620 .001 0.461 .000 0.124 .011 124.1 .0 1.54 .02 500 0.620 .001 0.461 .000 0.144 .005 124.0 .1 1.50 .02 1000 0.622 .000 0.462 .001 0.162 .008 123.9 .1 1.45 .03 END (µφ only) 50 0.606 .001 0.456 .001 0.096 .003 123.8 .1 2.80 .06 100 0.615 .001 0.458 .000 0.157 .002 123.6 .1 1.97 .05 250 0.622 .002 0.460 .001 0.198 .005 123.3 .1 1.48 .03 500 0.626 .001 0.462 .000 0.219 .006 123.1 .1 1.36 .03 1000 0.627 .001 0.462 .001 0.225 .011 123.0 .0 1.36 .02 50 0.602 .001 0.456 .001 0.074 .012 123.7 .1 1.91 .00 100 0.613 .001 0.459 .000 0.125 .010 123.3 .1 1.63 .02 250 0.620 .001 0.461 .000 0.164 .002 123.1 .1 1.44 .04 500 0.622 .001 0.462 .000 0.193 .008 123.2 .0 1.41 .01 1000 0.623 .001 0.462 .001 0.198 .013 123.0 .0 1.37 .04 A.2.2 Ablation on composition-conditioned QM9 Table 8: Ablation on composition-conditioned generation. The two versions CEND display better controllability. Fixing the standard deviation leads to a small decrease in performance, with respect to the full model while remaining significantly better than the baseline with fixed forward. Matching [%] ( ) Model Steps 50 69.6 0.6 100 73.0 0.6 250 74.1 1.4 500 76.2 0.6 1000 75.5 0.5 CEND (µφ only) 50 75.7 0.4 100 79.9 0.4 250 82.7 0.5 500 83.0 0.8 1000 83.5 0.6 50 89.2 0.8 100 90.1 1.0 250 91.2 0.8 500 91.5 0.8 1000 91.0 0.9 A.2.3 Ablations on GEOM-DRUGS Table 9: Main results of the ablation study on GEOM-DRUGS. Metrics are obtained over 10000 samples, with mean/standard deviation across 3 sampling runs. Most notably, END generates more connected samples, and less strained structures. Fixing the standard deviation leads to a slight decrease in performance. Stability ( ) Val. / Conn. ( ) TV ( ) Strain En. ( ) A [%] V [%] V C [%] A [10 2] E [kcal/mol] Model Steps Data 86.5 99.0 99.0 15.8 50 84.7 .0 93.6 .2 46.6 .3 10.5 .1 134.2 1.5 100 85.2 .1 93.8 .3 56.2 .4 8.0 .1 110.9 .3 250 85.4 .0 94.2 .1 61.4 .6 6.7 .1 98.9 .3 500 85.4 .0 94.3 .2 63.4 .1 6.4 .1 95.3 .1 1000 85.3 .1 94.4 .1 64.2 .6 6.2 .0 92.9 1.1 END (µφ only) 50 85.6 .1 87.8 .2 66.0 .4 7.9 .0 105.6 1.0 100 85.8 .1 89.9 .1 73.7 .4 6.1 .1 85.5 0.5 250 85.7 .1 91.2 .2 77.4 .4 5.0 .1 74.5 1.5 500 85.8 .1 91.6 .1 78.6 .3 4.8 .1 72.3 1.1 1000 85.8 .1 91.8 .1 79.4 .4 4.6 .0 71.0 0.6 50 87.1 .1 84.6 .5 68.6 .4 5.9 .1 86.3 .6 100 87.2 .1 87.0 .2 76.7 .5 4.5 .1 67.9 .9 250 87.1 .1 88.5 .2 80.7 .6 3.5 .0 58.9 .1 500 87.0 .0 88.8 .3 81.7 .4 3.3 .0 56.4 .8 1000 87.0 .0 89.2 .3 82.5 .3 3.0 .0 55.0 .5 Table 10: Additional results of the ablation study with metrics from HIERDIFF (Qiang et al., 2023) on GEOM-DRUGS. The two variants of END shows better agreement with the true data distribution, compared to the baseline with fixed forward. 
Model Steps | SA | QED | log P | MW
Data | 0.832 | 0.672 | 2.985 | 360.0
50 | 0.590±.001 | 0.480±.001 | 1.105±.012 | 354.2±0.9
100 | 0.614±.002 | 0.534±.001 | 1.482±.022 | 352.3±0.4
250 | 0.630±.001 | 0.559±.003 | 1.716±.009 | 351.4±0.4
500 | 0.637±.001 | 0.570±.003 | 1.794±.023 | 350.7±0.3
1000 | 0.641±.002 | 0.574±.004 | 1.831±.013 | 350.5±1.4

END (µφ only)
50 | 0.634±.001 | 0.526±.001 | 1.435±.007 | 352.4±0.6
100 | 0.664±.002 | 0.568±.001 | 1.792±.020 | 351.3±0.5
250 | 0.681±.000 | 0.591±.002 | 1.996±.014 | 351.1±0.6
500 | 0.687±.000 | 0.596±.002 | 2.059±.015 | 350.6±0.7
1000 | 0.690±.001 | 0.602±.001 | 2.093±.019 | 351.6±0.4

50 | 0.621±.003 | 0.487±.002 | 0.939±.019 | 352.0±1.4
100 | 0.660±.001 | 0.550±.002 | 1.530±.010 | 351.6±1.9
250 | 0.685±.001 | 0.578±.002 | 1.832±.009 | 351.4±1.3
500 | 0.695±.002 | 0.586±.004 | 1.945±.012 | 352.1±1.6
1000 | 0.698±.002 | 0.590±.002 | 2.009±.009 | 352.0±0.6

A.3 Equivariance / invariance proofs

A.3.1 Time-derivative of an O(3)-equivariant function

Let f : X × [0, 1] → Y be a function that is equivariant to actions of the group O(3), such that R f(x, t) = f(R x, t) for all R ∈ O(3).

Proof sketch  We need to show that ∂t f(R x, t) = R ∂t f(x, t) for all R ∈ O(3) and x ∈ X. We have

∂t f(R x, t) = ∂t [R f(x, t)] = R ∂t f(x, t),

where the last equality follows by linearity.

A.3.2 Inverse of an O(3)-equivariant function

Let f : X → Y be a function that (1) is equivariant to the action of the group O(3), and (2) admits an inverse f⁻¹ : Y → X. Then f⁻¹ is also equivariant to the action of O(3).

Proof sketch  We need to show that f⁻¹(R y) = R f⁻¹(y) for all R ∈ O(3) and y ∈ Y. Since f is invertible, to any y ∈ Y corresponds a unique x ∈ X such that y = f(x), and f⁻¹(y) = f⁻¹(f(x)) = x. As f is equivariant to the action of O(3), we have R f(x) = f(R x) for all R ∈ O(3), so that

f⁻¹(R y) = f⁻¹(R f(x)) = f⁻¹(f(R x)) = R x = R f⁻¹(y).

A.3.3 O(3)-invariance of the objective function

In this section, we show that the objective function is invariant under the action of O(3), i.e. L_END(Rx) = L_END(x) for all R ∈ O(3), provided that Fφ and x̂θ are equivariant. We have

L_END(Rx)
= E_{u(t), qφ(zt|Rx) q(Rx)} [ (1 / (2g²φ(t))) ‖ fφ(zt, t, Rx) − (g²φ(t)/2) ∇_{zt} log qφ(zt|Rx) − fφ(zt, t, x̂θ(zt, t)) + (g²φ(t)/2) ∇_{zt} log qφ(zt|x̂θ(zt, t)) ‖²₂ ]
= E_{u(t), qφ(zt|Rx) q(Rx)} [ (1 / (2g²φ(t))) ‖ fφ(RR⁻¹zt, t, Rx) − (g²φ(t)/2) ∇_{zt} log qφ(RR⁻¹zt|Rx) − fφ(RR⁻¹zt, t, x̂θ(RR⁻¹zt, t)) + (g²φ(t)/2) ∇_{zt} log qφ(RR⁻¹zt|x̂θ(RR⁻¹zt, t)) ‖²₂ ]
= E_{u(t), qφ(zt|Rx) q(Rx)} [ (1 / (2g²φ(t))) ‖ R fφ(R⁻¹zt, t, x) − (g²φ(t)/2) ∇_{zt} log qφ(R⁻¹zt|x) − R fφ(R⁻¹zt, t, x̂θ(R⁻¹zt, t)) + (g²φ(t)/2) ∇_{zt} log qφ(R⁻¹zt|x̂θ(R⁻¹zt, t)) ‖²₂ ]
= E_{u(t), qφ(Ryt|Rx) q(Rx)} [ (1 / (2g²φ(t))) ‖ R fφ(yt, t, x) − (g²φ(t)/2) R ∇_{yt} log qφ(yt|x) − R fφ(yt, t, x̂θ(yt, t)) + (g²φ(t)/2) R ∇_{yt} log qφ(yt|x̂θ(yt, t)) ‖²₂ ]
= E_{u(t), qφ(yt|x) q(x)} [ (1 / (2g²φ(t))) ‖ fφ(yt, t, x) − (g²φ(t)/2) ∇_{yt} log qφ(yt|x) − fφ(yt, t, x̂θ(yt, t)) + (g²φ(t)/2) ∇_{yt} log qφ(yt|x̂θ(yt, t)) ‖²₂ ]
= L_END(x).

The first equality is obtained by replacing x by Rx in the definition of the objective function, Eq. (5). The second is obtained by multiplying by RR⁻¹ = I. The third equality follows by leveraging that fφ, qφ and x̂θ are equivariant. We then perform the change of variable yt = R⁻¹zt. As rotations preserve distances, we obtain the last equality.
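To make the statements above concrete, the following self-contained NumPy sketch (our illustration, not part of the END implementation) builds a toy O(3)-equivariant map and checks numerically that the map itself, its time-derivative (A.3.1) and its inverse (A.3.2) all commute with a random rotation. The map f, the atom-mixing matrix L and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5                                    # number of atoms
L = rng.normal(size=(M, M))              # fixed atom-mixing matrix (invertible with probability 1)

def f(r, t):
    """Toy O(3)-equivariant map: rotations act on the 3D columns, L mixes the atom rows."""
    return (1.0 + t) * L @ r

def df_dt(r, t):
    """Time-derivative of f; A.3.1 states it is also O(3)-equivariant."""
    return L @ r

def f_inv(y, t):
    """Inverse of f with respect to r; A.3.2 states it is also O(3)-equivariant."""
    return np.linalg.solve(L, y) / (1.0 + t)

# Random orthogonal matrix acting on the 3D coordinates (a proper rotation here).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

r = rng.normal(size=(M, 3))              # toy coordinates, one row per atom
t = 0.3
rot = lambda v: v @ R.T                  # rotation acting on row-vector coordinates

assert np.allclose(f(rot(r), t), rot(f(r, t)))          # equivariance of f
assert np.allclose(df_dt(rot(r), t), rot(df_dt(r, t)))  # equivariance of the time-derivative
assert np.allclose(f_inv(rot(f(r, t)), t), rot(r))      # equivariance of the inverse
print("toy O(3)-equivariance checks passed")
```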
A.4 Algorithms

Algorithm 1 Training algorithm of END
Require: q(x), Fφ, x̂θ
for training iterations do
    x ∼ q(x), t ∼ u(t), ε ∼ p(ε)
    zt ← µφ(x, t) + Uφ(x, t) ε
    L = (1 / (2g²φ(t))) ‖ f^B_φ(zt, t, x) − f̂θ,φ(zt, t) ‖²₂
    Gradient step on θ and φ
end for

Algorithm 2 Stochastic sampling from END
Require: Fφ, x̂θ, number of integration steps T, empirical distribution of the number of atoms p(N)
∆t ← 1/T
N ∼ p(N)
z1 ∼ p(z1)
for t = 1, 1 − ∆t, . . . , ∆t do
    w ∼ N(0, I)
    z_{t−∆t} ← zt − f̂θ,φ(zt, t) ∆t + gφ(t) √∆t w
end for
x ∼ p(x | z0)

A.5 Details about Fφ

A.5.1 Working in ambient space

In this section, we omit the invariant features and consider that x, zt, ε ∈ R^{M×d} are all collections of vectors. In contrast to the notation introduced in Section 3, we would like Fφ and x̂θ (which ultimately are neural networks) to operate in ambient space directly, i.e. on zt, x, ε ∈ R with R = {v ∈ R^{M×d} : (1/M) Σ_{i=1}^{M} v_i = 0}, instead of R^{(M−1)×d} as initially presented. To do so, we use the results from Garcia Satorras et al. (2021), who showed that the Jacobian of the transformation Fφ can be computed in ambient space, for zt and ε living in the linear subspace R, provided that the transformation Fφ (invertible with respect to ε) leaves the center of mass of ε unchanged. Considering flat representations zt, x, ε ∈ R^{Md}, the parameterization of Fφ introduced in Eqs. (6), (8) and (9) can be adapted as follows to achieve this property:

Fφ(ε, t, x) = µφ(x, t) + Uφ(x, t) ε,   (10)
µφ(x, t) = (I_{Md} − (1/M) T Tᵀ) µ̄φ(x, t),   (11)
Uφ(x, t) = (I_{Md} − (1/M) T Tᵀ) Ūφ(x, t) + (1/M) T Tᵀ,   (12)

where T ∈ R^{Md×d} stacks M copies of the d×d identity matrix, so that (1/M) T Tᵀ corresponds to the linear operator computing the center of mass. In Eq. (11), the unconstrained mean output µ̄φ(x, t) is simply projected onto the zero-CoM subspace, thereby inducing no translation. In Eq. (12), the unconstrained output Ūφ(x, t) is first projected onto the zero-CoM subspace, before being translated back to the initial center of mass. With this adapted formulation, we need to be able to (1) compute the latent variable zt given ε, (2) evaluate the Jacobian of the transformation Fφ, and (3) evaluate the inverse transformation F⁻¹φ.

Computing zt from ε  Given that ε ∈ R, obtaining zt ∈ R from ε simply amounts to (1) computing zt = µφ(x, t) + Uφ(x, t) ε, and then (2) removing the center of mass from zt — the second term in Eq. (12) induces no translation as ε ∈ R.

Computing |J_{Fφ}| and F⁻¹φ  In what follows, we shorten the notation and denote Uφ(x, t) by U and Ūφ(x, t) by Ū. To leverage known identities, we start by reorganizing Eq. (12) as

U = Ū + (1/M) T Tᵀ (I_{Md} − Ū).   (13)

The Jacobian of Fφ is given by the determinant of U, and the latter can be derived by leveraging the Matrix Determinant Lemma:

det U = det Ū · det(I_d + (1/M) Tᵀ (I_{Md} − Ū) Ū⁻¹ T)   (14)
      = det Ū · det(I_d + (1/M) Tᵀ (Ū⁻¹ − I_{Md}) T)   (15)
      = det Ū · det((1/M) Σ_{m=1}^{M} (Ū_m)⁻¹)   (16)
      = det Ū · det V   (17)
      = Π_{m=1}^{M} det Ū_m · det V,   (18)

where Ū_m denotes the m-th d×d block in Ū, and V = (1/M) Σ_{m=1}^{M} (Ū_m)⁻¹ is a d×d matrix. Computing the Jacobian therefore amounts to computing the inverses of M d×d matrices and the determinants of M + 1 d×d matrices. In practice, d = 3, such that all computations can be performed in closed form. Regarding F⁻¹φ, we do not need to evaluate the inverse transformation itself, but instead evaluate ε given zt, i.e. ε = U⁻¹ (zt − µφ(x, t)). The inverse U⁻¹ can be obtained via the Woodbury matrix identity:

U⁻¹ = (Ū + (1/M) T Tᵀ (I_{Md} − Ū))⁻¹
    = Ū⁻¹ − (1/M) Ū⁻¹ T V⁻¹ Tᵀ (Ū⁻¹ − I_{Md})
    = Ū⁻¹ (I_{Md} − C),

where V = (1/M) Σ_{m=1}^{M} (Ū_m)⁻¹ as previously defined in Eq. (18), and C = (1/M) T V⁻¹ Tᵀ (Ū⁻¹ − I_{Md}).
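As a concrete illustration of these ambient-space computations, the following NumPy sketch (our own simplified re-implementation, assuming a block-diagonal Ū with one d×d block per atom; all variable names are ours and not those of the released code) builds the projections of Eqs. (11)–(12), verifies the determinant factorization of Eq. (18) against a dense log-determinant, and recovers ε through Ū⁻¹(I_{Md} − C).

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 4, 3
T = np.tile(np.eye(d), (M, 1))                       # T in R^{Md x d}: M stacked identity blocks
P = np.eye(M * d) - (1.0 / M) * T @ T.T              # projection onto the zero-CoM subspace

# Unconstrained outputs: a mean and M per-atom d x d blocks forming a block-diagonal U_bar.
mu_bar = rng.normal(size=(M * d,))
U_blocks = np.eye(d) + 0.1 * rng.normal(size=(M, d, d))
U_bar = np.zeros((M * d, M * d))
for m in range(M):
    U_bar[m * d:(m + 1) * d, m * d:(m + 1) * d] = U_blocks[m]

# Eqs. (11)-(12): project the mean, project U_bar and restore the center of mass of eps.
mu = P @ mu_bar
U = P @ U_bar + (1.0 / M) * T @ T.T

# Eq. (18): det U = (prod_m det U_m) * det V, with V = (1/M) sum_m U_m^{-1}.
V = np.mean(np.linalg.inv(U_blocks), axis=0)
logdet_blockwise = np.sum(np.linalg.slogdet(U_blocks)[1]) + np.linalg.slogdet(V)[1]
logdet_dense = np.linalg.slogdet(U)[1]
assert np.allclose(logdet_blockwise, logdet_dense)

# Forward pass zt = mu + U eps for a zero-CoM eps, then recovery of eps via U_bar^{-1}(I - C).
eps = P @ rng.normal(size=(M * d,))
zt = mu + U @ eps
C = (1.0 / M) * T @ np.linalg.inv(V) @ T.T @ (np.linalg.inv(U_bar) - np.eye(M * d))
eps_rec = np.linalg.inv(U_bar) @ (np.eye(M * d) - C) @ (zt - mu)
assert np.allclose(eps_rec, eps)
print("zero-CoM projection, block-wise log-det and inverse are consistent")
```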
Given the specific structure of C, the computation of ε can be simplified to

ε = U⁻¹ z̄t = Ū⁻¹ (I_{Md} − C) z̄t = Ū⁻¹ (z̄t − c),

where z̄t = zt − µφ(x, t) and c = C z̄t acts as a translation operator. We note that computing the inverse transformation requires inverting M + 1 d×d matrices, but as d = 3 in practice, all computations can be performed in closed form.

A.5.2 Invariant features

For simplicity, we omitted in Section 3 and Appendix A.5.1 that molecules are described as tuples x = (r, h), as only r transforms under Euclidean transformations. For the invariant features h, we use the following parameterization:

µ^(h)_φ(x, t) = (1 − t) h + t (1 − t) µ̄^(h)_φ(x, t),   (19)
σ^(h)_φ(x, t) = δ^{1−t} σ̄φ(x, t)^{t(1−t)},   (20)

which ensures that µ^(h)_φ(x, 0) = h and σ^(h)_φ(x, 0) = δ, whereas µ^(h)_φ(x, 1) = 0 and σ^(h)_φ(x, 1) = I.

Implementation  As described in the main text in Section 3.2, Fφ is implemented as a neural network with an architecture similar to that of the data point predictor x̂θ(zt, t), but with a specific readout layer that produces the outputs related to r, namely [µ̄φ(x, t), σ̄φ(x, t), Ūφ(x, t)]. Additionally, it produces µ̄^(h)_φ(x, t) and σ̄^(h)_φ(x, t) as invariant outputs.

Inverse transformation  The log-determinant of the inverse transformation, log |J_{F⁻¹}|, writes

log |J_{F⁻¹}| = − log |J_F| = − Σ_i log σ^(h),i_φ(x, t) − Σ_{m=1}^{M} log |det Ū^m_φ(x, t)| − log |det V|,

where the first sum runs over the invariant features, the last two terms account for the vectorial features, and V is defined as in Eq. (18).

A.6 Experimental details

In addition to the details provided in this section, we release a public code repository with our implementation of END.

A.6.1 Evaluation metrics

In this section, we describe the metrics employed to evaluate the different models:

Stability: An atom is deemed stable if it has a charge of 0, whereas a molecule is stable if all its atoms have 0 charge. We reuse the lookup table from Hoogeboom et al. (2022) to infer bond types from pairwise distances.

Validity and Connectivity: Validity corresponds to the percentage of samples that can be parsed and sanitized by rdkit (Landrum et al., 2013), after inference of the bonds using the lookup table mechanism (Hoogeboom et al., 2022). It should be noted that this metric does not penalize fragmented samples, as long as each individual fragment appears valid. This can be problematic when running the evaluation on larger compounds such as those in GEOM-DRUGS, as models tend to generate disconnected structures. To account for that, we also report Connectivity, which simply checks that a valid molecule is composed of exactly one fragment.

Uniqueness: Uniqueness is expressed as the proportion of samples that are valid and have a unique SMILES string (Weininger, 1988) among all the generated samples. On GEOM-DRUGS, we do not report Uniqueness, as all generated samples appear unique (as per previous work).

Total variation: The total variation is computed as the MAE between the (discrete) marginals obtained on the training data and on the generated samples. For bond types on QM9, we compute the ground-truth and generated distributions using the lookup table mechanism (Hoogeboom et al., 2022).

Strain Energy: The strain energy is expressed as the difference in energy between a generated structure and a relaxed version thereof obtained with rdkit's MMFF (Landrum et al., 2013). From the generated samples, we infer rdkit mol objects using OPENBABEL (O'Boyle et al., 2011). We only evaluate the strain energy of valid and connected samples.
SA, QED, log P and MW: SA denotes the "Synthetic Accessibility" score, a rule-based scoring function that evaluates the complexity of synthesizing a structure by organic reactions (Ertl and Schuffenhauer, 2009). We normalize its values between 0 and 1, with 0 being difficult to synthesize and 1 easy to synthesize. QED denotes the "Quantitative Estimation of Drug-likeness". log P denotes the octanol-water partition coefficient. MW denotes the molecular weight. We employ rdkit's implementation of all metrics. To do so, we convert the generated samples to rdkit mol objects using OPENBABEL (O'Boyle et al., 2011). We then only evaluate the different metrics for valid and connected samples.

MMD: On QM9, we follow the procedure of Daigavane et al. (2024), and compute the MMD (Gretton et al., 2012) between the true and generated pairwise-distance distributions for the 10 most common bonds in the dataset: ["C-H:1.0", "C-C:1.0", "C-O:1.0", "C-N:1.0", "H-N:1.0", "C-O:2.0", "C-N:1.5", "H-O:1.0", "C-C:1.5", "C-N:2.0"].

A.6.2 Architecture

Our forward transformation Fφ and data point predictor x̂θ share a common neural network architecture, which we detail here. The architecture is similar to that of EQGAT (Le et al., 2022), and updates a collection of invariant and equivariant features for each node in the graph. We choose this architecture because it allows for an easy construction of Uφ(x, t) by linear projection of the final equivariant layer. We follow previous work (Hoogeboom et al., 2022) and consider fully-connected graphs. We initially featurize pairwise distances through Gaussian radial basis functions, with a dataset-specific cutoff taken large enough to ensure full connectivity. In contrast to Hoogeboom et al. (2022), we do not update positions in the message-passing phase, but instead obtain the position predictions through a linear projection of the final equivariant hidden states. The predictions for the invariant features are obtained by reading out the final invariant hidden states.

Optimization  For all model variants, we employ Adam with a learning rate of 10⁻⁴. We perform gradient clipping (by norm) with a value of 10 on QM9, and a value of 1 on GEOM-DRUGS.

A.6.3 Unconditional generation

We reuse the data setup from previous work (Hoogeboom et al., 2022; Xu et al., 2023).

QM9  On QM9, we use 10 layers of message passing for EDM*, while the variants of END feature 5 layers of message passing in Fφ and 5 layers in x̂θ. For all models, we use 256 invariant and 256 equivariant hidden features, along with an RBF expansion of dimension 64 with a cutoff of 12 Å for pairwise distances. This ensures that the compared models have the same number of learnable parameters, i.e. 9.4M each. We train all models for at most 1000 epochs with a batch size of 64.

GEOM-Drugs  On GEOM-DRUGS, we use 10 layers of message passing for EDM*, while the variants of END feature 5 layers of message passing in Fφ and 5 in x̂θ. The hidden size of the invariant and equivariant features is set to 192, along with an RBF expansion of dimension 64 with a cutoff of 30 Å for pairwise distances (so as to ensure full connectivity). Each model features 5.4M learnable parameters. We train all models for 10 epochs with an effective batch size of 64.

A.6.4 Conditional generation

We use 10 layers of message passing for EDM*, while the variants of END feature 5 layers of message passing in Fφ and 5 in x̂θ.
The hidden size of the invariant and equivariant features is set to 192, along with an RBF expansion of dimension 64 with a cutoff of 10 Å for pairwise distances. We train all models for 1000 epochs with a batch size of 64. After an initial encoding, the conditional information is introduced at the end of each message-passing step, and alters the scalar hidden states through a one-layer MLP that shares the same dimension as the scalar hidden state.

Composition-conditioned generation  The encoding of the condition follows that of Gebauer et al. (2022). Each atom type gets its own embedding (of dimension 64), weighted by the proportion it represents in the provided formula. The weighted embeddings of all atom types are then concatenated and flattened, and the obtained vector (of dimension 64 × number of atom types) is processed through a 2-layer MLP with 64 hidden units. The compositions used at sampling time are extracted from the validation and test sets. For each unique formula, the model gets to generate 10 samples. The reported matching rate refers to the percentage of generated samples featuring the prompted composition.

Substructure-conditioned generation  The encoding of the condition follows that of Bao et al. (2023). Each molecule is first converted to an OPENBABEL object (O'Boyle et al., 2011) (solely based on positions and atom types), from which a fingerprint is in turn calculated. The obtained 1024-dimensional fingerprint is processed by a 2-layer MLP with hidden dimensions [512, 256], and a final linear projection to 192, i.e. the hidden size of the invariant features. The model is evaluated by computing the Tanimoto similarity between the fingerprints obtained from the generated samples and the fingerprints provided as conditional inputs.

A.7 Compute resources

All experiments were run on a single GPU. The experiments on QM9 were run on an NVIDIA SM3090 with 24 GB of memory. The experiments on GEOM-DRUGS were run on an NVIDIA A100 with 40 GB of memory. Training took up to 7 days.

Sampling  The current implementation of END leads to a 3x increase in cost per function evaluation relative to EDM (with a comparable number of learnable parameters) when sampling. However, END usually requires far fewer function evaluations to achieve comparable (or better) accuracy, and we note that alternative parameterizations of the reverse process are possible. In particular, the drift of the reverse process f̂θ,φ could be learned without direct dependence on fφ, thereby leading to very limited sampling overhead with respect to vanilla diffusion models, as only one neural network forward pass would then be required. For reference, we report sampling times on QM9 (1024 samples) in Table 11, for varying numbers of integration steps.

Table 11: Average sampling time in seconds for 1024 samples on QM9. The current implementation of END leads to a 3x increase relative to EDM.

Model Steps | sampling time [s]
50 | 30.4
100 | 60.6
250 | 149.7
500 | 297.7
1000 | 593.8

50 | 88.6
100 | 179.6
250 | 445.9
500 | 886.4
1000 | 1765.8

Training  A training step on QM9 (common batch size of 64) takes on average 0.37 s for END vs. 0.16 s for EDM, which corresponds to a 2.3x relative increase. We observe the same trend on GEOM-Drugs (with an effective batch size of 64): a training step takes on average 0.40 s for END vs. 0.15 s for EDM (a 2.7x relative increase). In summary, other things equal, END leads to a roughly 2.5x increase relative to EDM per training step, while we find it to converge within a similar number of epochs.
As for sampling, a direct parametrization of ˆfθ,φ would enable faster training while still requiring the evaluation of fφ to obtain the target reverse drift term in Eq. (5). Neur IPS Paper Checklist Question: Do the main claims made in the abstract and introduction accurately reflect the paper s contributions and scope? Answer: [Yes] Justification: The method section shows that the proposed model has the properties we claim it has, while the experimental section demonstrates its performance. We additionally highlight limitations Section 6. Guidelines: The answer NA means that the abstract and introduction do not include the claims made in the paper. The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: In Section 6, we discuss the limitations of the proposed approach, and of the experimental validation. Guidelines: The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. The authors are encouraged to create a separate "Limitations" section in their paper. The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? 
Answer: [Yes] Justification: When appropriate, we provide a proof sketch in Appendix. Guidelines: The answer NA means that the paper does not include theoretical results. All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. All assumptions should be clearly stated or referenced in the statement of any theorems. The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. Theorems and Lemmas that the proof relies upon should be properly referenced. 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: The data configuration follows from previous work, while most of the experimental details are provided in Appendix. Guidelines: The answer NA means that the paper does not include experiments. If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. While Neur IPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5. 
Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: We provide a public code repository with our implementation. Guidelines: The answer NA means that paper does not include experiments requiring code. Please see the Neur IPS code and data submission guidelines (https://nips.cc/ public/guides/Code Submission Policy) for more details. While we encourage the release of code and data, we understand that this might not be possible, so No is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). The instructions should contain the exact command and environment needed to run to reproduce the results. See the Neur IPS code and data submission guidelines (https: //nips.cc/public/guides/Code Submission Policy) for more details. The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: The data configuration follows from previous work, while most of the experimental details are provided in Appendix. Guidelines: The answer NA means that the paper does not include experiments. The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. The full details can be provided either with the code, in appendix, or as supplemental material. 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: All sampling results are averages over multiple seeds and we include standard errors. Guidelines: The answer NA means that the paper does not include experiments. The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) The assumptions made should be given (e.g., Normally distributed errors). It should be clear whether the error bar is the standard deviation or the standard error of the mean. 
It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: We provide compute resources details in Appendix A.7. Guidelines: The answer NA means that the paper does not include experiments. The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn t make it into the paper). 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the Neur IPS Code of Ethics https://neurips.cc/public/Ethics Guidelines? Answer: [Yes] Justification: The paper conforms with the Neur IPS Code of Ethics. Guidelines: The answer NA means that the authors have not reviewed the Neur IPS Code of Ethics. If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: There is a dedicated paragraph in Section 6. Guidelines: The answer NA means that there is no societal impact of the work performed. If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. 
The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: The paper does not pose such risks. Guidelines: The answer NA means that the paper poses no such risks. Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: We have cited the original papers that released the datasets. Guidelines: The answer NA means that the paper does not use existing assets. The authors should cite the original paper that produced the code package or dataset. The authors should state which version of the asset is used and, if possible, include a URL. The name of the license (e.g., CC-BY 4.0) should be included for each asset. For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. If this information is not available online, the authors are encouraged to reach out to the asset s creators. 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: No assets are introduced in the paper. Guidelines: The answer NA means that the paper does not release new assets. Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. The paper should discuss whether and how consent was obtained from people whose asset is used. 
At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: The paper did not require crowdsourcing. Guidelines: The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. According to the Neur IPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: No study participants were involved. Guidelines: The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the Neur IPS Code of Ethics and the guidelines for their institution. For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.