# Improved Contrastive Divergence Training of Energy-Based Models

Yilun Du 1, Shuang Li 1, Joshua Tenenbaum 1, Igor Mordatch 2

1 MIT CSAIL, 2 Google Brain. Correspondence to: Yilun Du.

Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s).

Contrastive divergence is a popular method of training energy-based models, but is known to have difficulties with training stability. We propose an adaptation to improve contrastive divergence training by scrutinizing a gradient term that is difficult to calculate and is often left out for convenience. We show that this gradient term is numerically significant and in practice is important to avoid training instabilities, while being tractable to estimate. We further highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Finally, we empirically evaluate the stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation.

## 1 Introduction

Energy-based models (EBMs) have received an influx of interest recently and have been applied to realistic image generation (Han et al., 2019; Du & Mordatch, 2019), 3D shape synthesis (Xie et al., 2018b), out-of-distribution and adversarial robustness (Lee et al., 2018; Du & Mordatch, 2019; Grathwohl et al., 2019), compositional generation (Hinton, 1999; Du et al., 2020a), memory modeling (Bartunov et al., 2019), text generation (Deng et al., 2020), video generation (Xie et al., 2017), reinforcement learning (Haarnoja et al., 2017; Du et al., 2019), continual learning (Li et al., 2020), protein design and folding (Ingraham et al.; Du et al., 2020b), and biologically plausible training (Scellier & Bengio, 2017).

Contrastive divergence is a popular and elegant procedure for training EBMs proposed by (Hinton, 2002), which lowers the energy of the training data and raises the energy of the sampled confabulations generated by the model. The model confabulations are generated via an MCMC process (commonly Gibbs sampling or Langevin dynamics), leveraging the extensive body of research on sampling and stochastic optimization. The appeal of contrastive divergence is its simplicity and extensibility. It does not require training additional auxiliary networks (Kim & Bengio, 2016; Dai et al., 2019), which introduce additional tuning and balancing demands, and it can be used to compose models zero-shot.

Figure 1: (Left) 128x128 samples on unconditional CelebA-HQ. (Right) 128x128 samples on unconditional LSUN Bedroom.

Despite these advantages, training EBMs with contrastive divergence has been challenging due to training instabilities. Ensuring training stability has required either combinations of spectral normalization and Langevin dynamics gradient clipping (Du & Mordatch, 2019), careful parameter tuning (Grathwohl et al., 2019), early stopping of MCMC chains (Nijkamp et al., 2019b), or avoiding modern deep learning components, such as self-attention or layer normalization (Du & Mordatch, 2019). These requirements limit modeling power, prevent compatibility with modern deep learning architectures, and prevent the long-running training procedures required for scaling to larger datasets.

With this work, we aim to maintain the simplicity and advantages of contrastive divergence training, while resolving stability issues and incorporating complementary deep learning advances. An often overlooked detail of the contrastive divergence formulation is that changes to the energy function change the MCMC samples, which introduces an additional gradient term in the objective function (see Section 2.1 for details). This term was claimed to be empirically negligible in the original formulation and is typically ignored (Hinton, 2002; Liu & Wang, 2017) or estimated via high-variance likelihood ratio approaches (Ruiz & Titsias, 2019). We show that this term can be efficiently estimated for continuous data via a combination of auto-differentiation and nearest-neighbor entropy estimators. We also empirically show that this term contributes significantly to the overall training gradient and has the effect of stabilizing training. It enables inclusion of self-attention blocks into network architectures, removes the need for capacity-limiting spectral normalization, and allows us to train the networks for longer periods. We do not introduce any new objectives or complexity - our procedure is simply a more complete form of the original formulation.

We further present techniques to improve mixing and mode exploration of MCMC transitions in contrastive divergence. We propose data augmentation as a useful tool to encourage mixing in MCMC by directly perturbing input images to related images. By incorporating data augmentation as semantically meaningful perturbations, we are able to greatly improve the mixing and diversity of MCMC chains. We also leverage the compositionality of EBMs to evaluate an image sample at multiple image resolutions when computing energies. Such evaluation at coarse and fine scales leads to samples with greater spatial coherence, but leaves the MCMC generation process unchanged. We note that such a hierarchy does not require specialized mechanisms such as progressive refinement (Karras et al., 2017).

Our contributions are as follows: firstly, we show that a gradient term neglected in the popular contrastive divergence formulation is both tractable to estimate and important in avoiding training instabilities that previously limited the applicability and scalability of energy-based models. Secondly, we highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Thirdly, we empirically evaluate the stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation.

Project page and code: https://energy-basedmodel.github.io/improved-contrastive-divergence/

## 2 An Improved Contrastive Divergence Framework for Energy-Based Models

Energy-based models (EBMs) represent the likelihood of a probability distribution $p_D(x)$ for $x \in \mathbb{R}^D$ as

$$p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)},$$

where the function $E_\theta(x): \mathbb{R}^D \rightarrow \mathbb{R}$ is known as the energy function, and $Z(\theta) = \int_x \exp(-E_\theta(x))\, dx$ is known as the partition function. Thus, an EBM can be represented by a neural network that takes $x$ as input and outputs a scalar.

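As a concrete illustration of this parameterization, the sketch below defines a small convolutional energy network in PyTorch that maps an image to a single scalar energy. The class name and architecture are illustrative placeholders, not the paper's exact model (which additionally uses residual blocks, self-attention, and layer normalization).

```python
# Minimal sketch of an energy function E_theta(x): image -> scalar.
import torch
import torch.nn as nn


class ToyEnergyNet(nn.Module):  # hypothetical name, illustrative architecture
    def __init__(self, channels: int = 3, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, dim, 3, stride=2, padding=1),
            nn.SiLU(),
            nn.Conv2d(dim, 2 * dim, 3, stride=2, padding=1),
            nn.SiLU(),
            nn.Conv2d(2 * dim, 4 * dim, 3, stride=2, padding=1),
            nn.SiLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(4 * dim, 1),  # scalar energy per image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # shape: (batch,)


# Unnormalized log-density: log p_theta(x) = -E_theta(x) - log Z(theta).
energy_fn = ToyEnergyNet()
x = torch.randn(8, 3, 32, 32)
print(energy_fn(x).shape)  # torch.Size([8])
```
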
Training an EBM through maximum likelihood (ML) is not straightforward, as $Z(\theta)$ cannot be reliably computed, since this involves integration over the entire input domain of $x$. However, the gradient of the log-likelihood of a data sample $x$ with respect to $\theta$ can be represented as

$$\nabla_\theta \log p_\theta(x) = -\left(\nabla_\theta E_\theta(x) - \mathbb{E}_{p_\theta(x')}\!\left[\nabla_\theta E_\theta(x')\right]\right). \tag{1}$$

Note that Equation 1 is still not tractable, as it requires using Markov Chain Monte Carlo (MCMC) to draw samples from the model distribution $p_\theta(x)$, which often takes exponentially long to mix. As a practical approximation to the above objective, (Hinton, 2002) proposes the contrastive divergence objective

$$\mathrm{KL}(p_D(x) \,\|\, p_\theta(x)) - \mathrm{KL}(\Pi_\theta^t(p_D(x)) \,\|\, p_\theta(x)), \tag{2}$$

where $\Pi_\theta$ represents an MCMC transition kernel for $p_\theta$, and $\Pi_\theta^t(p_D(x))$ represents $t$ sequential MCMC transitions starting from $p_D(x)$. In this objective, if we can guarantee that

$$\mathrm{KL}(p_D(x) \,\|\, p_\theta(x)) \ge \mathrm{KL}(\Pi_\theta^t(p_D(x)) \,\|\, p_\theta(x)), \tag{3}$$

then the objective guarantees that $p_\theta(x)$ converges to the data distribution $p_D(x)$, since the objective is only zero (at its fixed point) when $p_\theta(x) = p_D(x)$. If $\Pi$ represents an MCMC transition kernel, this property is guaranteed (Lyu, 2011). Note that $\Pi$ does not need to converge to the underlying probability distribution, and only a finite number of steps of MCMC sampling may be used. In fact, this objective may be utilized to maximize likelihood even if $\Pi$ is not an MCMC transition kernel, but instead a model such as an amortized generator, as long as we ensure that Equation 3 holds. In the appendix, we show that our approach applies even when MCMC chains are not initialized from the data distribution.

### 2.1 A Missing Term in Contrastive Divergence

When taking the negative gradient of the contrastive divergence objective (Equation 2), we obtain the expression

$$-\left(\mathbb{E}_{p_D(x)}\!\left[\frac{\partial E_\theta(x)}{\partial \theta}\right] - \mathbb{E}_{q_\theta(x')}\!\left[\frac{\partial E_\theta(x')}{\partial \theta}\right] + \frac{\partial q_\theta(x')}{\partial \theta}\,\frac{\partial \mathrm{KL}(q_\theta(x') \,\|\, p_\theta(x'))}{\partial q_\theta(x')}\right), \tag{4}$$

where for brevity, we summarize $\Pi_\theta^t(p_D(x)) = q_\theta(x)$. The first two terms are identical to those of Equation 1, and the third gradient term (which we refer to as the KL divergence term) corresponds to minimizing the divergence between $q_\theta(x)$ and $p_\theta(x)$. In practice, past contrastive divergence approaches have ignored the third gradient term, which was difficult to estimate and claimed to be empirically negligible (Hinton, 1999) (in Figure 9 we show it to be non-negligible), leading to the incorrect optimization of Equation 2.

To correctly optimize Equation 2, we construct a new joint loss $\mathcal{L}_{\text{Full}}$, consisting of the traditional contrastive loss $\mathcal{L}_{\text{CD}}$ and a new loss expression $\mathcal{L}_{\text{KL}}$, to accurately exhibit all three gradient terms. Specifically, we have $\mathcal{L}_{\text{Full}} = \mathcal{L}_{\text{CD}} + \mathcal{L}_{\text{KL}}$, where $\mathcal{L}_{\text{CD}}$ is

$$\mathcal{L}_{\text{CD}} = \mathbb{E}_{p_D(x)}[E_\theta(x)] - \mathbb{E}_{\text{stop\_grad}(q_\theta(x'))}[E_\theta(x')], \tag{5}$$

and the ignored KL divergence term corresponds to the following KL loss:

$$\mathcal{L}_{\text{KL}} = \mathbb{E}_{q_\theta(x)}[E_{\text{stop\_grad}(\theta)}(x)] + \mathbb{E}_{q_\theta(x)}[\log q_\theta(x)]. \tag{6}$$

Figure 2: Illustration of our overall proposed framework for training EBMs. EBMs are trained with contrastive divergence, where the energy function decreases the energy of real data samples (green dot) and increases the energy of hallucinations (red dot). EBMs are further trained with a KL loss which encourages generated hallucinations (shown as a solid red ball) to have low underlying energy and high diversity (shown as blue balls). Red/green arrows indicate forward computation while dashed arrows indicate gradient backpropagation.

Despite being difficult to estimate, we show that $\mathcal{L}_{\text{KL}}$ is a useful tool for both speeding up and stabilizing training of EBMs.

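To make the stop-gradient placements in Equations 5 and 6 concrete, the sketch below assembles the full objective in PyTorch. The function names (`full_loss`, `entropy_loss_fn`) are hypothetical, and the frozen-copy trick is only one plausible way to realize $E_{\text{stop\_grad}(\theta)}$; it is a sketch under these assumptions, not the authors' released implementation.

```python
# Sketch of L_Full = L_CD + L_KL (Eqs. 5-6), assuming `energy_fn` is an
# nn.Module, `x_pos` are data samples, and `x_neg` are Langevin samples
# that still carry gradients through the sampling procedure.
import copy
import torch


def full_loss(energy_fn, x_pos, x_neg, entropy_loss_fn):
    # Eq. 5: stop_grad on the sampled negatives; gradients flow into theta
    # only through the energy evaluations.
    l_cd = energy_fn(x_pos).mean() - energy_fn(x_neg.detach()).mean()

    # Eq. 6, first term: E_{stop_grad(theta)}(x) evaluated on samples that
    # keep their dependence on theta through the Langevin updates. One way
    # to stop gradients w.r.t. theta only is to use a frozen copy of the net.
    frozen = copy.deepcopy(energy_fn)
    for p in frozen.parameters():
        p.requires_grad_(False)
    l_opt = frozen(x_neg).mean()

    # Eq. 6, second term: (negative) entropy of q_theta, approximated with a
    # nearest-neighbor estimator (see Section 2.2); supplied as a callable.
    l_ent = entropy_loss_fn(x_neg)

    return l_cd + l_opt + l_ent
```
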
We provide derivations showing the equivalence of the gradients of $\mathcal{L}_{\text{Full}}$ and those of Equation 2 in the appendix, where the stop-gradient operators are necessary to ensure correct gradients. Figure 2 illustrates the overall effects of both losses. Equation 5 encourages the energy function to assign low energy to real samples and high energy to generated samples. However, only optimizing Equation 5 often leads to an adversarial mode in which the energy function learns to simply generate an energy landscape that makes sampling difficult. The KL divergence term counteracts this effect and encourages sampling to closely approximate the underlying distribution $p_\theta(x)$, by encouraging samples to be both low energy under the energy function and diverse. Empirically, we find that including the KL term significantly improves stability, generation quality, and robustness to different model architectures (Figure 8). Next, we discuss our approach towards estimating this KL divergence.

### 2.2 Estimating the Missing Gradient Term

Estimating $\mathcal{L}_{\text{KL}}$ can be further decomposed into two separate objectives: minimizing the energy of samples from $q_\theta(x)$, which we refer to as $\mathcal{L}_{\text{opt}}$ (Equation 7), and maximizing the entropy of samples from $q_\theta(x)$, which we refer to as $\mathcal{L}_{\text{ent}}$ (Equation 8).

**Minimizing Sampler Energy.** To minimize the energy of samples from $q_\theta(x)$, we can directly differentiate through both the energy function and MCMC sampling. We follow recent work in EBMs and utilize Langevin dynamics (Du & Mordatch, 2019; Nijkamp et al., 2019b; Grathwohl et al., 2019) for our MCMC transition kernel, and note that each step of Langevin sampling is fully differentiable with respect to the underlying energy function parameters. Precisely, $\mathcal{L}_{\text{opt}}$ is given by

$$\mathcal{L}_{\text{opt}} = \mathbb{E}_{q_\theta(x'_0, \ldots, x'_t)}\!\left[E_{\text{stop\_grad}(\theta)}\!\big(x'_{t-1} - \lambda \nabla_{x'} E_\theta(x'_{t-1}) + \omega\big)\right], \tag{7}$$

where $\omega \sim \mathcal{N}(0, \lambda)$ and $x'_i$ represents the $i$-th step of Langevin sampling. To reduce the memory overhead of this differentiation procedure, we only differentiate through the last step of Langevin sampling, as also done in (Vahdat et al., 2020). In the appendix we show that this leads to the same effect as differentiating through all of Langevin sampling.

**Entropy Estimation.** To maximize the entropy of samples from $q_\theta(x)$, we use a non-parametric nearest-neighbor entropy estimator (Beirlant et al., 1997), which is shown to be mean-square consistent (Kozachenko & Leonenko, 1987) with a root-n convergence rate (Tsybakov & Van der Meulen, 1996). The entropy $H$ of a distribution $p(x)$ can be estimated through a set $X = \{x_1, x_2, \ldots, x_n\}$ of $n$ different points sampled from $p(x)$ as

$$H(p_\theta(x)) \approx \frac{1}{n} \sum_{i=1}^{n} \ln\big(n \cdot \mathrm{NN}(x_i, X)\big) + O(1),$$

where the function $\mathrm{NN}(x_i, X)$ denotes the nearest-neighbor distance of $x_i$ to any other data point in $X$. Based on the above entropy estimator, we write $\mathcal{L}_{\text{ent}}$ as the entropy loss

$$\mathcal{L}_{\text{ent}} = \mathbb{E}_{q_\theta(x)}[-\log(\mathrm{NN}(x, B))], \tag{8}$$

where we measure the nearest neighbor with respect to a set $B$ of 100 past samples from MCMC chains. We utilize L2 distance as the metric for computing nearest neighbors. This type of nearest-neighbor entropy estimator is known to scale poorly to high dimensions (requiring an exponential number of samples to yield an accurate entropy estimate). However, in our setting, we do not need an accurate estimate of entropy. Instead, our computation of entropy is utilized as a fast regularizer to prevent sampling from collapsing.

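As a rough illustration of Equations 7 and 8, the sketch below differentiates through only the final Langevin step and computes the nearest-neighbor entropy loss against a buffer of past samples. The helper names, step size, and noise scale are illustrative assumptions rather than the paper's exact values.

```python
# Sketch of L_opt (differentiate through only the last Langevin step) and
# L_ent (nearest-neighbor entropy against a buffer of past MCMC samples).
import copy
import torch


def last_step_opt_loss(energy_fn, x, step_size=10.0, noise_scale=0.005):
    """Eq. 7 sketch: energy of the last Langevin step; gradients reach theta
    only through the sampling update (the evaluation uses frozen parameters)."""
    x = x.detach().requires_grad_(True)
    grad_x = torch.autograd.grad(energy_fn(x).sum(), x, create_graph=True)[0]
    x_next = x - step_size * grad_x + noise_scale * torch.randn_like(x)

    frozen = copy.deepcopy(energy_fn)          # E_{stop_grad(theta)}
    for p in frozen.parameters():
        p.requires_grad_(False)
    return frozen(x_next).mean()


def entropy_loss(x, buffer):
    """Eq. 8 sketch: maximize entropy by maximizing the log nearest-neighbor
    L2 distance to a buffer B of past MCMC samples."""
    flat_x = x.flatten(1)                      # (n, D)
    flat_b = buffer.flatten(1)                 # (|B|, D)
    dists = torch.cdist(flat_x, flat_b)        # pairwise L2 distances
    nn_dist = dists.min(dim=1).values          # distance to nearest neighbor
    return -torch.log(nn_dist + 1e-8).mean()   # minimizing this maximizes entropy
```
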
### 2.3 Data Augmentation Transitions

Langevin sampling, our MCMC transition kernel, is prone to falling into local probability modes (Neal, 2011). In the image domain, this manifests as sampling chains that always converge to a fixed image (Du & Mordatch, 2019). A core difficulty is that two qualitatively similar images can be far apart in the input domain on which sampling is applied. While $\mathcal{L}_{\text{KL}}$ serves as a regularizer to prevent sampling collapse in Langevin dynamics, Langevin dynamics alone is not enough to encourage large jumps in a finite number of steps. It is further beneficial for an individual sampling chain to be able to mix between probability modes.

    Algorithm 1: EBM training algorithm
    Input: data distribution p_D(x), step size λ, number of steps K, data augmentation D(·),
           stop-gradient operator Ω(·), EBM E_θ(·), replay buffer B
    while not converged do
        Sample positive data: x_i^+ ~ p_D
        Initialize negatives: x_i^0 ~ B with 99% probability, and from uniform noise otherwise
        X ← B  (set used for nearest-neighbor entropy calculation)
        Apply data augmentation to samples: x_i^0 = D(x_i^0)
        Generate samples using Langevin dynamics:
        for sample step k = 1 to K do
            x_i^{k-1} = Ω(x_i^{k-1})   (so gradients flow only through the final step)
            x_i^k ← x_i^{k-1} − λ ∇_x E_θ(x_i^{k-1}) + ω,  ω ~ N(0, σ)
        end for
        Generate two variants of the samples, with and without gradient propagation:
            x_i^- = Ω(x_i^K),  x̂_i^- = x_i^K
        Optimize objective L_CD + L_KL with respect to θ:
            L_CD = (1/N) Σ_i ( E_θ(x_i^+) − E_θ(x_i^-) )
            L_KL = (1/N) Σ_i ( E_{Ω(θ)}(x̂_i^-) − log NN(x̂_i^-, X) )
            Δθ ← ∇_θ (L_CD + L_KL)
        Update θ based on Δθ using the Adam optimizer
        Update replay buffer: B ← B ∪ {x_i^-}
    end while

To encourage greater exploration between similar inputs in our model, we propose to augment chains of MCMC sampling with periodic data augmentation transitions that encourage movement between similar inputs. In particular, we utilize a combination of color, horizontal flip, rescaling, and Gaussian blur augmentations. Such combinations of augmentations have recently seen success in unsupervised learning (Chen et al., 2020). Specifically, during training, we initialize MCMC sampling from a data augmentation applied to an input sampled from the buffer of past samples. At test time, during generation, we apply a random augmentation to the input after every 20 steps of Langevin sampling. We illustrate this process in the bottom of Figure 2. Data augmentation transitions are always taken.

Figure 3: Illustration of our multi-scale EBM architecture. Our energy function over an image is defined compositionally as the sum of energy functions on different resolutions of an image.

### 2.4 Compositional Multi-scale Generation

To encourage energy functions to focus on features at both low and high resolutions, we define our energy function as the composition (sum) of a set of energy functions operating on different scales of an image, as illustrated in Figure 3. Since the downsampling operation is fully differentiable, Langevin-based sampling can be directly applied to the composed energy function. In our experiments, we utilize the full, half, and quarter resolution images as input and show that this improves generation performance.

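The sketch below shows one way such a compositional multi-scale energy could be written: the total energy is the sum of per-resolution energies on full, half, and quarter scale versions of the input. The class name, factory argument, and use of bilinear average downsampling are assumptions for illustration.

```python
# Sketch of a compositional multi-scale energy: the total energy is the sum
# of per-resolution energies on full, 1/2, and 1/4 scale versions of x.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleEnergy(nn.Module):  # hypothetical name
    def __init__(self, make_energy_net):
        super().__init__()
        # One energy network per resolution (full, 1/2, 1/4).
        self.nets = nn.ModuleList([make_energy_net() for _ in range(3)])
        self.scales = [1.0, 0.5, 0.25]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        total = 0.0
        for net, s in zip(self.nets, self.scales):
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            total = total + net(xs)  # each term is a (batch,) energy
        return total


# Because downsampling is differentiable, Langevin sampling can use the
# gradient of the summed energy directly, e.g.:
# grad = torch.autograd.grad(ms_energy(x).sum(), x)[0]
```
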
    Algorithm 2: EBM sampling algorithm
    Input: number of data augmentation applications N, step size λ, number of steps K,
           data augmentation D(·), EBM E_θ(·)
    Generate samples through N iterative rounds of data augmentation and Langevin dynamics:
    for augmentation step n = 1 to N do
        Apply data augmentation to samples: x^0 = D(x^0)
        Run K steps of Langevin dynamics:
        for sample step k = 1 to K do
            x^k ← x^{k-1} − λ ∇_x E_θ(x^{k-1}) + ω,  ω ~ N(0, σ)
        end for
        Iteratively refine samples: x^0 = x^K
    end for
    Final output: x = x^0

### 2.5 Training Algorithm and Sampling

We provide an overview of our overall proposed training algorithm in Algorithm 1. Our overall approach is similar to the algorithm presented in (Du & Mordatch, 2019), with two notable differences. First, we apply data augmentation to samples drawn from the replay buffer. Second, we propagate gradients through sampling to efficiently compute $\mathcal{L}_{\text{KL}}$. We further present the sample generation algorithm for a trained EBM in Algorithm 2. We iteratively apply N steps of data augmentation and Langevin sampling to mimic the replay buffer utilized during training.

## 3 Experiments

We perform empirical experiments to validate the following questions: (1) What are the effects of each proposed component on training EBMs? (2) Are our trained EBMs able to perform well on downstream applications of EBMs, such as image generation, out-of-distribution detection, and concept compositionality?

### 3.1 Experimental Setup

We investigate the efficacy of our proposed approach. Models are trained using the Adam optimizer (Kingma & Ba, 2015), on a single 32GB Volta GPU for 1 day for CIFAR-10, and on 8 32GB Volta GPUs for 3 days for the CelebA-HQ, LSUN, and ImageNet 32x32 datasets. We provide detailed training configurations in the appendix. Our improvements are largely built on top of the EBM training framework proposed in (Du & Mordatch, 2019). We use a buffer size of 10000, with a resampling rate of 0.1%. Our approach is significantly more stable than IGEBM, allowing us to remove aspects of regularization in (Du & Mordatch, 2019): we remove the clipping of gradients in Langevin sampling as well as spectral normalization on the weights of the network. In addition, we add self-attention blocks and layer normalization blocks to the residual networks of our trained models. In multi-scale architectures, we utilize 3 different resolutions of an image: the original image resolution, half the image resolution, and a quarter of the image resolution. We report detailed architectures in the appendix. When evaluating models, we utilize an EMA model with EMA weight 0.9999.

### 3.2 Image Generation

We evaluate our approach on the CIFAR-10, ImageNet 32x32 (Deng et al., 2009), and CelebA-HQ (Karras et al., 2017) datasets. Additional quantitative comparisons, results, and ablations can be found in the appendix of the paper.

**Image Quality.** We evaluate our approach on unconditional generation in Table 1.

Table 1: Inception and FID scores for generations of CIFAR-10, CelebA-HQ, and ImageNet 32x32 images. All other numbers are taken directly from the corresponding papers. On CIFAR-10, our approach outperforms past EBM approaches and achieves performance close to SNGAN. On CelebA-HQ, our approach achieves performance close to that of SSGAN. On ImageNet 32x32, our approach achieves similar performance to the PixelIQN (large) model with around one tenth the parameters.

| Model | Inception | FID |
|---|---|---|
| **CIFAR-10 Unconditional** | | |
| PixelCNN (Van Oord et al., 2016) | 4.60 | 65.9 |
| Multigrid EBM (Gao et al., 2018) | 6.56 | 40.1 |
| IGEBM (Ensemble) (Du & Mordatch, 2019) | 6.78 | 38.2 |
| Short-Run EBM (Nijkamp et al., 2019b) | 6.21 | 44.5 |
| DCGAN (Radford et al., 2016) | 6.40 | 37.1 |
| WGAN + GP (Gulrajani et al., 2017) | 6.50 | 36.4 |
| NCSN (Song & Ermon, 2019) | 8.87 | 25.3 |
| Ours | 7.85 | 25.1 |
| SNGAN (Miyato et al., 2018) | 8.22 | 21.7 |
| SSGAN (Chen et al., 2019) | - | 19.7 |
| **CelebA-HQ 128x128 Unconditional** | | |
| Ours | - | 28.78 |
| SSGAN (Chen et al., 2019) | - | 24.36 |
| **ImageNet 32x32 Unconditional** | | |
| PixelCNN (van den Oord et al., 2016) | 7.16 | 40.51 |
| PixelIQN (small) (Ostrovski et al., 2018) | 7.29 | 37.62 |
| PixelIQN (large) (Ostrovski et al., 2018) | 8.69 | 26.56 |
| IGEBM (Du & Mordatch, 2019) | 5.85 | 62.23 |
| Ours | 8.73 | 32.48 |

Figure 4: Randomly selected unconditional 128x128 CelebA-HQ images generated from our trained EBM model. Samples are relatively diverse with limited artifacts.

Figure 5: Visualization of Langevin dynamics sampling chains on an EBM trained on CelebA-HQ 128x128. Samples travel between different modes of images. Each consecutive image represents 30 steps of sampling, with data augmentation transitions every 60 steps. (Panel labels: generated images using more sampling steps; generated images using less sampling steps.)

Figure 6: Output samples after running Langevin dynamics from a fixed initial sample (center of square), with or without intermittent data-augmentation transitions. Without data-augmentation transitions, all samples converge to the same image, while data augmentations enable chains to separate.

We utilize the Inception (Salimans et al., 2016) and FID (Heusel et al., 2017) implementations from (Du & Mordatch, 2019) to evaluate samples. On CIFAR-10, we find that our approach outperforms many past EBM approaches in both FID and Inception score using a similar number of parameters. We find that our performance is slightly worse than that of SNGAN and SSGAN. On CelebA-HQ, we find that our approach outperforms our reimplementation of SNGAN using default ImageNet hyperparameters, and is close to the reported numbers of SSGAN. Finally, on ImageNet 32x32, we find that our approach outperforms previous EBM models and achieves performance comparable to that of the large PixelIQN model in terms of FID and Inception score. We note, however, that our model is significantly smaller than the PixelIQN model, with one tenth the number of parameters. We present example qualitative images from CelebA-HQ in Figure 4 and present qualitative images on other datasets in the appendix of the paper. While our overall generative performance is not the best reported, we emphasize that it improves the existing generative performance of EBMs, which have unique benefits such as compositionality (Section 3.4).

**Effect of Data Augmentation.** We evaluate the effect of data augmentation on sampling in EBMs. In Figure 5 we show that by combining Langevin sampling with data augmentation transitions, we enable chains to mix across different images, whereas prior works have shown Langevin dynamics converging to fixed images. In Figure 6 we show that given a fixed random noise initialization, data augmentation transitions enable sampling to reach a diverse set of samples, while sampling without data augmentation transitions leads all chains to converge to the same face.

**Mode Convergence.** We further investigate high-likelihood modes of our model.
In Figure 7, we compare very low energy samples (obtained after running gradient descent for 1000 steps on an energy function) for both our model with data augmentation and KL loss and the IGEBM model. Due to improved mode exploration, we find that low-temperature samples under our model with data augmentation/KL loss reflect typical high-likelihood modes in the training dataset, while our baseline model converges to odd shapes, as also noted in (Nijkamp et al., 2019a).

**Stability/KL Loss.** EBMs are difficult to train and are sensitive to both the exact architecture and to various hyperparameters. We found that the addition of a KL term to our training objective significantly improved the stability of training, by encouraging the sampling distribution to match the model distribution. In Figure 8, we measure training stability via the energy difference between real and generated images. Stable training occurs when the energy difference is close to zero. Without $\mathcal{L}_{\text{KL}}$, we found that training our model with or without self-attention was unstable, with energy differences spiking. Adding spectral normalization stabilizes training, but the addition of self-attention once again destabilizes training. In contrast, with $\mathcal{L}_{\text{KL}}$, the addition of self-attention is also stable. We further compare Inception scores over training in Figure 8 and find that while spectral normalization stabilizes training, it does so at the expense of decreased improvement in Inception score.

Figure 7: Illustration of very low temperature samples from our model with KL loss and data augmentation compared to those from IGEBM on CIFAR-10 (left) and CelebA-HQ (right). After a large number of sampling steps, IGEBM converges to strange hues on CIFAR-10 and random textures on CelebA-HQ. In contrast, due to better mode exploration, adding both improvements maintains naturalistic image modes on both CIFAR-10 and CelebA-HQ.

Figure 8: The KL loss significantly improves the stability of EBM training. Stable EBM training occurs when the energy difference (illustrated in the bottom row) is roughly zero. We find that without the KL loss term (left column), EBM training quickly diverges (bottom left). Spectral normalization prevents divergence of energies, but cannot be combined with self-attention without destabilizing training. The KL loss (right column) maintains an energy difference of 0 (bottom right), even with the addition of self-attention. Inception scores rise rapidly with the KL loss (top right), but fall without the KL loss (top left) due to destabilized training. Spectral norm prevents the Inception score from falling, but the score also does not increase much due to constraints on the architecture.

The addition of the KL term itself is not too expensive, simply requiring an additional nearest-neighbor computation during training, a relatively insignificant cost compared to the number of negative sampling steps used during training. With an intermediate number of negative sampling steps (60 steps) during training, adding the KL term incurs a roughly 20% computational cost. This difference is further decreased with a larger number of sampling steps.

**Ablations.** We ablate each portion of our proposed approach in Table 2. We find that each of our proposed components yields significant gains in generation performance. In particular, we find a large gain in overall generative performance when adding the KL loss.
This is in part due to a large boost in training stability (Figure 8), enabling significantly longer training times with both multiscale sampling and data augmentation.

Table 2: Ablations of each proposed component on CIFAR-10 generation as well as the corresponding stability of training. The KL loss significantly stabilizes EBM training, and enables large boosts in generation performance via longer training.

| KL Loss $\mathcal{L}_{\text{opt}}$ | KL Loss $\mathcal{L}_{\text{ent}}$ | Data Aug | Multiscale Sampling | Inception Score | FID | Stability |
|---|---|---|---|---|---|---|
| No | No | No | No | 3.57 | 169.74 | No |
| No | No | Yes | No | 5.13 | 133.84 | No |
| No | No | Yes | Yes | 6.14 | 53.78 | No |
| Yes | No | Yes | Yes | 6.79 | 32.67 | Yes |
| Yes | Yes | Yes | Yes | 7.85 | 25.08 | Yes |

Figure 9: Plots of the gradient magnitude of $\mathcal{L}_{\text{KL}}$ and $\mathcal{L}_{\text{CD}}$ across training iterations. The influence and relative magnitude of both loss terms stay constant throughout training.

**KL Gradient.** We plot the overall gradient magnitudes of $\mathcal{L}_{\text{CD}}$ and $\mathcal{L}_{\text{KL}}$ when training an EBM on CIFAR-10 in Figure 9. We find that the relative magnitude of the gradients of both training objectives remains constant across training, and that the gradient of the KL objective is non-negligible.

### 3.3 Out of Distribution Robustness

Energy-based models (EBMs) have also been shown to exhibit robustness to both out-of-distribution and adversarial samples (Du & Mordatch, 2019; Grathwohl et al., 2019; 2020a). We evaluate out-of-distribution detection of our trained energy function through log-likelihood, using the AUROC evaluation metric proposed in Hendrycks & Gimpel (2016). We similarly evaluate out-of-distribution detection of an unconditional CIFAR-10 model.

Table 3: AUROC values for out-of-distribution detection on unconditional models trained on CIFAR-10 using $\log(p_\theta(x))$. Our approach performs the best out of all methods. *JEM is not directly comparable as it uses supervised labels.

| | PixelCNN++ | Glow | IGEBM | JEM* | VERA | Ours |
|---|---|---|---|---|---|---|
| SVHN | 0.32 | 0.24 | 0.63 | 0.67 | 0.83 | 0.91 |
| Textures | 0.33 | 0.27 | 0.48 | 0.60 | - | 0.88 |
| CIFAR-10 Interp | 0.71 | 0.51 | 0.70 | 0.65 | 0.86 | 0.65 |
| CIFAR-100 | 0.63 | 0.55 | 0.50 | 0.67 | 0.73 | 0.83 |
| Average | 0.50 | 0.39 | 0.57 | 0.65 | - | 0.82 |

**Results.** We present out-of-distribution results in Table 3, comparing with both likelihood models and EBMs that use log-likelihood to detect outliers. We find that our approach significantly outperforms other baselines, with the exception of CIFAR-10 interpolations. We note that JEM (Grathwohl et al., 2019) further requires supervised labels to train the energy function, which has been shown to improve out-of-distribution performance. We posit that by more efficiently exploring modes of the energy distribution at training time, we are able to reduce the spurious modes of the energy function and thus improve out-of-distribution performance.

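As a concrete illustration of this evaluation protocol, the sketch below scores inputs by their unnormalized log-likelihood, $-E_\theta(x)$ (the partition function is constant and does not affect ranking), and computes AUROC with scikit-learn. The function name and the assumption of pre-batched image tensors are illustrative; this is not the authors' evaluation code.

```python
# Sketch of EBM-based out-of-distribution scoring: since
# log p_theta(x) = -E_theta(x) - log Z(theta) and log Z is a constant,
# -E_theta(x) can serve directly as the OOD score ranked by AUROC.
import torch
from sklearn.metrics import roc_auc_score


@torch.no_grad()
def ood_auroc(energy_fn, in_dist_images, ood_images):
    # Higher score should indicate "more in-distribution".
    score_in = -energy_fn(in_dist_images)    # shape: (n_in,)
    score_ood = -energy_fn(ood_images)       # shape: (n_ood,)
    labels = torch.cat([torch.ones_like(score_in), torch.zeros_like(score_ood)])
    scores = torch.cat([score_in, score_ood])
    return roc_auc_score(labels.cpu().numpy(), scores.cpu().numpy())
```
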
### 3.4 Compositionality

Energy-based models (EBMs) have the ability to compose with other models at generation time (Hinton, 1999; Haarnoja et al., 2017; Du et al., 2020a). We investigate to what extent EBMs trained under our new framework can also exhibit compositionality; see (Du et al., 2020a) for a discussion of various compositional operators and applications in EBMs. In particular, we train independent EBMs $E(x|c_1)$, $E(x|c_2)$, $E(x|c_3)$ that learn conditional generative distributions over concept factors $c$, such as facial expression. We test whether we can compose these independent energy functions together to generate images with each concept factor simultaneously. We test compositions on the CelebA-HQ dataset, where we train separate EBMs on the face attributes of age, gender, smiling, and wavy hair, and on a rendered Blender dataset, where we train separate EBMs on the object attributes of size, position, rotation, and identity.

Figure 10: Examples of EBM compositional generations across different object attributes (Size AND Type; Size AND Type AND Position; Size AND Type AND Position AND Rotation). Our model successfully composes attributes and constructs high resolution, globally coherent compositional renderings, including fine detail such as lighting and reflections.

**Qualitative Results.** We present qualitative results on compositions of energy functions on CelebA-HQ in Figure 11. We consider compositional generation on the factors of young, female, smiling, and wavy hair. Compared to baselines, our approach is able to successfully generate images with each of the conditioned factors, with faces of significantly higher resolution than the baselines. In Figure 10, we further consider compositions of energy functions over object attributes. We consider compositional generation on the factors of size, type, position, and rotation. Again, we find that each associated image generation exhibits the corresponding conditioned attributes. Images are further visually consistent in terms of lighting, shadows, and reflections. We note that, different from most past work, generation for these combinations of factors is only specified at generation time, with the models being trained independently. Our results indicate that our framework for training EBMs is a promising direction for high resolution compositional visual generation.

Table 4: Compositional generation accuracy across different models trained on the CelebA-HQ dataset. Generation accuracy is measured through attribute predictions from a ResNet-18 classifier trained to regress the young, female, smiling, and wavy hair attributes in CelebA-HQ. Our approach achieves the best performance.

| Model | Young | Female | Smiling | Wavy |
|---|---|---|---|---|
| JVAE (Young) | 0.543 | - | - | - |
| JVAE (Young & Female) | 0.440 | 0.554 | - | - |
| JVAE (Young & Female & Smiling) | 0.488 | 0.520 | 0.526 | - |
| JVAE (Young & Female & Smiling & Wavy) | 0.416 | 0.584 | 0.561 | 0.416 |
| IGEBM (Young) | 0.506 | - | - | - |
| IGEBM (Young & Female) | 0.367 | 0.160 | - | - |
| IGEBM (Young & Female & Smiling) | 0.604 | 0.648 | 0.625 | - |
| IGEBM (Young & Female & Smiling & Wavy) | 0.550 | 0.545 | 0.445 | 0.781 |
| Ours (Young) | 0.847 | - | - | - |
| Ours (Young & Female) | 0.770 | 0.583 | - | - |
| Ours (Young & Female & Smiling) | 0.906 | 0.718 | 0.968 | - |
| Ours (Young & Female & Smiling & Wavy) | 0.922 | 0.625 | 0.843 | 0.906 |

**Quantitative Comparison.** We quantitatively compare compositional generations of our model with the IGEBM and JVAE (Vedantam et al., 2018) models on the CelebA-HQ dataset. We assess the compositional generation accuracy of different models by measuring the accuracy with which a pretrained ResNet-18 classifier can recover the underlying conditioned attributes. In Table 4, we find that our model has significantly higher attribute recovery than the baselines across all compositions of attributes.

Figure 11: Qualitative comparisons of compositionality on CelebA-HQ faces. Our approach generates much more realistic looking faces than IGEBM (Du & Mordatch, 2019) and JVAE (Vedantam et al., 2018), with each conditioned attribute.

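The sketch below shows one way zero-shot composition can be realized at generation time: the energies of independently trained conditional EBMs are summed (a product of experts) and Langevin dynamics samples from the resulting distribution. The hyperparameters, shapes, and argument names are illustrative assumptions.

```python
# Sketch of zero-shot compositional generation: independently trained
# conditional EBMs are composed by summing their energies, and Langevin
# dynamics samples from the resulting product distribution.
import torch


def compose_and_sample(ebms, shape=(16, 3, 128, 128), steps=200,
                       step_size=10.0, noise_scale=0.005):
    x = torch.rand(*shape)  # initialize from uniform noise
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        # Product of experts: p(x | c1, ..., cn) proportional to exp(-sum_i E_i(x))
        total_energy = sum(ebm(x).sum() for ebm in ebms)
        grad = torch.autograd.grad(total_energy, x)[0]
        x = x - step_size * grad + noise_scale * torch.randn_like(x)
    return x.detach()
```
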
## 4 Related Work

Our work is related to a large, growing body of work on different approaches for training EBMs. Our approach is based on contrastive divergence (Hinton, 2002), where an energy function is trained to contrast negative samples drawn from a model distribution against real data. In recent years, such approaches have been applied to the image domain (Salakhutdinov & Hinton, 2009; Xie et al., 2016; Gao et al., 2018; Du & Mordatch, 2019). (Gao et al., 2018) also proposes a multiscale approach towards generating images from EBMs but, different from our work, uses each subscale EBM to initialize the generation of the next EBM, while we jointly sample across all resolutions. Our work builds on these existing works and aims to provide improvements in generation and stability.

A difficulty with contrastive divergence training is negative sample generation. To sidestep this issue, a separate line of work utilizes an auxiliary network to amortize the negative portions of the sampling procedure (Kim & Bengio, 2016; Kumar et al., 2019; Han et al., 2019; Xie et al., 2018a; Song & Ou, 2018; Dai et al., 2019; Che et al., 2020; Grathwohl et al., 2020a; Arbel et al., 2020). One line of work (Kim & Bengio, 2016; Kumar et al., 2019; Song & Ou, 2018) utilizes a separate generator network for negative image sample generation. In contrast, (Xie et al., 2018a) utilizes a generator to warm-start generations for negative samples, and (Han et al., 2019) minimizes a divergence triangle between three models. While such approaches enable better qualitative generation, they also lose some of the flexibility of the EBM formulation: for example, separate energy models can no longer be composed together for generation.

In addition, other approaches to training EBMs investigate separate objectives for training. One such approach is score matching, where the gradients of an energy function are trained to match the gradients of real data (Hyvärinen, 2005; Song & Ermon, 2019), or a related denoising approach (Sohl-Dickstein et al., 2015; Saremi et al., 2018; Ho et al., 2020). Additional objectives include noise contrastive estimation (Gao et al., 2020), learned Stein discrepancies (Grathwohl et al., 2020b), and learned f-divergences (Yu et al., 2020).

Most prior work in contrastive divergence has ignored the KL term (Hinton, 1999; Salakhutdinov & Hinton, 2009). A notable exception is (Ruiz & Titsias, 2019), which obtains a similar KL divergence term to ours. Ruiz & Titsias (2019) use a high-variance REINFORCE estimator to estimate the gradient of the KL term, while our approach relies on auto-differentiation and nearest-neighbor entropy estimators. Differentiation through model generation procedures has previously been explored in other models (Finn & Levine, 2017; Metz et al., 2016). Other related entropy estimators include those based on Stein's identity (Liu et al., 2017) and MINE (Belghazi et al., 2018). In contrast to these approaches, our entropy estimator relies only on a nearest-neighbor calculation and does not require training an independent neural network.

## 5 Conclusion

We propose a simple and general framework for improving generation and ease of training of EBMs. We show that the framework enables high resolution compositional image generation and out-of-distribution robustness. In the future, we are interested in further computational scaling of our framework and its application to additional domains such as text, video, and reasoning.

## 6 Acknowledgements

We would like to thank Jascha Sohl-Dickstein, Bo Dai, Rif A. Saurous, Simon Osindero, Alex Alemi, and the anonymous reviewers for their helpful feedback on initial versions of the manuscript. Yilun Du is supported by an NSF GRFP fellowship. This work is in part supported by ONR MURI N00014-18-1-2846 and IBM Thomas J. Watson Research Center CW3031624.

## References

Michael Arbel, Liang Zhou, and Arthur Gretton. Generalized energy based models. arXiv preprint arXiv:2003.05033, 2020.

Sergey Bartunov, Jack W Rae, Simon Osindero, and Timothy P Lillicrap. Meta-learning deep energy-based memory models. arXiv preprint arXiv:1910.02720, 2019.

Jan Beirlant, E. Dudewicz, L. Györfi, and E. C. van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6, 1997.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. MINE: Mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018.

Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling. arXiv preprint arXiv:2003.06060, 2020.

Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, and Neil Houlsby. Self-supervised GANs via auxiliary rotation loss. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12154-12163, 2019.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.

Bo Dai, Zhen Liu, Hanjun Dai, Niao He, Arthur Gretton, Le Song, and Dale Schuurmans. Exponential family estimation via adversarial dynamics embedding. In Advances in Neural Information Processing Systems, pp. 10979-10990, 2019.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, and Marc'Aurelio Ranzato. Residual energy-based models for text generation. arXiv preprint arXiv:2004.11714, 2020.

Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. arXiv preprint arXiv:1903.08689, 2019.

Yilun Du, Toru Lin, and Igor Mordatch. Model based planning with energy based models. arXiv preprint arXiv:1909.06878, 2019.

Yilun Du, Shuang Li, and Igor Mordatch. Compositional visual generation with energy based models. In Advances in Neural Information Processing Systems, 2020a.

Yilun Du, Joshua Meier, Jerry Ma, Rob Fergus, and Alexander Rives. Energy-based models for atomic-resolution protein conformations. arXiv preprint arXiv:2004.13167, 2020b.

Chelsea Finn and Sergey Levine. Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. arXiv preprint arXiv:1710.11622, 2017.

Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, and Ying Nian Wu. Learning generative ConvNets via multigrid modeling and sampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9155-9164, 2018.

Ruiqi Gao, Erik Nijkamp, Diederik P Kingma, Zhen Xu, Andrew M Dai, and Ying Nian Wu. Flow contrastive estimation of energy-based models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7518-7528, 2020.

Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. arXiv preprint arXiv:1912.03263, 2019.

Will Grathwohl, Jacob Kelly, Milad Hashemi, Mohammad Norouzi, Kevin Swersky, and David Duvenaud. No MCMC for me: Amortized sampling for fast and stable training of energy-based models. arXiv preprint arXiv:2010.04230, 2020a.

Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, and Richard Zemel. Cutting out the middle-man: Training and evaluating energy-based models without sampling. arXiv preprint arXiv:2002.05616, 2020b.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. In NIPS, 2017.

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.

Tian Han, Erik Nijkamp, Xiaolin Fang, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Divergence triangle for joint training of generator model, energy-based model, and inferential model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8670-8679, 2019.

Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.

Geoffrey E Hinton. Products of experts. International Conference on Artificial Neural Networks, 1999.

Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020.

Aapo Hyvärinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(Apr):695-709, 2005.

John Ingraham, Adam Riesselman, Chris Sander, and Debora Marks. Learning protein structure with a differentiable simulator.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR, 2017.

Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

LF Kozachenko and Nikolai N Leonenko. Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9-16, 1987.

Rithesh Kumar, Anirudh Goyal, Aaron Courville, and Yoshua Bengio. Maximum entropy generators for energy-based models. arXiv preprint arXiv:1901.08508, 2019.

Kwonjoon Lee, Weijian Xu, Fan Fan, and Zhuowen Tu. Wasserstein introspective neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3702-3711, 2018.

Shuang Li, Yilun Du, Gido M van de Ven, and Igor Mordatch. Energy-based models for continual learning. arXiv preprint arXiv:2011.12216, 2020.

Qiang Liu and Dilin Wang. Learning deep energy models: Contrastive divergence vs. amortized MLE. arXiv preprint arXiv:1707.00797, 2017.

Yang Liu, Prajit Ramachandran, Qiang Liu, and Jian Peng. Stein variational policy gradient. arXiv preprint arXiv:1704.02399, 2017.

Siwei Lyu. Unifying non-maximum likelihood learning objectives with minimum KL contraction. In Advances in Neural Information Processing Systems, pp. 64-72, 2011.

Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. 2016.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.

Radford M Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2011.

Erik Nijkamp, Mitch Hill, Tian Han, Song-Chun Zhu, and Ying Nian Wu. On the anatomy of MCMC-based maximum likelihood learning of energy-based models. arXiv preprint arXiv:1903.12370, 2019a.

Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run MCMC toward energy-based model. In Advances in Neural Information Processing Systems, pp. 5232-5242, 2019b.

Georg Ostrovski, Will Dabney, and Rémi Munos. Autoregressive quantile networks for generative modeling. arXiv preprint arXiv:1806.05575, 2018.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.

Francisco J. R. Ruiz and Michalis K. Titsias. A contrastive divergence for combining variational inference and MCMC. arXiv preprint arXiv:1905.04062, 2019.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In David A. Van Dyk and Max Welling (eds.), AISTATS, volume 5 of JMLR Proceedings, pp. 448-455. JMLR.org, 2009. URL http://www.jmlr.org/proceedings/papers/v5/salakhutdinov09a.html.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.

Saeed Saremi, Arash Mehrjou, Bernhard Schölkopf, and Aapo Hyvärinen. Deep energy estimator networks. arXiv preprint arXiv:1805.08306, 2018.

Benjamin Scellier and Yoshua Bengio. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in Computational Neuroscience, 11:24, 2017.

Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.

Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, pp. 11918-11930, 2019.

Yunfu Song and Zhijian Ou. Learning neural random fields with inclusive auxiliary generators. arXiv preprint arXiv:1806.00271, 2018.

Alexandre B Tsybakov and E. C. van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support. Scandinavian Journal of Statistics, pp. 75-83, 1996.

Arash Vahdat, Evgeny Andriyash, and William Macready. Undirected graphical models as approximate posteriors. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 9680-9689. PMLR, 2020. URL http://proceedings.mlr.press/v119/vahdat20a.html.

Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with PixelCNN decoders. In NIPS, 2016.

Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.

Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy. Generative models of visually grounded imagination. In ICLR, 2018.

Jianwen Xie, Yang Lu, Song-Chun Zhu, and Yingnian Wu. A theory of generative ConvNet. In International Conference on Machine Learning, pp. 2635-2644, 2016.

Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Synthesizing dynamic patterns by spatial-temporal generative ConvNet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

Jianwen Xie, Yang Lu, Ruiqi Gao, and Ying Nian Wu. Cooperative learning of energy-based model and latent variable model via MCMC teaching. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018a.

Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and Ying Nian Wu. Learning descriptor networks for 3D shape synthesis and analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8629-8638, 2018b.

Lantao Yu, Yang Song, Jiaming Song, and Stefano Ermon. Training deep energy-based models with f-divergence minimization. arXiv preprint arXiv:2003.03463, 2020.