Energy-Inspired Models: Learning with Sampler-Induced Distributions

Dieterich Lawson (Stanford University) jdlawson@stanford.edu
George Tucker, Bo Dai (Google Research, Brain Team) {gjt, bodai}@google.com
Rajesh Ranganath (New York University) rajeshr@cims.nyu.edu

Abstract

Energy-based models (EBMs) are powerful probabilistic models [8, 44], but suffer from intractable sampling and density evaluation due to the partition function. As a result, inference in EBMs relies on approximate sampling algorithms, leading to a mismatch between the model and inference. Motivated by this, we consider the sampler-induced distribution as the model of interest and maximize the likelihood of this model. This yields a class of energy-inspired models (EIMs) that incorporate learned energy functions while still providing exact samples and tractable log-likelihood lower bounds. We describe and evaluate three instantiations of such models based on truncated rejection sampling, self-normalized importance sampling, and Hamiltonian importance sampling. These models outperform or perform comparably to the recently proposed Learned Accept/Reject Sampling algorithm [5] and provide new insights on ranking Noise Contrastive Estimation [34, 46] and Contrastive Predictive Coding [57]. Moreover, EIMs allow us to generalize a recent connection between multi-sample variational lower bounds [9] and auxiliary variable variational inference [1, 63, 59, 47]. We show how recent variational bounds [9, 49, 52, 42, 73, 51, 65] can be unified with EIMs as the variational family.

1 Introduction

Energy-based models (EBMs) have a long history in statistics and machine learning [16, 75, 44]. EBMs score configurations of variables with an energy function, which induces a distribution on the variables in the form of a Gibbs distribution.
Different choices of energy function recover well-known probabilistic models, including Markov random fields [36], (restricted) Boltzmann machines [64, 24, 30], and conditional random fields [41]. However, this flexibility comes at the cost of challenging inference and learning: both sampling and density evaluation of EBMs are generally intractable, which hinders the application of EBMs in practice.

Because of the intractability of general EBMs, practical implementations rely on approximate sampling procedures (e.g., Markov chain Monte Carlo (MCMC)) for inference. This creates a mismatch between the model and the approximate inference procedure, and it can lead to suboptimal performance and unstable training when approximate samples are used in the training procedure. Currently, most attempts to fix the mismatch lie in designing better sampling algorithms (e.g., Hamiltonian Monte Carlo [54], annealed importance sampling [53]) or exploiting variational techniques [35, 15, 14] to reduce the inference approximation error.

Equal contributions. Research performed while at New York University. Code and image samples: sites.google.com/view/energy-inspired-models.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Instead, we bridge the gap between the model and inference by directly treating the sampling procedure as the model of interest and optimizing the log-likelihood of the sampling procedure. We call these models energy-inspired models (EIMs) because they incorporate a learned energy function while providing tractable, exact samples. This shift in perspective aligns the training and sampling procedures, leading to principled and consistent training and inference. To accomplish this, we cast the sampling procedure as a latent variable model. This allows us to maximize variational lower bounds [33, 7] on the log-likelihood (cf. Kingma and Welling [38], Rezende et al. [61]).
To illustrate this, we develop and evaluate energy-inspired models based on truncated rejection sampling (Algorithm 1), self-normalized importance sampling (Algorithm 2), and Hamiltonian importance sampling (Algorithm 3). Interestingly, the model based on self-normalized importance sampling is closely related to ranking NCE [34, 46], suggesting a principled objective for training the noise distribution.

Our second contribution is to show that EIMs provide a unifying conceptual framework to explain many advances in constructing tighter variational lower bounds for latent variable models (e.g., [9, 49, 52, 42, 73, 51, 65]). Previously, each bound required a separate derivation and evaluation, and their relationship was unclear. We show that these bounds can be viewed as specific instances of auxiliary variable variational inference [1, 63, 59, 47] with different EIMs as the variational family. Based on general results for auxiliary latent variables, this immediately gives rise to a variational lower bound with a characterization of the tightness of the bound. Furthermore, this unified view highlights the implicit (potentially suboptimal) choices made and exposes the reusable components that can be combined to form novel variational lower bounds. Concurrently, Domke and Sheldon [19] note a similar connection; however, their focus is on the use of the variational distribution for posterior inference.

In summary, our contributions are:

- The construction of a tractable class of energy-inspired models (EIMs), which lead to consistent learning and inference. To illustrate this, we build models with truncated rejection sampling, self-normalized importance sampling, and Hamiltonian importance sampling and evaluate them on synthetic and real-world tasks. These models can be fit by maximizing a tractable lower bound on their log-likelihood.
- We show that EIMs with auxiliary variable variational inference provide a unifying framework for understanding recent tighter variational lower bounds, simplifying their analysis and exposing potentially sub-optimal design choices.

2 Background

In this work, we consider learned probabilistic models of data p(x). Energy-based models [44] define p(x) in terms of an energy function U(x):

p(x) = π(x) exp(−U(x)) / Z,

where π is a tractable prior distribution and Z = ∫ π(x) exp(−U(x)) dx is a generally intractable partition function. To fit the model, many approximate methods have been developed (e.g., pseudo log-likelihood [6], contrastive divergence [30, 67], the score matching estimator [31], minimum probability flow [66], noise contrastive estimation [28]) to bypass the calculation of the partition function. Empirically, previous work has found that convolutional architectures that score images (i.e., map x to a real number) tend to have strong inductive biases that match natural data (e.g., [70, 71, 72, 25, 22]). These networks are a natural fit for energy-based models. Because drawing exact samples from these models is intractable, samples are typically approximated by Monte Carlo schemes, for example, Hamiltonian Monte Carlo [55].

Alternatively, latent variables z allow us to construct complex distributions by defining the likelihood p(x) = ∫ p(x|z) p(z) dz in terms of tractable components p(z) and p(x|z). While marginalizing z is generally intractable, we can instead optimize a tractable lower bound on log p(x) using the identity

log p(x) = E_{q(z|x)}[log (p(x, z) / q(z|x))] + D_KL(q(z|x) || p(z|x)),  (1)

where q(z|x) is a variational distribution and the positive D_KL term can be omitted to form a lower bound commonly referred to as the evidence lower bound (ELBO) [33, 7]. The tightness of the bound is controlled by how accurately q(z|x) models p(z|x), so limited expressivity in the variational family can negatively impact the learned model.
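To make the intractability of the partition function concrete, here is a minimal NumPy sketch of the Gibbs density above. The double-well energy `U` and the standard-normal prior are illustrative choices (not from the paper); the point is that evaluating p(x) requires Z, which here can only be estimated, e.g., by Monte Carlo under π.

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x):
    # Toy energy function (illustrative): a 1D double well.
    return (x**2 - 1.0)**2

def log_pi(x):
    # Tractable prior pi(x): standard normal log-density.
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

# The Gibbs density is p(x) = pi(x) exp(-U(x)) / Z with
# Z = E_pi[exp(-U(x))]; we estimate Z by Monte Carlo under pi.
samples = rng.standard_normal(200_000)
Z_hat = np.mean(np.exp(-U(samples)))

def log_p(x):
    # Density evaluation depends on the (estimated) partition function.
    return log_pi(x) - U(x) - np.log(Z_hat)
```

With the estimated Z plugged in, the density integrates to approximately one; without it, p(x) is only known up to a constant.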
3 Energy-Inspired Models

Instead of viewing the sampling procedure as drawing approximate samples from an energy-based model, we treat the sampling procedure itself as the model of interest. We represent the randomness in the sampler as latent variables, and we obtain a tractable lower bound on the marginal likelihood using the ELBO. Explicitly, if p(λ) represents the randomness in the sampler and p(x|λ) is the generative process, then

log p(x) ≥ E_{q(λ|x)}[log (p(λ) p(x|λ) / q(λ|x))],  (2)

where q(λ|x) is a variational distribution that can be optimized to tighten the bound. In this section, we explore concrete instantiations of models in this paradigm: one based on truncated rejection sampling (TRS), one based on self-normalized importance sampling (SNIS), and another based on Hamiltonian importance sampling (HIS) [54].

Algorithm 1 TRS(π, U, T) generative process
Require: Proposal distribution π(x), energy function U(x), and truncation step T.
1: for t = 1, …, T − 1 do
2:   Sample x_t ∼ π(x).
3:   Sample b_t ∼ Bernoulli(σ(−U(x_t))).
4: end for
5: Sample x_T ∼ π(x) and set b_T = 1.
6: Compute i = min{t : b_t = 1}.
7: return x = x_i

3.1 Truncated Rejection Sampling (TRS)

Consider the truncated rejection sampling process (Algorithm 1) used in [5], where we sequentially draw a sample x_t from π(x) and accept it with probability σ(−U(x_t)). To ensure that the process terminates, if no sample has been accepted after T steps, we return x_T. In this case, λ = (x_{1:T}, b_{1:T−1}, i), so we need to construct a variational distribution q(λ|x). The optimal q(λ|x) is p(λ|x), which motivates choosing a similarly structured variational distribution. It is straightforward to see that p(i|x) ∝ (1 − Z)^{i−1} σ(−U(x))^{δ_{i<T}}, where Z denotes the average acceptance probability under π. When the true data distribution is in our model family, it is straightforward to adapt the consistency proof from [46] to our setting.
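As a concrete illustration, the generative process of Algorithm 1 can be sketched in a few lines of NumPy (the standard-normal proposal and quadratic energy below are toy choices, not from the paper):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def trs_sample(propose, U, T, rng):
    """One draw from the TRS(pi, U, T) generative process (Algorithm 1):
    propose candidates from pi(x), accept each with probability
    sigma(-U(x)), and fall back to the T-th proposal if none is accepted."""
    for _ in range(T - 1):
        x = propose(rng)
        if rng.random() < sigmoid(-U(x)):
            return x
    return propose(rng)  # step T: b_T = 1, so it is always accepted

# Toy usage: the energy x**2 downweights samples far from 0, so the
# sampler's output is more concentrated than the N(0, 1) proposal.
rng = np.random.default_rng(0)
xs = np.array([trs_sample(lambda r: r.standard_normal(),
                          lambda x: x**2, T=10, rng=rng)
               for _ in range(20_000)])
```

With these choices the accepted samples have noticeably smaller variance than the proposal's variance of 1, showing how the learned energy reshapes the proposal.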
Furthermore, our perspective gives a coherent objective for jointly learning the noise distribution and the energy function, and it shows that the ranking NCE loss can be viewed as a lower bound on the log-likelihood of a well-specified model regardless of whether the true data distribution is in our model family. In addition, we can recover the recently proposed InfoNCE [57] bound on mutual information by using SNIS as the variational distribution in the classic variational bound of Barber and Agakov [4] (see Appendix C for details).

To train the SNIS model, we perform stochastic gradient ascent on Eq. (3) with respect to the parameters of the proposal distribution π and the energy function U. When the data x are continuous, reparameterization gradients can be used to estimate the gradients for the proposal distribution [61, 38]. When the data are discrete, score-function gradient estimators such as REINFORCE [68] or relaxed gradient estimators such as the Gumbel-Softmax [48, 32] can be used.

3.3 Hamiltonian importance sampling (HIS)

Simple importance sampling scales poorly with dimensionality, so it is natural to consider more complex samplers with better scaling properties. We evaluated models based on Hamiltonian importance sampling (HIS) [54], which evolve an initial sample under deterministic, discretized Hamiltonian dynamics with a learned energy function. In particular, we sample initial location and momentum variables, and then transition the candidate sample and momentum with leapfrog integration steps, changing the temperature at each step (Algorithm 3). While the quality of samples from SNIS is limited by the samples initially produced by the proposal, a model based on HIS updates the positions of the samples directly, potentially allowing for more expressive power. Intuitively, the proposal provides a coarse starting sample which is further refined by gradient optimization on the energy function. When the proposal is already quite strong, drawing additional samples as in SNIS may be advantageous.

In practice, we parameterize the temperature schedule such that ∏_{t=0}^{T} α_t = 1. This ensures that the deterministic invertible transform from (x_0, ρ_0) to (x_T, ρ_T) has a Jacobian determinant of 1 (i.e., p(x_0, ρ_0) = p(x_T, ρ_T)). Applying Eq. (2) yields a tractable variational objective

log p_HIS(x_T) ≥ E_{q(ρ_T|x_T)}[log (p(x_T, ρ_T) / q(ρ_T|x_T))] = E_{q(ρ_T|x_T)}[log (p(x_0, ρ_0) / q(ρ_T|x_T))].

We jointly optimize π, U, ϵ, α_{0:T}, and the variational parameters with stochastic gradient ascent. Goyal et al. [26] propose a similar approach that generates a multi-step trajectory via a learned transition operator.

Algorithm 3 HIS(π, U, ϵ, α_{0:T}) generative process
Require: Proposal distribution π(x), energy function U(x), step size ϵ, temperature schedule α_0, …, α_T.
1: Sample x_0 ∼ π(x) and ρ_0 ∼ N(0, I).
2: ρ_0 = α_0 ρ_0
3: for t = 1, …, T do
4:   ρ_t = ρ_{t−1} − (ϵ/2) ∇U(x_{t−1})
5:   x_t = x_{t−1} + ϵ ρ_t
6:   ρ_t = α_t (ρ_t − (ϵ/2) ∇U(x_t))
7: end for
8: return x_T

4 Experiments

We evaluated the proposed models on a set of synthetic datasets, on binarized MNIST [43] and Fashion MNIST [69], and on continuous MNIST, Fashion MNIST, and CelebA [45]. See Appendix D for details on the datasets, network architectures, and other implementation details.

Figure 1: Performance of LARS, TRS, SNIS, and HIS on synthetic data (panels: Checkerboard, Two Rings, Nine Gaussians, and Nine Gaussians with proposal variance 0.1; x-axis: steps in thousands, y-axis: log-likelihood lower bound). LARS, TRS, and SNIS achieve comparable data log-likelihood lower bounds on the first two synthetic datasets, whereas HIS converges slowly on these low-dimensional tasks. The results for LARS on the Nine Gaussians problem match previously-reported results in [5]. We visualize the target and learned densities in Appendix Fig. 2.
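A minimal NumPy sketch of the Algorithm 3 generative process (the quadratic energy, proposal, and all-ones schedule below are toy choices; the exact leapfrog splitting is our reading of the pseudocode):

```python
import numpy as np

def his_sample(x0, rho0, grad_U, eps, alphas):
    """Run the HIS(pi, U, eps, alpha_{0:T}) generative process (Algorithm 3)
    from given initial samples. `alphas` has length T + 1; with
    prod(alphas) == 1 the map (x0, rho0) -> (xT, rhoT) preserves volume."""
    rho = alphas[0] * rho0
    x = x0.astype(float)
    for alpha in alphas[1:]:
        rho = rho - 0.5 * eps * grad_U(x)            # half momentum step
        x = x + eps * rho                            # full position step
        rho = alpha * (rho - 0.5 * eps * grad_U(x))  # half step, then temper
    return x, rho

# Toy usage: energy U(x) = 0.5 * (x - 3)^2 pulls samples from an N(0, 1)
# proposal toward the mode at 3 (energy and proposal are illustrative).
rng = np.random.default_rng(1)
x0 = rng.standard_normal(1000)
rho0 = rng.standard_normal(1000)
xT, rhoT = his_sample(x0, rho0, lambda x: x - 3.0,
                      eps=0.1, alphas=np.ones(21))
```

After T = 20 leapfrog steps the samples sit, on average, much closer to the energy's mode than the proposal samples did, illustrating how HIS refines a coarse proposal.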
To provide a competitive baseline, we use the recently developed Learned Accept/Reject Sampling (LARS) model [5].

4.1 Synthetic data

As a preliminary experiment, we evaluated the methods on modeling synthetic densities: a mixture of 9 equally-weighted Gaussian densities, a checkerboard density with uniform mass distributed in 8 squares, and two concentric rings (see Fig. 1 and Appendix Fig. 2 for visualizations). For all methods, we used a unimodal standard Gaussian as the proposal distribution (see Appendix D for further details). TRS, SNIS, and LARS perform comparably on the Nine Gaussians and Checkerboard datasets. On the Two Rings dataset, despite tuning hyperparameters, we were unable to make LARS learn the density. On these simple problems, the target density lies in the high-probability region of the proposal density, so TRS, SNIS, and LARS only have to reweight the proposal samples appropriately. In high-dimensional problems where the proposal density is mismatched from the target density, however, we expect HIS to outperform TRS, SNIS, and LARS. To test this, we ran each algorithm on the Nine Gaussians problem with a Gaussian proposal of mean 0 and variance 0.1, so that there was a significant mismatch in support between the target and proposal densities. The results in the rightmost panel of Fig. 1 show that HIS was almost unaffected by the change in proposal, while the other algorithms suffered considerably.

| Method | Static MNIST | Dynamic MNIST | Fashion MNIST |
|---|---|---|---|
| VAE w/ Gaussian prior | 89.20 ± 0.08 | 84.82 ± 0.12 | 228.70 ± 0.09 |
| VAE w/ TRS prior | 86.81 ± 0.06 | 82.74 ± 0.10 | 227.66 ± 0.14 |
| VAE w/ SNIS prior | 86.28 ± 0.14 | 82.52 ± 0.03 | 227.51 ± 0.09 |
| VAE w/ HIS prior | 86.00 ± 0.05 | 82.43 ± 0.05 | 227.63 ± 0.04 |
| VAE w/ LARS prior | 86.53 | 83.03 | 227.45 |
| ConvHVAE w/ Gaussian prior | 82.43 ± 0.07 | 81.14 ± 0.04 | 226.39 ± 0.12 |
| ConvHVAE w/ TRS prior | 81.62 ± 0.03 | 80.31 ± 0.04 | 226.04 ± 0.19 |
| ConvHVAE w/ SNIS prior | 81.51 ± 0.06 | 80.19 ± 0.07 | 225.83 ± 0.04 |
| ConvHVAE w/ HIS prior | 81.89 ± 0.02 | 80.51 ± 0.07 | 226.12 ± 0.13 |
| ConvHVAE w/ LARS prior | 81.70 | 80.30 | 225.92 |
| SNIS w/ VAE proposal | 87.65 ± 0.07 | 83.43 ± 0.07 | 227.63 ± 0.06 |
| SNIS w/ ConvHVAE proposal | 81.65 ± 0.05 | 79.91 ± 0.05 | 225.35 ± 0.07 |
| LARS w/ VAE proposal | — | 83.63 | — |

Table 1: Performance on binarized MNIST and Fashion MNIST. We report 1000-sample IWAE log-likelihood lower bounds (in nats) computed on the test set. LARS results are copied from [5]. We note that our implementation of the VAE (on which our models are based) underperforms the reported VAE results in [5] on Fashion MNIST.

| Method | MNIST | Fashion MNIST | CelebA |
|---|---|---|---|
| Small VAE | 1258.81 ± 0.49 | 2467.91 ± 0.68 | 60130.94 ± 34.15 |
| LARS w/ small VAE proposal | 1254.27 ± 0.62 | 2463.71 ± 0.24 | 60116.65 ± 1.14 |
| SNIS w/ small VAE proposal | 1253.67 ± 0.29 | 2463.60 ± 0.31 | 60115.99 ± 19.75 |
| HIS w/ small VAE proposal | 1186.06 ± 6.12 | 2419.83 ± 2.47 | 59711.30 ± 53.08 |
| VAE | 991.46 ± 0.39 | 2242.50 ± 0.70 | 57471.48 ± 11.65 |
| LARS w/ VAE proposal | 987.62 ± 0.16 | 2236.87 ± 1.36 | 57488.21 ± 18.41 |
| SNIS w/ VAE proposal | 988.29 ± 0.20 | 2238.04 ± 0.43 | 57470.42 ± 6.54 |
| HIS w/ VAE proposal | 990.68 ± 0.41 | 2244.66 ± 1.47 | 56643.64 ± 8.78 |
| MAF | 1027 | — | — |

Table 2: Performance on continuous MNIST, Fashion MNIST, and CelebA. We report 1000-sample IWAE log-likelihood lower bounds (in nats) computed on the test set. As a point of comparison, we include a similar result from a 5-layer Masked Autoregressive Flow [58].

4.2 Binarized MNIST and Fashion MNIST

Next, we evaluated the models on binarized MNIST and Fashion MNIST. MNIST digits can be either statically or dynamically binarized: for the statically binarized dataset we used the binarization from [62], and for the dynamically binarized dataset we sampled images from Bernoulli distributions with probabilities equal to the continuous pixel values of the original MNIST images. We dynamically binarize the Fashion MNIST dataset in a similar manner. First, we used the models as the prior distribution in a VAE with a Bernoulli observation likelihood.
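The dynamic binarization described above amounts to treating each grayscale pixel as a Bernoulli probability and resampling a fresh binary image every time a batch is used; a one-function NumPy sketch (the function name and toy batch are ours):

```python
import numpy as np

def dynamic_binarize(images, rng):
    """Sample binary images from Bernoulli(p = pixel value in [0, 1]),
    drawing a fresh binarization each time the batch is used."""
    return (rng.random(images.shape) < images).astype(np.float32)

# Toy batch of constant-intensity 28x28 images with pixel value 0.7.
rng = np.random.default_rng(0)
batch = np.full((100, 28, 28), 0.7)
binary = dynamic_binarize(batch, rng)
```

Each call returns a different binary sample of the same batch, which acts as a form of data augmentation and prevents overfitting to one fixed binarization.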
We summarize log-likelihood lower bounds on the test set in Table 1 (referred to as VAE w/ method prior). SNIS outperformed LARS on static MNIST and dynamic MNIST even though it used only 1024 samples for training and evaluation, whereas LARS used 1024 samples during training and 10^10 samples for evaluation. As expected due to the similarity between the methods, TRS performed comparably to LARS. On all datasets, HIS either outperformed or performed comparably to SNIS. We increased K and T for SNIS and HIS, respectively, and found that performance improves at the cost of additional computation (Appendix Fig. 3). We also used the models as the prior distribution of a convolutional hierarchical VAE (ConvHVAE, following the architecture in [5]). In this case, SNIS outperformed all methods.

Then, we used a VAE as the proposal distribution for SNIS. A limitation of the HIS model is that it requires continuous data, so it cannot be used in this way on the binarized datasets. Initially, we thought that an unbiased, low-variance estimator could be constructed similarly to VIMCO [50]; however, this estimator still had high variance. Next, we used the Gumbel Straight-Through estimator [32] to estimate gradients through the discrete samples proposed by the VAE, but found that this method performed worse than ignoring those gradients altogether. We suspect that this may be due to bias in the gradients. Thus, for the SNIS model with VAE proposal, we report results on training runs which ignore those gradients. Future work will investigate low-variance, unbiased gradient estimators. In this case, SNIS again outperforms LARS; however, the performance is worse than using SNIS as a prior distribution. Finally, we used a ConvHVAE as the proposal for SNIS and saw performance improvements over both the vanilla ConvHVAE and SNIS with a VAE proposal, demonstrating that our modeling improvements are complementary to improving the proposal distribution.
4.3 Continuous MNIST, Fashion MNIST, and CelebA

Finally, we evaluated SNIS and HIS on continuous versions of MNIST, Fashion MNIST, and CelebA (64×64). We use the same preprocessing as in [18]. Briefly, we dequantize pixel values by adding uniform noise, rescale them to [0, 1], and then transform the rescaled pixel values into logit space by x → logit(λ + (1 − 2λ)x), where λ = 10^−6. When we calculate log-likelihoods, we take this change of variables into account. We speculated that when the proposal is already strong, drawing additional samples as in SNIS may be better than HIS. To test this, we experimented with a smaller VAE as the proposal distribution. As we expected, HIS outperformed SNIS when the proposal was weaker, especially on the more complex datasets, as shown in Table 2.

5 Variational Inference with EIMs

To provide a tractable lower bound on the log-likelihood of EIMs, we used the ELBO (Eq. (1)). More generally, this variational lower bound has been used to optimize deep generative models with latent variables following the influential work of Kingma and Welling [38] and Rezende et al. [61], and models optimized with this bound have been successfully used to model data such as natural images [60, 39, 11, 27], speech and music time-series [12, 23, 40], and video [2, 29, 17]. Due to the usefulness of such a bound, there has been an intense effort to provide improved bounds [9, 49, 52, 42, 73, 51, 65].

The tightness of the ELBO is determined by the expressiveness of the variational family [74], so it is natural to consider using flexible EIMs as the variational family. As we explain, EIMs provide a conceptual framework for understanding many of the recent improvements in variational lower bounds. In particular, suppose we use a conditional EIM q(z|x) as the variational family (i.e., q(z|x) = ∫ q(z, λ|x) dλ is the marginalized sampling process). Then, we can use the ELBO lower bound on log p(x) (Eq. (1)); however, the density of the EIM q(z|x) is intractable.
Agakov and Barber [1], Salimans et al. [63], Ranganath et al. [59], and Maaløe et al. [47] develop an auxiliary variable variational bound

E_{q(z|x)}[log (p(x, z) / q(z|x))] = E_{q(z,λ|x)}[log (p(x, z) r(λ|z, x) / q(z, λ|x))] + E_{q(z|x)}[D_KL(q(λ|z, x) || r(λ|z, x))],  (4)

where r(λ|z, x) is a variational distribution meant to model q(λ|z, x), and the identity follows from the fact that q(z|x) = q(z, λ|x) / q(λ|z, x). Similar to Eq. (1), Eq. (4) shows the gap introduced by using r(λ|z, x) to deal with the intractability of q(z|x). We can form a lower bound on the original ELBO, and thus a lower bound on the log marginal, by omitting the positive D_KL term. This provides a tractable lower bound on the log-likelihood using flexible EIMs as the variational family and precisely characterizes the bound gap as the sum of the D_KL terms in Eq. (1) and Eq. (4). For different choices of EIM, this bound recovers many of the recently proposed variational lower bounds.

Furthermore, the bound in Eq. (4) is closely related to partition function estimation because p(x, z) r(λ|z, x) / q(z, λ|x) is an unbiased estimator of p(x) when z, λ ∼ q(z, λ|x). To first order, the bound gap is related to the variance of this partition function estimator (e.g., [49]), which motivates sampling algorithms used in lower-variance partition function estimators such as SMC [21] and AIS [53].

5.1 Importance Weighted Auto-encoders (IWAE)

To tighten the ELBO without explicitly expanding the variational family, Burda et al. [9] introduced the importance weighted autoencoder (IWAE) bound,

E_{q(z_{1:K}|x)}[log (1/K) Σ_{k=1}^{K} p(x, z_k) / q(z_k|x)] ≤ log p(x),  (5)

where z_{1:K} are drawn i.i.d. from q(z|x). The IWAE bound reduces to the ELBO when K = 1, is non-decreasing as K increases, and converges to log p(x) as K → ∞ under mild conditions [9]. Bachman and Precup [3] introduced the idea of viewing IWAE as auxiliary variable variational inference, and Naesseth et al. [52], Cremer et al. [13], and Domke and Sheldon [20] formalized the notion. Consider the variational family defined by the EIM based on SNIS (Algorithm 2).
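The IWAE bound is just an expectation over K i.i.d. draws from q(z|x), so it is straightforward to check numerically. A NumPy sketch on a toy Gaussian model (all distributions here are illustrative choices of ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Toy model: p(z) = N(0, 1), p(x|z) = N(z, 1), with a deliberately crude
# q(z|x) = N(0, 1) so the ELBO has a visible gap. Then the exact marginal
# is log p(x) = log N(x; 0, 2).
x = 1.0

def iwae_bound(K, n_outer=20_000):
    z = rng.standard_normal((n_outer, K))        # z_k ~ q(z|x), i.i.d.
    log_w = (log_normal(z, 0.0, 1.0)             # log p(z)
             + log_normal(x, z, 1.0)             # + log p(x|z)
             - log_normal(z, 0.0, 1.0))          # - log q(z|x)
    m = log_w.max(axis=1, keepdims=True)         # stable log-mean-exp over K
    return np.mean(m[:, 0] + np.log(np.mean(np.exp(log_w - m), axis=1)))

elbo = iwae_bound(K=1)    # K = 1 recovers the ELBO
iwae = iwae_bound(K=50)   # the bound tightens as K grows
log_px = log_normal(x, 0.0, 2.0)
```

On this toy problem the K = 50 bound sits between the ELBO and the true log marginal, matching the monotonicity property stated above.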
We use a learned, tractable distribution q(z|x) as the proposal π(z|x) and set U(z|x) = log q(z|x) − log p(x, z), motivated by the fact that p(z|x) ∝ q(z|x) exp(log p(x, z) − log q(z|x)) is the optimal variational distribution. Similar to the variational distribution used in Section 3.2, setting

r(z_{1:K}, i|z, x) = (1/K) δ_{z_i}(z) ∏_{j≠i} q(z_j|x)  (6)

yields the IWAE bound Eq. (5) when plugged into Eq. (4) (see Appendix A for details). From Eq. (4), it is clear that IWAE is a lower bound on the standard ELBO for the EIM q(z|x), and the gap is due to D_KL(q(z_{1:K}, i|z, x) || r(z_{1:K}, i|z, x)). The choice of r(z_{1:K}, i|z, x) in Eq. (6) was made for convenience and is suboptimal. The optimal choice of r is q(z_{1:K}, i|z, x) = q(i|z, x) q(z_{1:K}|i, z, x) = (1/K) δ_{z_i}(z) q(z_{−i}|i, z, x). Compared to the optimal choice, Eq. (6) makes the approximation q(z_{−i}|i, z, x) ≈ ∏_{j≠i} q(z_j|x), which ignores the influence of z on z_{−i} and the fact that the z_{−i} are not independent given z. A simple extension could be to learn a factored variational distribution conditioned on z: r(z_{1:K}, i|z, x) = (1/K) δ_{z_i}(z) ∏_{j≠i} r(z_j|z, x). Learning such an r could improve the tightness of the bound, and we leave exploring this to future work.

5.2 Semi-implicit variational inference

As a way of increasing the flexibility of the variational family, Yin and Zhou [73] introduce the idea of semi-implicit variational families. That is, they define an implicit distribution q(λ|x) by transforming a random variable ϵ ∼ q(ϵ|x) with a differentiable, deterministic transformation (i.e., λ = g(ϵ, x)). However, Sobolev and Vetrov [65] keenly note that q(z, λ|x) = q(z|λ, x) q(λ|x) can be equivalently written as q(z|ϵ, x) q(ϵ|x) with two explicit distributions. As a result, semi-implicit variational inference is simply auxiliary variable variational inference by another name. Additionally, Yin and Zhou [73] provide a multi-sample lower bound on the log-likelihood which is generally applicable to auxiliary variable variational inference.
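To make the semi-implicit construction concrete, here is a small NumPy sketch (the cubic transform and Gaussian conditional are toy choices of ours, not from [73]): λ is defined only implicitly through a deterministic transform of ϵ, yet the conditional q(z|λ, x) stays explicit, so the pair q(z|ϵ, x) q(ϵ|x) is an equivalent explicit factorization, as noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Semi-implicit family: eps ~ N(0, 1), lambda = g(eps) (an implicit mixing
# distribution with no closed-form density), z | lambda ~ N(lambda, 0.5^2).
def g(eps):
    return eps ** 3  # toy deterministic transform

eps = rng.standard_normal(100_000)
lam = g(eps)
z = lam + 0.5 * rng.standard_normal(eps.shape)

# The marginal q(z) is an implicit continuous mixture: easy to sample,
# but its density has no closed form. Its variance still decomposes as
# Var(lambda) + Var(z | lambda) = E[eps^6] + 0.25 = 15 + 0.25.
marginal_var = z.var()
```

Sampling from the marginal is trivial even though density evaluation is not, which is exactly the regime where the multi-sample bound below is useful.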
log p(x) ≥ E_{q(λ_{1:K−1}|x) q(z,λ|x)}[log (p(x, z) / ((1/K)(q(z|λ, x) + Σ_{i=1}^{K−1} q(z|λ_i, x))))].  (7)

We can interpret this bound as using an EIM for r(λ|z, x) in Eq. (4). Generally, if we introduce additional auxiliary random variables γ into r(λ, γ|z, x), we can tractably bound the objective

E_{q(z,λ|x)}[log (p(x, z) r(λ|z, x) / q(z, λ|x))] ≥ E_{q(z,λ|x) s(γ|z,λ,x)}[log (p(x, z) r(λ, γ|z, x) / (q(z, λ|x) s(γ|z, λ, x)))],  (8)

where s(γ|z, λ, x) is a variational distribution. Analogously to the previous section, we set r(λ|z, x) as an EIM based on the self-normalized importance sampling process with proposal q(λ|x) and U(λ|x, z) = −log q(z|λ, x). If we choose s(λ_{1:K}, i|z, λ, x) = (1/K) δ_{λ_i}(λ) ∏_{j≠i} q(λ_j|x), with γ = (λ_{1:K}, i), then Eq. (8) recovers the bound in [73] (see Appendix B for details). In a similar manner, we can continue to recursively augment the variational distribution s (i.e., add auxiliary latent variables to s).

This view reveals that the multi-sample bound from [73] is simply one approach to choosing a flexible variational r(λ|z, x). Alternatively, Ranganath et al. [59] use a learned variational r(λ|z, x). It is unclear when drawing additional samples is preferable to learning a more complex variational distribution. Furthermore, the two approaches can be combined by using a learned proposal r(λ_i|z, x) instead of q(λ_i|x), which results in a bound described in [65].

5.3 Additional Bounds

Finally, we can also use the self-normalized importance sampling procedure to extend a proposal family q(z, λ|x) to a larger family (instead of solely extending r(λ|z, x)) [65]. Self-normalized importance sampling is one particular way of taking a proposal distribution and moving it closer to a target. Hamiltonian Monte Carlo [55] is another choice, which can also be embedded in this framework, as done by [63, 10]. Similarly, SMC can be used as the sampling procedure in an EIM; when that EIM is used as the variational family, this succinctly derives variational SMC [49, 52, 42] without any instance-specific tricks.
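Returning to the multi-sample bound of Yin and Zhou [73] discussed above, a NumPy sketch shows numerically that the extra λ_i samples tighten the bound (the model and the semi-implicit family below are toy choices of ours, not from [73]):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Toy setup: model p(z) = N(0, 1), p(x|z) = N(z, 1); semi-implicit family
# q(lambda|x) = N(0, 1), q(z|lambda, x) = N(lambda, 0.1^2).
x = 1.0

def sivi_bound(K, n=20_000):
    lam = rng.standard_normal(n)                         # lambda ~ q(lambda|x)
    z = lam + 0.1 * rng.standard_normal(n)               # z ~ q(z|lambda, x)
    lam_extra = rng.standard_normal((n, max(K - 1, 0)))  # lambda_{1:K-1}
    # (1/K) * (q(z|lambda, x) + sum_i q(z|lambda_i, x)) estimates q(z|x).
    dens = np.exp(log_normal(z, lam, 0.01))
    dens_extra = np.exp(log_normal(z[:, None], lam_extra, 0.01)).sum(axis=1)
    log_q_hat = np.log((dens + dens_extra) / K)
    log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)
    return np.mean(log_joint - log_q_hat)

loose = sivi_bound(K=1)    # denominator uses q(z|lambda, x) alone
tight = sivi_bound(K=20)   # extra lambda_i samples tighten the bound
log_px = log_normal(x, 0.0, 2.0)
```

Both estimates stay below the exact log marginal, and the K = 20 bound is markedly tighter than K = 1, consistent with the interpretation of the bound as an EIM-based r(λ|z, x).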
In this way, more elaborate variational bounds can be constructed by specific choices of EIMs without additional derivation.

6 Discussion

We proposed a flexible, yet tractable family of distributions by treating the approximate sampling procedure of energy-based models as the model of interest, referring to the resulting models as energy-inspired models (EIMs). The proposed EIMs bridge the gap between learning and inference in EBMs. We explored three instantiations of EIMs, induced by truncated rejection sampling, self-normalized importance sampling, and Hamiltonian importance sampling, and we demonstrated comparable or stronger performance than recently proposed generative models. The results presented in this paper use simple architectures on relatively small datasets; future work will scale up both the architectures and the size of the datasets. Interestingly, as a by-product, exploiting EIMs to define the variational family provides a unifying framework for recent improvements in variational bounds, which simplifies existing derivations, reveals potentially suboptimal choices, and suggests ways to form novel bounds. Concurrently, Nijkamp et al. [56] investigated a model similar to our HIS-based models, although their training algorithm differs. Combining insights from their study with our approach is a promising future direction.

Acknowledgments

We thank Ben Poole, Abhishek Kumar, and Diederik Kingma for helpful comments. We thank Matthias Bauer for answering implementation questions about LARS.

References

[1] Felix V Agakov and David Barber. An auxiliary variational method. In International Conference on Neural Information Processing, pages 561–566. Springer, 2004.

[2] Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, and Sergey Levine. Stochastic variational video prediction. International Conference on Learning Representations, 2017.

[3] Philip Bachman and Doina Precup. Training deep generative models: Variations on a theme. In NIPS Approximate Inference Workshop, 2015.
[4] David Barber and Felix Agakov. The IM algorithm: A variational approach to information maximization. In Proceedings of the 16th International Conference on Neural Information Processing Systems, pages 201–208. MIT Press, 2003.

[5] Matthias Bauer and Andriy Mnih. Resampled priors for variational autoencoders. arXiv preprint arXiv:1810.11428, 2018.

[6] Julian Besag. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society: Series D (The Statistician), 24(3):179–195, 1975.

[7] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 2017.

[8] Lawrence D Brown. Fundamentals of Statistical Exponential Families: with Applications in Statistical Decision Theory. IMS, 1986.

[9] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. International Conference on Learning Representations, 2015.

[10] Anthony L Caterini, Arnaud Doucet, and Dino Sejdinovic. Hamiltonian variational auto-encoder. In Advances in Neural Information Processing Systems, pages 8167–8177, 2018.

[11] Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. International Conference on Learning Representations, 2016.

[12] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pages 2980–2988, 2015.

[13] Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting importance-weighted autoencoders. arXiv preprint arXiv:1704.02916, 2017.

[14] Bo Dai, Hanjun Dai, Arthur Gretton, Le Song, Dale Schuurmans, and Niao He. Kernel exponential family estimation via doubly dual embedding. arXiv preprint arXiv:1811.02228, 2018.

[15] Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville.
Calibrating energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.

[16] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Computation, 7(5):889–904, 1995.

[17] Emily Denton and Rob Fergus. Stochastic video generation with a learned prior. International Conference on Machine Learning, 2018.

[18] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.

[19] Justin Domke and Daniel Sheldon. Divide and couple: Using Monte Carlo variational objectives for posterior approximation. arXiv preprint arXiv:1906.10115, 2019.

[20] Justin Domke and Daniel R Sheldon. Importance weighting and variational inference. In Advances in Neural Information Processing Systems, pages 4471–4480, 2018.

[21] Arnaud Doucet, Nando De Freitas, and Neil Gordon. An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice, pages 3–14. Springer, 2001.

[22] Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. arXiv preprint arXiv:1903.08689, 2019.

[23] Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. In Advances in Neural Information Processing Systems, pages 2199–2207, 2016.

[24] Yoav Freund and David Haussler. A fast and exact learning rule for a restricted class of Boltzmann machines. Advances in Neural Information Processing Systems, 4:912–919, 1992.

[25] Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, and Ying Nian Wu. Learning generative ConvNets via multi-grid modeling and sampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9155–9164, 2018.

[26] Anirudh Goyal Alias Parth Goyal, Nan Rosemary Ke, Surya Ganguli, and Yoshua Bengio. Variational walkback: Learning a transition operator as a stochastic recurrent net.
In Advances in Neural Information Processing Systems, pages 4392 4402, 2017. [27] Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. Pixelvae: A latent variable model for natural images. International Conference on Learning Representations, 2016. [28] Michael Gutmann and Aapo Hyv arinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297 304, 2010. [29] David Ha and J urgen Schmidhuber. World models. Advances in neural information processing systems, 2018. [30] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771 1800, 2002. [31] Aapo Hyv arinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(Apr):695 709, 2005. [32] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. ar Xiv preprint ar Xiv:1611.01144, 2016. [33] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183 233, 1999. [34] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. ar Xiv preprint ar Xiv:1602.02410, 2016. [35] Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. ar Xiv preprint ar Xiv:1606.03439, 2016. [36] R. Kinderman and S.L. Snell. Markov random fields and their applications. American mathematical society, 1980. [37] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ar Xiv preprint ar Xiv:1412.6980, 2014. [38] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. nternational Conference on Learning Representations, 2013. 
[39] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pages 4743 4751, 2016. [40] Rahul G Krishnan, Uri Shalit, and David Sontag. Deep kalman filters. ar Xiv preprint ar Xiv:1511.05121, 2015. [41] John D Lafferty, Andrew Mc Callum, and Fernando CN Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282 289. Morgan Kaufmann Publishers Inc., 2001. [42] Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential monte carlo. International Conference on Learning Representations, 2017. [43] Yann Le Cun. The mnist database of handwritten digits. http://yann. lecun. com/exdb/mnist/, 1998. [44] Yann Le Cun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006. [45] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015. [46] Zhuang Ma and Michael Collins. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. ar Xiv preprint ar Xiv:1809.01812, 2018. [47] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. ar Xiv preprint ar Xiv:1602.05473, 2016. [48] Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. ar Xiv preprint ar Xiv:1611.00712, 2016. [49] Chris J Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems, pages 6573 6583, 2017. [50] Andriy Mnih and Danilo J Rezende. 
Variational inference for monte carlo objectives. International Conference on Machine Learning, 2016. [51] Dmitry Molchanov, Valery Kharitonov, Artem Sobolev, and Dmitry Vetrov. Doubly semiimplicit variational inference. ar Xiv preprint ar Xiv:1810.02789, 2018. [52] Christian Naesseth, Scott Linderman, Rajesh Ranganath, and David Blei. Variational sequential monte carlo. In International Conference on Artificial Intelligence and Statistics, pages 968 977, 2018. [53] Radford M Neal. Annealed importance sampling. Statistics and computing, 11(2):125 139, 2001. [54] Radford M Neal. Hamiltonian importance sampling. In In talk presented at the Banff International Research Station (BIRS) workshop on Mathematical Issues in Molecular Dynamics, 2005. [55] Radford M Neal et al. Mcmc using hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11):2, 2011. [56] Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run mcmc toward energy-based model. In Advances in Neural Information Processing Systems, pages 5233 5243, 2019. [57] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. ar Xiv preprint ar Xiv:1807.03748, 2018. [58] George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pages 2338 2347, 2017. [59] Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical variational models. In International Conference on Machine Learning, pages 324 333, 2016. [60] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning, pages 1530 1538, 2015. [61] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278 1286, 2014. [62] Ruslan Salakhutdinov and Iain Murray. 
On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pages 872 879. ACM, 2008. [63] Tim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, pages 1218 1226, 2015. [64] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, Colorado Univ at Boulder Dept of Computer Science, 1986. [65] Artem Sobolev and Dmitry Vetrov. Importance weighted hierarchical variational inference. In Bayesian Deep Learning Workshop, 2018. [66] Jascha Sohl-Dickstein, Peter Battaglino, and Michael R De Weese. Minimum probability flow learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 905 912. Omnipress, 2011. [67] Tijmen Tieleman. Training restricted boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th international conference on Machine learning, pages 1064 1071. ACM, 2008. [68] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229 256, 1992. [69] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. [70] Jianwen Xie, Yang Lu, Song-Chun Zhu, and Yingnian Wu. A theory of generative convnet. In International Conference on Machine Learning, pages 2635 2644, 2016. [71] Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Synthesizing dynamic patterns by spatialtemporal generative convnet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7093 7101, 2017. [72] Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, and Ying Nian Wu. Learning descriptor networks for 3d shape synthesis and analysis. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8629 8638, 2018. [73] Mingzhang Yin and Mingyuan Zhou. Semi-implicit variational inference. ar Xiv preprint ar Xiv:1805.11183, 2018. [74] Arnold Zellner. Optimal information processing and bayes s theorem. The American Statistician, 42(4):278 280, 1988. [75] Song Chun Zhu, Yingnian Wu, and David Mumford. Filters, random fields and maximum entropy (frame): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2):107 126, 1998.