# Bidirectional Helmholtz Machines

Jörg Bornschein (BORNJ@IRO.UMONTREAL.CA), Samira Shabanian (SHABANIS@IRO.UMONTREAL.CA), Asja Fischer (FISCHER@IRO.UMONTREAL.CA), Yoshua Bengio (BENGIOY@IRO.UMONTREAL.CA)
Dept. Computer Science and Operations Research, University of Montreal; Canadian Institute for Advanced Research (CIFAR)

Efficient unsupervised training and inference in deep generative models remains a challenging problem. One basic approach, called the Helmholtz machine or variational autoencoder, involves training a top-down directed generative model together with a bottom-up auxiliary model used for approximate inference. Recent results indicate that better generative models can be obtained with better approximate inference procedures. Instead of improving the inference procedure, we here propose a new model, the bidirectional Helmholtz machine, which guarantees that the top-down and bottom-up distributions can efficiently invert each other. We achieve this by interpreting both the top-down and the bottom-up directed models as approximate inference distributions and by defining the model distribution to be the geometric mean of these two. We present a lower bound on the likelihood of this model and show that optimizing this bound regularizes the model so that the Bhattacharyya distance between the bottom-up and top-down approximate distributions is minimized. This approach yields state-of-the-art generative models, which prefer significantly deeper architectures, while allowing for orders of magnitude more efficient likelihood estimation.

## 1. Introduction and background

Training good generative models and fitting them to complex, high-dimensional training data with probability mass in multiple disjoint locations remains a major challenge. This is especially true for models with multiple layers of deterministic or stochastic variables, which is unfortunate because it has been argued previously (Hinton et al., 2006; Bengio, 2009) that deeper generative models have the potential to capture higher-level abstractions and thus generalize better. Although there has been progress in dealing with continuous-valued latent variables (Kingma & Welling, 2014), building a hierarchy of representations, especially with discrete-valued latent variables, remains a challenge.

With the Helmholtz machine (Hinton et al., 1995; Dayan et al., 1995), a concept was introduced that proposed to not only fit a powerful but intractable generative model p(x, h) to the training data, but also to jointly train a parametric approximate inference model q(h|x). The distribution q is used to perform approximate inference over the latent variables h of the generative model given an observed example x, i.e., to approximate p(h|x). This basic idea has been applied and enhanced many times: initially with the wake-sleep algorithm (WS; Hinton et al., 1995; Dayan & Hinton, 1996) and more recently with the variational autoencoder (VAE; Kingma & Welling, 2014), stochastic backpropagation and approximate inference in deep generative models (Rezende et al., 2014), neural variational inference and learning (NVIL; Mnih & Gregor, 2014), and reweighted wake-sleep (RWS; Bornschein & Bengio, 2015).

Recent results indicate that significant improvements can be made when better approximate inference methods are used: Salimans et al. (2014), for example, presented an iterative inference procedure that improves the samples from q by employing a learned MCMC transition operator.
In Hjelm et al. (2015), the authors propose an iterative, gradient-based procedure to refine the approximate posterior. Burda et al. (2015) present the importance weighted autoencoder (IWAE), an improved VAE that, similarly to RWS, uses multiple samples from q to calculate gradients. And RWS already reported that autoregressive q distributions lead to noticeable improvements. In contrast to these previous approaches, which aimed at incorporating more powerful inference methods to obtain better approximations of p(h|x), we here propose to regularize the top-down model p such that its posterior stays close to the approximate inference distribution q (and vice versa). We achieve this by interpreting both p and q as approximate inference models for our actual generative model $p^*$, which is defined to be the geometric mean over the top-down and bottom-up approximate inference models, i.e., $p^*(x, h) = \frac{1}{Z}\sqrt{p(x, h)\, q(x, h)}$.

In Section 2 we show that this model definition leads to an objective that can be interpreted as using a regularization term that encourages solutions where p and q are close to each other in terms of the Bhattacharyya distance. In Section 3 we explain how to perform importance sampling based training and inference. The ability to model complex distributions and the computational efficiency of this approach are demonstrated empirically in Section 4.

## 2. Model definition and properties

We introduce the bidirectional Helmholtz machine (BiHM) by defining a joint probability distribution over three variable vectors: an observed vector x and two latent variable vectors h1 and h2. Analogous to a deep Boltzmann machine (DBM; Salakhutdinov & Hinton, 2009), we think of these as layers in a neural network with links between x and h1 on the one side, and between h1 and h2 on the other side. We present our approach for the specific case of an architecture with two hidden layers, but it can be applied to arbitrary graphs of variables without loops; in particular, it can be used to train architectures with more than two stacked layers of latent variables.

Let $p^*(x, h_1, h_2)$ be a joint probability distribution constructed in a specific way from two constituent distributions $p(x, h_1, h_2)$ and $q(x, h_1, h_2)$,

$$ p^*(x, h_1, h_2) = \frac{1}{Z} \sqrt{p(x, h_1, h_2)\, q(x, h_1, h_2)}\,, $$

where Z is a normalization constant and p and q are directed graphical models from h2 to x and vice versa,

$$ p(x, h_1, h_2) = p(h_2)\, p(h_1|h_2)\, p(x|h_1) \quad \text{and} \quad q(x, h_1, h_2) = q(x)\, q(h_1|x)\, q(h_2|h_1)\,. $$

We assume that the prior distribution $p(h_2)$ and all conditional distributions belong to parametrized families of distributions which can be evaluated and sampled from efficiently. For q(x) we do not assume an explicit form but define it to be the marginal

$$ q(x) = p^*(x) = \sum_{h_1, h_2} p^*(x, h_1, h_2)\,, $$

which, after solving for q(x), yields

$$ q(x) = \frac{1}{Z^2} \left( \sum_{h_1, h_2} \sqrt{p(x, h_1, h_2)\, q(h_1|x)\, q(h_2|h_1)} \right)^{\!2}. $$

The normalization constant Z guarantees that $\sum_{x,h_1,h_2} p^*(x,h_1,h_2) = 1$. Using the Cauchy-Schwarz inequality $\left|\sum_y f(y)\, g(y)\right|^2 \le \sum_y |f(y)|^2 \cdot \sum_y |g(y)|^2$ and identifying $\sqrt{p(x,h_1,h_2)}$ with f(y) and $\sqrt{q(x,h_1,h_2)}$ with g(y), it becomes clear that $Z = \sum_{x,h_1,h_2} \sqrt{p(x,h_1,h_2)\, q(x,h_1,h_2)} \le 1$ for arbitrary p and q. Furthermore, Z = 1 if and only if $p(x,h_1,h_2) = q(x,h_1,h_2)$. We can therefore obtain a lower bound on the marginal probability $p^*(x)$ by defining

$$ \tilde p^*(x) = \left( \sum_{h_1, h_2} \sqrt{p(x, h_1, h_2)\, q(h_1|x)\, q(h_2|h_1)} \right)^{\!2} = Z^2\, p^*(x) \,\le\, p^*(x)\,. \tag{1} $$
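As a quick sanity check of these properties, the following minimal NumPy sketch (our own toy example, not from the paper; the tables and variable names are illustrative) builds $p^*$ from two arbitrary joint distributions over a small discrete space and verifies that $Z \le 1$, with equality exactly when p = q, and that $\tilde p^*(x) = Z^2 p^*(x) \le p^*(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary joint distributions over a tiny discrete space (4 x-states, 3 h-states).
p = rng.random((4, 3)); p /= p.sum()
q = rng.random((4, 3)); q /= q.sum()

# Geometric-mean model: p*(x, h) = sqrt(p q) / Z.
Z = np.sqrt(p * q).sum()
p_star = np.sqrt(p * q) / Z

assert Z <= 1.0 + 1e-12                       # Cauchy-Schwarz: Z <= 1
assert np.isclose(np.sqrt(p * p).sum(), 1.0)  # Z = 1 when q = p

# The bound p~*(x) = Z^2 p*(x) never exceeds the marginal p*(x).
p_star_x = p_star.sum(axis=1)
p_tilde_x = Z**2 * p_star_x
assert np.all(p_tilde_x <= p_star_x + 1e-12)
print("Z =", Z)
```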
This suggests that the model distribution $p^*(x)$ can be fitted to some training data by maximizing the lower bound on the log-likelihood (LL), $\log \tilde p^*(x)$, instead of $\log p^*(x)$, as we elaborate in the following section. Since $\log \tilde p^*(x)$ can reach its maximum only when $Z \to 1$, the model is implicitly pressured to find a maximum likelihood solution that yields $p(x,h_1,h_2) \approx q(x,h_1,h_2) \approx p^*(x,h_1,h_2)$.

### 2.1. Alternative view based on the Bhattacharyya distance

Recalling the Bhattacharyya distance $D_B(p,q) = -\log \sum_y \sqrt{p(y)\, q(y)}$ (for which $D_B(p,q) \ge 0$ holds for arbitrary distributions p, q, and $D_B(p,q) = 0$ only if p = q), the bound $\log \tilde p^*(x)$ can be decomposed into

$$ \log \tilde p^*(x) = 2 \log \sum_{h_1,h_2} \sqrt{p(x,h_1,h_2)\, q(h_1|x)\, q(h_2|h_1)} = \log p^*(x) + 2 \log \sum_{x',h_1',h_2'} \sqrt{p(x',h_1',h_2')\, q(x',h_1',h_2')} $$
$$ = \log p^*(x) - 2\, D_B(p, q) \,\le\, \log p^*(x)\,, \tag{2} $$

where we clearly see that the proposed training objective $\log \tilde p^*(x)$ corresponds to the LL $\log p^*(x)$ minus twice the Bhattacharyya distance $D_B(p,q)$, i.e., it maximizes the true LL while minimizing the distance between p and q. We can compare this to the variational approach, where the marginal probability $\log p(x)$ of some model containing latent variables h is rewritten in terms of the KL divergence $D_{KL}(q(h|x)\,\|\,p(h|x)) = \sum_h q(h|x) \log \frac{q(h|x)}{p(h|x)} \ge 0$ to obtain a lower bound

$$ \log p(x) = \mathbb{E}_{h \sim q(h|x)}\!\left[\log p(x,h) - \log q(h|x)\right] + D_{KL}(q(h|x)\,\|\,p(h|x)) \;\ge\; \mathbb{E}_{h \sim q(h|x)}\!\left[\log p(x,h) - \log q(h|x)\right]. \tag{3} $$

Analogous to variational methods that maximize the lower bound (3), we can thus maximize $\log \tilde p^*(x)$, and the bound tightens as $D_B(p,q)$ approaches zero. While this seems very similar to the variational lower bound, there are some important conceptual differences: 1) The KL divergence in variational methods measures the distance between distributions given some training data. The Bhattacharyya distance here, in contrast, quantifies a property of the model $p^*(x,h_1,h_2)$ independently of any training data. In fact, we saw that $D_B(p,q) = -\log Z$. 2) The variational lower bound is typically used to construct approximate inference algorithms. We here use our bound $\tilde p^*(x)$ just to remove the normalization constant Z from our target distribution $p^*(x)$. Even after applying the lower bound, we still have to tackle the inference problem, which manifests itself in the form of the full combinatorial sum over h1 and h2 in equation (1). Although it seems intuitively reasonable to use a variational approximation on top of the bound $\tilde p^*(x)$, we do not follow this direction here but rather use importance sampling to perform approximate inference and learning (see Section 3). Combining a variational method with the bound $\tilde p^*(x)$ is therefore left for future work.

We can also argue that optimizing $\log \tilde p^*(x)$ instead of $\log p^*(x)$ is beneficial in light of the original goal we formulated in Section 1: to learn a generative model p that is regularized to be close to the model q which we use to perform approximate inference for $p^*$. Let us assume we have two equally well trained models $p^*_{\theta_1}$ and $p^*_{\theta_2}$, i.e., in expectation over the empirical distribution $\mathbb{E}[\log p^*_{\theta_1}(x)] = \mathbb{E}[\log p^*_{\theta_2}(x)]$, but the expected bound for the first model is closer to the LL than the expected bound for the second model: $\mathbb{E}[\log \tilde p^*_{\theta_1}(x)] > \mathbb{E}[\log \tilde p^*_{\theta_2}(x)]$. Using equation (2) we see that $D_B(p_{\theta_1}, q_{\theta_1}) < D_B(p_{\theta_2}, q_{\theta_2})$, which indicates that $q_{\theta_1}$ is closer to $p^*_{\theta_1}$ than $q_{\theta_2}$ is to $p^*_{\theta_2}$ (when we measure their distance using the Bhattacharyya distance). According to our original goal, we thus prefer the solution $p^*_{\theta_1}$, where the bound $\tilde p^*(x)$ is maximized and the distance $D_B(p,q)$ minimized. The short numerical check below illustrates decomposition (2).
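The following sketch (again our own toy example, not from the paper) checks numerically that $\log \tilde p^*(x) = \log p^*(x) - 2 D_B(p,q)$ with $D_B(p,q) = -\log Z$, which holds by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary joint distributions p(x, h) and q(x, h) over a tiny discrete space.
p = rng.random((5, 4)); p /= p.sum()
q = rng.random((5, 4)); q /= q.sum()

Z = np.sqrt(p * q).sum()   # Z <= 1 by Cauchy-Schwarz
D_B = -np.log(Z)           # Bhattacharyya distance between p and q

p_star_x = (np.sqrt(p * q) / Z).sum(axis=1)  # marginal p*(x)
log_p_tilde = np.log(Z**2 * p_star_x)        # bound: p~*(x) = Z^2 p*(x)

# Decomposition (2): log p~*(x) = log p*(x) - 2 D_B(p, q)
assert np.allclose(log_p_tilde, np.log(p_star_x) - 2 * D_B)
```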
Note that the decomposition (2) also emphasizes why our recursive definition $q(x) = \sum_{h_1,h_2} p^*(x,h_1,h_2)$ is a consistent and reasonable one: minimizing $D_B(p,q)$ during learning means that the joint distributions $p(x,h_1,h_2)$ and $q(x,h_1,h_2)$ approach each other. This implies that the marginals $p(h_l)$ and $q(h_l)$ for all layers l become more similar. It also implies $p(x) \approx q(x)$ in the limit of $D_B(p,q) \to 0$; a requirement that most simple parametrized distributions q(x) could never fulfill.

## 3. Inference and training

Based on the construction of $p^*(x)$ outlined in the previous section, we can define a wide range of possible models, and we have a wide range of potential training and appropriate inference methods we could employ to maximize $\log \tilde p^*(x)$.

```
Algorithm 1: Training p~*(x) using K importance samples

for number of training iterations do
    Sample x from the training distribution (i.e., x ~ D)
    for k = 1, 2, ..., K do
        Sample h1^(k) ~ q(h1 | x);  hl^(k) ~ q(hl | h_{l-1}^(k)) for layers l = 2 to L
        Compute q(h^(k) | x) and p(x, h^(k)) for h^(k) = (h1^(k), ..., hL^(k))
    end for
    Compute unnormalized importance weights:  w_k = sqrt( p(x, h^(k)) / q(h^(k) | x) )
    Normalize the weights:  w~_k = w_k / sum_k' w_k'
    Update the parameters of p and q with a gradient step based on the estimator
        sum_k w~_k * grad_theta log[ p(x, h^(k)) q(h^(k) | x) ]
end for
```

In this text we concentrate on binary latent and observed variables x, h1, h2 and model all our conditional distributions by simple sigmoid belief network layers, e.g.,

$$ p(x\,|\,h_1) = \prod_i \mathcal{B}\!\left(x_i\,|\,\sigma(W_i h_1 + b_i)\right), $$

where $\mathcal{B}(x_i\,|\,c)$ refers to the Bernoulli distribution with $P(x_i = 1) = c$, $W_i$ are the connection weights between the latent variables h1 and the visible variable $x_i$, $b_i$ is the bias of $x_i$, and $\sigma(\cdot)$ is the sigmoid function. For our top-level prior $p(h_2)$, we use a factorized Bernoulli distribution: $p(h_2) = \prod_i \mathcal{B}(h_{2,i}\,|\,\sigma(b_{2,i}))$.

We form an estimate of $\tilde p^*(x)$ by using importance sampling instead of the exhaustive sum over h1 and h2 in equation (1). We use $q(h_1|x)\, q(h_2|h_1)$ as the proposal distribution, which is by construction easy to evaluate and to sample from:

$$ \tilde p^*(x) = \left( \mathbb{E}_{\substack{h_1 \sim q(h_1|x) \\ h_2 \sim q(h_2|h_1)}} \left[ \sqrt{\frac{p(x, h_1, h_2)}{q(h_1|x)\, q(h_2|h_1)}}\, \right] \right)^{\!2} \approx \left( \frac{1}{K} \sum_{k=1}^K \sqrt{\frac{p(x, h_1^{(k)}, h_2^{(k)})}{q(h_1^{(k)}|x)\, q(h_2^{(k)}|h_1^{(k)})}}\, \right)^{\!2} \tag{4} $$

with $h_1^{(k)} \sim q(h_1|x)$ and $h_2^{(k)} \sim q(h_2|h_1^{(k)})$.

Figure 1. MNIST experiments: A) Random samples from the top-down model p(x). B) Generally improved samples after running 10 iterations of Gibbs sampling to obtain approximate samples from the joint model $p^*(x)$. In A and B we show expected samples instead of sampling from the bottom Bernoulli distribution, as usual. C) Sensitivity of the test set LL estimates to the number of samples K: we plot the test set $\log p(x)$ and $\log \tilde p^*(x)$ estimates of our best BiHM model together with the $\log p(x)$ estimates of our best RWS and VAE models.

Figure 2. Inpainting of binarized MNIST digits. The left column in each block shows the original digit, randomly sampled from the MNIST test set; the second column shows the masked version presented to the algorithm; and the next three columns show different, independent reconstructions after 100 Gibbs iterations (see Section 3). All images were selected randomly.
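Returning to equation (4), the following minimal NumPy sketch evaluates $\log \tilde p^*(x)$ in log-space for numerical stability. It is our own illustrative implementation for a BiHM with a single latent layer and random weights; the function and variable names are not from the paper:

```python
import numpy as np
from scipy.special import logsumexp, expit as sigmoid

rng = np.random.default_rng(0)
D, H, K = 20, 10, 1000  # visible size, latent size, number of importance samples

# Random parameters of a one-latent-layer model (illustration only).
W_p, b_p = rng.normal(0, 0.1, (D, H)), np.zeros(D)  # p(x|h): sigmoid belief net layer
b_prior = np.zeros(H)                               # p(h): factorized Bernoulli prior
W_q, b_q = rng.normal(0, 0.1, (H, D)), np.zeros(H)  # q(h|x)

def bernoulli_logpmf(z, logits):
    # log B(z | sigmoid(logits)), summed over units; numerically stable form.
    return np.sum(z * logits - np.logaddexp(0, logits), axis=-1)

def log_p_tilde_star(x):
    # Sample h^(k) ~ q(h|x).
    logits_q = W_q @ x + b_q
    h = (rng.random((K, H)) < sigmoid(logits_q)).astype(float)
    # log p(x, h^(k)) and log q(h^(k)|x).
    log_p = bernoulli_logpmf(h, b_prior) + bernoulli_logpmf(x, h @ W_p.T + b_p)
    log_q = bernoulli_logpmf(h, logits_q)
    # Equation (4): p~*(x) ~= ( (1/K) sum_k sqrt(p/q) )^2, computed in log-space.
    return 2 * (logsumexp(0.5 * (log_p - log_q)) - np.log(K))

x = (rng.random(D) < 0.5).astype(float)
print("log p~*(x) estimate:", log_p_tilde_star(x))
```

The same quantities (`log_p - log_q` halved and exponentiated) give the unnormalized weights $\omega_k$ of Algorithm 1, so one pass over the K samples serves both evaluation and training.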
Using the same approach, we can also derive the well-known importance sampling estimator for the marginal probability of a datapoint under the top-down generative model p:

$$ p(x) = \mathbb{E}_{\substack{h_1 \sim q(h_1|x) \\ h_2 \sim q(h_2|h_1)}} \left[ \frac{p(x, h_1, h_2)}{q(h_1|x)\, q(h_2|h_1)} \right] \approx \frac{1}{K} \sum_{k=1}^K \frac{p(x, h_1^{(k)}, h_2^{(k)})}{q(h_1^{(k)}|x)\, q(h_2^{(k)}|h_1^{(k)})}\,. \tag{5} $$

Comparing (4) and (5) and making use of Jensen's inequality, it becomes clear that $\tilde p^*(x) \le p(x)$.

Analogous to the parameter updates in RWS (Bornschein & Bengio, 2015), we can derive an importance sampling based estimate of the LL gradient with respect to the parameters of p and q (jointly denoted by $\theta$) and use it to optimize our objective (we use h to jointly denote the latent variables of all layers):

$$ \nabla_\theta \log \tilde p^*(x) = \sum_h \frac{\sqrt{p(x,h)\, q(h|x)}}{\sum_{h'} \sqrt{p(x,h')\, q(h'|x)}}\, \nabla_\theta \log\!\left[ p(x,h)\, q(h|x) \right] \;\approx\; \sum_{k=1}^K \tilde\omega_k\, \nabla_\theta \log\!\left[ p(x, h^{(k)})\, q(h^{(k)}|x) \right] \tag{6} $$

with $h^{(k)} \sim q(h|x)$ and normalized importance weights $\tilde\omega_k = \omega_k / \sum_{k'} \omega_{k'}$, where $\omega_k = \sqrt{p(x, h^{(k)}) / q(h^{(k)}|x)}$ for k = 1, ..., K. In contrast to VAEs and IWAEs, the updates do not require any form of backpropagation through layers because, as far as the gradient computation $\nabla_\theta \log p(x,h^{(k)})\, q(h^{(k)}|x)$ is concerned, these samples are considered fully observed. The gradient approximation (6) computes the weighted average over the individual gradients. These properties are essentially inherited from the RWS training algorithm. But in contrast to RWS, and in contrast to most other algorithms which employ a generative model p and an approximate inference model q, we here automatically obtain parameter updates for both p and q because we optimize $\tilde p^*$, which contains both. The resulting training method is summarized in Algorithm 1.

**Estimating the partition function.** To compute $p^*(x) = \frac{1}{Z^2} \tilde p^*(x)$ and to monitor the training progress, it is desirable to estimate the normalization constant Z. In stark contrast to undirected models like RBMs or DBMs, we can here derive an unbiased importance sampling based estimator for $Z^2$:

$$ Z^2 = \sum_x \left( \sum_h \sqrt{p(x,h)\, q(h|x)} \right)^{\!2} = \mathbb{E}_{\substack{(x,h) \sim p(x,h) \\ h' \sim q(h'|x)}} \left[ \sqrt{\frac{p(x,h')\, q(h|x)}{p(x,h)\, q(h'|x)}}\, \right]. \tag{7} $$

We denote the number of samples used to approximate the outer and inner expectations with $K_{\text{outer}}$ and $K_{\text{inner}}$, respectively. In the experimental section we show that we obtain high-quality estimates for $Z^2$ with $K_{\text{inner}} = 1$ and a relatively small number of samples $K_{\text{outer}}$. By taking the logarithm, we obtain a biased estimator for $2 \log Z$ which will, unfortunately, underestimate $2 \log Z$ on average due to the concavity of the logarithm and the variance of the $Z^2$ estimate. This can lead to overestimates of $\log p^*(x)$ (see equation (4)) if we are not careful. Fortunately, the bias on the estimated $\log Z$ is induced only by the concavity of the logarithm; the underlying estimator for $Z^2$ is unbiased. We can thus effectively minimize the bias by minimizing the variance of the $Z^2$ estimate (e.g., by taking more samples). This is a much better situation than for Z-estimating methods that rely on Markov chains in high-dimensional spaces, which might miss entire modes because of mixing issues.
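A minimal sketch of the $Z^2$ estimator in equation (7), again for an illustrative one-latent-layer model with random weights (our own names and setup, with $K_{\text{inner}} = 1$ as recommended above):

```python
import numpy as np
from scipy.special import expit as sigmoid

rng = np.random.default_rng(0)
D, H, K_outer = 20, 10, 100_000

W_p, b_p = rng.normal(0, 0.1, (D, H)), np.zeros(D)  # p(x|h)
b_prior = np.zeros(H)                               # p(h)
W_q, b_q = rng.normal(0, 0.1, (H, D)), np.zeros(H)  # q(h|x)

def bernoulli_logpmf(z, logits):
    return np.sum(z * logits - np.logaddexp(0, logits), axis=-1)

# Outer samples (x, h) ~ p(x, h): ancestral sampling through the top-down model.
h = (rng.random((K_outer, H)) < sigmoid(b_prior)).astype(float)
x = (rng.random((K_outer, D)) < sigmoid(h @ W_p.T + b_p)).astype(float)

# Inner samples h' ~ q(h'|x), one per outer sample (K_inner = 1).
logits_q = x @ W_q.T + b_q
h_prime = (rng.random((K_outer, H)) < sigmoid(logits_q)).astype(float)

# Equation (7): Z^2 = E[ sqrt( p(x,h') q(h|x) / (p(x,h) q(h'|x)) ) ].
log_ratio = (bernoulli_logpmf(h_prime, b_prior)
             + bernoulli_logpmf(x, h_prime @ W_p.T + b_p)  # log p(x, h')
             + bernoulli_logpmf(h, logits_q)               # log q(h | x)
             - bernoulli_logpmf(h, b_prior)
             - bernoulli_logpmf(x, h @ W_p.T + b_p)        # log p(x, h)
             - bernoulli_logpmf(h_prime, logits_q))        # log q(h'| x)
log_Z2 = np.log(np.mean(np.exp(0.5 * log_ratio)))
print("2 log Z estimate:", log_Z2)
```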
**Sampling and inpainting.** We now discuss two general approaches for approximate sampling from a BiHM. One can either easily and efficiently sample from the directed model p, or one can use Gibbs sampling to draw higher-quality samples from the undirected model $p^*$. For the latter, importance resampling is used to approximately draw samples from the conditional distributions, e.g., from

$$ p^*(h_1\,|\,x, h_2) = \frac{ \sqrt{p(h_1|h_2)\, p(x|h_1)\, q(h_1|x)\, q(h_2|h_1)} }{ \sum_{h_1'} \sqrt{p(h_1'|h_2)\, p(x|h_1')\, q(h_1'|x)\, q(h_2|h_1')} }\,. $$

Here we choose to draw the proposal samples from the mixture distribution $\frac{1}{2} p(h_1|h_2) + \frac{1}{2} q(h_1|x)$, which ensures that we have a symmetric chance of covering the high-probability configurations of $p^*(h_1|x,h_2)$ induced by p and q. We resample a final sample from $p^*(h_1|x,h_2)$ proportionally to the importance weights, which are thus given by

$$ \omega_k = C\, \frac{ \sqrt{p(h_1^{(k)}|h_2)\, p(x|h_1^{(k)})\, q(h_1^{(k)}|x)\, q(h_2|h_1^{(k)})} }{ p(h_1^{(k)}|h_2) + q(h_1^{(k)}|x) }\,, $$

where $h_1^{(k)}$ is randomly drawn from $p(h_1|h_2)$ or $q(h_1|x)$, and the constant C collects all terms not containing $h_1^{(k)}$ and can be ignored. For $p^*(x|h_1)$ we choose to sample by drawing the proposal samples from $p(x|h_1)$. We iteratively update all odd layers followed by all even layers until we consider the chain to be in equilibrium.

Equipped with approximate sampling procedures for the conditional distributions, it is straightforward to construct an algorithm for inpainting: given a corrupted input datapoint x, we first initialize a Markov chain by drawing $h_1, h_2 \sim q(h_1, h_2|x)$ and then run the Gibbs sampling procedure. Whenever we sample the bottom layer $x \sim p^*(x|h_1)$, we keep the non-corrupted elements of x fixed. A structural sketch of this loop is given below.
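The following sketch shows the structure of the inpainting loop. It is our own simplified illustration: to keep it short and runnable, the conditional samplers fall back to the plain directed conditionals q and p (with random weights) instead of the full importance-resampled conditionals of $p^*$ described above; the names `sample_h1`, `sample_h2`, and `sample_x` are hypothetical stand-ins:

```python
import numpy as np
from scipy.special import expit as sigmoid

rng = np.random.default_rng(0)
D, H1, H2, STEPS = 20, 10, 5, 100

# Random layer weights for illustration only.
Wq1, Wq2 = rng.normal(0, 0.1, (H1, D)), rng.normal(0, 0.1, (H2, H1))
Wp1 = rng.normal(0, 0.1, (D, H1))

bern = lambda p: (rng.random(p.shape) < p).astype(float)

# Simplified stand-ins for the importance-resampled conditionals of p*;
# here just the directed conditionals (sample_h1 ignores h2 in this shortcut).
sample_h1 = lambda x, h2: bern(sigmoid(Wq1 @ x))   # ~ p*(h1|x,h2), simplified
sample_h2 = lambda h1:    bern(sigmoid(Wq2 @ h1))  # ~ p*(h2|h1),   simplified
sample_x  = lambda h1:    bern(sigmoid(Wp1 @ h1))  # ~ p*(x|h1),    simplified

x = bern(np.full(D, 0.5))       # corrupted input
mask = rng.random(D) < 0.7      # True where pixels are observed (non-corrupted)
x_obs = x.copy()

# Initialize the chain with h1, h2 ~ q(h1, h2 | x), then run Gibbs sweeps,
# clamping the observed elements of x after every bottom-layer update.
h1 = sample_h1(x, None); h2 = sample_h2(h1)
for _ in range(STEPS):
    h1 = sample_h1(x, h2)                       # odd layers
    x_new, h2 = sample_x(h1), sample_h2(h1)     # even layers
    x = np.where(mask, x_obs, x_new)            # keep non-corrupted pixels fixed
print("reconstruction:", x)
```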
## 4. Experimental results

In this section we present experimental results obtained when applying the algorithm to various binary datasets. Our main goal is to ensure that the theoretical properties discussed in Section 2 translate into a robust algorithm that yields competitive results even when used with simple sigmoid belief network layers as conditional distributions. We train all models using Adam (Kingma & Ba, 2014) with a mini-batch size of 100. We initialize the weights according to Glorot & Bengio (2010), set the biases to -1, and use L1 regularization with $\lambda = 10^{-3}$ on all the weights. Our implementation is available at https://github.com/jbornschein/bihm.

**UCI binary datasets.** To ascertain that importance sampling based training of BiHMs works in general, we applied it to the 8 binary datasets from the UCI dataset repository that were evaluated, e.g., in Larochelle & Murray (2011). We use a learning rate of $10^{-2}$ or $10^{-3}$ for all the experiments. The architectures, layer sizes, and final LL estimates can be found in Tables 1 and 2.

**Binarized MNIST.** We use the MNIST dataset that was binarized according to Murray & Salakhutdinov (2009) and which we downloaded in binarized form (Larochelle, 2011). Compared to RWS, we again observe that BiHMs prefer significantly deeper and narrower models. Our best model consists of 1 visible and 12 latent layers with 300, 200, 100, 75, 50, 35, 30, 25, 20, 15, 10, 10 latent variables. We follow the same experimental procedure as in the RWS paper: first train the model with K = 10 samples and a learning rate of $10^{-3}$ until convergence, and then fine-tune the parameters with K = 100 samples and a learning rate of $3 \cdot 10^{-4}$. All layers are actually used to model the empirical distribution; we confirmed that training shallower models (obtained by leaving out individual layers) decreases the performance. We obtain test set log-likelihoods of $\log \tilde p^*(x) \approx -84.8 \pm 0.23$ and $\log p(x) \approx -84.5 \pm 0.22$.¹ The next section presents a more detailed analysis of these estimates and their dependency on the number of samples from the proposal distribution q(h|x). Note that even though this model is relatively deep, it is not particularly large, with about 700,000 parameters in total. The DBMs in Salakhutdinov & Hinton (2009) contain about 900,000 and 1.1 million parameters; a variational autoencoder with two deterministic, 500-unit-wide encoder and decoder layers and with 100 top-level latent units contains more than 1.4 million parameters.

To highlight the model's ability to generate crisp (non-blurry) digits, we draw approximate samples from $p^*(x)$, which are visualized in Fig. 1 B. Fig. 1 A shows samples obtained when drawing from the top-down generative model p(x) before running any Gibbs iterations. Fig. 2 visualizes the results when running the inpainting algorithm to reconstruct partially occluded images. Our source code package contains additional results and animations.

¹ Recently, a version of MNIST where binary observed variables are resampled during training has been used (e.g., Burda et al. (2015)). On this dataset we obtain $\log p(x) \approx -82.9$.

Figure 3. $\log Z^2$ estimates for different values of $K_{\text{inner}}$ as a function of A) the number of samples $K_{\text{outer}}$, B) the total number of samples $K_{\text{inner}} \cdot K_{\text{outer}}$ for the BiHM trained on MNIST; the gray region shows the mean and the standard deviation for 10 runs with $K_{\text{inner}} = 1$. This shows that, from the point of view of total computation, convergence is fastest with $K_{\text{inner}} = 1$, and that we obtain a high-quality estimate of the partition function with only a few million samples. C) Evolution of the estimates of $\log p(x)$, $\log \tilde p^*(x)$, and $2 \log Z$ during training on MNIST.

| Model | ADULT | CONNECT4 | DNA | MUSHROOMS | NIPS-0-12 | OCR-LETTERS | RCV1 | WEB |
|---|---|---|---|---|---|---|---|---|
| *autoregressive* | | | | | | | | |
| NADE | 13.19 | 11.99 | 84.81 | 9.81 | 273.08 | 27.22 | 46.66 | 28.39 |
| EoNADE | 13.19 | 12.58 | 82.31 | 9.68 | 272.38 | 27.31 | 46.12 | 27.87 |
| DARN | 13.19 | 11.91 | 81.04 | 9.55 | 274.68 | 28.17 | 46.10 | 28.83 |
| RWS - NADE | 13.16 | 11.68 | 84.26 | 9.71 | 271.11 | 26.43 | 46.09 | 27.92 |
| *non-autoregressive* | | | | | | | | |
| RBM | 16.26 | 22.66 | 96.74 | 15.15 | 277.37 | 43.05 | 48.88 | 29.38 |
| RWS - SBN | 13.65 | 12.68 | 90.63 | 9.90 | 272.54 | 29.99 | 46.16 | 28.18 |
| BiHM: $-\log p(x)$ | 13.78 | 12.43 | 86.49 | 9.40 | 272.66 | 27.10 | 46.12 | 28.14 |
| BiHM: $-\log p^*(x)$ | 13.82 | 12.31 | 86.92 | 9.40 | 272.71 | 27.30 | 46.98 | 28.22 |
| BiHM: $-2 \log Z$ | 0.20 | 0.27 | 0.56 | 0.09 | 1.97 | 1.87 | 0.41 | 0.54 |
| BiHM: ESS | 81.5% | 89.1% | 11.2% | 92.5% | 16.8% | 22.5% | 70.6% | 55.9% |

Table 1. Negative log-likelihood (NLL) on various binary datasets from the UCI repository. The top rows quote results from shallow models with autoregressive weights between the units within one layer. The second block shows results from non-autoregressive models (quoted from Bornschein & Bengio (2015)). The third block shows the results obtained by training BiHMs. We report the estimated test set NLL when evaluating just the top-down model, $-\log p(x)$, and when evaluating $-\log p^*(x)$. Our BiHM models consistently obtain similar or better results than RWS while preferring deeper architectures. All effective sample size (see Section 4.1) estimates have error bounds smaller than 0.2%.

| Dataset | BiHM layer sizes | RWS layer sizes |
|---|---|---|
| ADULT | 100, 70, 50, 25 | 100, 20, 5 |
| CONNECT4 | 300, 110, 50, 20 | 150, 50, 10 |
| DNA | 200, 150, 130, 100, 70, 50, 30, 20, 10 | 150, 10 |
| MUSHROOMS | 150, 100, 90, 60, 40, 20 | 150, 50, 10 |
| NIPS-0-12 | 200, 100, 50, 25 | 150, 50, 10 |
| OCR-LETTERS | 600, 500, 100, 50, 30, 10 | 300, 100, 10 |
| RCV1 | 500, 120, 70, 30 | 200, 50, 10 |
| WEB | 650, 580, 70, 30, 10 | 300, 50, 10 |

Table 2. Architectures of our best UCI BiHM models compared to our best RWS models. We observe that BiHMs prefer significantly deeper architectures than RWS.
| Model | $-\log p(x)$ (bound) | $-\log p(x)$ (estimate) |
|---|---|---|
| *autoregressive models* | | |
| NADE | | 88.9 |
| DARN (1 latent layer) | 88.3 | 84.1 |
| *continuous latent variables* | | |
| VAE | 96.2 | 88.7 |
| VAE + HMC (8 iterations) | 88.3 | 85.5 |
| IWAE (2 latent layers) | 96.1 | 85.3 |
| *binary latent variables* | | |
| NVIL (2 latent layers) | 99.6 | |
| RWS (5 latent layers) | 96.3 | 85.4 |
| BiHM (12 latent layers) | 89.2 | 84.3 |

Table 3. Comparison of BiHMs to other recent methods in the literature: we report the lower bounds and estimates for the marginal log probability on the binarized MNIST test set.

**Toronto Face Database.** We also trained models on the 98,058 examples from the unlabeled section of the Toronto Face Database (TFD; Susskind et al., 2010). Each training example is of size 48 x 48 pixels, and we interpret the gray-level as the Bernoulli probability for the bottom layer. We observe that training proceeds rapidly during the first few epochs but mostly only learns the mean face. During the next few hundred epochs training proceeds much more slowly, but the estimated log-likelihood $\log \tilde p^*(x)$ increases steadily. Fig. 5 A shows random samples from a model with 1000, 700, 700, 300 latent variables in its 4 hidden layers. It was trained with a learning rate of $3 \cdot 10^{-5}$; all other hyperparameters were set to the same values as before. Fig. 5 B shows the results of inpainting experiments with this model.

### 4.1. Analysis of importance sampling-based estimates

**Estimating the partition function.** In Fig. 3 A we plot $2 \log Z$ estimates (equation (7)) over the number of outer samples $K_{\text{outer}}$ for our best MNIST model and for 3 different choices of $K_{\text{inner}}$, i.e., $K_{\text{inner}} \in \{1, 10, 100\}$. In Fig. 3 B we plot the estimates over the total number of samples $K_{\text{outer}} \cdot K_{\text{inner}}$. We observe that choosing $K_{\text{inner}} = 1$ and using only about 10 million samples results in high-quality estimates for $2 \log Z$ with a standard error far below 0.1 nats. Estimation based on 10 million samples takes less than 2 minutes on a GTX 980 GPU. Fig. 3 C shows the development of the $2 \log Z$ estimate during learning, in relation to the LL estimates.

**Importance sampling efficiency.** A widely used metric for the quality of an importance sampling estimator is the effective sample size (ESS), given by

$$ \widehat{\text{ess}} = \frac{\left( \sum_{k=1}^K \omega_k \right)^{2}}{\sum_{k=1}^K \omega_k^{2}} $$

(see, e.g., Robert & Casella (2009)). Larger values indicate more efficient sampling (more information extracted per sample). We compute the ESS over the MNIST test set for K = 100,000 proposal samples from q(h|x). For our best RWS model, a model with 5 stochastic layers (400, 300, 200, 100, 10), we obtain an ESS of about 0.10% ± 0.06; for the BiHM model we obtain about 11.9% ± 1.1. When we estimate the ESS for using q(h|x) from the BiHM as a proposal distribution for p(h|x), we obtain about 1.2% ± 0.2. The estimated ESS values indicate that training BiHM models indeed results in distributions whose intractable posterior $p^*(h|x)$, as well as top-down posterior p(h|x), are much better modeled by the learned q(h|x). We also estimated the ESS for a VAE with two deterministic, 500-unit-wide ReLU layers in the encoder and decoder. This model has a single stochastic layer with 100 continuous variables at the top; it reaches a final estimated test set LL of $\log p(x) \approx -88.9 \pm 0.28$. The final variational lower bound, which corresponds exactly to the importance sampling estimate of $\log p(x)$ with K = 1 sample, is -95.8. For this model we obtain an ESS of 0.07% ± 0.02. A short sketch of the ESS computation is given below.
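For reference, the ESS statistic is a one-liner over the unnormalized importance weights; here is a sketch with the weights handled in log-space, as elsewhere in this paper (our own helper name):

```python
import numpy as np
from scipy.special import logsumexp

def effective_sample_size(log_w):
    """ESS = (sum w)^2 / sum w^2, computed stably from log-weights."""
    return np.exp(2 * logsumexp(log_w) - logsumexp(2 * log_w))

# Example: near-uniform weights give ESS close to K; a few dominant
# weights collapse it toward 1.
rng = np.random.default_rng(0)
K = 100_000
print(effective_sample_size(rng.normal(0, 0.1, K)) / K)  # close to 100%
print(effective_sample_size(rng.normal(0, 5.0, K)) / K)  # small fraction
```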
These results indicate that, for the VAE, we need thousands of samples to obtain reliable LL estimates with low approximation error. In Fig. 1 C we plot the estimated test set LL over the number of samples K used to estimate $\log \tilde p^*(x)$ and $\log p(x)$. For all models, and for a small number of samples K, we significantly underestimate the LL; but, in comparison to RWS, the estimates for the BiHM model are much higher and less sensitive to K. E.g., using K = 10 samples to evaluate the BiHM model results in a higher LL estimate than using K = 10,000 samples to evaluate the RWS model.

**Computational cost.** To demonstrate the computational efficiency of our approach, we show typical MNIST learning curves in Fig. 4. For BiHM, RWS, and VAE the learning rate was chosen within a factor of 2 to obtain optimal results after $10^6$ update steps ($5 \cdot 10^{-4}$ for BiHM and RWS, $3 \cdot 10^{-3}$ for VAE; K = 10 for BiHM and RWS). For the IWAE experiment we used the original code, hyperparameters, and learning rate schedule from Burda et al. (2015): this experiment thus uses a mini-batch size of 20 instead of 100, K = 5 training samples, and 8 different learning rates over the course of 3300 epochs. In all cases we used K = 1000 samples to evaluate the test set log-likelihoods. We generally observe that BiHMs show very good convergence in terms of progress per update step and competitive performance in terms of total training time. Note that BiHMs and RWS allow for an efficient distributed implementation in the future: per sample, only the binary activations and a single floating-point number (the importance weight) need to be communicated between layers. VAEs and IWAEs need to communicate continuous activations during the forward pass and continuous partial gradients during the backward pass. At test time, BiHMs are typically much more efficient than the other methods: BiHMs obtain good LL estimates with K = 10 or 100 samples per datapoint, while VAE, RWS, and IWAE models require on the order of 10,000 samples to obtain competitive results (compare Fig. 1 C).

## 5. Conclusion and future work

We introduced a new scheme to construct probabilistic generative models which are automatically regularized to be close to approximate inference distributions we have at our disposal. Using the Bhattacharyya distance, we derived a lower bound on the log-likelihood, and we demonstrated that the bound can be used to fit deep generative models with many layers of latent variables to complex training distributions. Compared to RWS, BiHM models typically prefer many more latent layers. After training a BiHM, the directed top-down model p shows better performance than an RWS-trained model, both in terms of log-likelihood and sample quality. Sample quality can be further improved by approximately sampling from the full undirected BiHM model $p^*$. The high similarity between p and q, enforced by the training objective, allows BiHMs to be evaluated orders of magnitude more efficiently than RWS, VAE, and IWAE models.

Figure 4. Learning curves for MNIST experiments: for BiHM, RWS, and VAE we chose the learning rate such that we get optimal results after $10^6$ update steps; for IWAE we use the original learning rate schedule published in Burda et al. (2015). BiHM and RWS use K = 10 samples per datapoint; IWAE uses K = 5 and a batch size of 20 (according to the original publication). We generally observe that BiHMs show very good convergence in terms of progress per update step and in terms of total training time.

Figure 5. Results after training on TFD: A) Random selection of 12 samples drawn from $p^*(x)$ (10 iterations of Gibbs sampling). B) The left column in each block shows the input; the right column shows a random output sample generated by the inpainting algorithm (see Section 3).
Possible directions for future research could involve semi-supervised learning: the symmetric nature of the generative model $p^*$ (it is always close to the bottom-up and top-down directed models q and p) might make it particularly interesting for learning tasks that require inference given changing sets of observed and hidden variables. We also have a wide range of potential choices for our parametrized conditional distributions; assuming continuous latent variables, for example, and choosing an alternative inference method might make $p^*$ a better-suited model for some training distributions.

**Acknowledgments.** We thank the developers of Theano (Bergstra et al., 2010) and Blocks (van Merriënboer et al., 2015) for their great work. We thank NSERC, Compute Canada, CIFAR, and Samsung for their support.

## References

Bengio, Yoshua. Learning Deep Architectures for AI. Now Publishers, 2009.

Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proc. SciPy, 2010.

Bornschein, Jörg and Bengio, Yoshua. Reweighted wake-sleep. In International Conference on Learning Representations (ICLR), 2015.

Burda, Yuri, Grosse, Roger, and Salakhutdinov, Ruslan. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.

Dayan, Peter and Hinton, Geoffrey E. Varieties of Helmholtz machine. Neural Networks, 9(8):1385-1403, 1996.

Dayan, Peter, Hinton, Geoffrey E., Neal, Radford M., and Zemel, Richard S. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.

Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.

Hinton, Geoffrey E., Dayan, Peter, Frey, Brendan J., and Neal, Radford M. The wake-sleep algorithm for unsupervised neural networks. Science, 268:1158-1161, 1995.

Hinton, Geoffrey E., Osindero, Simon, and Teh, Yee Whye. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.

Hjelm, R. Devon, Cho, Kyunghyun, Chung, Junyoung, Salakhutdinov, Ruslan, Calhoun, Vince, and Jojic, Nebojsa. Iterative refinement of approximate posterior for training directed belief networks. arXiv preprint arXiv:1511.06382, 2015.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kingma, Durk P. and Welling, Max. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.

Larochelle, Hugo. Binarized MNIST dataset, 2011. URL http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist/binarized_mnist_train.amat.

Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011), volume 15 of JMLR: W&CP, 2011.

Mnih, Andriy and Gregor, Karol. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), 2014.

Murray, Iain and Salakhutdinov, Ruslan. Evaluating probabilities under high-dimensional latent variable models. In NIPS 21, pp. 1137-1144, 2009.
Rezende, Danilo J., Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

Robert, Christian and Casella, George. Introducing Monte Carlo Methods with R. Springer Science & Business Media, 2009.

Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pp. 448-455, 2009.

Salimans, T., Kingma, D. P., and Welling, M. Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014.

Susskind, Joshua, Anderson, Adam, and Hinton, Geoffrey E. The Toronto Face Dataset. Technical Report UTML TR 2010-001, University of Toronto, 2010.

van Merriënboer, Bart, Bahdanau, Dzmitry, Dumoulin, Vincent, Serdyuk, Dmitriy, Warde-Farley, David, Chorowski, Jan, and Bengio, Yoshua. Blocks and Fuel: Frameworks for deep learning. arXiv preprint arXiv:1506.00619, 2015.