# Bounding the Test Log-Likelihood of Generative Models

Yoshua Bengio 1,2, Li Yao 2, and Kyunghyun Cho 3

1 CIFAR Senior Fellow
2 Département d'Informatique et de Recherche Opérationnelle, Université de Montréal
3 Department of Information and Computer Science, Aalto University School of Science

yoshua.bengio@umontreal.ca, li.yao@umontreal.ca, kyunghyun.cho@aalto.fi

May 13, 2014 (arXiv:1311.6184v4 [cs.LG])

Several interesting generative learning algorithms involve a complex probability distribution over many random variables, involving intractable normalization constants or latent variable marginalization. Some of them may not even have an analytic expression for the unnormalized probability function, nor a tractable approximation. This makes it difficult to estimate the quality of these models once they have been trained, or to monitor their quality (e.g. for early stopping) while training. A previously proposed method is based on constructing a non-parametric density estimator of the model's probability function from samples generated by the model. We revisit this idea, propose a more efficient estimator, and prove that it provides a lower bound on the true test log-likelihood and an unbiased estimator as the number of generated samples goes to infinity, although one that incorporates the effect of poor mixing. We further propose a biased variant of the estimator that can be used reliably with a finite number of samples for the purpose of model comparison.

## 1 Motivating Sampling-Based Estimators of Generative Models' Quality

As researchers have considered more and more powerful models of data distributions, they have faced the difficulty of estimating the quality of these models. In some cases, the probability distribution of a model involves many latent variables, and it is intractable to marginalize over those latent variables or to compute the normalization constant (partition function). Approximation algorithms have been proposed to overcome these intractabilities. One such example is Annealed Importance Sampling (AIS; Neal, 2001; Salakhutdinov and Murray, 2008; Murray and Salakhutdinov, 2009; Salakhutdinov and Hinton, 2009). AIS, however, tends to provide optimistic estimates most of the time, just like any estimator based on importance sampling. This optimistic estimation happens when the samples from the AIS proposal distribution miss many important modes of the distribution. It is problematic because, when comparing learning algorithms, we often prefer a conservative estimate of performance to an optimistic one that tends to over-estimate the value of the model. The issue is particularly troubling when the amount of over-estimation depends on the model, which makes model comparisons based on an optimistic estimator dangerous.

In other cases, one has a generative model but there is no explicit formula corresponding to the probability function estimated by the model. That includes Herding (Welling, 2009), the non-zero temperature version of Herding (Breuleux et al., 2011), and the recently proposed generative procedures for contractive auto-encoders (CAE; Rifai et al., 2012a), denoising auto-encoders (DAE; Bengio et al., 2013b), and generative stochastic networks (GSN; Bengio et al., 2014). For this reason, in this paper, we discuss a way to assess the quality of a generative model simply by considering the samples it generates.
In the next section, we discuss the general idea of estimating a probability function (or a density function) from the samples generated by a generative model. We first review a previously proposed estimator aimed at this goal. We show that the estimate produced by such an estimator, in expectation over the samples generated from the model, is a lower bound on the true test log-likelihood and is asymptotically unbiased. We then propose a more efficient variant of this estimator that has lower variance.

## 2 Previous Work

As far as we know, Breuleux et al. (2010) and Breuleux et al. (2011) first proposed this kind of estimator. The estimator computes the estimate for a generative model in three steps:

1. Generate a set of samples S from the model.
2. Construct a non-parametric estimator f̂ of the probability distribution f of the model.
3. Compute the log-likelihood of the test data under f̂.

In the case where the data are continuous, a Parzen density estimator is constructed by

f̂(x) = mean_{x' ∈ S} N(x; µ = x', σI),    (1)

where S is a set of samples from the model collected by a Markov chain, and N(x; µ, Σ) is the probability density of x under a Gaussian distribution with mean µ and covariance Σ. In this case, the bandwidth hyper-parameter σ must be tuned according to, for instance, the log-likelihood of a validation set.

This estimator was recently used to assess the quality of generative models such as stacked CAEs (Rifai et al., 2012b), restricted Boltzmann machines (RBM; Desjardins et al., 2010), deep belief networks (Bengio et al., 2013a), as well as DAEs (Bengio et al., 2013c) and GSNs (Bengio et al., 2014). As noted in Breuleux et al. (2010), such an estimator measures not only the quality of the model but also that of the generative procedure: any variant of this estimator will tend to estimate the log-likelihood to be lower than the true one when the generative procedure used to collect samples from the model does not mix well.

Another way to evaluate the quality of a generative model whose probability is neither tractable nor easily approximated is to use a non-parametric two-sample test (Gretton et al., 2012). Unlike the approach of Breuleux et al. (2010), this approach compares the (smoothed) distribution of the generated samples to the test samples using an L2 measure (the squared error in estimated probability). Since in this paper we are more interested in using the Kullback-Leibler divergence as a measure, we do not discuss the two-sample test approach any further.

## 3 Conservative Sampling-based Likelihood Estimator

We propose in this section a new estimator of the log-likelihood of a generative model, called the Conservative Sampling-based Log-likelihood (CSL) estimator. The proposed CSL estimator does not require tuning a non-parametric estimator to fit samples generated from a model. It only requires that a Markov chain be defined for the model and used to collect samples from it. Furthermore, we assume that the Markov chain alternately samples the latent variables and the observed variables, so that the conditional distribution P(x|h) is well defined. This assumption holds for many widely used generative models: an RBM using block Gibbs sampling is one example, a generalized denoising auto-encoder whose sampling procedure was recently proposed by Bengio et al. (2013d) is another, and multi-layered generative models such as DBNs and deep Boltzmann machines (DBM; Salakhutdinov and Hinton, 2009) are others.
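Before describing the CSL estimator in detail, it may help to see the Parzen-window baseline of Eq. (1) in code, since the CSL estimator modifies exactly this construction. The following is only a minimal NumPy sketch, not the implementation used in the works cited above; it assumes that σ plays the role of the standard deviation of each isotropic Gaussian component, and the array names are ours.

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(X, S, sigma):
    """Average log-likelihood of test points X under the Parzen estimator of Eq. (1).

    X     : (n_test, d) test examples
    S     : (n_samples, d) samples generated by the model
    sigma : bandwidth, tuned e.g. on the log-likelihood of a validation set
    """
    d = X.shape[1]
    # Squared Euclidean distance between every test point and every generated sample.
    sq_dists = ((X[:, None, :] - S[None, :, :]) ** 2).sum(axis=-1)   # (n_test, n_samples)
    # log N(x; mu = x', sigma^2 I) for every pair, treating sigma as the component std.
    log_gauss = -0.5 * sq_dists / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    # log of the mean over samples, computed in the log domain for numerical stability.
    log_f_hat = logsumexp(log_gauss, axis=1) - np.log(S.shape[0])
    return log_f_hat.mean()
```

The bandwidth σ is precisely the hyper-parameter that the CSL estimator described next eliminates.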
Given the conditional probability P(x|h) of a model and a set S of samples h' of the latent variables collected from a Markov chain, the CSL estimate is computed by

log f̂(x) = log mean_{h' ∈ S} P(x|h').    (2)

The overall procedure of the CSL estimator is presented in Algorithm 1.

Algorithm 1 (CSL). Requires a set S of samples of the latent variables h' from a Markov chain, a conditional distribution P(x|h), and a set X of test samples.

    LL ← 0
    for x in X do
        r ← 0
        for h' in S do
            r ← r + P(x|h')
        end for
        f̂_S(x) ← r / |S|
        LL ← LL + log f̂_S(x)
    end for
    return LL / |X|

Unlike the original estimator described in Sec. 2, the CSL estimator uses the conditional probability P(x|h') rather than the actual sample x' of the observed variables generated from the Markov chain. This considerably reduces the variance of the CSL estimator. In the case of a Gaussian conditional P(x|h') whose mean µ is a function of h', for instance, it has the effect of centering the Gaussian components of the Parzen density estimator on the mean µ rather than on the actual sample x'. Since each mean µ summarizes a very large number of potential samples x' (those one could obtain by fixing h' and considering many draws from P(x|h')), the CSL estimator is much more efficient, with lower variance, than estimators built purely from the generated samples, such as the one described in the previous section.

The other important consequence of using the conditional distribution of the observed variables is that it removes the bandwidth hyper-parameter. Indeed, a natural choice of bandwidth (in the case of a Gaussian conditional P(x|h')) is precisely the standard deviation of the Gaussian conditional distribution. This also allows us to prove, below, that the CSL estimator is asymptotically consistent and conservative on average.

## 4 Asymptotically Unbiased, Conservative Estimator

We first prove that the CSL estimator log f̂_S(x) is asymptotically unbiased, i.e., that as the number of generated samples increases, it approaches the ground truth log-likelihood log f(x) associated with the stationary distribution of the generating Markov chain.

Proposition 1. If the samples in S are taken from chains of length L → ∞, the CSL estimator log f̂_S(x) (Algorithm 1, Eq. 2) converges to the ground truth log-probability log f(x) as the number of samples |S| → ∞, i.e.,

lim_{|S| → ∞} log f̂_S(x) = log f(x).    (3)

Proof. By the hypothesis on the samples in S, Monte-Carlo estimates obtained from S converge to their expectations, i.e., the distribution of h' converges to the stationary distribution P(h') of the Markov chain. Since f(x) = ∫ P(h') P(x|h') dh' is the marginal distribution over x associated with this stationary distribution, its Monte-Carlo estimator f̂(x) = mean_{h' ∈ S} P(x|h') converges, in the limit |S| → ∞, to its expectation under P(h'), i.e., to f(x). Finally, note that the log of the limit equals the limit of the log.
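Before turning to the finite-sample case, here is how Algorithm 1 might look in code for the concrete case of a factorized Bernoulli conditional P(x|h), e.g. the visible units of a binary RBM given a hidden sample. This is only a sketch under that assumption: the array names and the clipping constant are ours, and the running sum of Algorithm 1 is computed in the log domain with log-sum-exp, which is numerically necessary when x is high-dimensional.

```python
import numpy as np
from scipy.special import logsumexp

def csl_log_likelihood(X, cond_means, eps=1e-7):
    """CSL estimate of Eq. (2) / Algorithm 1 with a Bernoulli conditional P(x|h).

    X          : (n_test, d) binary test examples
    cond_means : (n_samples, d) conditional means E[x | h'] for each collected
                 latent sample h' (e.g. sigmoid visible activations of an RBM)
    Returns the average of log f_hat_S(x) over the test set.
    """
    M = np.clip(cond_means, eps, 1.0 - eps)
    # log P(x | h') for every (test example, latent sample) pair:
    # sum_i x_i log m_i + (1 - x_i) log(1 - m_i).
    log_p = X @ np.log(M).T + (1.0 - X) @ np.log(1.0 - M).T   # (n_test, n_samples)
    # log of the mean over latent samples, computed stably in the log domain.
    log_f_hat = logsumexp(log_p, axis=1) - np.log(M.shape[0])
    return log_f_hat.mean()
```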
We then prove that, in the finite-sample case, the CSL estimator log f̂_S(x) tends to underestimate the ground truth log-probability log f(x).

Proposition 2. The expected value of the logarithm of the CSL estimator f̂_S(x) (Algorithm 1), over sample sets S from the generative model, is a lower bound on the true log-likelihood log f(x), i.e.,

E_S[log f̂_S(x)] ≤ log f(x).    (4)

Proof. We simply take advantage of the concavity of the log and Jensen's inequality:

E_S[log f̂_S(x)] ≤ log E_S[f̂_S(x)] = log E_H[P(x|H)] = log f(x),

where H is distributed according to the stationary distribution P(h) of the Markov chain.

Another interesting question is the rate of convergence of the CSL estimator to its asymptotic (ground truth) value. That rate is governed by the variance of the estimator, which decreases linearly with the number of samples in S, up to a factor corresponding to the effective sample size associated with the Markov chain, just like any other Monte-Carlo average associated with the chain.

## 5 Empirical Validation

In this section, we empirically evaluate the CSL estimator on a real dataset to investigate the rate at which the estimator converges. We report experimental results on denoising auto-encoders (DAE), generative stochastic networks (GSN), restricted Boltzmann machines (RBM), deep Boltzmann machines (DBM), and deep belief networks (DBN). DAEs and GSNs themselves define generative (sampling) procedures, while for RBMs, DBMs and DBNs we used block Gibbs sampling to generate samples of the latent variables.

One interesting aspect of these experiments is that they highlight the dependency of the estimator on the effective sample size of the Markov chain, i.e., on the mixing rate of the chain. For a fixed number of samples |S|, chains that mix faster provide an estimator that is closer to its asymptote. In particular, these results confirm previously reported observations of the poor mixing of block Gibbs sampling for RBMs, DBMs and DBNs that are very well trained. Such models capture a sharper estimated distribution than their less well trained counterparts, and Gibbs sampling on the less well trained models tends to mix better (Bengio et al., 2013a), because the major modes of the learned distribution are not separated by vast zones of tiny probability.

All models in these experiments were trained on the binarized MNIST data (thresholded at 0.5), and the CSL estimates were computed on the test set. For each model, every 100-th sample from a Markov chain was collected to compute the CSL estimate. For details on the architecture and training procedure of each model, see Appendix A. Note that although for the RBM, DBN and DBM the test-set log-likelihood can be estimated by AIS (or a lower bound on it), there is no such AIS estimator for DAEs and GSNs, which is where the CSL estimator may become most useful. The number of generated samples was varied between 10,000 and 150,000. The resulting CSL estimates are presented in Table 1.

Table 1: CSL estimates obtained using different numbers of samples of the latent variables. Samples are collected once every 100 sampling steps of the Markov chain. Where available, an AIS-based estimate is also shown.

| # samples | GSN-1 | GSN-2 | DBN-2 | DBM-2 | RBM   |
|-----------|-------|-------|-------|-------|-------|
| 10k       | -142  | -108  | -446  | -173  | -233  |
| 50k       | -126  | -101  | -370  | -144  | -192  |
| 100k      | -120  | -98   | -340  | -135  | -177  |
| 150k      | -117  | -97   | -325  | -132  | -170  |
| AIS       |       |       | -57   | -76.5 | -64.1 |
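As a concrete illustration of the sample-collection protocol above (every 100-th state of the chain is kept), the following sketch runs block Gibbs sampling in a binary RBM and records, at each kept step, the visible means that parameterize P(x|h), which is what the earlier CSL sketch consumes. The parameter names W, b, c and the initialization are hypothetical and not taken from the experiments reported here.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def collect_rbm_latents(W, b, c, n_samples, thin=100, rng=None):
    """Run block Gibbs sampling in a binary RBM and keep every `thin`-th step.

    W : (d_visible, d_hidden) weights, b : visible biases, c : hidden biases
    Returns an (n_samples, d_visible) array of conditional means P(x|h).
    """
    rng = rng or np.random.default_rng(0)
    v = rng.integers(0, 2, size=W.shape[0]).astype(float)   # arbitrary starting state
    kept = []
    for step in range(n_samples * thin):
        # Block Gibbs: sample hidden units given visibles, then visibles given hiddens.
        h = (rng.random(W.shape[1]) < sigmoid(v @ W + c)).astype(float)
        mean_v = sigmoid(h @ W.T + b)
        v = (rng.random(W.shape[0]) < mean_v).astype(float)
        if (step + 1) % thin == 0:
            # What CSL needs is P(x|h), i.e. the visible means given the kept h.
            kept.append(mean_v)
    return np.stack(kept)
```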
In the following experiment, we trained an RBM with only 5 hidden units on MNIST, for which the exact log-likelihood can be computed easily. In this case, the CSL estimate approached both the AIS estimate and the true log-likelihood closely as the number of samples grew. The CSL estimates for varying numbers of generated samples are shown in Table 2.

Table 2: The CSL estimate converges to the true log-likelihood on a small RBM with only 5 hidden units.

| # of samples | Log-likelihood |
|--------------|----------------|
| 1k           | -188.49        |
| 2k           | -186.18        |
| 5k           | -182.26        |
| 10k          | -181.58        |
| 20k          | -180.65        |
| 30k          | -180.71        |
| exact        | -180.24        |
| AIS          | -180.22        |

## 6 Biased CSL Estimator for Model Comparison

Although the CSL estimator is asymptotically unbiased, as shown in Sec. 4, in practice it may be desirable to obtain a biased but more readily available estimate. Hence, in this section, we describe an algorithm, called biased CSL, that works with a small, finite number of samples. The algorithm is biased, but we show at the end of this section that its estimates correlate well with the exact log-likelihood or the AIS-based estimate, and that it may therefore be used for model comparison.

The biased CSL estimator aims at estimating the log-probability of a single test sample x at a time. As with the original procedure in Algorithm 1, it only requires a computable conditional distribution P(x|h) and a Markov chain from which the latent variables of the model can be sampled. Unlike the unbiased CSL estimator, the biased CSL collects a small set S_x of consecutive latent samples h' from a Markov chain that starts at the test sample x. This procedure ensures that the set S_x always includes at least a few samples that correspond to the neighborhood of the test sample x. Furthermore, by collecting consecutive samples, we ensure that the samples do not deviate too far from the starting point x.

Although the locality and correlatedness of the consecutive samples S_x starting from the test sample induce a bias, we find this to be beneficial in the finite-sample case, since the lack of any latent sample close to the test sample x makes the estimate highly unreliable. The biased CSL ensures that the estimate of the probability of x is reliable and has lower variance. One consequence of the induced bias is that the biased CSL estimator is no longer conservative, but tends to over-estimate the probability of the test samples.

Figure 1: The estimates of the log-probabilities of the test samples for 12 RBMs with varying numbers of latent variables. The curves represent the log-probabilities estimated using AIS (blue), the biased CSL with a single step of 10 parallel Markov chains (red), the biased CSL with 300 steps of 10 parallel Markov chains (cyan), and the true log-probabilities (only for the small models, green).

Fig. 1 shows how well the biased CSL estimates correlate with either the true log-probabilities or the AIS-based estimates. We computed the biased CSL estimate by running 10 parallel Gibbs sampling chains per test sample for either a single step or 30 steps. It is clear from the figure that the biased CSL estimates correlate very well with the true or AIS-estimated log-probabilities. As expected, the biased CSL estimator tends to overestimate the log-probabilities of the test samples. Nevertheless, it correctly orders the model performances with only a very small number of samples. This result suggests that in practice the biased CSL estimator, which requires only a few samples per test sample, may safely be used for model comparison. This is especially useful when a model does not have an explicit probability function, as with DAEs and GSNs. We leave a more in-depth investigation of how the biased CSL estimator behaves with such models for future work.
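To make the procedure concrete, here is a minimal sketch of the biased CSL estimate for a single binary test example, again assuming an RBM-style model with block Gibbs transitions. The parameters W, b, c, the default chain and step counts, and the helper function are illustrative assumptions rather than the exact configuration used for Fig. 1.

```python
import numpy as np
from scipy.special import logsumexp

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def biased_csl_single(x, W, b, c, n_chains=10, n_steps=30, eps=1e-7, rng=None):
    """Biased CSL estimate of log P(x) for one binary test example x.

    All chains start at x itself, so the collected latent samples stay in the
    neighbourhood of x; this lowers the variance but over-estimates log P(x).
    """
    rng = rng or np.random.default_rng(0)
    v = np.tile(x.astype(float), (n_chains, 1))             # every chain starts at x
    means = []
    for _ in range(n_steps):
        h = (rng.random((n_chains, W.shape[1])) < sigmoid(v @ W + c)).astype(float)
        mean_v = sigmoid(h @ W.T + b)                        # parameters of P(x|h')
        means.append(mean_v)
        v = (rng.random((n_chains, W.shape[0])) < mean_v).astype(float)
    M = np.clip(np.concatenate(means, axis=0), eps, 1.0 - eps)
    log_p = np.log(M) @ x + np.log(1.0 - M) @ (1.0 - x)     # log P(x|h') per sample
    return logsumexp(log_p) - np.log(M.shape[0])             # log mean_h' P(x|h')
```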
## 7 Conclusion

We have proposed a novel sample-based estimator, the conservative sampling-based log-likelihood (CSL) estimator, for estimating the probability that a trained model assigns to an example. We have justified its theoretical consistency and empirically validated it on recently popular generative models, including restricted Boltzmann machines (RBM), deep Boltzmann machines (DBM), deep belief networks (DBN), denoising auto-encoders (DAE) and generative stochastic networks (GSN).

The proposed CSL estimator uses only a set of samples of the latent variables generated from a model by a Markov chain. This makes the estimator useful for generative models that do not have an explicit probability distribution but only define a generative procedure. Because it relies only on samples from the model, the estimator reflects not only the generative performance of the model but also the mixing behavior of the procedure used to generate samples from it. We observed this phenomenon empirically by computing CSL estimates on well-trained RBMs, DBNs and DBMs from samples generated by Gibbs sampling, which is known to mix poorly in these models.

In addition to the unbiased CSL estimator, we also proposed a biased variant, the biased CSL estimator, which requires only a few consecutive samples to approximate the probability of a single test sample. The empirical evidence suggests that the biased CSL estimator can correctly compare models of varying complexity, which makes it all the more useful for models without an explicit probability function, such as GSNs, DAEs and contractive auto-encoders (CAE).

In the future, a more systematic study of how the proposed CSL estimators, both unbiased and biased, behave with different generative models is needed. In particular, more empirical investigation of applying the CSL estimator to models that define only a generative procedure, without an explicit probability distribution, will be required.

## Acknowledgements

We would like to thank the developers of Theano (Bergstra et al., 2010; Bastien et al., 2012), as well as NSERC, CIFAR, Compute Canada, and Calcul Québec for funding.

## References

Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.
Bengio, Y. (2013). Estimating or propagating gradients through stochastic neurons. Technical Report arXiv:1305.2982, Université de Montréal.
Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013a). Better mixing via deep representations. In ICML'13.
Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013b). Generalized denoising auto-encoders as generative models. In NIPS 26. NIPS Foundation.
Bengio, Y., Li, Y., Alain, G., and Vincent, P. (2013c). Generalized denoising auto-encoders as generative models. Technical Report arXiv:1305.6663v1, Université de Montréal.
Bengio, Y., Yao, L., Alain, G., and Vincent, P. (2013d). Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems 26 (NIPS'13).
Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2014). Deep generative stochastic networks trainable by backprop. Technical Report arXiv:1306.1091.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy). Oral presentation.
Breuleux, O., Bengio, Y., and Vincent, P. (2010). Unlearning for better mixing. Technical Report 1349, Université de Montréal/DIRO.
Breuleux, O., Bengio, Y., and Vincent, P. (2011). Quickly generating representative samples from an RBM-derived process. Neural Computation, 23(8), 2053-2073.
Cho, K., Raiko, T., and Ilin, A. (2013). Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3), 805-831.
Desjardins, G., Courville, A., Bengio, Y., Vincent, P., and Delalleau, O. (2010). Tempered Markov chain Monte Carlo for training of restricted Boltzmann machines. In AISTATS 2010, volume 9, pages 145-152.
Gretton, A., Borgwardt, K., Rasch, M., Schoelkopf, B., and Smola, A. (2012). A kernel two-sample test. Journal of Machine Learning Research, 13, 723-773.
Murray, I. and Salakhutdinov, R. (2009). Evaluating probabilities under high-dimensional latent variable models. In NIPS'08, volume 21, pages 1137-1144.
Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2), 125-139.
Rifai, S., Bengio, Y., Dauphin, Y., and Vincent, P. (2012a). A generative process for sampling contractive auto-encoders. In ICML'12.
Rifai, S., Bengio, Y., Dauphin, Y., and Vincent, P. (2012b). A generative process for sampling contractive auto-encoders. In ICML 2012.
Salakhutdinov, R. and Hinton, G. E. (2009). Deep Boltzmann machines. In AISTATS 2009, volume 5, pages 448-455.
Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, volume 25, pages 872-879. ACM.
Welling, M. (2009). Herding dynamic weights for partially observed random field models. In UAI'09. Morgan Kaufmann.

## A Model Descriptions

Here we describe the architecture and training procedure of each model.

GSN-1 (DAE):
- Architecture: 784 (input) - 2000 (tanh)
- Noise: (input) 0.28 salt-and-pepper, (hidden) no noise
- Learning: 9-step walkback (Bengio, 2013), learning rate 0.05, cross-entropy cost, 200 epochs
- Early-stop: visual inspection of generated samples

GSN-2:
- Architecture: 784 (input) - 1500 (tanh) - 1500 (tanh)
- Noise: (input) 0.4 salt-and-pepper, (hidden 1) no noise, (hidden 2) white Gaussian noise with std. 2.0
- Learning: learning rate 0.1, cross-entropy cost, 300 epochs
- Early-stop: visual inspection of generated samples

DBN-2:
- Architecture: 784 (input) - 4000 (sigmoid) - 1000 (sigmoid)
- Learning: (1st layer) RBM trained as in Cho et al. (2013), (2nd layer) RBM trained with PCD-9

DBM-2:
- Architecture: 784 (input) - 500 (sigmoid) - 1000 (sigmoid)
- Learning: procedure from Salakhutdinov and Hinton (2009)

RBM:
- Architecture: 784 (input) - 4000 (sigmoid)
- Learning: procedure from Cho et al. (2013) (enhanced gradient, adaptive learning rate and parallel tempering)