# Tighter Variational Bounds are Not Necessarily Better

Tom Rainforth¹, Adam R. Kosiorek¹ ², Tuan Anh Le², Chris J. Maddison¹, Maximilian Igl², Frank Wood³, Yee Whye Teh¹

¹Department of Statistics, University of Oxford. ²Department of Engineering, University of Oxford. ³Department of Computer Science, University of British Columbia. Correspondence to: Tom Rainforth.

Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).

## Abstract

We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network, by reducing the signal-to-noise ratio of the gradient estimator. Our results call into question common implicit assumptions that tighter ELBOs are better variational objectives for simultaneous model learning and inference amortization schemes. Based on our insights, we introduce three new algorithms: the partially importance-weighted auto-encoder (PIWAE), the multiply importance-weighted auto-encoder (MIWAE), and the combination importance-weighted auto-encoder (CIWAE), each of which includes the standard importance-weighted auto-encoder (IWAE) as a special case. We show that each can deliver improvements over IWAE, even when performance is measured by the IWAE target itself. Furthermore, our results suggest that PIWAE may be able to deliver simultaneous improvements in the training of both the inference and generative networks.

## 1 Introduction

Variational bounds provide tractable and state-of-the-art objectives for training deep generative models (Kingma & Welling, 2014; Rezende et al., 2014). Typically taking the form of a lower bound on the intractable model evidence, they provide surrogate targets that are more amenable to optimization. In general, this optimization requires the generation of approximate posterior samples during the model training, and so a number of methods simultaneously learn an inference network alongside the target generative network. As well as assisting the training process, this inference network is often also of direct interest itself. For example, variational bounds are often used to train auto-encoders (Bourlard & Kamp, 1988; Hinton & Zemel, 1994; Gregor et al., 2016; Chen et al., 2017), for which the inference network forms the encoder. Variational bounds are also used in amortized and traditional Bayesian inference contexts (Hoffman et al., 2013; Ranganath et al., 2014; Paige & Wood, 2016; Le et al., 2017), for which the generative model is fixed and the inference network is the primary target for the training.

The performance of variational approaches depends upon the choice of evidence lower bound (ELBO) and the formulation of the inference network, with the two often intricately linked to one another; if the inference network formulation is not sufficiently expressive, this can have a knock-on effect on the generative network (Burda et al., 2016). In choosing the ELBO, it is often implicitly assumed that using tighter ELBOs is universally beneficial, at least whenever this does not in turn lead to higher variance gradient estimates. In this work we question this implicit assumption by demonstrating that, although using a tighter ELBO is typically beneficial to gradient updates of the generative network, it can be detrimental to updates of the inference network.
Remarkably, we find that it is possible to simultaneously tighten the bound, reduce the variance of the gradient updates, and arbitrarily deteriorate the training of the inference network. Specifically, we present theoretical and empirical evidence that increasing the number of importance sampling particles, K, to tighten the bound in the importance-weighted auto-encoder (IWAE) (Burda et al., 2016) degrades the signal-to-noise ratio (SNR) of the gradient estimates for the inference network, inevitably deteriorating the overall learning process. In short, this behavior manifests because even though increasing K decreases the standard deviation of the gradient estimates, it decreases the magnitude of the true gradient faster, such that the relative variance increases.

Our results suggest that it may be best to use distinct objectives for learning the generative and inference networks, or that when using the same target, it should take into account the needs of both networks. Namely, while tighter bounds are typically better for training the generative network, looser bounds are often preferable for training the inference network. Based on these insights, we introduce three new algorithms: the partially importance-weighted auto-encoder (PIWAE), the multiply importance-weighted auto-encoder (MIWAE), and the combination importance-weighted auto-encoder (CIWAE). Each of these includes IWAE as a special case; they are based on the same set of importance weights, but use these weights in different ways to ensure a higher SNR for the inference network. We demonstrate that our new algorithms can produce inference networks more closely representing the true posterior than IWAE, while matching the training of the generative network, or potentially even improving it in the case of PIWAE. Even when treating the IWAE objective itself as the measure of performance, all our algorithms are able to demonstrate clear improvements over IWAE.

## 2 Background and Notation

Let x be an X-valued random variable defined via a process involving an unobserved Z-valued random variable z with joint density pθ(x, z). Direct maximum likelihood estimation of θ is generally intractable if pθ(x, z) is a deep generative model, due to the marginalization of z. A common strategy is to instead optimize a variational lower bound on log pθ(x), defined via an auxiliary inference model qφ(z|x):

$$\mathrm{ELBO}_{\mathrm{VAE}}(\theta, \phi, x) := \int q_\phi(z|x) \log \frac{p_\theta(x, z)}{q_\phi(z|x)} \, dz = \log p_\theta(x) - \mathrm{KL}\left(q_\phi(z|x) \,\|\, p_\theta(z|x)\right). \tag{1}$$

Typically, qφ is parameterized by a neural network, for which the approach is known as the variational auto-encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014). Optimization is performed with stochastic gradient ascent (SGA) using unbiased estimates of ∇θ,φ ELBOVAE(θ, φ, x). If qφ is reparameterizable, then given a reparameterized sample z ∼ qφ(z|x), the gradients ∇θ,φ(log pθ(x, z) − log qφ(z|x)) can be used for the optimization.
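As a concrete illustration of this reparameterized estimator, the following is a minimal sketch of the one-sample VAE ELBO estimate of Eq. (1) in PyTorch. Here `encoder` (returning the mean and log-variance of a diagonal-Gaussian qφ) and `decoder.log_joint` (returning log pθ(x, z)) are hypothetical stand-ins for the model at hand, not components specified by the paper; back-propagating through the returned scalar yields the ∇θ,φ estimate described above.

```python
import math
import torch

def vae_elbo(x, encoder, decoder):
    # Encoder outputs the mean and log-variance of a diagonal Gaussian q_phi(z|x).
    mu, logvar = encoder(x)
    # Reparameterized sample: z = mu + sigma * eps with eps ~ N(0, I), so that
    # autograd through z yields gradients for phi as well as theta.
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps
    # log q_phi(z|x) for the diagonal Gaussian.
    log_q = -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                    + math.log(2 * math.pi)).sum(-1)
    # log p_theta(x, z), supplied by the model (hypothetical method).
    log_p = decoder.log_joint(x, z)
    # One-sample unbiased estimate of Eq. (1).
    return log_p - log_q
```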
The VAE objective places a harsh penalty on mismatch between qφ(z|x) and pθ(z|x); optimizing jointly in θ, φ can confound improvements in log pθ(x) with reductions in the KL (Turner & Sahani, 2011). Thus, research has looked to develop bounds that separate the tightness of the bound from the expressiveness of the class of qφ. For example, the IWAE objectives (Burda et al., 2016), which we denote as ELBOIS(θ, φ, x), are a family of bounds defined by

$$Q_{\mathrm{IS}}(z_{1:K}|x) := \prod_{k=1}^{K} q_\phi(z_k|x), \qquad \hat{Z}_{\mathrm{IS}}(z_{1:K}, x) := \frac{1}{K}\sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k|x)}, \tag{2}$$

$$\mathrm{ELBO}_{\mathrm{IS}}(\theta, \phi, x) := \int Q_{\mathrm{IS}}(z_{1:K}|x)\, \log \hat{Z}_{\mathrm{IS}}(z_{1:K}, x)\; dz_{1:K} \;\leq\; \log p_\theta(x).$$

The IWAE objectives generalize the VAE objective (K = 1 corresponds to the VAE) and the bounds become strictly tighter as K increases (Burda et al., 2016). When the family of qφ contains the true posteriors, the global optimum parameters {θ*, φ*} are independent of K; see, e.g., Le et al. (2018). Nonetheless, except for the most trivial models, it is not usually the case that qφ contains the true posteriors, and Burda et al. (2016) provide strong empirical evidence that setting K > 1 leads to significant empirical gains over the VAE in terms of learning the generative model. Optimizing tighter bounds is usually empirically associated with better models pθ in terms of marginal likelihood on held-out data. Other related approaches extend this to sequential Monte Carlo (SMC) (Maddison et al., 2017; Le et al., 2018; Naesseth et al., 2018) or change the lower bound that is optimized to reduce the bias (Li & Turner, 2016; Bamler et al., 2017). A second, unrelated, approach is to tighten the bound by improving the expressiveness of qφ (Salimans et al., 2015; Tran et al., 2015; Rezende & Mohamed, 2015; Kingma et al., 2016; Maaløe et al., 2016; Ranganath et al., 2016). In this work, we focus on the former, algorithmic, approaches to tightening bounds.

## 3 Assessing the Signal-to-Noise Ratio of the Gradient Estimators

Because it is not feasible to analytically optimize any ELBO in complex models, the effectiveness of any particular choice of ELBO is linked to our ability to numerically solve the resulting optimization problem. This motivates us to examine the effect K has on the variance and magnitude of the gradient estimates of IWAE for the two networks. More generally, we study IWAE gradient estimators constructed as the average of M estimates, each built from K independent particles. We present a result characterizing the asymptotic signal-to-noise ratio in M and K. For the standard case of M = 1, our result shows that the signal-to-noise ratio of the reparameterization gradients of the inference network for the IWAE decreases at the rate O(1/√K).

As estimating the ELBO requires a Monte Carlo estimation of an expectation over z, we have two sample sizes to tune for the estimate: the number of samples M used for Monte Carlo estimation of the ELBO and the number of importance samples K used in the bound construction. Here M does not change the true value of ∇θ,φ ELBO, only our variance in estimating it, while changing K changes the ELBO itself, with larger K leading to tighter bounds (Burda et al., 2016). Presuming that reparameterization is possible, we can express our gradient estimate in the general form

$$\Delta_{M,K} := \frac{1}{M}\sum_{m=1}^{M} \nabla_{\theta,\phi} \log \frac{1}{K}\sum_{k=1}^{K} w_{m,k}, \tag{3}$$

where wm,k = pθ(zm,k, x)/qφ(zm,k|x) and zm,k ∼ qφ(zm,k|x) i.i.d. Thus, for a fixed budget of T = MK samples, we have a family of estimators, with the cases K = 1 and M = 1 corresponding respectively to the VAE and IWAE objectives. We will use ∆M,K(θ) to refer to gradient estimates with respect to θ and ∆M,K(φ) for those with respect to φ.
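The whole family of estimators in Eq. (3) reduces to a single numerically stable log-sum-exp computation. The sketch below assumes a hypothetical helper `log_weight(x)` that returns one reparameterized log wm,k = log pθ(z, x) − log qφ(z|x), such as the quantity computed in the VAE sketch above; back-propagating through the returned scalar yields ∆M,K.

```python
import math
import torch

def delta_objective(x, log_weight, M, K):
    # Build the (M, K) matrix of log w_{m,k} from independent particles.
    log_w = torch.stack([torch.stack([log_weight(x) for _ in range(K)])
                         for _ in range(M)])
    # log (1/K) sum_k w_{m,k} for each of the M groups, computed stably.
    log_Z_hat = torch.logsumexp(log_w, dim=1) - math.log(K)
    # Averaging over m gives the objective whose gradient is Delta_{M,K};
    # K = 1 recovers the VAE objective and M = 1 the IWAE objective.
    return log_Z_hat.mean()
```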
Variance is not always a good barometer for the effectiveness of a gradient estimation scheme; estimators with small expected values need proportionally smaller variances to be estimated accurately. In the case of IWAE, where changes in K simultaneously affect both the variance and the expected value, the quality of the estimator for learning can actually worsen as the variance decreases. To see why, consider the marginal likelihood estimates Ẑm,K = (1/K) Σ_{k=1}^{K} wm,k. Because these become exact (and thus independent of the proposal) as K → ∞, it must be the case that lim_{K→∞} ∆M,K(φ) = 0. Thus as K becomes large, the expected value of the gradient must decrease along with its variance, such that the variance relative to the problem scaling need not actually improve.

To investigate this formally, we introduce the signal-to-noise ratio (SNR), defining it to be the absolute value of the expected estimate scaled by its standard deviation:

$$\mathrm{SNR}_{M,K}(\theta) = \left|\, \mathbb{E}\left[\Delta_{M,K}(\theta)\right] \big/ \sigma\left[\Delta_{M,K}(\theta)\right] \right| \tag{4}$$

where σ[·] denotes the standard deviation of a random variable. The SNR is defined separately on each dimension of the parameter vector, and similarly for SNRM,K(φ). It provides a measure of the relative accuracy of the gradient estimates. Though a high SNR does not always indicate a good SGA scheme (as the target objective itself might be poorly chosen), a low SNR is always problematic, as it indicates that the gradient estimates are dominated by noise: if SNR → 0 then the estimates become completely random. We are now ready to state our main theoretical result: SNRM,K(θ) = O(√(MK)) and SNRM,K(φ) = O(√(M/K)).

Theorem 1. Assume that when M = K = 1, the expected gradients, the variances of the gradients, and the first four moments of w1,1, ∇θw1,1, and ∇φw1,1 are all finite, and that the variances are also non-zero. Then the signal-to-noise ratios of the gradient estimates converge at the following rates:

$$\mathrm{SNR}_{M,K}(\theta) = \sqrt{M}\,\left|\frac{\sqrt{K}\,\nabla_\theta Z - \frac{1}{\sqrt{K}}\,\nabla_\theta\!\left(\frac{\operatorname{Var}[w_{1,1}]}{2Z}\right) + O\!\left(\frac{1}{K^{3/2}}\right)}{\sqrt{\mathbb{E}\!\left[w_{1,1}^{2}\left(\nabla_\theta \log w_{1,1} - \nabla_\theta \log Z\right)^{2}\right]} + O\!\left(\frac{1}{K}\right)}\right| \tag{5}$$

$$\mathrm{SNR}_{M,K}(\phi) = \sqrt{\frac{M}{K}}\,\left|\frac{\nabla_\phi \operatorname{Var}[w_{1,1}] + O\!\left(\frac{1}{K}\right)}{2Z\,\sigma\!\left[\nabla_\phi w_{1,1}\right] + O\!\left(\frac{1}{\sqrt{K}}\right)}\right| \tag{6}$$

where Z := pθ(x) is the true marginal likelihood.

Proof. We give an intuitive demonstration of the result here and provide a formal proof in Appendix A. The effect of M on the SNR follows from using the law of large numbers on the random variable ∇θ,φ log Ẑm,K: the overall expectation is independent of M and the variance reduces at a rate O(1/M). The effect of K is more complicated, but is perhaps most easily seen by noting that (Burda et al., 2016)

$$\nabla_{\theta,\phi} \log \hat{Z}_{m,K} = \sum_{k=1}^{K} \frac{w_{m,k}}{\sum_{\ell=1}^{K} w_{m,\ell}}\, \nabla_{\theta,\phi} \log w_{m,k},$$

such that ∇θ,φ log Ẑm,K can be interpreted as a self-normalized importance sampling estimate. We can, therefore, invoke the known result (see e.g. Hesterberg (1988)) that the bias of a self-normalized importance sampler converges at a rate O(1/K) and the standard deviation at a rate O(1/√K). We thus see that the SNR converges at a rate O((1/K)/(1/√K)) = O(1/√K) if the asymptotic gradient is 0, and O(1/(1/√K)) = O(√K) otherwise, giving the convergence rates in the φ and θ cases respectively.

The implication of these rates is that increasing M is monotonically beneficial to the SNR for both θ and φ, but that increasing K is beneficial to the former and detrimental to the latter. We emphasize that this means the SNR for the IWAE inference network gets worse as we increase K: this is not just an opportunity cost from the fact that we could have increased M instead; increasing the total number of samples used in the estimator actually worsens the SNR!
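The SNR of Eq. (4) is straightforward to estimate by repeated sampling, which is how we proceed empirically in Section 4. A minimal sketch, assuming a hypothetical `sample_gradient()` that returns one realization of ∆M,K for the parameters of interest:

```python
import torch

def estimate_snr(sample_gradient, n_repeats=10_000):
    # Draw repeated realizations of the gradient estimator.
    grads = torch.stack([sample_gradient() for _ in range(n_repeats)])
    # Per-dimension |E[Delta]| / sigma[Delta], as in Eq. (4).
    return grads.mean(dim=0).abs() / grads.std(dim=0)
```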
### 3.1 Asymptotic Direction

An important point of note is that the direction of the true inference network gradients becomes independent of K as K becomes large. Namely, because we have, as an intermediary result from deriving the SNRs, that

$$\mathbb{E}\left[\Delta_{M,K}(\phi)\right] = -\nabla_\phi\!\left(\frac{\operatorname{Var}[w_{1,1}]}{2KZ^{2}}\right) + O\!\left(\frac{1}{K^{2}}\right), \tag{7}$$

we see that the expected gradient points in the direction of −∇φ Var[w1,1] as K → ∞. This direction is rather interesting: it implies that as K → ∞, the optimal φ is that which minimizes the variance of the weights. This is well known to be the optimal importance sampling distribution in terms of estimating the marginal likelihood (Owen, 2013). Given that the role of the inference network during training is to estimate the marginal likelihood, this is thus arguably exactly what we want to optimize for. As such, this result, which complements those of Cremer et al. (2017), suggests that increasing K provides a preferable target in terms of the direction of the true inference network gradients. We thus see that there is a trade-off with the fact that increasing K also diminishes the SNR, reducing the estimates to pure noise if K is set too high. In the absence of other factors, there may thus be a sweet spot for setting K.

### 3.2 Multiple Data Points

Typically when training deep generative models, one does not optimize a single ELBO but instead its average over multiple data points, i.e.

$$J(\theta, \phi) := \frac{1}{N} \sum_{n=1}^{N} \mathrm{ELBO}_{\mathrm{IS}}\left(\theta, \phi, x^{(n)}\right). \tag{8}$$

Our results extend to this setting because the z are drawn independently for each x(n), so

$$\mathbb{E}\left[\frac{1}{N}\sum_{n=1}^{N} \Delta^{(n)}_{M,K}\right] = \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}\left[\Delta^{(n)}_{M,K}\right], \tag{9}$$

$$\operatorname{Var}\left[\frac{1}{N}\sum_{n=1}^{N} \Delta^{(n)}_{M,K}\right] = \frac{1}{N^{2}}\sum_{n=1}^{N} \operatorname{Var}\left[\Delta^{(n)}_{M,K}\right]. \tag{10}$$

We thus also see that if we are using mini-batches, such that N is a chosen parameter and the x(n) are drawn from the empirical data distribution, then the SNRs of ∆N,M,K := (1/N) Σ_{n=1}^{N} ∆(n)M,K scale as √N, i.e. SNRN,M,K(θ) = O(√(NMK)) and SNRN,M,K(φ) = O(√(NM/K)). Therefore increasing N has the same ubiquitous benefit as increasing M. In the rest of the paper, we will implicitly be considering the SNRs for ∆N,M,K, but will omit the dependency on N to simplify the notation.

## 4 Empirical Confirmation

Our convergence results hold exactly in relation to M (and N) but are only asymptotic in K, due to the higher-order terms. Their applicability should therefore be viewed with a healthy degree of skepticism in the small-K regime. With this in mind, we now present empirical support for our theoretical results, and test how well they hold in the small-K regime, using a simple Gaussian model for which we can analytically calculate the ground truth. Consider a family of generative models with R^D-valued latent variables z and observed variables x:

$$z \sim \mathcal{N}(z;\, \mu, I), \qquad x \mid z \sim \mathcal{N}(x;\, z, I), \tag{11}$$

which is parameterized by θ := µ. Let the inference network be parameterized by φ = (A, b), A ∈ R^{D×D}, b ∈ R^D, where qφ(z|x) = N(z; Ax + b, (2/3)I). Given a dataset (x(n))_{n=1,...,N}, we can analytically calculate the optimum of our target J(θ, φ), as explained in Appendix B, giving θ* := µ* = (1/N) Σ_{n=1}^{N} x(n) and φ* := (A*, b*), where A* = I/2 and b* = µ*/2. Though this will not be the case in general, for this particular problem the optimal proposal is independent of K. However, the expected gradients for the inference network still change with K.
To conduct our investigation, we randomly generated a synthetic dataset from the model with D = 20 dimensions, N = 1024 data points, and a true model parameter value µtrue that was itself randomly generated from a unit Gaussian, i.e. µtrue ∼ N(µtrue; 0, I). We then considered the gradient at a random point in the parameter space close to the optimum (we also consider a point far from the optimum in Appendix C.3). Namely, each dimension of each parameter was randomly offset from its optimum value using a zero-mean Gaussian with standard deviation 0.01. We then calculated empirical estimates of the ELBO gradients for IWAE, where M = 1 is held fixed and we increase K, and for VAE, where K = 1 is held fixed and we increase M. In all cases we calculated 10⁴ such estimates and used these samples to provide empirical estimates for, amongst other things, the mean and standard deviation of the estimator, and thereby an empirical estimate for the SNR.

We start by examining the qualitative behavior of the different gradient estimators as K increases, as shown in Figure 1. This shows histograms of the IWAE gradient estimators for a single parameter of the inference network (left) and generative network (right). We first see in Figure 1a that as K increases, both the magnitude and the standard deviation of the estimator decrease for the inference network, with the former decreasing faster. This matches the qualitative behavior of our theoretical result, with the SNR diminishing as K increases. In particular, the probability that the gradient is positive or negative becomes roughly equal for larger values of K, meaning the optimizer is equally likely to increase as decrease the inference network parameters at the next iteration. By contrast, for the generative network, IWAE converges towards a non-zero gradient, such that, even though the SNR initially decreases with K, it then rises again, with a very clear gradient signal for K = 1000.

Figure 1: Histograms of gradient estimates ∆M,K for (a) the inference network and (b) the generative network using the IWAE (M = 1) objective with different values of K.

To provide a more rigorous analysis, we next directly examine the convergence of the SNR. Figure 2 shows the convergence of the estimators with increasing M and K. The observed rates for the inference network (Figure 2a) correspond to our theoretical results, with the suggested rates observed all the way back to K = M = 1. As expected, we see that as M increases, so does SNRM,K(b), but as K increases, SNRM,K(b) reduces. In Figure 2b, we see that the theoretical convergence for SNRM,K(µ) is again observed exactly for variations in M, but a more unusual behavior is seen for variations in K, where the SNR initially decreases before starting to increase again, eventually exhibiting behavior consistent with the theoretical result for large enough K.

Figure 2: Convergence of the SNR of gradient estimates with increasing M and K for (a) the inference network and (b) the generative network. Different lines correspond to different dimensions of the parameter vectors. Shown in blue is the IWAE, where we keep M = 1 fixed and increase K. Shown in red is the VAE, where K = 1 is fixed and we increase M. The black and green dashed lines show the expected convergence rates from our theoretical results, representing gradients of 1/2 and −1/2 respectively.
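This experiment is straightforward to reproduce in miniature. The following sketch estimates SNR1,K(b) for the model of Eq. (11) at a perturbed optimum; it is an illustrative reconstruction rather than the code behind the reported results, with reduced settings (a single data point and 10³ rather than 10⁴ repeats) so that it runs quickly. Re-running with larger K should show the per-dimension SNR of b shrinking at roughly the O(1/√K) rate of Theorem 1.

```python
import math
import torch

D, N, K, n_repeats = 20, 1024, 64, 1000
torch.manual_seed(0)

# Synthetic data from the model of Eq. (11): z ~ N(mu, I), x|z ~ N(z, I).
mu_true = torch.randn(D)
x = mu_true + torch.randn(N, D) + torch.randn(N, D)
mu_star = x.mean(dim=0)                                # theta* = mu*

# Parameters offset from their optima (A* = I/2, b* = mu*/2) by N(0, 0.01^2) noise.
A = 0.5 * torch.eye(D) + 0.01 * torch.randn(D, D)
b = (0.5 * mu_star + 0.01 * torch.randn(D)).requires_grad_(True)
mu = mu_star + 0.01 * torch.randn(D)

def grad_b(xn):
    """One realization of Delta_{1,K}(b) for a single data point xn."""
    mean = A @ xn + b
    z = mean + math.sqrt(2 / 3) * torch.randn(K, D)    # z ~ q_phi(z|x) = N(Ax+b, (2/3)I)
    log_q = -0.5 * ((z - mean) ** 2 / (2 / 3) + math.log(2 * math.pi * 2 / 3)).sum(-1)
    log_p = -0.5 * ((z - mu) ** 2 + math.log(2 * math.pi)).sum(-1) \
            - 0.5 * ((xn - z) ** 2 + math.log(2 * math.pi)).sum(-1)
    elbo = torch.logsumexp(log_p - log_q, dim=0) - math.log(K)
    return torch.autograd.grad(elbo, b)[0]

grads = torch.stack([grad_b(x[0]) for _ in range(n_repeats)])
print(grads.mean(0).abs() / grads.std(0))              # empirical SNR per dimension
```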
The driving factor for this unusual behavior is that here E[∆M,∞(µ)] has a smaller magnitude than (and opposite sign to) E[∆M,1(µ)] (see Figure 1b). If we think of the estimators for all values of K as biased estimates for E[∆M,∞(µ)], we see from our theoretical results that this bias decreases faster than the standard deviation. Consequently, while the magnitude of this bias remains large compared to E[∆M,∞(µ)], it is the predominant component in the true gradient and we see similar SNR behavior as in the inference network. Note that this does not mean that the estimates are getting worse for the generative network: as we increase K, our bound is getting tighter and our estimates closer to the true gradient for the target that we actually want to optimize, ∇µ log Z. See Appendix C.2 for more details.

As we previously discussed, it is also the case that increasing K could be beneficial for the inference network, even if it reduces the SNR, by improving the direction of the expected gradient. However, as we will now show, the SNR is, for this problem, the dominant effect for the inference network.

### 4.1 Directional Signal-to-Noise Ratio

As a reassurance that our chosen definition of the SNR is appropriate for the problem at hand, and to examine the effect of multiple dimensions explicitly, we now also consider an alternative definition of the SNR that is similar (though distinct) to that used by Roberts & Tedrake (2009). We refer to this as the directional SNR (DSNR). At a high level, we define the DSNR by splitting each gradient estimate into two component vectors, one parallel to the true gradient and one perpendicular, then taking the expectation of the ratio of their magnitudes. More precisely, we define u = E[∆M,K] / ‖E[∆M,K]‖₂ as the true normalized gradient direction, and then the DSNR as

$$\mathrm{DSNR}_{M,K} = \mathbb{E}\left[\frac{\left\|\Delta^{\parallel}\right\|_2}{\left\|\Delta^{\perp}\right\|_2}\right] \quad\text{where}\quad \Delta^{\parallel} = \left(\Delta_{M,K}^{T}\, u\right) u \ \text{ and } \ \Delta^{\perp} = \Delta_{M,K} - \Delta^{\parallel}. \tag{12}$$

The DSNR thus provides a measure of the expected proportion of the gradient that will point in the true direction. For perfect estimates of the gradients, DSNR → ∞, but unlike the SNR, arbitrarily bad estimates do not have DSNR = 0, because even random vectors will have a component of their gradient in the true direction.

The convergence of the DSNR is shown in Figure 3, for which the true normalized gradient u has been estimated empirically, noting that this varies with K. We see a similar qualitative behavior to the SNR, with the gradients of IWAE for the inference network degrading to having the same directional accuracy as drawing a random vector. Interestingly, the DSNR seems to follow the same asymptotic convergence behavior as the SNR for both networks in M (as shown by the dashed lines), even though we have no theoretical result to suggest this should occur.

Figure 3: Convergence of the directional SNR of gradient estimates with increasing M and K for (a) the inference network and (b) the generative network. The solid lines show the estimated DSNR and the shaded regions the interquartile range of the individual ratios. Also shown for reference is the DSNR for a randomly generated vector where each component is drawn from a unit Gaussian.

As our theoretical results suggest that the direction of the true gradients corresponds to targeting an improved objective as K increases, we now examine whether this or the change in the SNR is the dominant effect. To this end, we repeat our calculations for the DSNR, but take u as the target direction of the gradient for K = 1000. This provides a measure of how varying M and K affects the quality of the gradient directions as biased estimators for E[∆1,1000] / ‖E[∆1,1000]‖₂. As shown in Figure 4, increasing K is still detrimental for the inference network by this metric, even though it brings the expected gradient estimate closer to the target gradient. By contrast, increasing K is now monotonically beneficial for the generative network. Increasing M leads to initial improvements for the inference network before plateauing due to the bias of the estimator. For the generative network, increasing M has little impact, with the bias being the dominant factor throughout.

Figure 4: Convergence of the DSNR when the target gradient is taken as u = E[∆1,1000] / ‖E[∆1,1000]‖₂, for (a) the inference network and (b) the generative network. Conventions as per Figure 3.
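The DSNR of Eq. (12) can be estimated from the same repeated gradient samples used for the SNR. A minimal sketch, where leaving `u` unset estimates it empirically as in Figure 3, and passing an explicit `u` reproduces the fixed-target variant of Figure 4:

```python
import torch

def estimate_dsnr(grads, u=None):
    # grads: (n_repeats, n_params) samples of Delta_{M,K}. If no target direction
    # is given, estimate u = E[Delta] / ||E[Delta]||_2 from the samples themselves.
    if u is None:
        u = grads.mean(dim=0)
        u = u / u.norm()
    parallel = (grads @ u).unsqueeze(-1) * u       # Delta_parallel = (Delta^T u) u
    perpendicular = grads - parallel               # Delta_perp = Delta - Delta_parallel
    # Expectation of the ratio of magnitudes, as in Eq. (12).
    return (parallel.norm(dim=-1) / perpendicular.norm(dim=-1)).mean()
```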
Though this metric is not an absolute measure of the performance of the SGA scheme, e.g. because high bias may be more detrimental than high variance, it is nonetheless a powerful result in suggesting that increasing K can be detrimental to learning the inference network.

## 5 New Estimators

Based on our theoretical results, we now introduce three new algorithms that address the issue of diminishing SNR for the inference network. Our first, MIWAE, is exactly equivalent to the general formulation given in (3), with the distinction from previous approaches coming from the fact that it takes both M > 1 and K > 1. The motivation for this is that, because our inference network SNR increases as O(√(M/K)), we should be able to mitigate the issues increasing K has on the SNR by also increasing M. For fairness, we will keep our overall budget T = MK fixed, but we will show that, given this budget, the optimal value for M is often not 1. In practice, we expect that it will often be beneficial to increase the mini-batch size N rather than M for MIWAE; as we showed in Section 3.2, this has the same effect on the SNR. Nonetheless, MIWAE forms an interesting reference method for testing our theoretical results and, as we will show, it can offer improvements over IWAE for a given N.

Our second algorithm, CIWAE, uses a convex combination of the IWAE and VAE bounds, namely

$$\mathrm{ELBO}_{\mathrm{CIWAE}} = \beta\, \mathrm{ELBO}_{\mathrm{VAE}} + (1-\beta)\, \mathrm{ELBO}_{\mathrm{IWAE}}, \tag{13}$$

where β ∈ [0, 1] is a combination parameter. It is trivial to see that ELBOCIWAE is a lower bound on the log marginal that is tighter than the VAE bound but looser than the IWAE bound. We then employ the following estimator

$$\Delta^{C}_{K,\beta} = \nabla_{\theta,\phi}\left(\beta\, \frac{1}{K}\sum_{k=1}^{K} \log w_k + (1-\beta)\, \log \frac{1}{K}\sum_{k=1}^{K} w_k\right), \tag{14}$$

where we use the same wk for both terms. The motivation for CIWAE is that, if we set β to a relatively small value, the objective will behave mostly like IWAE, except when the expected IWAE gradient becomes very small. When this happens, the VAE component should take over and alleviate SNR issues: the asymptotic SNR of ∆C K,β for φ is O(√(MK)), because the VAE component has non-zero expectation in the limit K → ∞.

Our results suggest that what is good for the generative network, in terms of setting K, is often detrimental for the inference network. It is therefore natural to question whether it is sensible to always use the same target for both the inference and generative networks. Motivated by this, our third method, PIWAE, uses the IWAE target when training the generative network, but the MIWAE target for training the inference network. We thus have

$$\Delta_{K}(\theta) = \nabla_\theta \log \frac{1}{K}\sum_{k=1}^{K} w_k, \tag{15a}$$

$$\Delta_{M,L}(\phi) = \frac{1}{M}\sum_{m=1}^{M} \nabla_\phi \log \frac{1}{L}\sum_{\ell=1}^{L} w_{m,\ell}, \tag{15b}$$

where we will generally set K = ML so that the same weights can be used for both gradients.
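A minimal sketch of the three objectives follows, operating on a shared flat tensor `log_w` of K = M·L reparameterized log weights for a single data point (an assumed input, computed as in Section 3); maximizing the returned scalars with autograd yields the corresponding gradient estimators.

```python
import math
import torch

def miwae(log_w, M, K):
    # Eq. (3) with M, K > 1: average over M groups of the log-mean of K weights.
    log_w = log_w.reshape(M, K)
    return (torch.logsumexp(log_w, dim=1) - math.log(K)).mean()

def ciwae(log_w, beta):
    # Eqs. (13)-(14): convex combination of the VAE and IWAE bounds, with the
    # same weights w_k shared between the two terms.
    K = log_w.numel()
    elbo_vae = log_w.mean()
    elbo_iwae = torch.logsumexp(log_w, dim=0) - math.log(K)
    return beta * elbo_vae + (1.0 - beta) * elbo_iwae

def piwae(log_w, M, L):
    # Eq. (15): IWAE over all K = M*L weights for theta, MIWAE for phi.
    target_theta = torch.logsumexp(log_w, dim=0) - math.log(log_w.numel())
    target_phi = miwae(log_w, M, L)
    return target_theta, target_phi
```

In a PIWAE training step, one would take the gradient of `target_theta` with respect to the generative parameters only and of `target_phi` with respect to the inference parameters only, e.g. via two calls to `torch.autograd.grad`.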
### 5.1 Experiments

We now use our new estimators to train deep generative models for the MNIST digits dataset (LeCun et al., 1998). For this, we duplicated the architecture and training schedule outlined in Burda et al. (2016). In particular, all networks were trained and evaluated using their stochastic binarization. For all methods we set a budget of T = 64 weights in the target estimate for each datapoint in the mini-batch.

To assess different aspects of the training performance, we consider three different metrics: ELBOIWAE with K = 64, ELBOIWAE with K = 5000, and the latter of these minus the former. All reported metrics are evaluated on the test data. The motivation for the ELBOIWAE with K = 64 metric, denoted IWAE-64, is that this is the target used for training the IWAE, and so if another method does better on this metric than the IWAE, this is a clear indicator that SNR issues of the IWAE estimator have degraded its performance. In fact, this would demonstrate that, from a practical perspective, using the IWAE estimator is sub-optimal, even if our explicit aim is to optimize the IWAE bound. The second metric, ELBOIWAE with K = 5000, denoted log p̂(x), is used as a surrogate for estimating the log marginal likelihood and thus provides an indicator of the fidelity of the learned generative model.

The third metric is an estimator for the divergence implicitly targeted by the IWAE. Namely, as shown by Le et al. (2018), ELBOIWAE can be interpreted as

$$\mathrm{ELBO}_{\mathrm{IWAE}} = \log p_\theta(x) - \mathrm{KL}\left(Q_\phi(z_{1:K}|x)\,\|\,P_\theta(z_{1:K}|x)\right), \tag{16}$$

where

$$Q_\phi(z_{1:K}|x) := \prod_{k=1}^{K} q_\phi(z_k|x), \quad\text{and} \tag{17}$$

$$P_\theta(z_{1:K}|x) := \frac{1}{K}\sum_{k=1}^{K} \frac{\prod_{\ell=1}^{K} q_\phi(z_\ell|x)}{q_\phi(z_k|x)}\, p_\theta(z_k|x). \tag{18}$$

Thus we can estimate KL(Qφ(z|x)||Pθ(z|x)) using log p̂(x) − IWAE-64, providing a metric for the divergence between the inference network and the posterior implicitly targeted by the IWAE. We use this instead of KL(qφ(z|x)||pθ(z|x)) because the latter can be a deceptive metric for inference network fidelity. For example, it tends to prefer a qφ(z|x) that covers only one of the posterior modes, rather than encompassing all of them. As we showed in Section 3.1, the implied target of the true gradients for the inference network improves as K increases, and so KL(Qφ(z|x)||Pθ(z|x)) should be a more reliable metric of inference network performance.
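Assuming a hypothetical helper `log_weight(x, K)` that returns a length-K tensor of log importance weights for a test point, the three metrics can be computed as in the sketch below.

```python
import math
import torch

def elbo_iwae(x, log_weight, K):
    return torch.logsumexp(log_weight(x, K), dim=0) - math.log(K)

def evaluation_metrics(x, log_weight):
    iwae_64 = elbo_iwae(x, log_weight, 64)        # the IWAE training target
    log_p_hat = elbo_iwae(x, log_weight, 5000)    # surrogate for log p_theta(x)
    kl_estimate = log_p_hat - iwae_64             # estimates KL(Q||P) via Eq. (16)
    return iwae_64, log_p_hat, kl_estimate
```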
Figure 5: Convergence of evaluation metrics on the test set with increased training time for IWAE, CIWAE (β = 0.5), PIWAE (8, 8), MIWAE (8, 8), and the VAE: (a) IWAE-64; (b) log p̂(x); (c) KL(Qφ(z|x)||Pθ(z|x)). All lines show mean ± standard deviation over 4 runs with different random initializations. Larger values are preferable for each plot.

Figure 5 shows the convergence of these metrics for each algorithm. Here we have considered the middle value for each of the parameters, namely K = M = 8 for PIWAE and MIWAE, and β = 0.5 for CIWAE. We see that PIWAE and MIWAE both comfortably outperformed, and CIWAE slightly outperformed, IWAE in terms of the IWAE-64 metric, despite IWAE being directly trained on this target. In terms of log p̂(x), PIWAE gave the best performance, followed by IWAE. For the KL, we see that the VAE performed best, followed by MIWAE, with IWAE performing the worst. We note here that the KL is not an exact measure of the inference network performance, as it also depends on the generative model. As such, the apparent superior performance of the VAE may be because it produces a simpler model, as per the observations of Burda et al. (2016), which is in turn easier to learn an inference network for. Critically though, PIWAE improves this metric whilst also improving generative network performance, such that this reasoning no longer applies. Similar behavior is observed for MIWAE and CIWAE for different parameter settings (see Appendix D).

We next considered tuning the parameters for each of our algorithms, as shown in Figure 6, for which we look at the final metric values after training. Table 1 further summarizes the performance for certain selected parameter settings.

Figure 6: Test set performance of (a) MIWAE, (b) CIWAE, and (c) PIWAE relative to IWAE in terms of the IWAE-64 (top), log p̂(x) (middle), and KL(Qφ(z|x)||Pθ(z|x)) (bottom) metrics; (a) varies (M, K) over (1, 64), (4, 16), (8, 8), (16, 4), (64, 1); (b) varies β over [0, 1]; (c) varies (M, K) over (1, 64), (2, 32), (4, 16), (8, 8), (16, 4), (32, 2), (64, 1). All dots are the difference in the metric relative to IWAE, and the dotted line is the IWAE baseline. Note that in all cases, the far left of each plot corresponds to settings equivalent to IWAE.

Table 1: Mean final test set performance ± standard deviation over 4 runs. Numbers in brackets indicate (M, K). The best result is shown in red, while bold results are not statistically significantly different from the best result at the 5% level of a Welch's t-test.

| Metric | IWAE | PIWAE (4, 16) | PIWAE (8, 8) | MIWAE (4, 16) | MIWAE (8, 8) | CIWAE β = 0.05 | CIWAE β = 0.5 | VAE |
|---|---|---|---|---|---|---|---|---|
| IWAE-64 | -86.11 ± 0.10 | -85.68 ± 0.06 | -85.74 ± 0.07 | -85.60 ± 0.07 | -85.69 ± 0.04 | -85.91 ± 0.11 | -86.08 ± 0.08 | -86.69 ± 0.08 |
| log p̂(x) | -84.52 ± 0.02 | -84.40 ± 0.17 | -84.46 ± 0.06 | -84.56 ± 0.05 | -84.97 ± 0.10 | -84.57 ± 0.09 | -85.24 ± 0.08 | -86.21 ± 0.19 |
| KL(Q‖P) | 1.59 ± 0.10 | 1.27 ± 0.18 | 1.28 ± 0.09 | 1.04 ± 0.08 | 0.72 ± 0.11 | 1.34 ± 0.14 | 0.84 ± 0.11 | 0.47 ± 0.20 |

For MIWAE we see that as we increase M, the log p̂(x) metric gets worse, while the KL gets better. The IWAE-64 metric initially increases with M, before reducing again from M = 16 to M = 64, suggesting that intermediate values for M (i.e. M ≠ 1, K ≠ 1) give a better trade-off. For PIWAE, similar behavior to MIWAE is seen for the IWAE-64 and KL metrics. However, unlike for MIWAE, we see that log p̂(x) initially increases with M, such that PIWAE provides uniform improvement over IWAE for the M = 2, 4, 8, and 16 cases. CIWAE exhibits similar behavior when increasing β as MIWAE does when increasing M, but there appears to be a larger degree of noise in the evaluations, while the optimal value of β, though non-zero, seems to be closer to IWAE than for the other algorithms.

As an additional measure of the performance of the inference network that is distinct from any of the training targets, we also considered the effective sample size (ESS) (Owen, 2013) for the fully trained networks, defined as

$$\mathrm{ESS} = \left(\sum\nolimits_{k=1}^{K} w_k\right)^{2} \Big/ \sum\nolimits_{k=1}^{K} w_k^{2}. \tag{19}$$

The ESS is a measure of how many unweighted samples would be equivalent to the weighted sample set; a low ESS indicates that the inference network is struggling to perform effective inference for the generative network.
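A minimal sketch of Eq. (19), evaluated in log space for numerical stability:

```python
import torch

def effective_sample_size(log_w):
    # (sum_k w_k)^2 / sum_k w_k^2, computed via logsumexp to avoid overflow.
    return torch.exp(2.0 * torch.logsumexp(log_w, dim=0)
                     - torch.logsumexp(2.0 * log_w, dim=0))
```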
The results, given in Figure 7, show that the ESSs for CIWAE, MIWAE, and the VAE were all significantly larger than for IWAE and PIWAE, with IWAE giving a particularly poor ESS.

Figure 7: Violin plots of ESS estimates for each image of MNIST for IWAE, CIWAE, PIWAE, MIWAE, and the VAE, normalized by the number of samples drawn. A violin plot uses a kernel density plot on each side; thicker means more MNIST images whose qφ achieves that ESS.

Our final experiment looks at the SNR values for the inference networks during training. Here we took a number of different neural network gradient weights at different layers of the network and calculated empirical estimates of their SNRs at various points during the training. We then averaged these estimates over the different network weights, the results of which are given in Figure 8. This clearly shows the low SNR exhibited by the IWAE inference network, suggesting that our results from the simple Gaussian experiments carry over to the more complex neural network domain.

Figure 8: SNR of inference network weights during training for IWAE, CIWAE (β = 0.5), PIWAE (8, 8), MIWAE (8, 8), and the VAE. All lines are mean ± standard deviation over 20 randomly chosen weights per layer.

## 6 Conclusions

We have provided theoretical and empirical evidence that algorithmic approaches to increasing the tightness of the ELBO independently of the expressiveness of the inference network can be detrimental to learning, by reducing the signal-to-noise ratio of the inference network gradients. Experiments on a simple latent variable model confirmed our theoretical findings. We then exploited these insights to introduce three estimators, PIWAE, MIWAE, and CIWAE, and showed that each can deliver improvements over IWAE, even when the metric used for this assessment is the IWAE target itself. In particular, each was able to deliver improvement in the training of the inference network, without any reduction in the quality of the learned generative network. Whereas MIWAE and CIWAE mostly allow for balancing the requirements of the inference and generative networks, PIWAE appears to be able to offer simultaneous improvements to both, with the improved training of the inference network having a knock-on effect on the generative network. Key to achieving this is its use of separate targets for the two networks, opening up interesting avenues for future work.

## Acknowledgments

TR and YWT are supported in part by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013) / ERC grant agreement no. 617071. TAL is supported by a Google studentship, project code DF6700. MI is supported by the UK EPSRC CDT in Autonomous Intelligent Machines and Systems. CJM is funded by a DeepMind Scholarship. FW is supported under DARPA PPAML through the U.S. AFRL under Cooperative Agreement FA8750-14-2-0006, Sub Award number 61160290-111668.

## References

Bamler, R., Zhang, C., Opper, M., and Mandt, S. Perturbative black box variational inference. arXiv preprint arXiv:1709.07433, 2017.

Bourlard, H. and Kamp, Y. Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59(4-5):291–294, 1988.

Burda, Y., Grosse, R., and Salakhutdinov, R. Importance weighted autoencoders. In ICLR, 2016.

Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. Variational lossy autoencoder. In ICLR, 2017.

Cremer, C., Morris, Q., and Duvenaud, D. Reinterpreting importance-weighted autoencoders. arXiv preprint arXiv:1704.02916, 2017.

Fort, G., Gobet, E., and Moulines, E. MCMC design-based non-parametric regression for rare event. Application to nested risk computations. Monte Carlo Methods and Applications, 23(1):21–42, 2017.
Gregor, K., Besse, F., Rezende, D. J., Danihelka, I., and Wierstra, D. Towards conceptual compression. In NIPS, 2016.

Hesterberg, T. C. Advances in Importance Sampling. PhD thesis, Stanford University, 1988.

Hinton, G. E. and Zemel, R. S. Autoencoders, minimum description length and Helmholtz free energy. In NIPS, 1994.

Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.

Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In ICLR, 2014.

Kingma, D. P., Salimans, T., and Welling, M. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

Le, T. A., Baydin, A. G., and Wood, F. Inference compilation and universal probabilistic programming. In AISTATS, 2017.

Le, T. A., Igl, M., Rainforth, T., Jin, T., and Wood, F. Auto-encoding sequential Monte Carlo. In ICLR, 2018.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Li, Y. and Turner, R. E. Rényi divergence variational inference. In NIPS, 2016.

Maaløe, L., Sønderby, C. K., Sønderby, S. K., and Winther, O. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.

Maddison, C. J., Lawson, D., Tucker, G., Heess, N., Norouzi, M., Mnih, A., Doucet, A., and Teh, Y. W. Filtering variational objectives. arXiv preprint arXiv:1705.09279, 2017.

Naesseth, C. A., Linderman, S. W., Ranganath, R., and Blei, D. M. Variational sequential Monte Carlo. In AISTATS, 2018.

Owen, A. B. Monte Carlo Theory, Methods and Examples. 2013.

Paige, B. and Wood, F. Inference networks for sequential Monte Carlo in graphical models. In ICML, 2016.

Rainforth, T. Automating Inference, Learning, and Design using Probabilistic Programming. PhD thesis, 2017.

Rainforth, T., Cornish, R., Yang, H., Warrington, A., and Wood, F. On nesting Monte Carlo estimators. In ICML, 2018.

Ranganath, R., Gerrish, S., and Blei, D. Black box variational inference. In AISTATS, 2014.

Ranganath, R., Tran, D., and Blei, D. Hierarchical variational models. In ICML, 2016.

Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In ICML, 2015.

Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

Roberts, J. W. and Tedrake, R. Signal-to-noise ratio analysis of policy gradient algorithms. In NIPS, 2009.

Salimans, T., Kingma, D., and Welling, M. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.

Tran, D., Ranganath, R., and Blei, D. M. The variational Gaussian process. arXiv preprint arXiv:1511.06499, 2015.

Turner, R. E. and Sahani, M. Two problems with variational expectation maximisation for time-series models. Bayesian Time Series Models, pp. 115–138, 2011.