# Amortized Inference Regularization

Rui Shu (Stanford University, ruishu@stanford.edu) · Hung H. Bui (DeepMind, buih@google.com) · Shengjia Zhao (Stanford University, sjzhao@stanford.edu) · Mykel J. Kochenderfer (Stanford University, mykel@stanford.edu) · Stefano Ermon (Stanford University, ermon@cs.stanford.edu)

32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.

## Abstract

The variational autoencoder (VAE) is a popular model for density estimation and representation learning. Canonically, the variational principle suggests preferring an expressive inference model so that the variational approximation is accurate. However, it is often overlooked that an overly expressive inference model can be detrimental to the test set performance of both the amortized posterior approximator and, more importantly, the generative density estimator. In this paper, we leverage the fact that VAEs rely on amortized inference and propose techniques for amortized inference regularization (AIR) that control the smoothness of the inference model. We demonstrate that, by applying AIR, it is possible to improve VAE generalization on both inference and generative performance. Our paper challenges the belief that amortized inference is simply a mechanism for approximating maximum likelihood training and illustrates that regularization of the amortization family provides a new direction for understanding and improving generalization in VAEs.

## 1 Introduction

Variational autoencoders are a class of generative models with widespread applications in density estimation, semi-supervised learning, and representation learning [1, 2, 3, 4]. A popular approach for training such models is to maximize the log-likelihood of the training data. However, maximum likelihood is often intractable due to the presence of latent variables. Variational Bayes resolves this issue by constructing a tractable lower bound of the log-likelihood and maximizing the lower bound instead. Classically, Variational Bayes introduces per-sample approximate proposal distributions that need to be optimized using a process called variational inference. However, per-sample optimization incurs a high computational cost. A key contribution of the variational autoencoding framework is the observation that the cost of variational inference can be amortized by using an amortized inference model that learns an efficient mapping from samples to proposal distributions. This perspective portrays amortized inference as a tool for efficiently approximating maximum likelihood training. Many techniques have since been proposed to expand the expressivity of the amortized inference model in order to better approximate maximum likelihood training [5, 6, 7, 8].

In this paper, we challenge the conventional role that amortized inference plays in variational autoencoders. For datasets where the generative model is prone to overfitting, we show that having an amortized inference model actually provides a new and effective way to regularize maximum likelihood training. Rather than making the amortized inference model more expressive, we propose instead to restrict the capacity of the amortization family. Through amortized inference regularization (AIR), we show that it is possible to reduce the inference gap and increase the log-likelihood performance on the test set.
We propose several techniques for AIR and provide extensive theoretical and empirical analyses of our proposed techniques when applied to the variational autoencoder and the importance-weighted autoencoder. By rethinking the role of the amortized inference model, amortized inference regularization provides a new direction for studying and improving the generalization performance of latent variable models.

## 2 Background and Notation

### 2.1 Variational Inference and the Evidence Lower Bound

Consider a joint distribution $p_\theta(x, z)$ parameterized by $\theta$, where $x \in X$ is observed and $z \in Z$ is latent. Given a uniform distribution $\hat p(x)$ over the dataset $D = \{x^{(i)}\}$, maximum likelihood estimation performs model selection using the objective

$$\max_\theta \; \mathbb{E}_{\hat p(x)}\left[\ln p_\theta(x)\right] = \max_\theta \; \mathbb{E}_{\hat p(x)}\left[\ln \int p_\theta(x, z)\, dz\right]. \tag{1}$$

However, marginalization of the latent variable is often intractable; to address this issue, it is common to employ the variational principle to maximize the following lower bound instead:

$$\ln p_\theta(x) \;\ge\; \ln p_\theta(x) - \min_{q \in Q} D\big(q(z) \,\|\, p_\theta(z \mid x)\big) \;=\; \max_{q \in Q} \mathbb{E}_{q(z)}\left[\ln \frac{p_\theta(x, z)}{q(z)}\right], \tag{2}$$

where $D$ is the Kullback-Leibler divergence and $Q$ is a variational family. This lower bound, commonly called the evidence lower bound (ELBO), converts log-likelihood estimation into a tractable optimization problem. Since the lower bound holds for any $q$, the variational family $Q$ can be chosen to ensure that $q(z)$ is easily computable, and the lower bound is optimized to select the best proposal distribution $q^*_x(z)$ for each $x \in D$.

### 2.2 Amortization and Variational Autoencoders

Kingma and Welling [1] and Rezende et al. [9] proposed to construct $p_\theta(x \mid z)$ using a parametric function $g_\theta \in G(P) : Z \to P$, where $P$ is some family of distributions over $x$ and $G$ is a family of functions indexed by the parameters $\theta$. To expedite training, they observed that it is possible to amortize the computational cost of variational inference by framing the per-sample optimization process as a regression problem; rather than solving for the optimal proposal $q^*_x(z)$ directly, they instead use a recognition model $f_\phi \in F(Q) : X \to Q$ to predict $q^*_x(z)$. The functions $(f_\phi, g_\theta)$ can be concisely represented as conditional distributions,

$$p_\theta(x \mid z) = g_\theta(z)(x), \tag{3}$$
$$q_\phi(z \mid x) = f_\phi(x)(z). \tag{4}$$

The use of amortized inference yields the variational autoencoder, which is trained to maximize the variational autoencoder objective

$$\max_{\theta, \phi} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_{q_\phi(z \mid x)}\left[\ln \frac{p(z)\, p_\theta(x \mid z)}{q_\phi(z \mid x)}\right] = \max_{f \in F(Q),\, g \in G(P)} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_{z \sim f(x)}\left[\ln \frac{p(z)\, g(z)(x)}{f(x)(z)}\right]. \tag{5}$$

We omit the dependency of $(p(z), g)$ on $\theta$ and of $f$ on $\phi$ for notational simplicity. In addition to the typical presentation of the variational autoencoder objective (LHS), we also show an alternative formulation (RHS) that reveals the influence of the model capacities $F, G$ and distribution family capacities $Q, P$ on the objective function. In this paper, we use $(q_\phi, f)$ interchangeably, depending on the choice of emphasis. To highlight the relationship between the ELBO in Eq. (2) and the standard variational autoencoder objective in Eq. (5), we shall also refer to the latter as the amortized ELBO.

### 2.3 Amortized Inference Suboptimality

For a fixed generative model, the optimal unamortized and amortized inference models are

$$q^*_x = \arg\max_{q \in Q} \; \mathbb{E}_{q(z)}\left[\ln \frac{p_\theta(x, z)}{q(z)}\right] \quad \text{for each } x \in D, \tag{6}$$
$$f^* = \arg\max_{f \in F(Q)} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_{z \sim f(x)}\left[\ln \frac{p_\theta(x, z)}{f(x)(z)}\right]. \tag{7}$$

A notable consequence of using an amortization family to approximate variational inference is that Eq. (5) is a lower bound of Eq. (2). This naturally raises the question of whether the learned inference model can accurately approximate the mapping $x \mapsto q^*_x(z)$.
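To make the amortized ELBO of Eq. (5) concrete, the following is a minimal sketch of how it is typically estimated in practice for a Gaussian inference model and a Bernoulli generator. The architecture, dimensions, and names here (PyTorch, `encoder`, `decoder`, `latent_dim`) are illustrative assumptions for exposition, not the configuration used in our experiments (see Appendix E for those details).

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 784

# Amortized inference model f_phi: x -> (mu, log_var) of a Gaussian proposal.
encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                        nn.Linear(256, 2 * latent_dim))
# Generator g_theta: z -> Bernoulli logits over x.
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, data_dim))

def amortized_elbo(x):
    """Single-sample Monte Carlo estimate of Eq. (5) for a batch x of shape
    (batch, data_dim) with binary entries."""
    mu, log_var = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization
    log_px_z = -nn.functional.binary_cross_entropy_with_logits(
        decoder(z), x, reduction='none').sum(-1)           # ln g_theta(z)(x)
    # Analytic KL(q_phi(z|x) || p(z)) for a standard Gaussian prior p(z).
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1)
    return (log_px_z - kl).mean()
```

Maximizing `amortized_elbo` jointly over both networks is what Eq. (5) prescribes; the regularizers proposed in Section 3 modify only how the encoder is constructed or fed.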
To quantify this suboptimality, [10] defined the inference, approximation, and amortization gaps as

$$\Delta_{\mathrm{inf}}(\hat p) = \mathbb{E}_{\hat p(x)}\left[D\big(f^*(x) \,\|\, p_\theta(z \mid x)\big)\right], \tag{8}$$
$$\Delta_{\mathrm{ap}}(\hat p) = \mathbb{E}_{\hat p(x)}\left[D\big(q^*_x(z) \,\|\, p_\theta(z \mid x)\big)\right], \tag{9}$$
$$\Delta_{\mathrm{am}}(\hat p) = \Delta_{\mathrm{inf}}(\hat p) - \Delta_{\mathrm{ap}}(\hat p). \tag{10}$$

Studies have found that the inference gap is non-negligible [11] and primarily attributable to the presence of a large amortization gap [10]. The amortization gap raises two critical considerations. On the one hand, we wish to reduce the training amortization gap $\Delta_{\mathrm{am}}(\hat p_{\mathrm{train}})$: if the family $F$ is too low in capacity, then it is unable to approximate $x \mapsto q^*_x$ and will thus increase the amortization gap. Motivated by this perspective, [5, 12] proposed to reduce the training amortization gap by performing stochastic variational inference on top of amortized inference. In this paper, we take the opposing perspective that an over-expressive $F$ hurts generalization (see Appendix A) and that restricting the capacity of $F$ is a form of regularization that can prevent both the inference and generative models from overfitting to the training set.

## 3 Amortized Inference Regularization in Variational Autoencoders

Many methods have been proposed to expand the variational and amortization families in order to better approximate maximum likelihood training [5, 6, 7, 8, 13, 14]. We argue, however, that achieving a better approximation to maximum likelihood training is not necessarily the best training objective, even if the end goal is test set density estimation. In general, it may be beneficial to regularize the maximum likelihood training objective. Importantly, we observe that the evidence lower bound in Eq. (2) admits a natural interpretation as implicitly regularizing maximum likelihood training: since $\mathbb{E}_{q(z)}\left[\ln p_\theta(x, z)/q(z)\right] = \ln p_\theta(x) - D(q(z) \,\|\, p_\theta(z \mid x))$ for any $q \in Q$, the ELBO can be written as

$$\underbrace{\mathbb{E}_{\hat p(x)}\left[\ln p_\theta(x)\right]}_{\text{log-likelihood}} - \underbrace{\mathbb{E}_{\hat p(x)}\left[\min_{q \in Q} D\big(q(z) \,\|\, p_\theta(z \mid x)\big)\right]}_{\text{regularizer } R(\theta;\, Q)}. \tag{11}$$

This formulation exposes the ELBO as a data-dependent regularized maximum likelihood objective. For infinite-capacity $Q$, $R(\theta; Q)$ is zero for all $\theta \in \Theta$, and the objective reduces to maximum likelihood. When $Q$ is the set of Gaussian distributions (as is the case in the standard VAE), then $R(\theta; Q)$ is zero only if $p_\theta(z \mid x)$ is Gaussian for all $x \in D$. In other words, a Gaussian variational family regularizes the true posterior $p_\theta(z \mid x)$ toward being Gaussian [10]. Careful selection of the variational family to encourage $p_\theta(z \mid x)$ to adopt certain properties (e.g., unimodality, fully-factorized posterior) can thus be considered a special case of posterior regularization [15, 16]. Unlike traditional variational techniques, the variational autoencoder introduces an amortized inference model $f \in F$ and thus a new source of posterior regularization:

$$\underbrace{\mathbb{E}_{\hat p(x)}\left[\ln p_\theta(x)\right]}_{\text{log-likelihood}} - \underbrace{\min_{f \in F(Q)} \; \mathbb{E}_{\hat p(x)}\left[D\big(f(x) \,\|\, p_\theta(z \mid x)\big)\right]}_{\text{regularizer } R(\theta;\, Q, F)}. \tag{12}$$

In contrast to unamortized variational inference, the introduction of the amortization family $F$ forces the inference model to consider the global structure of how $X$ maps to $Q$. We thus define amortized inference regularization (AIR) as the strategy of restricting the inference model capacity $F$ to satisfy certain desiderata. In this paper, we explore a special case of AIR where a candidate model $f \in F$ is penalized if it is not sufficiently smooth. We propose two models that encourage inference model smoothness and demonstrate that they can reduce the inference gap and increase log-likelihood on the test set.

### 3.1 Denoising Variational Autoencoder

In this section, we propose using random perturbation training for amortized inference regularization.
The resulting model, the denoising variational autoencoder (DVAE), modifies the variational autoencoder objective by injecting $\varepsilon$-noise into the inference model:

$$\mathbb{E}_{\hat p(x)}\left[\ln p_\theta(x)\right] - \min_{f \in F(Q)} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_\varepsilon\left[D\big(f(x + \varepsilon) \,\|\, p_\theta(z \mid x)\big)\right]. \tag{13}$$

Note that the noise term appears only in the regularizer. We consider the case of zero-mean isotropic Gaussian noise $\varepsilon \sim N(0, \sigma^2 I)$ and denote the denoising regularizer as $R(\theta; \sigma)$. At this point, we note that the DVAE was first described in [17]. However, our treatment of the DVAE differs from that of [17] in both theoretical analysis and underlying motivation. We found that [17] incorrectly stated the tightness of the DVAE variational lower bound (see Appendix B). In contrast, our analysis demonstrates that the denoising objective smooths the inference model and necessarily lower bounds the original variational autoencoder objective (see Theorem 1 and Proposition 1).

We now show that 1) the optimal DVAE amortized inference model is a kernel regression model and that 2) the variance of the noise $\varepsilon$ controls the smoothness of the optimal inference model.

**Lemma 1.** For fixed $(\theta, \sigma, Q)$ and infinite-capacity $F$, the inference model that optimizes the DVAE objective in Eq. (13) is the kernel regression model

$$f^*_\sigma(x) = \arg\min_{q \in Q} \; \sum_i w_\sigma(x, x^{(i)})\, D\big(q(z) \,\|\, p_\theta(z \mid x^{(i)})\big), \tag{14}$$

where $w_\sigma(x, x^{(i)}) = K_\sigma(x, x^{(i)}) / \sum_j K_\sigma(x, x^{(j)})$ and $K_\sigma(x, y) = \exp\!\left(-\|x - y\|^2 / 2\sigma^2\right)$ is the RBF kernel.

Lemma 1 shows that the optimal denoising inference model $f^*_\sigma$ depends on the noise level $\sigma$. The output of $f^*_\sigma(x)$ is the proposal distribution that minimizes the weighted Kullback-Leibler (KL) divergence from $f^*_\sigma(x)$ to each $p_\theta(z \mid x^{(i)})$, where the weighting $w_\sigma(x, x^{(i)})$ depends on the distance $\|x - x^{(i)}\|$ and the bandwidth $\sigma$. When $\sigma > 0$, the amortized inference model forces neighboring points $(x^{(i)}, x^{(j)})$ to have similar proposal distributions. Note that as $\sigma$ increases, $w_\sigma(x, x^{(i)}) \to 1/n$, where $n$ is the number of training samples. Controlling $\sigma$ thus modulates the smoothness of $f^*_\sigma$ (we say that $f^*_\sigma$ is smooth if it maps similar inputs to similar outputs under some suitable measure of similarity). Intuitively, the denoising regularizer $R(\theta; \sigma)$ approximates the true posteriors with a $\sigma$-smoothed inference model and penalizes generative models whose posteriors cannot easily be approximated by such an inference model. This intuition is formalized in Theorem 1.

**Theorem 1.** Let $Q$ be a minimal exponential family with corresponding natural parameter space $\Omega$. With a slight abuse of notation, consider $f \in F : X \to \Omega$. Under the simplifying assumptions that $p_\theta(z \mid x^{(i)})$ is contained within $Q$ and parameterized by $\eta^{(i)} \in \Omega$, and that $F$ has infinite capacity, the optimal inference model in Lemma 1 returns $f^*_\sigma(x) = \eta^* \in \Omega$, where

$$\eta^* = \sum_i w_\sigma(x, x^{(i)})\, \eta^{(i)}, \tag{15}$$

and the Lipschitz constant of $f^*_\sigma$ is bounded by $O(1/\sigma^2)$.

We wish to address Theorem 1's assumption that the true posteriors lie in the variational family. Note that for sufficiently large exponential families, this assumption is likely to hold. But even in the case where the variational family is Gaussian (a relatively small exponential family), the small approximation gap observed in [10] suggests that it is plausible that posterior regularization would encourage the true posteriors to be approximately Gaussian.
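Implementing the DVAE objective requires only a one-line change to the illustrative ELBO sketch above: perturb the input of the inference model while keeping the reconstruction target clean. The sketch below reuses the `encoder` and `decoder` defined earlier and is, again, an assumed setup rather than our exact experimental code; `sigma` plays the role of the bandwidth hyperparameter.

```python
def dvae_elbo(x, sigma=0.1):
    """Sketch of the DVAE objective (Eq. 13, written in ELBO form): Gaussian
    noise enters only the inference model; the generator must still
    reconstruct the clean x."""
    x_noisy = x + sigma * torch.randn_like(x)             # eps ~ N(0, sigma^2 I)
    mu, log_var = encoder(x_noisy).chunk(2, dim=-1)       # f_phi(x + eps)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
    log_px_z = -nn.functional.binary_cross_entropy_with_logits(
        decoder(z), x, reduction='none').sum(-1)          # clean-x likelihood
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1)
    return (log_px_z - kl).mean()
```

A convex combination of `amortized_elbo` and `dvae_elbo` then yields the partially denoising objective introduced below.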
Given that $\sigma$ modulates the smoothness of the inference model, it is natural to suspect that a larger choice of $\sigma$ results in stronger regularization. To formalize this notion of regularization strength, we introduce a way to partially order a set of regularizers $\{R_i(\theta)\}$.

**Definition 1.** Suppose two regularizers $R_1(\theta)$ and $R_2(\theta)$ share the same minimum $\min_\theta R_1(\theta) = \min_\theta R_2(\theta)$. We say that $R_1$ is a stronger regularizer than $R_2$ if $R_1(\theta) \ge R_2(\theta)$ for all $\theta \in \Theta$.

Note that any two regularizers can be modified via scalar addition to share the same minimum. Furthermore, if $R_1$ is stronger than $R_2$, then $R_1$ and $R_2$ share at least one minimizer. We now apply Definition 1 to characterize the regularization strength of $R(\theta; \sigma)$ as $\sigma$ increases.

**Definition 2.** We say that $F$ is closed under input translation if $f \in F \implies f_a \in F$ for all $a \in X$, where $f_a(x) = f(x + a)$.

**Proposition 1.** Consider the denoising regularizer $R(\theta; \sigma)$. Suppose $F$ is closed under input translation and that, for any $\theta \in \Theta$, there exists $f \in F$ such that $f(x)$ maps to the prior $p_\theta(z)$ for all $x \in X$. Furthermore, assume that there exists $\theta \in \Theta$ such that $p_\theta(x, z) = p_\theta(z)\, p_\theta(x)$. Then $R(\theta; \sigma_1)$ is stronger than $R(\theta; \sigma_2)$ when $\sigma_1 \ge \sigma_2$; i.e., $\min_\theta R(\theta; \sigma_1) = \min_\theta R(\theta; \sigma_2) = 0$ and $R(\theta; \sigma_1) \ge R(\theta; \sigma_2)$ for all $\theta \in \Theta$.

Lemma 1 and Proposition 1 show that as we increase $\sigma$, the optimal inference model is forced to become smoother and the regularization strength increases. Figure 1 is consistent with this analysis, showing the progression from under-regularized to over-regularized models as we increase $\sigma$. It is worth noting that, in addition to adjusting the denoising regularizer strength via $\sigma$, it is also possible to adjust the strength by taking a convex combination of the VAE and DVAE objectives. In particular, we can define the partially denoising regularizer $R(\theta; \sigma, \lambda)$ as

$$R(\theta; \sigma, \lambda) = \min_{f \in F(Q)} \; \mathbb{E}_{\hat p(x)}\left[\lambda\, \mathbb{E}_\varepsilon\left[D\big(f(x + \varepsilon) \,\|\, p_\theta(z \mid x)\big)\right] + (1 - \lambda)\, D\big(f(x) \,\|\, p_\theta(z \mid x)\big)\right]. \tag{16}$$

Importantly, we note that $R(\theta; \sigma, \lambda)$ is still strictly non-negative and, when combined with the log-likelihood term, still yields a tractable variational lower bound.

### 3.2 Weight-Normalized Amortized Inference

In addition to the DVAE, we propose an alternative method that directly restricts $F$ to a set of smooth functions. To do so, we consider the case where the inference model is a neural network encoder parameterized by weight matrices $\{W_i\}$ and leverage the weight normalization technique of [18], which reparameterizes the columns $w_i$ of each weight matrix $W$ as

$$w_i = \frac{v_i}{\|v_i\|}\, s_i, \tag{17}$$

where $v_i \in \mathbb{R}^d$ and $s_i \in \mathbb{R}$ are trainable parameters. Since it is possible to modulate the smoothness of the encoder by capping the magnitude of $s_i$, we introduce a new parameter $u_i \in \mathbb{R}$ and define

$$s_i = \frac{H}{1 + \exp(-u_i)}. \tag{18}$$

The norm $\|w_i\|$ is thus bounded by the hyperparameter $H$. We denote the weight-normalized regularizer as $R(\theta; F_H)$, where $F_H$ is the amortization family induced by an $H$-weight-normalized encoder. Under assumptions similar to those of Proposition 1, it is easy to see that $\min_\theta R(\theta; F_H) = 0$ for any $H \ge 0$ and that $R(\theta; F_{H_1}) \ge R(\theta; F_{H_2})$ for all $\theta \in \Theta$ when $H_1 \le H_2$ (since $F_{H_1} \subseteq F_{H_2}$). We refer to the resulting model as the weight-normalized inference VAE (WNI-VAE) and show in Table 1 that weight-normalized amortized inference can achieve performance similar to that of the DVAE.
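Eqs. (17) and (18) translate directly into a drop-in replacement for a fully connected encoder layer. The class below is a minimal sketch of this reparameterization; the name `CappedWeightNormLinear` and the initialization scheme are our own illustrative choices, not prescribed by [18].

```python
class CappedWeightNormLinear(nn.Module):
    """Linear layer with H-capped weight normalization (Eqs. 17-18). Each
    row of the (out, in) weight matrix plays the role of w_i:
    w_i = (v_i / ||v_i||) * s_i with s_i = H * sigmoid(u_i), so ||w_i|| < H
    by construction."""
    def __init__(self, in_dim, out_dim, H=2.0):
        super().__init__()
        self.v = nn.Parameter(torch.randn(out_dim, in_dim))
        self.u = nn.Parameter(torch.zeros(out_dim))
        self.b = nn.Parameter(torch.zeros(out_dim))
        self.H = H

    def forward(self, x):
        s = self.H * torch.sigmoid(self.u)                # s_i in (0, H)
        w = self.v / self.v.norm(dim=1, keepdim=True) * s.unsqueeze(1)
        return x @ w.t() + self.b
```

Building the encoder out of such layers caps every weight-vector norm at $H$, which is how the restricted amortization family $F_H$ can be realized in practice.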
### 3.3 Experiments

We conducted experiments on statically binarized MNIST, statically binarized OMNIGLOT, and the Caltech 101 Silhouettes datasets. These datasets have a relatively small amount of training data and are thus susceptible to model overfitting. For each dataset, we used the same decoder architecture across all four models (VAE, DVAE ($\lambda = 0.5$), DVAE ($\lambda = 1.0$), WNI-VAE) and only modified the encoder, and we trained all models using Adam [19] (see Appendix E for more details). To approximate the log-likelihood, we proposed to use importance-weighted stochastic variational inference (IW-SVI), an extension of SVI [20], which we describe in detail in Appendix C. Hyperparameter tuning of the DVAE's $\sigma$ and the WNI-VAE's $F_H$ is described in Table 7.

Table 1 shows the performance of VAE, DVAE, and WNI-VAE. Regularizing the inference model consistently improved the test set log-likelihood performance. On the MNIST and Caltech 101 Silhouettes datasets, the results also show a consistent reduction of the test set inference gap when the inference model is regularized. We observed differences in the performance of DVAE versus WNI-VAE on the Caltech 101 Silhouettes dataset, suggesting a difference in how denoising and weight normalization regularize the inference model; an interesting consideration would thus be to combine DVAE and WNI. As a whole, Table 1 demonstrates that AIR benefits the generative model.

Table 1: Test set evaluation of VAE, DVAE, and WNI-VAE. The performance metrics are the negative log-likelihood $-\ln p_\theta(x)$, the negative amortized ELBO $-L(x)$, and the inference gap $\Delta_{\mathrm{inf}} = \ln p_\theta(x) - L(x)$. All three proposed models outperform the VAE across most metrics.

| Model | MNIST $-\ln p_\theta(x)$ | MNIST $\Delta_{\mathrm{inf}}$ | MNIST $-L(x)$ | OMNIGLOT $-\ln p_\theta(x)$ | OMNIGLOT $\Delta_{\mathrm{inf}}$ | OMNIGLOT $-L(x)$ | CALTECH $-\ln p_\theta(x)$ | CALTECH $\Delta_{\mathrm{inf}}$ | CALTECH $-L(x)$ |
|---|---|---|---|---|---|---|---|---|---|
| VAE | 86.93 ± 0.04 | 8.54 ± 0.14 | 95.48 ± 0.07 | 110.32 ± 0.16 | 12.03 ± 0.25 | 122.35 ± 0.33 | 109.14 ± 0.28 | 28.90 ± 0.42 | 138.05 ± 0.15 |
| DVAE ($\lambda = 0.5$) | 86.46 ± 0.02 | 6.34 ± 0.05 | 92.80 ± 0.07 | 109.31 ± 0.19 | 12.56 ± 0.18 | 121.87 ± 0.37 | 108.64 ± 0.19 | 23.40 ± 0.19 | 132.04 ± 0.37 |
| DVAE ($\lambda = 1.0$) | 86.51 ± 0.02 | 6.83 ± 0.04 | 93.35 ± 0.06 | 110.12 ± 0.18 | 12.44 ± 0.16 | 122.56 ± 0.34 | 108.66 ± 0.23 | 23.94 ± 0.15 | 132.60 ± 0.15 |
| WNI-VAE | 86.42 ± 0.01 | 6.68 ± 0.01 | 93.10 ± 0.02 | 109.16 ± 0.12 | 11.39 ± 0.10 | 120.55 ± 0.20 | 108.94 ± 0.31 | 28.88 ± 0.29 | 137.82 ± 0.25 |

The denoising and weight normalization regularizers have respective hyperparameters $\sigma$ and $H$ that control the regularization strength. In Figure 1, we performed an ablation analysis of how adjusting the regularization strength impacts the test set log-likelihood. In almost all cases, we see a transition from overfitting to underfitting as we adjust the strength of AIR. For a well-chosen regularization strength, however, it is possible to increase the test set log-likelihood performance by 0.5 to 1.0 nats, a non-trivial improvement.

Figure 1: Evaluation of the log-likelihood performance of all three proposed models as we vary the regularization hyperparameter value (the hyperparameter is defined in Table 7). When the value is too small, the model overfits and the test set performance degrades; when the value is too high, the model underfits.

### 3.4 How Does Amortized Inference Regularization Affect the Generator?

Table 1 shows that regularizing the inference model empirically benefits the generative model. We now provide some initial theoretical characterization of how a smoothed amortized inference model affects the generative model. Our analysis rests on the following proposition.

**Proposition 2.** Let $P$ be an exponential family with corresponding mean parameter space $M$ and sufficient statistic function $T(\cdot)$. With a slight abuse of notation, consider $g \in G : Z \to M$. Define $q(x, z) = \hat p(x)\, q(z \mid x)$, where $q(z \mid x)$ is a fixed inference model. Supposing $G$ has infinite capacity, the optimal generative model in Eq. (5) returns $g^*(z) = \mu^* \in M$, where

$$\mu^* = \sum_i q(x^{(i)} \mid z)\, T(x^{(i)}) = \sum_i \frac{q(z \mid x^{(i)})}{\sum_j q(z \mid x^{(j)})}\, T(x^{(i)}). \tag{19}$$

Proposition 2 generalizes the analysis in [21], which determined the optimal generative model when $P$ is Gaussian. The key observation is that the optimal generative model outputs a convex combination of $\{T(x^{(i)})\}$, weighted by $q(x^{(i)} \mid z)$. Furthermore, the weights $q(x^{(i)} \mid z)$ are simply density ratios of the proposal distributions $\{q(z \mid x^{(i)})\}$. As we increase the smoothness of the amortized inference model, the weight $q(x^{(i)} \mid z)$ should tend toward $1/n$ for all $z \in Z$. This suggests that a smoothed inference model provides a natural way to smooth (and thus regularize) the generative model.
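Eq. (19) is easy to verify numerically. The toy sketch below assumes Gaussian proposals and synthetic data purely for illustration; it computes the optimal generator output as the proposal-density-weighted average of the sufficient statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))    # 5 training points; take T(x) = x for simplicity
mu = rng.normal(size=(5, 2))   # means of Gaussian proposals q(z | x_i) = N(mu_i, I)

def optimal_generator(z):
    """g*(z) = sum_i q(x_i | z) T(x_i), Eq. (19), where the weights
    q(x_i | z) are proportional to the proposal densities q(z | x_i)."""
    log_q = -0.5 * ((z - mu) ** 2).sum(axis=1)  # log N(z; mu_i, I) up to a constant
    w = np.exp(log_q - log_q.max())
    w /= w.sum()                                # q(x_i | z): a convex combination
    return w @ X                                # mean parameter mu* in M

print(optimal_generator(np.zeros(2)))
```

If the proposals $q(z \mid x^{(i)})$ are made more alike, for instance by AIR, the weights flatten toward $1/n$ and $g^*(z)$ collapses toward the global average of $\{T(x^{(i)})\}$, which is precisely the smoothing effect described above.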
## 4 Amortized Inference Regularization in Importance-Weighted Autoencoders

In this section, we extend AIR to importance-weighted autoencoders (IWAE-k). Although the application is straightforward, we demonstrate a noteworthy relationship between the number of importance samples $k$ and the effect of AIR. To begin our analysis, we consider the IWAE-k objective

$$\max_{\theta, \phi} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_{z_1 \ldots z_k \sim q_\phi(z \mid x)}\left[\ln \frac{1}{k} \sum_{i=1}^k \frac{p_\theta(x, z_i)}{q_\phi(z_i \mid x)}\right], \tag{20}$$

where $z_1 \ldots z_k$ are $k$ samples from the proposal distribution $q_\phi(z \mid x)$ to be used as importance samples. The analysis of [22] allows us to rewrite it as a regularized maximum likelihood objective

$$\underbrace{\mathbb{E}_{\hat p(x)}\left[\ln p_\theta(x)\right]}_{\text{log-likelihood}} - \underbrace{\min_{f \in F(Q)} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_{z_2 \ldots z_k \sim f(x)}\left[D\big(\tilde f_k(x, z_2 \ldots z_k) \,\|\, p_\theta(z \mid x)\big)\right]}_{\text{regularizer } R_k(\theta)}, \tag{21}$$

where $\tilde f_k$ (or, equivalently, $\tilde q_k$) is the unnormalized distribution

$$\tilde f_k(x, z_2 \ldots z_k)(z_1) = \frac{p_\theta(x, z_1)}{\frac{1}{k} \sum_{i=1}^k \frac{p_\theta(x, z_i)}{f(x)(z_i)}} = \tilde q_k(z_1 \mid x, z_2 \ldots z_k), \tag{22}$$

and $D(\tilde q \,\|\, p) = \int \tilde q(z)\left[\ln \tilde q(z) - \ln p(z)\right] dz$ is the Kullback-Leibler divergence extended to unnormalized distributions. For notational simplicity, we omit the dependency of $\tilde f_k$ on $(z_2 \ldots z_k)$. Importantly, [22] showed that the IWAE with $k$ importance samples drawn from the amortized inference model $f$ is, in expectation, equivalent to a VAE with one importance sample drawn from the more expressive inference model $\tilde f_k$.

### 4.1 Importance Sampling Attenuates Amortized Inference Regularization

We now consider the interaction between importance sampling and AIR. We introduce the regularizer $R_k(\theta; \sigma, F_H)$,

$$R_k(\theta; \sigma, F_H) = \min_{f \in F_H(Q)} \; \mathbb{E}_{\hat p(x)} \mathbb{E}_\varepsilon \mathbb{E}_{z_2 \ldots z_k \sim f(x + \varepsilon)}\left[D\big(\tilde f_k(x + \varepsilon) \,\|\, p_\theta(z \mid x)\big)\right], \tag{23}$$

which corresponds to a regularizer in which weight normalization, denoising, and importance sampling are simultaneously applied. By adapting Theorem 1 of [8], we can show the following.

**Proposition 3.** Consider the regularizer $R_k(\theta; \sigma, F_H)$. Under assumptions similar to those of Proposition 1, $R_{k_1}$ is stronger than $R_{k_2}$ when $k_1 \le k_2$; i.e., $\min_\theta R_{k_1}(\theta; \sigma, F_H) = \min_\theta R_{k_2}(\theta; \sigma, F_H) = 0$ and $R_{k_1}(\theta; \sigma, F_H) \ge R_{k_2}(\theta; \sigma, F_H)$ for all $\theta \in \Theta$.

A notable consequence of Proposition 3 is that as $k$ increases, AIR exhibits a weaker regularizing effect on the posterior distributions $\{p_\theta(z \mid x^{(i)})\}$. Intuitively, this arises from the phenomenon that although AIR is applied to $f$, the subsequent importance-weighting procedure can still create a flexible $\tilde f_k$. Our analysis thus predicts that AIR is less likely to cause underfitting of the IWAE-k generative model as $k$ increases, which we demonstrate in Figure 2. In the limit of infinitely many importance samples, we also predict AIR to have zero regularizing effect, since $\tilde f_\infty$ (under some assumptions) can always approximate any posterior. However, for practically feasible values of $k$, we show in Tables 2 and 3 that AIR is a highly effective regularizer.
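As a concrete reference point, the IWAE-k bound of Eq. (20) can be estimated with a log-mean-exp over $k$ importance weights. The sketch below reuses the illustrative `encoder` and `decoder` from Section 2 and, with `sigma > 0`, also covers the denoising variant (DIWAE) evaluated next; it is a sketch under those assumptions, not our exact experimental code.

```python
import math

def iwae_bound(x, k=8, sigma=0.0):
    """Monte Carlo estimate of the IWAE-k objective (Eq. 20).
    sigma > 0 perturbs the inference model's input, giving DIWAE."""
    x_in = x + sigma * torch.randn_like(x) if sigma > 0 else x
    mu, log_var = encoder(x_in).chunk(2, dim=-1)
    std = (0.5 * log_var).exp()
    z = mu + torch.randn(k, *mu.shape) * std              # k samples per input
    log_px_z = -nn.functional.binary_cross_entropy_with_logits(
        decoder(z), x.expand(k, *x.shape), reduction='none').sum(-1)
    # Gaussian log-densities; the 2*pi constants cancel in log_pz - log_qz.
    log_pz = -0.5 * (z ** 2).sum(-1)
    log_qz = -0.5 * (((z - mu) / std) ** 2).sum(-1) - 0.5 * log_var.sum(-1)
    log_w = log_px_z + log_pz - log_qz                    # ln p(x, z_i) / q(z_i | x)
    return (torch.logsumexp(log_w, dim=0) - math.log(k)).mean()
```

Setting `k=1` recovers the (D)VAE objective, which makes the attenuation effect of Proposition 3 easy to probe empirically by sweeping `k` and `sigma`.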
### 4.2 Experiments

Table 2: Test set evaluation of the four models when trained with 8 importance samples. $L_8(x)$ denotes the amortized ELBO using 8 importance samples, and $\Delta_{\mathrm{inf}} = \ln p_\theta(x) - L_8(x)$.

| Model | MNIST $-\ln p_\theta(x)$ | MNIST $\Delta_{\mathrm{inf}}$ | MNIST $-L_8(x)$ | OMNIGLOT $-\ln p_\theta(x)$ | OMNIGLOT $\Delta_{\mathrm{inf}}$ | OMNIGLOT $-L_8(x)$ | CALTECH $-\ln p_\theta(x)$ | CALTECH $\Delta_{\mathrm{inf}}$ | CALTECH $-L_8(x)$ |
|---|---|---|---|---|---|---|---|---|---|
| IWAE | 86.21 ± 0.01 | 6.13 ± 0.03 | 92.34 ± 0.02 | 108.18 ± 0.24 | 8.69 ± 0.39 | 116.87 ± 0.16 | 108.65 ± 0.11 | 21.52 ± 0.13 | 130.17 ± 0.09 |
| DIWAE ($\lambda = 0.5$) | 85.78 ± 0.02 | 4.47 ± 0.02 | 90.25 ± 0.03 | 107.01 ± 0.11 | 8.64 ± 0.07 | 115.66 ± 0.17 | 107.34 ± 0.17 | 17.61 ± 0.18 | 124.96 ± 0.14 |
| DIWAE ($\lambda = 1.0$) | 85.78 ± 0.03 | 4.21 ± 0.03 | 90.00 ± 0.06 | 107.47 ± 0.06 | 8.57 ± 0.14 | 116.04 ± 0.18 | 107.54 ± 0.11 | 17.06 ± 0.35 | 124.60 ± 0.29 |
| WNI-IWAE | 85.81 ± 0.01 | 4.33 ± 0.03 | 90.14 ± 0.04 | 107.15 ± 0.08 | 8.78 ± 0.17 | 115.93 ± 0.10 | 107.98 ± 0.19 | 22.18 ± 0.33 | 130.16 ± 0.14 |

Table 3: Test set evaluation of the four models when trained with 64 importance samples. $\Delta_{\mathrm{inf}} = \ln p_\theta(x) - L_{64}(x)$.

| Model | MNIST $-\ln p_\theta(x)$ | MNIST $\Delta_{\mathrm{inf}}$ | MNIST $-L_{64}(x)$ | OMNIGLOT $-\ln p_\theta(x)$ | OMNIGLOT $\Delta_{\mathrm{inf}}$ | OMNIGLOT $-L_{64}(x)$ | CALTECH $-\ln p_\theta(x)$ | CALTECH $\Delta_{\mathrm{inf}}$ | CALTECH $-L_{64}(x)$ |
|---|---|---|---|---|---|---|---|---|---|
| IWAE | 86.06 ± 0.03 | 4.41 ± 0.10 | 90.48 ± 0.07 | 107.31 ± 0.14 | 6.66 ± 0.22 | 113.97 ± 0.10 | 108.89 ± 0.35 | 16.51 ± 0.32 | 125.40 ± 0.25 |
| DIWAE ($\lambda = 0.5$) | 85.55 ± 0.02 | 3.01 ± 0.01 | 88.56 ± 0.02 | 106.02 ± 0.01 | 6.98 ± 0.06 | 113.00 ± 0.07 | 106.94 ± 0.11 | 12.28 ± 0.14 | 119.22 ± 0.11 |
| DIWAE ($\lambda = 1.0$) | 85.55 ± 0.02 | 3.15 ± 0.02 | 88.70 ± 0.04 | 106.15 ± 0.03 | 6.70 ± 0.05 | 112.85 ± 0.07 | 106.96 ± 0.11 | 12.94 ± 0.22 | 119.87 ± 0.16 |
| WNI-IWAE | 85.64 ± 0.03 | 3.10 ± 0.01 | 88.74 ± 0.03 | 106.17 ± 0.07 | 7.11 ± 0.07 | 113.28 ± 0.13 | 108.15 ± 0.11 | 14.42 ± 0.20 | 122.57 ± 0.10 |

Tables 2 and 3 extend the model evaluation to IWAE-8 and IWAE-64. We see that the denoising IWAE (DIWAE) and the weight-normalized inference IWAE (WNI-IWAE) consistently outperform the standard IWAE on test set log-likelihood evaluations. Furthermore, the regularized models frequently reduced the inference gap as well. Our results demonstrate that AIR is a highly effective regularizer even when a large number of importance samples is used.

Our main experimental contribution in this section is the verification that increasing the number of importance samples results in less underfitting when the inference model is over-regularized. In contrast to $k = 1$, where aggressively increasing the regularization strength can cause considerable underfitting, Figure 2 shows that increasing the number of importance samples to $k = 8$ and $k = 64$ makes the models much more robust to mis-specified choices of regularization strength. Interestingly, we also observed that the optimal regularization strength (determined using the validation set) increases with $k$ (see Table 7 for details). The robustness of importance sampling when paired with amortized inference regularization makes AIR an effective and practical way to regularize the IWAE.

Figure 2: Evaluation of the log-likelihood performance of all three proposed models as we vary the regularization hyperparameter (see Table 7 for its definition) and the number of importance samples $k$. To compare across different values of $k$, the performance of the unregularized IWAE-k baseline is subtracted. We see that IWAE-64 is the least likely to underfit when the regularization hyperparameter value is high.

### 4.3 Are High Signal-to-Noise Ratio Gradients Necessarily Better?

We note the existence of related work [23] that also concluded that approximating maximum likelihood training is not necessarily better. However, [23] focused on increasing the signal-to-noise ratio of the gradient updates and analyzed the trade-off between importance sampling and Monte Carlo sampling under budgetary constraints. An in-depth discussion of these two works within the context of generalization is provided in Appendix D.

## 5 Conclusion

In this paper, we challenged the conventional role that amortized inference plays in training deep generative models.
In addition to expediting variational inference, amortized inference introduces new ways to regularize maximum likelihood training. We considered a special case of amortized inference regularization (AIR) in which the inference model must learn a smoothed mapping from $X$ to $Q$, and showed that the denoising variational autoencoder (DVAE) and weight-normalized inference (WNI) are effective instantiations of AIR. Promising directions for future work include replacing denoising with adversarial training [24] and weight normalization with spectral normalization [25]. Furthermore, we demonstrated that AIR plays a crucial role in the regularization of the IWAE, and that higher levels of regularization may be necessary due to the attenuating effect of importance sampling on AIR. We believe that variational family expansion by Monte Carlo methods [26] may exhibit the same attenuating effect on AIR and recommend this as an additional research direction.

## Acknowledgements

This research was supported by TRI, NSF (#1651565, #1522054, #1733686), ONR, Sony, and FLI. Toyota Research Institute provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.

## References

[1] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[2] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-Supervised Learning with Deep Generative Models. In Advances in Neural Information Processing Systems, pages 3581-3589, 2014.
[3] Hyunjik Kim and Andriy Mnih. Disentangling by Factorising. arXiv preprint arXiv:1802.05983, 2018.
[4] Tian Qi Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating Sources of Disentanglement in Variational Autoencoders. arXiv preprint arXiv:1802.04942, 2018.
[5] Yoon Kim, Sam Wiseman, Andrew C. Miller, David Sontag, and Alexander M. Rush. Semi-Amortized Variational Autoencoders. arXiv preprint arXiv:1802.02550, 2018.
[6] Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved Variational Inference with Inverse Autoregressive Flow. In Advances in Neural Information Processing Systems, pages 4743-4751, 2016.
[7] Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder Variational Autoencoders. In Advances in Neural Information Processing Systems, pages 3738-3746, 2016.
[8] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance Weighted Autoencoders. arXiv preprint arXiv:1509.00519, 2015.
[9] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv preprint arXiv:1401.4082, 2014.
[10] Chris Cremer, Xuechen Li, and David Duvenaud. Inference Suboptimality in Variational Autoencoders. arXiv preprint arXiv:1801.03558, 2018.
[11] Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the Quantitative Analysis of Decoder-Based Generative Models. arXiv preprint arXiv:1611.04273, 2016.
[12] Rahul G. Krishnan, Dawen Liang, and Matthew Hoffman. On the Challenges of Learning with Inference Networks on Sparse, High-Dimensional Data. arXiv preprint arXiv:1710.06085, 2017.
[13] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary Deep Generative Models. arXiv preprint arXiv:1602.05473, 2016.
[14] Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical Variational Models. In International Conference on Machine Learning, pages 324-333, 2016.
[15] Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. Posterior Regularization for Structured Latent Variable Models. Journal of Machine Learning Research, 11(Jul):2001-2049, 2010.
[16] Jun Zhu, Ning Chen, and Eric P. Xing. Bayesian Inference with Posterior Regularization and Applications to Infinite Latent SVMs. Journal of Machine Learning Research, 15(1):1799-1847, 2014.
[17] Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, Yoshua Bengio, et al. Denoising Criterion for Variational Auto-Encoding Framework. In AAAI, pages 2059-2065, 2017.
[18] Tim Salimans and Diederik P. Kingma. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks. In Advances in Neural Information Processing Systems, pages 901-909, 2016.
[19] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.
[20] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic Variational Inference. Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[21] Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Carl-Johann Simon-Gabriel, and Bernhard Schoelkopf. From Optimal Transport to Generative Modeling: The VEGAN Cookbook. arXiv preprint arXiv:1705.07642, 2017.
[22] Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting Importance-Weighted Autoencoders. arXiv preprint arXiv:1704.02916, 2017.
[23] Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, and Yee Whye Teh. Tighter Variational Bounds Are Not Necessarily Better. arXiv preprint arXiv:1802.04537, 2018.
[24] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. arXiv preprint arXiv:1412.6572, 2014.
[25] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral Normalization for Generative Adversarial Networks. arXiv preprint arXiv:1802.05957, 2018.
[26] Matthew D. Hoffman. Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo. In International Conference on Machine Learning, pages 1510-1519, 2017.
[27] Yingzhen Li and Richard E. Turner. Rényi Divergence Variational Inference. In Advances in Neural Information Processing Systems, pages 1073-1081, 2016.
[28] Jakub M. Tomczak and Max Welling. VAE with a VampPrior. arXiv preprint arXiv:1705.07120, 2017.
[29] Samuel L. Smith and Quoc V. Le. A Bayesian Perspective on Generalization and Stochastic Gradient Descent. In International Conference on Learning Representations, 2018.
[30] Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp Minima Can Generalize for Deep Nets. arXiv preprint arXiv:1703.04933, 2017.
[31] Dominic Masters and Carlo Luschi. Revisiting Small Batch Training for Deep Neural Networks. arXiv preprint arXiv:1804.07612, 2018.
[32] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding Deep Learning Requires Rethinking Generalization. arXiv preprint arXiv:1611.03530, 2016.
[33] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman Divergences. Journal of Machine Learning Research, 6(Oct):1705-1749, 2005.