# VAE Learning via Stein Variational Gradient Descent

Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, Lawrence Carin
Department of Electrical and Computer Engineering, Duke University
{yp42, zg27, r.henao, cl319, shaobo.han, lcarin}@duke.edu

A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets.

1 Introduction

There has been significant recent interest in the variational autoencoder (VAE) [11], a generalization of the original autoencoder [33]. VAEs are typically trained by maximizing a variational lower bound of the data log-likelihood [2, 10, 11, 12, 18, 21, 22, 23, 30, 34, 35]. To compute the variational expression, one must be able to explicitly evaluate the associated distribution of latent features, i.e., the stochastic encoder must have an explicit analytic form. This requirement has motivated the design of encoders in which a neural network maps input data to the parameters of a simple distribution; e.g., Gaussian distributions have been widely utilized [1, 11, 25, 27].

The Gaussian assumption may be too restrictive in some cases [28]. Consequently, recent work has considered normalizing flows [28], in which random variables from (for example) a Gaussian distribution are fed through a series of nonlinear functions to increase the complexity and representational power of the encoder. However, because of the need to explicitly evaluate the distribution within the variational expression used when learning, these nonlinear functions must be relatively simple, e.g., planar flows. Further, one may require many layers to achieve the desired representational power.

We present a new approach for training a VAE. We recognize that the need for an explicit form for the encoder distribution is only a consequence of the fact that learning is performed based on the variational lower bound. For inference (e.g., at test time), we do not need an explicit form for the distribution of latent features; we only require fast sampling from the encoder. Consequently, rather than directly employing the traditional variational lower bound, we seek to minimize the Kullback-Leibler (KL) divergence between the approximating distribution and the true posterior of the model and latent parameters. Learning then becomes a novel application of Stein variational gradient descent (SVGD) [15], constituting its first application to training VAEs. We extend SVGD with importance sampling [1], and also demonstrate its novel use in semi-supervised VAE learning.

The concepts developed here are demonstrated on a wide range of unsupervised and semi-supervised learning problems, including a large-scale semi-supervised analysis of the ImageNet dataset. These experimental results illustrate the advantage of SVGD-based VAE training, relative to traditional approaches. Moreover, the results demonstrate further improvements realized by integrating SVGD with importance sampling.
Independent work [3, 6] proposed similar models, incorporating SVGD with VAEs [3] and importance sampling [6] for unsupervised learning tasks.

2 Stein Learning of Variational Autoencoder (Stein VAE)

2.1 Review of VAE and Motivation for Use of SVGD

Consider data $\mathcal{D} = \{x_n\}_{n=1}^N$, where $x_n$ are modeled via the decoder $x_n|z_n \sim p(x|z_n;\theta)$. A prior $p(z)$ is placed on the latent codes. To learn parameters θ, one typically is interested in maximizing the empirical expected log-likelihood, $\frac{1}{N}\sum_{n=1}^N \log p(x_n;\theta)$. A variational lower bound is often employed:

$$
\mathcal{L}(\theta,\phi;x) = \mathbb{E}_{q(z|x;\phi)}\!\left[\log \frac{p(x|z;\theta)\,p(z)}{q(z|x;\phi)}\right] = -\mathrm{KL}\big(q(z|x;\phi)\,\|\,p(z|x;\theta)\big) + \log p(x;\theta)\,, \qquad (1)
$$

with $\log p(x;\theta) \ge \mathcal{L}(\theta,\phi;x)$, and where the expectation is approximated by averaging over a finite number of samples drawn from the encoder $q(z|x;\phi)$. Parameters θ and φ are typically iteratively optimized via stochastic gradient descent [11], seeking to maximize $\sum_{n=1}^N \mathcal{L}(\theta,\phi;x_n)$.

To evaluate the variational expression in (1), we require the ability to sample efficiently from $q(z|x;\phi)$, to approximate the expectation. We also require a closed form for this encoder, to evaluate $\log[p(x|z;\theta)p(z)/q(z|x;\phi)]$. In the proposed VAE learning framework, rather than maximizing the variational lower bound explicitly, we focus on the term $\mathrm{KL}(q(z|x;\phi)\,\|\,p(z|x;\theta))$, which we seek to minimize. This can be achieved by leveraging Stein variational gradient descent (SVGD) [15]. Importantly, for SVGD we need only be able to sample from $q(z|x;\phi)$; we need not possess its explicit functional form.

In the above discussion, θ is treated as a parameter; below we treat it as a random variable, as was considered in the Appendix of [11]. Treatment of θ as a random variable allows for model averaging, and a point estimate of θ is revealed as a special case of the proposed method. The set of codes associated with all $x_n \in \mathcal{D}$ is represented $Z = \{z_n\}_{n=1}^N$. The prior on $\{\theta, Z\}$ is here represented as $p(\theta, Z) = p(\theta)\prod_{n=1}^N p(z_n)$. We desire the posterior $p(\theta, Z|\mathcal{D})$. Consider the revised variational expression

$$
\mathcal{L}_1(q;\mathcal{D}) = \mathbb{E}_{q(\theta,Z)}\!\left[\log \frac{p(\mathcal{D}|Z,\theta)\,p(\theta,Z)}{q(\theta,Z)}\right] = -\mathrm{KL}\big(q(\theta,Z)\,\|\,p(\theta,Z|\mathcal{D})\big) + \log p(\mathcal{D};\mathcal{M})\,, \qquad (2)
$$

where $p(\mathcal{D};\mathcal{M})$ is the evidence for the underlying model $\mathcal{M}$. Learning $q(\theta,Z)$ such that $\mathcal{L}_1$ is maximized is equivalent to seeking $q(\theta,Z)$ that minimizes $\mathrm{KL}(q(\theta,Z)\,\|\,p(\theta,Z|\mathcal{D}))$. By leveraging and generalizing SVGD, we will perform the latter.
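For contrast with the approach developed below, the following sketch shows how the conventional bound (1) is typically estimated with a parametric Gaussian encoder and the reparameterization trick [11]. It is an illustration only, not this paper's model; the layer sizes, the Bernoulli decoder, and the closed-form Gaussian KL term are assumptions made for the example.

```python
import torch
import torch.nn as nn

class GaussianEncoderVAE(nn.Module):
    """Conventional VAE of Eq. (1): Gaussian q(z|x; phi), Bernoulli p(x|z; theta)."""
    def __init__(self, x_dim=784, z_dim=50, h_dim=200):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(), nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)     # reparameterized sample
        logits = self.dec(z)
        log_px_z = -nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)                     # log p(x|z; theta)
        # closed-form KL(q(z|x) || p(z)) for a standard-normal prior
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(-1)
        return (log_px_z - kl).mean()                                # Eq. (1), averaged over the batch
```

The remainder of this section shows that none of this distributional bookkeeping (the Gaussian form and the explicit KL term) is needed once learning is driven by SVGD.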
2.2 Stein Variational Gradient Descent (SVGD)

Rather than explicitly specifying a form for $p(\theta,Z|\mathcal{D})$, we sequentially refine samples of θ and Z, such that they are better matched to $p(\theta,Z|\mathcal{D})$. We alternate between updating the samples of θ and the samples of Z, analogous to how θ and φ are updated alternately in traditional VAE optimization of (1). We first consider updating samples of θ, with the samples of Z held fixed. Specifically, assume we have samples $\{\theta_j\}_{j=1}^M$ drawn from distribution $q(\theta)$, and samples $\{z_{jn}\}_{j=1}^M$ drawn from distribution $q(Z)$. We wish to transform $\{\theta_j\}_{j=1}^M$ by feeding them through a function, and the corresponding (implicit) transformed distribution from which they are drawn is denoted $q_T(\theta)$. It is desired that, in a KL sense, $q_T(\theta)q(Z)$ is closer to $p(\theta,Z|\mathcal{D})$ than was $q(\theta)q(Z)$. The following theorem is useful for defining how to best update $\{\theta_j\}_{j=1}^M$.

**Theorem 1** Assume θ and Z are random variables (RVs) drawn from distributions $q(\theta)$ and $q(Z)$, respectively. Consider the transformation $T(\theta) = \theta + \epsilon\psi(\theta;\mathcal{D})$ and let $q_T(\theta)$ represent the distribution of $\theta' = T(\theta)$. We have

$$
\nabla_\epsilon \mathrm{KL}(q_T\,\|\,p)\big|_{\epsilon=0} = -\mathbb{E}_{\theta\sim q(\theta)}\big[\operatorname{trace}(\mathcal{A}_p(\theta;\mathcal{D}))\big]\,, \qquad (3)
$$

where $q_T = q_T(\theta)q(Z)$, $p = p(\theta,Z|\mathcal{D})$, $\mathcal{A}_p(\theta;\mathcal{D}) = \nabla_\theta \log p(\theta;\mathcal{D})\,\psi(\theta;\mathcal{D})^T + \nabla_\theta\psi(\theta;\mathcal{D})$, $\log p(\theta;\mathcal{D}) = \mathbb{E}_{Z\sim q(Z)}[\log p(\mathcal{D},Z,\theta)]$, and $p(\mathcal{D},Z,\theta) = p(\mathcal{D}|Z,\theta)\,p(\theta,Z)$.

The proof is provided in Appendix A. Following [15], we assume $\psi(\theta;\mathcal{D})$ lives in a reproducing kernel Hilbert space (RKHS) with kernel $k(\cdot,\cdot)$. Under this assumption, the solution for $\psi(\theta;\mathcal{D})$ that maximizes the decrease in the KL distance (3) is

$$
\psi^*(\cdot;\mathcal{D}) = \mathbb{E}_{q(\theta)}\big[k(\theta,\cdot)\,\nabla_\theta \log p(\theta;\mathcal{D}) + \nabla_\theta k(\theta,\cdot)\big]\,. \qquad (4)
$$

Theorem 1 concerns updating samples from $q(\theta)$ assuming fixed $q(Z)$. Similarly, to update $q(Z)$ with $q(\theta)$ fixed, we employ a complementary form of Theorem 1 (omitted for brevity). In that case, we consider the transformation $T(Z) = Z + \epsilon\psi(Z;\mathcal{D})$, with $Z\sim q(Z)$, where $\psi(Z;\mathcal{D})$ is also assumed to be in an RKHS.

The expectations in (3) and (4) are approximated by samples $\theta_j^{(t+1)} = \theta_j^{(t)} + \epsilon\,\Delta\theta_j^{(t)}$, with

$$
\Delta\theta_j^{(t)} = \frac{1}{M}\sum_{j'=1}^{M}\Big[k_\theta\big(\theta_{j'}^{(t)},\theta_j^{(t)}\big)\,\nabla_{\theta_{j'}^{(t)}}\log p\big(\theta_{j'}^{(t)};\mathcal{D}\big) + \nabla_{\theta_{j'}^{(t)}} k_\theta\big(\theta_{j'}^{(t)},\theta_j^{(t)}\big)\Big]\,, \qquad (5)
$$

with $\nabla_\theta \log p(\theta;\mathcal{D}) \approx \frac{1}{M}\sum_{n=1}^{N}\sum_{j=1}^{M}\nabla_\theta \log\big[p(x_n|z_{jn},\theta)\,p(\theta)\big]$. A similar update of samples is manifested for the latent variables, $z_{jn}^{(t+1)} = z_{jn}^{(t)} + \epsilon\,\Delta z_{jn}^{(t)}$:

$$
\Delta z_{jn}^{(t)} = \frac{1}{M}\sum_{j'=1}^{M}\Big[k_z\big(z_{j'n}^{(t)}, z_{jn}^{(t)}\big)\,\nabla_{z_{j'n}^{(t)}}\log p\big(z_{j'n}^{(t)};\mathcal{D}\big) + \nabla_{z_{j'n}^{(t)}} k_z\big(z_{j'n}^{(t)}, z_{jn}^{(t)}\big)\Big]\,, \qquad (6)
$$

where $\nabla_{z_n}\log p(z_n;\mathcal{D}) \approx \frac{1}{M}\sum_{j=1}^{M}\nabla_{z_n}\log\big[p(x_n|z_n,\theta_j)\,p(z_n)\big]$. The kernels used to update samples of θ and $z_n$ are in general different, denoted respectively $k_\theta(\cdot,\cdot)$ and $k_z(\cdot,\cdot)$, and ϵ is a small step size. For notational simplicity, M is the same in (5) and (6), but in practice a different number of samples may be used for θ and Z.

If M = 1 for the parameters θ, indices j and j' are removed in (5). Learning then reduces to gradient descent and a point estimate for θ, identical to the optimization procedure used for the traditional VAE expression in (1), but with the (multiple) samples associated with Z sequentially transformed via SVGD (and, importantly, without the need to assume a form for $q(z|x;\phi)$). Therefore, if only a point estimate of θ is desired, (1) can be optimized wrt θ, while SVGD is applied to update Z.
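The updates (5) and (6) are instances of the generic SVGD transform of [15]. Below is a minimal sketch of one such transform, written under our own conventions rather than the authors' code: `particles` holds the current samples, `grad_logp` is a callable returning the Monte Carlo gradient estimates described after (5) and (6), and the step size is arbitrary. The RBF kernel with median-heuristic bandwidth follows the choice described in Section 5.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def svgd_update(particles, grad_logp, step=1e-3):
    """One SVGD transform as in Eq. (5): particles has shape (M, d);
    grad_logp(particles) returns the (M, d) gradients of log p( . ; D)."""
    M = particles.shape[0]
    cond = pdist(particles, "sqeuclidean")              # pairwise squared distances
    h = np.median(cond) + 1e-8                          # median-heuristic bandwidth (cf. Sec. 5)
    K = np.exp(-squareform(cond) / h)                   # K[j', j] = k(theta_j', theta_j); symmetric
    grads = grad_logp(particles)
    # driving term: (1/M) sum_j' k(theta_j', theta_j) * grad log p(theta_j'; D)
    drive = K @ grads / M
    # repulsive term: (1/M) sum_j' grad_{theta_j'} k(theta_j', theta_j)
    repulse = (particles * K.sum(1, keepdims=True) - K @ particles) * (2.0 / (h * M))
    return particles + step * (drive + repulse)
```

The same routine applies to the latent codes in (6), with `grad_logp` supplying $\nabla_{z_n}\log[p(x_n|z_n,\theta_j)\,p(z_n)]$ averaged over the θ samples.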
2.3 Efficient Stochastic Encoder

At iteration t of the above learning procedure, we realize a set of latent-variable (code) samples $\{z_{jn}^{(t)}\}_{j=1}^M$ for each $x_n\in\mathcal{D}$ under analysis. For large N, training may be computationally expensive. Further, the need to evolve (learn) samples $\{z_j^*\}_{j=1}^M$ for each new test sample, $x^*$, is undesirable. We therefore develop a recognition model that efficiently computes samples of latent codes for a data sample of interest. The recognition model draws samples via $z_{jn} = f_\eta(x_n,\xi_{jn})$ with $\xi_{jn}\sim q_0(\xi)$. The distribution $q_0(\xi)$ is selected such that it may be easily sampled, e.g., an isotropic Gaussian.

After each iteration of updating the samples of Z, we refine the recognition model $f_\eta(x,\xi)$ to mimic the Stein sample dynamics. Assume recognition-model parameters $\eta^{(t)}$ have been learned thus far. Using $\eta^{(t)}$, latent codes for iteration t are constituted as $z_{jn}^{(t)} = f_{\eta^{(t)}}(x_n,\xi_{jn})$, with $\xi_{jn}\sim q_0(\xi)$. These codes are computed for all data $x_n\in B_t$, where $B_t\subset\mathcal{D}$ is the minibatch of data at iteration t. The change in the codes is $\Delta z_{jn}^{(t)}$, as defined in (6). We then update η to match the refined codes, as

$$
\eta^{(t+1)} = \arg\min_\eta \sum_{x_n\in B_t}\sum_{j=1}^{M}\big\|f_\eta(x_n,\xi_{jn}) - z_{jn}^{(t+1)}\big\|^2\,. \qquad (7)
$$

The analytic solution of (7) is intractable. We update η with K steps of gradient descent, $\eta^{(t,k)} = \eta^{(t,k-1)} - \delta\sum_{x_n\in B_t}\sum_{j=1}^{M}\Delta\eta_{jn}^{(t,k-1)}$, where $\Delta\eta_{jn}^{(t,k-1)} = \partial_\eta f_\eta(x_n,\xi_{jn})\big(f_\eta(x_n,\xi_{jn}) - z_{jn}^{(t+1)}\big)\big|_{\eta=\eta^{(t,k-1)}}$, δ is a small step size, $\eta^{(t)} = \eta^{(t,0)}$, $\eta^{(t+1)} = \eta^{(t,K)}$, and $\partial_\eta f_\eta(x_n,\xi_{jn})$ is the transpose of the Jacobian of $f_\eta(x_n,\xi_{jn})$ wrt η. Note that the use of minibatches mitigates the challenges of training with large training sets, $\mathcal{D}$.

The function $f_\eta(x,\xi)$ plays a role analogous to $q(z|x;\phi)$ in (1), in that it yields a means of efficiently drawing samples of latent codes z, given an observed x; however, we do not impose an explicit functional form for the distribution of these samples.
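As an illustrative sketch of the amortization step (7), not the authors' code, the recognition network is regressed onto the SVGD-refined codes with a few gradient steps. Here `f_eta` is assumed to be any `torch.nn.Module` taking `(x, xi)` and returning codes with the same shape as `z_target`.

```python
import torch

def refine_recognition_model(f_eta, optimizer, x_batch, xi, z_target, n_steps=1):
    """Regress f_eta(x, xi) onto the SVGD-refined codes z^(t+1), cf. Eq. (7)."""
    for _ in range(n_steps):
        optimizer.zero_grad()
        z_pred = f_eta(x_batch, xi)                  # samples from the recognition model
        loss = (z_pred - z_target.detach()).pow(2).sum()
        loss.backward()                              # gradients flow into eta only
        optimizer.step()
    return float(loss.detach())
```

Using `optimizer = torch.optim.SGD(f_eta.parameters(), lr=delta)` with `n_steps=K` recovers the K-step update with step size δ described above; any other first-order optimizer is a straightforward substitution.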
3 Stein Variational Importance Weighted Autoencoder (Stein VIWAE)

3.1 Multi-sample importance-weighted KL divergence

Recall the variational expression in (1) employed in conventional VAE learning. Recently, [1, 19] showed that the multi-sample (k samples) importance-weighted estimator

$$
\mathcal{L}_k(x) = \mathbb{E}_{z^1,\dots,z^k\sim q(z|x)}\!\left[\log \frac{1}{k}\sum_{i=1}^{k}\frac{p(x,z^i)}{q(z^i|x)}\right]\,, \qquad (8)
$$

provides a tighter lower bound and a better proxy for the log-likelihood, where $z^1,\dots,z^k$ are random variables sampled independently from $q(z|x)$. Recall from (3) that the KL divergence played a key role in the Stein-based learning of Section 2. Equation (8) motivates replacing the KL objective function with the multi-sample importance-weighted KL divergence

$$
\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D}) \triangleq -\mathbb{E}_{\Theta^{1:k}\sim q(\Theta)}\!\left[\log \frac{1}{k}\sum_{i=1}^{k}\frac{p(\Theta^i|\mathcal{D})}{q(\Theta^i)}\right]\,, \qquad (9)
$$

where $\Theta = (\theta, Z)$ and $\Theta^{1:k} = \Theta^1,\dots,\Theta^k$ are independent samples from $q(\theta,Z)$. Note that the special case of k = 1 recovers the standard KL divergence. Inspired by [1], the following theorem (proved in Appendix A) shows that increasing the number of samples k is guaranteed to reduce the KL divergence and provide a better approximation of the target distribution.

**Theorem 2** For any natural number k, we have $\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D}) \ge \mathrm{KL}^{k+1}_{q,p}(\Theta;\mathcal{D}) \ge 0$, and if $q(\Theta)/p(\Theta|\mathcal{D})$ is bounded, then $\lim_{k\to\infty}\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D}) = 0$.

We minimize (9) with a sample transformation based on a generalization of SVGD, and the recognition model (encoder) is trained in the same way as in Section 2.3. Specifically, we first draw samples $\{\theta^{1:k}_j\}_{j=1}^M$ and $\{z^{1:k}_{jn}\}_{j=1}^M$ from a simple distribution $q_0(\cdot)$, and convert these to approximate draws from $p(\theta^{1:k}, Z^{1:k}|\mathcal{D})$ by minimizing the multi-sample importance-weighted KL divergence via a nonlinear functional transformation.

3.2 Importance-weighted SVGD for VAEs

The following theorem generalizes Theorem 1 to the multi-sample importance-weighted KL divergence.

**Theorem 3** Let $\Theta^{1:k}$ be RVs drawn independently from distribution $q(\Theta)$, and let $\mathrm{KL}^k_{q,p}(\Theta;\mathcal{D})$ be the multi-sample importance-weighted KL divergence in (9). Let $T(\Theta) = \Theta + \epsilon\psi(\Theta;\mathcal{D})$ and let $q_T(\Theta)$ represent the distribution of $\Theta' = T(\Theta)$. We have

$$
\nabla_\epsilon \mathrm{KL}^k_{q_T,p}(\Theta';\mathcal{D})\big|_{\epsilon=0} = -\mathbb{E}_{\Theta^{1:k}\sim q(\Theta)}\big[\operatorname{trace}(\mathcal{A}^k_p(\Theta^{1:k};\mathcal{D}))\big]\,. \qquad (10)
$$

The proof and detailed definitions are provided in Appendix A. The following corollaries generalize Theorem 1 and (4), respectively, via use of importance sampling.

**Corollary 3.1** $\theta^{1:k}$ and $Z^{1:k}$ are RVs drawn independently from distributions $q(\theta)$ and $q(Z)$, respectively. Let $T(\theta) = \theta + \epsilon\psi(\theta;\mathcal{D})$, let $q_T(\theta)$ represent the distribution of $\theta' = T(\theta)$, and let $\Theta' = (\theta', Z)$. We have

$$
\nabla_\epsilon \mathrm{KL}^k_{q_T,p}(\Theta';\mathcal{D})\big|_{\epsilon=0} = -\mathbb{E}_{\theta^{1:k}\sim q(\theta)}\big[\operatorname{trace}(\mathcal{A}^k_p(\theta^{1:k};\mathcal{D}))\big]\,, \qquad (11)
$$

where $\mathcal{A}^k_p(\theta^{1:k};\mathcal{D}) = \frac{1}{\omega}\sum_{i=1}^{k}\omega_i\,\mathcal{A}_p(\theta^i;\mathcal{D})$, $\omega_i = \mathbb{E}_{Z^i\sim q(Z)}\!\left[\frac{p(\theta^i,Z^i,\mathcal{D})}{q(\theta^i)q(Z^i)}\right]$, $\omega = \sum_{i=1}^{k}\omega_i$; $\mathcal{A}_p(\theta;\mathcal{D})$ and $\log p(\theta;\mathcal{D})$ are as defined in Theorem 1.

**Corollary 3.2** Assume $\psi(\theta;\mathcal{D})$ lives in a reproducing kernel Hilbert space (RKHS) with kernel $k_\theta(\cdot,\cdot)$. The solution for $\psi(\theta;\mathcal{D})$ that maximizes the decrease in the KL distance (11) is

$$
\psi^*(\cdot;\mathcal{D}) = \mathbb{E}_{\theta^{1:k}\sim q(\theta)}\Big[\frac{1}{\omega}\sum_{i=1}^{k}\omega_i\big(\nabla_{\theta^i} k_\theta(\theta^i,\cdot) + k_\theta(\theta^i,\cdot)\,\nabla_{\theta^i}\log p(\theta^i;\mathcal{D})\big)\Big]\,. \qquad (12)
$$

Corollary 3.1 and Corollary 3.2 provide a means of updating multiple samples $\{\theta^{1:k}_j\}_{j=1}^M$ from $q(\theta)$ via $T(\theta^i) = \theta^i + \epsilon\psi^*(\theta^i;\mathcal{D})$. The expectation wrt $q(Z)$ is approximated via samples drawn from $q(Z)$. Similarly, we can employ a complementary form of Corollary 3.1 and Corollary 3.2 to update multiple samples $\{Z^{1:k}_j\}_{j=1}^M$ from $q(Z)$. This suggests an importance-weighted learning procedure that alternates between updates of the particles $\{\theta^{1:k}_j\}_{j=1}^M$ and $\{Z^{1:k}_j\}_{j=1}^M$, similar to the procedure in Section 2.2. Detailed update equations are provided in Appendix B.

4 Semi-Supervised Learning with Stein VAE

Consider labeled data as pairs $\mathcal{D}_l = \{x_n, y_n\}_{n=1}^{N_l}$, where the label $y_n\in\{1,\dots,C\}$ and the decoder is modeled as $(x_n, y_n|z_n) \sim p(x,y|z_n;\theta,\tilde{\theta}) = p(x|z_n;\theta)\,p(y|z_n;\tilde{\theta})$, where $\tilde{\theta}$ represents the parameters of the decoder for labels. The set of codes associated with all labeled data is represented as $Z_l = \{z_n\}_{n=1}^{N_l}$. We desire to approximate the posterior distribution on the entire dataset, $p(\theta,\tilde{\theta},Z,Z_l|\mathcal{D},\mathcal{D}_l)$, via samples, where $\mathcal{D}$ represents the unlabeled data and Z is the set of codes associated with $\mathcal{D}$. In the following, we only discuss how to update the samples of θ, $\tilde{\theta}$ and $Z_l$. Updating the samples Z is the same as discussed in Sections 2 and 3.2 for Stein VAE and Stein VIWAE, respectively.

Assume $\{\theta_j\}_{j=1}^M$ drawn from distribution $q(\theta)$, $\{\tilde{\theta}_j\}_{j=1}^M$ drawn from distribution $q(\tilde{\theta})$, and samples $\{z_{jn}\}_{j=1}^M$ drawn from (distinct) distribution $q(Z_l)$. The following corollary generalizes Theorem 1 and (4), which is useful for defining how to best update $\{\theta_j\}_{j=1}^M$.

**Corollary 3.3** Assume θ, $\tilde{\theta}$, Z and $Z_l$ are RVs drawn from distributions $q(\theta)$, $q(\tilde{\theta})$, $q(Z)$ and $q(Z_l)$, respectively. Consider the transformation $T(\theta) = \theta + \epsilon\psi(\theta;\mathcal{D},\mathcal{D}_l)$, where $\psi(\theta;\mathcal{D},\mathcal{D}_l)$ lives in an RKHS with kernel $k_\theta(\cdot,\cdot)$. Let $q_T(\theta)$ represent the distribution of $\theta' = T(\theta)$. For $q_T = q_T(\theta)q(Z)q(\tilde{\theta})$ and $p = p(\theta,\tilde{\theta},Z|\mathcal{D},\mathcal{D}_l)$, we have

$$
\nabla_\epsilon \mathrm{KL}(q_T\,\|\,p)\big|_{\epsilon=0} = -\mathbb{E}_{\theta\sim q(\theta)}\big[\operatorname{trace}(\mathcal{A}_p(\theta;\mathcal{D},\mathcal{D}_l))\big]\,, \qquad (13)
$$

where $\mathcal{A}_p(\theta;\mathcal{D},\mathcal{D}_l) = \nabla_\theta\psi(\theta;\mathcal{D},\mathcal{D}_l) + \nabla_\theta\log p(\theta;\mathcal{D},\mathcal{D}_l)\,\psi(\theta;\mathcal{D},\mathcal{D}_l)^T$, $\log p(\theta;\mathcal{D},\mathcal{D}_l) = \mathbb{E}_{Z\sim q(Z)}[\log p(\mathcal{D}|Z,\theta)] + \mathbb{E}_{Z_l\sim q(Z_l)}[\log p(\mathcal{D}_l|Z_l,\theta)]$, and the solution for $\psi(\theta;\mathcal{D},\mathcal{D}_l)$ that maximizes the decrease in the KL distance (13) is

$$
\psi^*(\cdot;\mathcal{D},\mathcal{D}_l) = \mathbb{E}_{q(\theta)}\big[k_\theta(\theta,\cdot)\,\nabla_\theta\log p(\theta;\mathcal{D},\mathcal{D}_l) + \nabla_\theta k_\theta(\theta,\cdot)\big]\,. \qquad (14)
$$

Further details are provided in Appendix C.

5 Experiments

For all experiments, we use a radial basis-function (RBF) kernel as in [15], i.e., $k(x,x') = \exp(-\frac{1}{h}\|x-x'\|_2^2)$, where the bandwidth h is the median of the pairwise distances between the current samples. $q_0(\theta)$ and $q_0(\xi)$ are set to isotropic Gaussian distributions. We share the samples of ξ across data points, i.e., $\xi_{jn} = \xi_j$ for $n = 1,\dots,N$ (this is not necessary, but it saves computation). The samples of θ and z, and the parameters of the recognition model, η, are optimized via Adam [9] with learning rate 0.0002. We do not perform any dataset-specific tuning or regularization other than dropout [32] and early stopping on validation sets. We set M = 100 and k = 50, and use minibatches of size 64 for all experiments, unless otherwise specified.

5.1 Expressive power of Stein recognition model

**Gaussian Mixture Model.** We synthesize data by (i) drawing $z_n \sim \frac{1}{2}\mathcal{N}(\mu_1,\mathbf{I}) + \frac{1}{2}\mathcal{N}(\mu_2,\mathbf{I})$, where $\mu_1 = [5,5]^T$ and $\mu_2 = [-5,-5]^T$; and (ii) drawing $x_n \sim \mathcal{N}(\theta z_n, \sigma^2\mathbf{I})$, where $\theta = \left[\begin{smallmatrix}2 & 1\\ 1 & 2\end{smallmatrix}\right]$ and σ = 0.1.
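The following sketch reproduces this synthetic-data generation; the random seed is arbitrary, and the sign of $\mu_2$ follows the mixture description above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
mu1, mu2 = np.array([5.0, 5.0]), np.array([-5.0, -5.0])
theta = np.array([[2.0, 1.0], [1.0, 2.0]])
sigma = 0.1

# (i) draw z_n from the two-component Gaussian mixture
comp = rng.integers(0, 2, size=N)
z = np.where(comp[:, None] == 0, mu1, mu2) + rng.standard_normal((N, 2))
# (ii) draw x_n ~ N(theta z_n, sigma^2 I)
x = z @ theta.T + sigma * rng.standard_normal((N, 2))
```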
The recognition model $f_\eta(x_n,\xi_j)$ is specified as a multilayer perceptron (MLP) with 100 hidden units, formed by first concatenating $\xi_j$ and $x_n$ into a long vector. The dimension of $\xi_j$ is set to 2. The recognition model for the standard VAE is also an MLP with 100 hidden units, with the assumption of a Gaussian distribution for the latent codes [11].

Figure 1: Approximation of the posterior distribution: Stein VAE vs. VAE. The panels show Stein VAE with different numbers of samples: (left) 10 samples, (center) 50 samples, and (right) 100 samples.

We generate N = 10,000 data points for training and 10 data points for testing. The analytic form of the true posterior distribution is provided in Appendix D. Figure 1 shows the performance of the Stein VAE approximation to the true posterior; other similar examples are provided in Appendix F. The Stein recognition model is able to capture the multimodal posterior and produce an accurate density approximation.

Figure 2: Univariate marginals and pairwise posteriors. Purple, red and green represent the distributions inferred from MCMC, standard VAE and Stein VAE, respectively.

**Poisson Factor Analysis.** Given a discrete vector $x_n\in\mathbb{Z}_+^P$, Poisson factor analysis [36] assumes $x_n$ is a weighted combination of V latent factors, $x_n \sim \mathrm{Pois}(\theta z_n)$, where $\theta\in\mathbb{R}_+^{P\times V}$ is the factor-loadings matrix and $z_n\in\mathbb{R}_+^V$ is the vector of factor scores. We consider topic modeling with Dirichlet priors on $\theta_v$ (the v-th column of θ) and gamma priors on each component of $z_n$.

We evaluate our model on the 20 Newsgroups dataset, containing N = 18,845 documents with a vocabulary of P = 2,000. The data are partitioned into 10,314 training, 1,000 validation and 7,531 test documents. The number of factors (topics) is set to V = 128. θ is first learned by Markov chain Monte Carlo (MCMC) [4]. We then fix θ at its MAP value, and only learn the recognition model η using the standard VAE and Stein VAE; this is done, as in the previous example, to examine the accuracy of the recognition model in estimating the posterior of the latent factors, isolated from the estimation of θ. The recognition model is an MLP with 100 hidden units.

An analytic form of the true posterior distribution $p(z_n|x_n)$ is intractable for this problem. Consequently, we employ samples collected from MCMC as ground truth. With θ fixed, we sample $z_n$ via Gibbs sampling, using 2,000 burn-in iterations followed by 2,500 collection draws, retaining every 10th collection sample. We show the marginal and pairwise posteriors of one test data point in Figure 2. Additional results are provided in Appendix F. Stein VAE leads to a more accurate approximation than the standard VAE, compared to the MCMC samples. Considering Figure 2, note that the VAE significantly underestimates the variance of the posterior (examining the marginals), a well-known problem of variational Bayesian analysis [7]. In sharp contrast, Stein VAE yields highly accurate approximations to the true posterior.
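For this model, the gradients $\nabla_{z_n}\log[p(x_n|z_n,\theta)\,p(z_n)]$ needed in (6) are driven by the Poisson likelihood. A minimal sketch of that likelihood term is given below; it is our illustration, not the authors' code, and the gamma prior term on $z_n$ is omitted.

```python
import numpy as np
from scipy.special import gammaln

def pfa_log_likelihood(x, z, theta):
    """log p(x | z, theta) under x ~ Pois(theta z), with x (P,), z (V,), theta (P, V)."""
    rate = theta @ z                                       # Poisson rates, shape (P,)
    return np.sum(x * np.log(rate) - rate - gammaln(x + 1.0))
```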
5.2 Density estimation

**Data.** We consider five benchmark datasets: MNIST and four text corpora: 20 Newsgroups (20News), New York Times (NYT), Science and RCV1-v2 (RCV2). For MNIST, we used the standard split of 50K training, 10K validation and 10K test examples. The latter three text corpora (NYT, Science and RCV2) consist of 133K, 166K and 794K documents, respectively. These three datasets are split into 1K validation documents, 10K test documents, and the rest for training.

**Evaluation.** Given new data $x^*$ (test data), the marginal log-likelihood/perplexity values are estimated by the variational evidence lower bound (ELBO) while integrating the decoder parameters θ out,

$$
\log p(x^*) \ge \mathbb{E}_{q(z^*)}[\log p(x^*, z^*)] + H(q(z^*)) = \mathrm{ELBO}(q(z^*))\,,
$$

where $\log p(x^*, z^*) = \mathbb{E}_{q(\theta)}[\log p(x^*,\theta,z^*)]$ and $H(q(\cdot)) = -\mathbb{E}_q[\log q(\cdot)]$ is the entropy. The expectation is approximated with samples $\{\theta_j\}_{j=1}^M$ and $\{z^*_j\}_{j=1}^M$, with $z^*_j = f_\eta(x^*,\xi_j)$, $\xi_j\sim q_0(\xi)$. Directly evaluating $q(z^*)$ is intractable; it is thus estimated via the density transformation

$$
q(z) = q_0(\xi)\,\Big|\det\frac{\partial f_\eta(x,\xi)}{\partial\xi}\Big|^{-1}\,.
$$

We further estimate the marginal log-likelihood/perplexity values via the stochastic variational lower bound, computed as the mean of a 5,000-sample importance-weighted estimate [1]. Therefore, for each dataset, we report four results: (i) Stein VAE + ELBO, (ii) Stein VAE + S-ELBO, (iii) Stein VIWAE + ELBO and (iv) Stein VIWAE + S-ELBO; the first term denotes whether training is performed with Stein VAE (Section 2) or Stein VIWAE (Section 3); the second term denotes whether the test log-likelihood/perplexity is estimated with the ELBO or the stochastic variational lower bound, S-ELBO [1].

Table 1: Negative log-likelihood (NLL) on MNIST. †Trained with VAE and tested with IWAE; ‡trained and tested with IWAE.

| Method | NLL (nats) |
|---|---|
| DGLM [27] | 89.90 |
| Normalizing flow [28] | 85.10 |
| VAE + IWAE [1]† | 86.76 |
| IWAE + IWAE [1]‡ | 84.78 |
| Stein VAE + ELBO | 85.21 |
| Stein VAE + S-ELBO | 84.98 |
| Stein VIWAE + ELBO | 83.01 |
| Stein VIWAE + S-ELBO | 82.88 |

Table 2: Test perplexities on four text corpora.

| Method | 20News | NYT | Science | RCV2 |
|---|---|---|---|---|
| DocNADE [14] | 896 | 2496 | 1725 | 742 |
| DEF [24] | - | 2416 | 1576 | - |
| NVDM [17] | 852 | - | - | 550 |
| Stein VAE + ELBO | 849 | 2402 | 1499 | 549 |
| Stein VAE + S-ELBO | 845 | 2401 | 1497 | 544 |
| Stein VIWAE + ELBO | 837 | 2315 | 1453 | 523 |
| Stein VIWAE + S-ELBO | 829 | 2277 | 1421 | 518 |

**Model.** For MNIST, we train the model with one stochastic layer, $z_n$, with 50 hidden units and two deterministic layers, each with 200 units. The nonlinearity is set as tanh. The visible layer, $x_n$, follows a Bernoulli distribution. For the text corpora, we build a three-layer deep Poisson network [24]. The sizes of the hidden units are 200, 200 and 50 for the first, second and third layers, respectively (see [24] for detailed architectures).

Figure 3: NLL vs. training/testing time on MNIST with various numbers of samples (M) for θ.

**Results.** The log-likelihood/perplexity results are summarized in Tables 1 and 2. On MNIST, our Stein VAE achieves a variational lower bound of -85.21 nats, which outperforms the standard VAE with the same model architecture. Our Stein VIWAE achieves a log-likelihood of -82.88 nats, exceeding normalizing flow (-85.10 nats) and the importance weighted autoencoder (-84.78 nats), the latter being the best prior result obtained by a feedforward neural network (FNN). DRAW [5] and PixelRNN [20], which exploit spatial structure, achieved log-likelihoods of around -80 nats; our approach can also be applied to these models, but this is left as interesting future work. To further illustrate the benefit of model averaging, we vary the number of samples for θ (while retaining 100 samples for Z) and show the results associated with training/testing time in Figure 3. When M = 1 for θ, our model reduces to a point estimate for that parameter. Increasing the number of samples of θ (model averaging) improves the negative log-likelihood (NLL). The testing time using 100 samples of θ is around 0.12 ms per image.
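The evaluation described above combines samples from the recognition model with the change-of-variables estimate of $q(z^*)$. A minimal sketch for a single test point is given below; it assumes $\dim(\xi) = \dim(z)$ and an $f_\eta$ that is invertible in ξ (as the density-transformation formula requires), and `f_eta`, `log_joint` (standing in for $\mathbb{E}_{q(\theta)}[\log p(x^*,\theta,z^*)]$) and the number of samples are placeholder names, not the paper's implementation.

```python
import math
import torch

def elbo_at_test_point(f_eta, log_joint, x_star, z_dim, num_samples=100):
    """Estimate ELBO(q(z*)) for one test point x*, with q(z*) evaluated via
    q(z) = q0(xi) |det d f_eta(x*, xi) / d xi|^{-1}."""
    total = 0.0
    for _ in range(num_samples):
        xi = torch.randn(z_dim)                                        # xi ~ q0, isotropic Gaussian
        z = f_eta(x_star, xi)
        jac = torch.autograd.functional.jacobian(lambda u: f_eta(x_star, u), xi)
        _, logabsdet = torch.linalg.slogdet(jac)
        log_q0 = -0.5 * (xi.pow(2).sum() + z_dim * math.log(2 * math.pi))
        log_qz = log_q0 - logabsdet                                    # implicit encoder density at z
        total += float(log_joint(x_star, z) - log_qz)
    return total / num_samples                                         # E_q[log p(x*, z*)] + H(q(z*))
```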
5.3 Semi-supervised Classification

We consider semi-supervised classification on MNIST and ImageNet [29] data. For each dataset, we report the results obtained by (i) VAE, (ii) Stein VAE, and (iii) Stein VIWAE.

**MNIST.** We randomly split the training set into labeled and unlabeled sets, and the number of labeled samples in each category varies from 10 to 300. We perform testing on the standard test set with 20 different training-set splits. The decoder for labels is implemented as $p(y_n|z_n,\tilde{\theta}) = \mathrm{softmax}(\tilde{\theta} z_n)$. We consider two types of decoders for images $p(x_n|z_n,\theta)$ and encoders $f_\eta(x,\xi)$: (i) FNN: following [12], we use 50-dimensional latent variables $z_n$ and two hidden layers, each with 600 hidden units, for both encoder and decoder; softplus is employed as the nonlinear activation function. (ii) All convolutional nets (CNN): inspired by [31], we replace the two hidden layers with 32 and 64 kernels of size 5 × 5 and a stride of 2. A fully connected layer is stacked on the CNN to produce a 50-dimensional latent variable $z_n$. We use the leaky rectified activation [16]. The input of the encoder is formed by spatially aligning and stacking $x_n$ and ξ, while the output of the decoder is the image itself.

Table 3: Semi-supervised classification error (%) on MNIST. $N_\rho$ is the number of labeled images per class; the VAE baseline follows [12] (our implementation).

| $N_\rho$ | FNN: VAE | FNN: Stein VAE | FNN: Stein VIWAE | CNN: VAE | CNN: Stein VAE | CNN: Stein VIWAE |
|---|---|---|---|---|---|---|
| 10 | 3.33 ± 0.14 | 2.78 ± 0.24 | 2.67 ± 0.09 | 2.44 ± 0.17 | 1.94 ± 0.24 | 1.90 ± 0.05 |
| 60 | 2.59 ± 0.05 | 2.13 ± 0.08 | 2.09 ± 0.03 | 1.88 ± 0.05 | 1.44 ± 0.04 | 1.41 ± 0.02 |
| 100 | 2.40 ± 0.02 | 1.92 ± 0.05 | 1.88 ± 0.01 | 1.47 ± 0.02 | 1.01 ± 0.03 | 0.99 ± 0.02 |
| 300 | 2.18 ± 0.04 | 1.77 ± 0.03 | 1.75 ± 0.01 | 0.98 ± 0.02 | 0.89 ± 0.03 | 0.86 ± 0.01 |

Table 3 shows the classification results. Our Stein VAE and Stein VIWAE consistently achieve better performance than the VAE. We further observe that the variance of the Stein VIWAE results is much smaller than that of the Stein VAE results on small labeled sets, indicating that the former produces more robust parameter estimates. State-of-the-art results [26] are achieved by the ladder network, which could be combined with our Stein-based approach; we leave this extension as future work.

Table 4: Semi-supervised classification accuracy (%) on ImageNet.

| Proportion labeled | VAE | Stein VAE | Stein VIWAE | DGDN [21] |
|---|---|---|---|---|
| 1% | 35.92 ± 1.91 | 36.44 ± 1.66 | 36.91 ± 0.98 | 43.98 ± 1.15 |
| 2% | 40.15 ± 1.52 | 41.71 ± 1.14 | 42.57 ± 0.84 | 46.92 ± 1.11 |
| 5% | 44.27 ± 1.47 | 46.14 ± 1.02 | 46.20 ± 0.52 | 47.36 ± 0.91 |
| 10% | 46.92 ± 1.02 | 47.83 ± 0.88 | 48.67 ± 0.31 | 48.41 ± 0.76 |
| 20% | 50.43 ± 0.41 | 51.62 ± 0.24 | 51.77 ± 0.12 | 51.51 ± 0.28 |
| 30% | 53.24 ± 0.33 | 55.02 ± 0.22 | 55.45 ± 0.11 | 54.14 ± 0.12 |
| 40% | 56.89 ± 0.11 | 58.17 ± 0.16 | 58.21 ± 0.12 | 57.34 ± 0.18 |

**ImageNet 2012.** We consider the scalability of our model to large datasets. We split the 1.3 million training images into unlabeled and labeled sets, and vary the proportion of labeled images from 1% to 40%. The classes are balanced to ensure that no particular class is over-represented, i.e., the ratio of labeled to unlabeled images is the same for each class. We repeat the training process 10 times for the settings with labeled images ranging from 1% to 10%, and 5 times for the settings with labeled images ranging from 20% to 40%. Each time we utilize different sets of images as the unlabeled ones. We employ an all-convolutional net [31] for both the encoder and decoder, which replaces deterministic pooling (e.g., max-pooling) with strided convolutions. Residual connections [8] are incorporated to encourage gradient flow. The model architecture is detailed in Appendix E. Following [13], images are resized to 256 × 256. A 224 × 224 crop is randomly sampled from the images or their horizontal flips, with the mean subtracted [13]. We set M = 20 and k = 10.
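For concreteness, the sketch below outlines a stochastic convolutional encoder along the lines described for the MNIST CNN setting above; the padding, leaky-ReLU slope and 28 × 28 input size are our assumptions, and the ImageNet encoder detailed in Appendix E is deeper and uses residual connections.

```python
import torch
import torch.nn as nn

class StochasticConvEncoder(nn.Module):
    """f_eta(x, xi): the image x and a spatially aligned noise map xi are stacked
    channel-wise, then mapped by strided convolutions to a 50-dimensional code."""
    def __init__(self, z_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2, padding=2),   # x and xi stacked -> 2 channels
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(64 * 7 * 7, z_dim)                      # 28x28 input -> 7x7 feature map

    def forward(self, x, xi):
        h = self.conv(torch.cat([x, xi], dim=1))                    # (B, 2, 28, 28) -> (B, 64, 7, 7)
        return self.fc(h.flatten(1))                                # code sample z = f_eta(x, xi)
```

Each forward pass with a fresh noise map ξ drawn from $q_0$ yields one latent-code sample, so multiple samples per image are obtained simply by repeating the draw of ξ.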
Table 4 shows the classification results, indicating that Stein VAE and Stein VIWAE outperform the VAE in all the experiments, demonstrating the effectiveness of our approach for semi-supervised classification. When the proportion of labeled examples is very small (< 10%), DGDN [21] outperforms all the VAE-based models, which is not surprising given that our models are deeper and thus have considerably more parameters than DGDN [21].

6 Conclusion

We have employed SVGD to develop a new method for learning a variational autoencoder, in which we need not specify an a priori form for the encoder distribution. Fast inference is achieved by learning a recognition model that mimics the dynamics of the inferred code samples. The method is further generalized and improved by performing importance sampling. An extensive set of results, for unsupervised and semi-supervised learning, demonstrates excellent performance and scaling to large datasets.

Acknowledgements

This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.

References

[1] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
[2] L. Chen, S. Dai, Y. Pu, C. Li, Q. Su, and L. Carin. Symmetric variational autoencoder and connections to adversarial learning. In arXiv, 2017.
[3] Y. Feng, D. Wang, and Q. Liu. Learning to draw samples with amortized Stein variational gradient descent. In UAI, 2017.
[4] Z. Gan, C. Chen, R. Henao, D. Carlson, and L. Carin. Scalable deep Poisson factor analysis for topic modeling. In ICML, 2015.
[5] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
[6] J. Han and Q. Liu. Stein variational adaptive importance sampling. In UAI, 2017.
[7] S. Han, X. Liao, D. B. Dunson, and L. Carin. Variational Gaussian copula inference. In AISTATS, 2016.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[10] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improving variational inference with inverse autoregressive flow. In NIPS, 2016.
[11] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[12] D. P. Kingma, D. J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[14] H. Larochelle and S. Lauly. A neural autoregressive topic model. In NIPS, 2012.
[15] Q. Liu and D. Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In NIPS, 2016.
[16] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.
[17] Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. In ICML, 2016.
[18] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
[19] A. Mnih and D. J. Rezende. Variational inference for Monte Carlo objectives. In ICML, 2016.
[20] A. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.
[21] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
[22] Y. Pu, X. Yuan, and L. Carin. Generative deep deconvolutional learning. In ICLR workshop, 2015.
[23] Y. Pu, X. Yuan, A. Stevens, C. Li, and L. Carin. A deep generative deconvolutional image model. In AISTATS, 2016.
[24] R. Ranganath, L. Tang, L. Charlin, and D. M. Blei. Deep exponential families. In AISTATS, 2015.
[25] R. Ranganath, D. Tran, and D. M. Blei. Hierarchical variational models. In ICML, 2016.
[26] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015.
[27] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[28] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015.
[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2014.
[30] D. Shen, Y. Zhang, R. Henao, Q. Su, and L. Carin. Deconvolutional latent-variable model for text sequence matching. In arXiv, 2017.
[31] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR workshop, 2015.
[32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.
[33] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 2010.
[34] Y. Pu, W. Wang, R. Henao, L. Chen, Z. Gan, C. Li, and L. Carin. Adversarial symmetric variational autoencoder. In NIPS, 2017.
[35] Y. Zhang, D. Shen, G. Wang, Z. Gan, R. Henao, and L. Carin. Deconvolutional paragraph representation learning. In NIPS, 2017.
[36] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, 2012.